Channel: Questions in topic: "splunk-enterprise"

Splunk 8.0.1 App for Unix and Linux

I'm doing a new install of Splunk 8.0.1 and want to install a version of the Splunk App for Unix and Linux that is compatible with 8.0.1 to collect data. I have heavy forwarder, search head, indexer, and deployment servers. The documentation doesn't mention 8.0; does anyone know if it will work?

dvc_host field in Palo Alto add-on has no data

Hi Splunk team! I recently found that the field "dvc_host" in the Palo Alto add-on has no data. I need to get that field's data back. Thanks all.
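A minimal diagnostic sketch to confirm whether the field is being extracted at all, assuming the add-on's usual pan:traffic sourcetype and a placeholder index name:

```
index=<your_pan_index> sourcetype=pan:traffic earliest=-24h
| stats count AS total_events count(dvc_host) AS events_with_dvc_host
```

If events_with_dvc_host is zero while total_events is not, the extraction (rather than the data feed) is the likely culprit.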

blocking specific input files

Hi team, we are using Splunk Enterprise in an AWS environment. A while back a CloudTrail app was configured there. Logs are pushed directly to the Splunk indexer from an S3 bucket, based on the inputs configured in the CloudTrail app. Since this app version is old, there is no option to configure the inputs through the GUI, so we make changes in the inputs.conf file itself. I need to block the Decrypt logs (.gz) from being indexed. Please suggest a workaround, and let us know whether this CloudTrail app has to be upgraded for this and what the latest version is.
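A minimal sketch of the usual index-time filtering approach, discarding unwanted events to the nullQueue on the parsing tier; the sourcetype name and the source-path pattern below are assumptions and would need to match your CloudTrail data:

```
# props.conf (on the indexer or heavy forwarder that parses the data)
[aws:cloudtrail]
TRANSFORMS-drop_decrypt = drop_decrypt_logs

# transforms.conf
[drop_decrypt_logs]
SOURCE_KEY = MetaData:Source
REGEX = Decrypt.*\.gz$
DEST_KEY = queue
FORMAT = nullQueue
```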

Has anyone indexed the Azure DevOps audit log?

Hi. It seems Microsoft has exposed the audit log for Azure DevOps: https://docs.microsoft.com/en-us/rest/api/azure/devops/audit/audit%20log/query?view=azure-devops-rest-5.1 Has anyone tried to index this log, and how did you do it? Kind regards, las

How do I show the full series name on mouseover in the legend?

Hello, I have a line chart with multiple series in my dashboard. The series names are quite long, so they get cut off in the legend by default. Is there any way to display the full series name when mousing over a series in the legend? I know I can mouse over the chart itself and the full series name will appear, but I would like the same effect (full series name) when moving my mouse over the legend. How would I do this? Kind regards, Kamil

How to count top results in each column?

Hi everyone, I'm trying to find the top 10 error values per host and per functionality in the index. I tried:

index=* "error" OR "FAIL" OR "fatal" | stats values(functionality) values(correlatioid) values(loan_num) values(host) count by log_message | sort -count

This shows the top errors with functionality, host, and loan_num details for each and every error. My requirement is to get the count of the top errors for a particular host or functionality. Right now it shows something like:

Functionality: Abc Xyz 123

If, say, the Abc functionality has the most errors, the table should give the count for Abc along with its percentage among all the errors found, like:

Functionality: Abc - 109 (98%), Xyz - 1 (1%), 123 - 1 (1%)

Any suggestions? Similarly, I want to see the top errors coming from different sources.
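A minimal sketch using the top command, which returns both a count and a percent column per value (field names taken from the search above):

```
index=* ("error" OR "FAIL" OR "fatal")
| top limit=10 functionality
```

top adds count and percent columns automatically; `| top limit=10 host` and `| top limit=10 source` give the equivalent breakdowns by host and by source.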

Splunk App for Infrastructure

I have installed V2.02 of the app and configured manual performance metrics inputs on Windows hosts that already have a UF installed. The problem is that the Overview dashboard panels are not working. | inputlookup em_entities returns results for my hosts, but I notice that the metric_name values are all lowercase, while the dashboard searches look for metric names that are not all lowercase: avg(Processor.%_Privileged_Time). If I change the metric names in the search to all lowercase, the searches run without issues. The metrics index documentation states that you can only use lowercase in metric names. Am I missing something when creating the manual inputs? Field aliases do not seem to work either, and I can't find where to edit the dashboards to change the metric names. Any suggestions welcome. Short of recreating all the dashboards myself, I'm out of ideas. Thanks, Pieter
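A quick, hedged way to list exactly which metric names are stored, assuming the app's default metrics index is named em_metrics:

```
| mcatalog values(metric_name) WHERE index=em_metrics
```

If the returned names are all lowercase while the dashboard searches reference mixed-case names, that confirms the mismatch described above.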

Recheck the alert after the alert is raised

I have configured an alert to notify Microsoft Teams when CPU usage reaches a 90% threshold. The alert fires when it reaches 90%, and then the CPU usage comes back down to 80% within 5 minutes. Is there any setting I can configure to recheck the CPU usage after the first alert is raised and send another alert saying everything is OK now?
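One common pattern (rather than a built-in recheck setting) is a second scheduled "all clear" search that fires only when the latest value is back under the threshold after an earlier value in the window exceeded it. A minimal sketch, where the index, sourcetype, and cpu_load_percent field are placeholders for whatever your existing CPU alert uses:

```
index=<your_index> sourcetype=<your_cpu_sourcetype> earliest=-15m
| stats latest(cpu_load_percent) AS current max(cpu_load_percent) AS peak BY host
| where peak >= 90 AND current < 80
```

Schedule this every 5 minutes and attach the same Teams action with an "everything is OK now" message; it only returns rows for hosts that breached 90% and have since dropped back below 80%.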

TA-MS-AAD - Daily billing data

Hi all, I'm trying to understand how the TA-MS-AAD add-on works. I configured a data input to collect billing and consumption data, setting the interval to 600 and "Max days to query" to 4 on my local instance. I'm receiving billing data (sourcetype="azure:billing"); the data covers every instance and I'm receiving daily costs. However, for some days I'm not receiving data (e.g., I have data for 2020-02-02, 2020-02-03, and 2020-02-05, but not for 2020-02-04). ![alt text][1] Is this normal? Does anyone know of a guide to configuring the data input correctly? Thank you, Giorgio [1]: /storage/temp/282601-capture.png

Is it possible to have multiple BREAK_ONLY_BEFORE regexes for one sourcetype?

I'm currently working through each of my company's Java apps and updating their sourcetypes, using transforms and a regex for each sourcetype. With a few exceptions, most apps have an app, access, and audit log. The issue I've now run into is that one of the apps has several logs that would fall under the "app log" remit; however, the log formatting is completely different, so there is no way to use the standard regex we use for app logs. For example, a standard app log has each entry prefixed with the following date/time: 2020-02-10T00:02:39,851 The app I'm currently working on has an app log of: Feb 10, 2020 10:40:03 AM GMT Is it possible to have multiple BREAK_ONLY_BEFORE regexes for a sourcetype in props.conf? I'm trying to avoid having to create a brand new sourcetype just for one app's app log. I hope this question makes sense; please let me know if you need any more information.
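BREAK_ONLY_BEFORE takes a single regex per sourcetype, but that regex can use alternation to cover both timestamp styles quoted above; a minimal sketch (the sourcetype name is a placeholder):

```
[java:app]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^(?:\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2},\d{3}|[A-Z][a-z]{2} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} [AP]M \w+)
```

The same alternation idea works in a LINE_BREAKER regex if you keep SHOULD_LINEMERGE = false.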

Rising column not working as expected

Hello experts, I have a DB Connect connection to my DB that validates. The query I send to the DB is:

WITH "dte" as (SELECT * FROM "T_AUDIT_LOG_HISTORY" UNION SELECT * FROM "T_AUDIT_LOG") select * from "dte" where "UN_ID" > ? ORDER BY "UN_ID" ASC

I use a rising value on column 10 ("UN_ID"), which is an integer unique identifier that increases with every new record. The table is never updated; only inserts arrive. The first column has a timestamp that I link to the _time internal field. What I would expect is that every unique id is imported just once, but this is not the case: every 15 minutes it imports a full copy of the whole table. Here is my config file for this connector:

[AUDIT_LOG_HIST]
connection = Production
disabled = 0
host = XXX_PROD
index = xxx
index_time_mode = dbColumn
input_timestamp_column_number = 1
interval = */15 * * * *
mode = rising
query = WITH "dte" as (SELECT * \
FROM "T_AUDIT_LOG_HISTORY"\
UNION\
SELECT * \
FROM "T_AUDIT_LOG"\
)\
select *\
from "dte"\
where "UN_ID" > ?\
ORDER BY "UN_ID" ASC
query_timeout = 60
sourcetype = audit:log
tail_rising_column_number = 10

I only need the new ids, so that I don't get any duplicates in my index. Thanks in advance, P

Convert Date Timestamp in Lookup for Drill-down

I have a dashboard that queries a lookup file. The lookup contains a column of date timestamps in the format DD/MM/YY; the column name in the lookup is Date, and it is called "Date (DD/MM/YY)" in the dashboard statistics panel. I am converting that DD/MM/YY string to Unix time in the drilldown using something like this:

| eval unixtime=strptime('Date',"%d/%m/%y")

which gives results like this:

Date 06/02/20, unixtime 1580947200.000000

1580947200.000000 is equivalent to 02/06/2020 @ 12:00am (UTC). That's a good start, but I want the drilldown search to cover that entire 24-hour period, i.e. all 24 hours of 06/02/20. Something like this seems like it should work:

earliest: strptime($row."Date (DD/MM/YY)"$,"%d/%m/%y")
latest: strptime($row."Date (DD/MM/YY)"$,"%d/%m/%y")+86400

(86400 being the number of seconds in a day), but I can't quite get it working. Can anyone point me in the right direction?
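A minimal sketch of setting drilldown time tokens with eval in Simple XML, assuming for readability that the panel column is renamed to plain Date (the token names and the target search are placeholders; note the quotes around the substituted $row.Date$ value so strptime receives a string):

```
<drilldown>
  <eval token="tok_earliest">strptime("$row.Date$", "%d/%m/%y")</eval>
  <eval token="tok_latest">strptime("$row.Date$", "%d/%m/%y") + 86400</eval>
  <link target="_blank">search?q=index%3Dmy_index&amp;earliest=$tok_earliest$&amp;latest=$tok_latest$</link>
</drilldown>
```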

Will the ExtraHop App for Splunk work on Splunk 7.3.0?

I'm trying to set up the app on 7.3.0. I am able to see the device groups and activity groups when entering the ExtraHop IP and API key during the configuration process within the ExtraHop app, and the data inputs are created in the add-on, but nothing is being logged.

Reassigning ownership for a large number of knowledge objects

I see that when I reassign ownership, the schedule won't kick in (next_scheduled_time just reads "none"); it seems like none of the searches will run at their original set time until I open each one and manually hit save. Has anyone run into this before? Is there a REST call I can make to change the ownership based on the old owner?
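A minimal sketch of the REST route: first list the scheduled searches still owned by the old user, then POST the new owner to each object's acl endpoint (user names, app, and search name below are placeholders):

```
| rest /servicesNS/-/-/saved/searches
| search eai:acl.owner=old_user is_scheduled=1
| table title eai:acl.app eai:acl.owner next_scheduled_time
```

and then, for each object, something like:

```
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/old_user/search/saved/searches/My%20Search/acl \
  -d owner=new_user -d sharing=app
```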

Combine rows with overlapping MV values

I have data from a couple of different sources that I am trying to combine into coherent results. The issue I am running into is that sometimes the data does not line up perfectly. Both data sources report on a user and try to list all their email aliases, but sometimes the lists are incomplete and only partially overlap. So we end up with multiple rows that represent the same user and have most of the same values for the email field, but because they are not **exactly** the same, when I try to group by email address it doesn't work out how I would hope. I included some example SPL below to illustrate what the data looks like. There are also some other fields in the results, but those cannot be used for merging either, as the email address is the only field that is in both data sets.

| makeresults
| eval email=split("1@example.com,2@example.com;2@example.com,3@example.com;4@example.com;5@example.com", ";")
| mvexpand email
| eval email=split(email, ",")
| streamstats count as orig_row

![alt text][1] So I am wondering if there is any way to combine rows #1 and #2 in the example results while leaving rows 3 and 4 intact? Thanks! [1]: /storage/temp/282602-capture.png

Why is a bash script running if I have disabled the input stanza?

I have been ingesting data from an Akamai WAF using the Akamai TA from Splunkbase. Once I had sorted out all of the firewall issues and such with the team, I had it working how I want it. I have the TA installed on the HF and the search peers of my index cluster, with the base stanza in default/inputs.conf set to disabled. I then created a lightweight TA which just has inputs.conf set up with the appropriate tokens, URLs, etc., and have that only on the HF. The TA itself has a linux folder which contains a bash script that calls the Java app that makes the connection to the REST API. All good so far. However, when I deployed the Splunkbase TA to the indexers, it still tries to run the Java app even though I have the inputs stanza disabled. Does Splunk run scripts in the linux folders (and I assume windows too) if it finds them? If so, how do I disable them on the indexers but not on the HF? The Splunkbase TA also has props and transforms, so I definitely want it on both the HF and the indexers. Hope this makes sense; any help is greatly appreciated. Many thanks
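Splunk does not execute scripts just because they sit in a bin or linux folder; something has to reference them from an enabled scripted or modular input stanza, so running `splunk btool inputs list --debug` on an indexer should show which stanza is still enabled and which app it comes from. A minimal sketch of overriding it on the indexers only, where the stanza name is hypothetical and must match what btool reports:

```
# local/inputs.conf in a small override app deployed only to the indexers
[script://./bin/akamai_log_pull.sh]
disabled = 1
```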

How to extract a string before the @ symbol from an email address?

I have the username field extraction as follows in props.conf, which extracts the email address:

[sourcetype_X]
EXTRACT-XYZ = username="(?<username>[^+\"]*)"

This extracts the field as follows:

x12345@abc-def-ghij-01.com
y67891@klm-def-ghij-01.com
z45787@abc-def-ghij-01.com
ABC-DEF

Now what would the regex stanza be to extract the username from the above as follows:

x12345
y67891
z45787
ABC-DEF
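A minimal sketch of capturing everything up to the first @ (or the whole value when there is no @, as with ABC-DEF above); the field name user_short is a placeholder. At search time:

```
... | rex field=username "^(?<user_short>[^@]+)"
```

or as an additional extraction in props.conf, using the "in <source field>" form:

```
[sourcetype_X]
EXTRACT-user_short = ^(?<user_short>[^@]+) in username
```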

How to trim everything from a field after a comma?

I have a field that contains: CN=Joe Smith,OU=Support,OU=Users,OU=CCA,OU=DTC,OU=ENT,DC=ent,DC=abc,DC=store,DC=corp I'd like to trim off everything after the first comma. The content can always change, so there is no set number of characters. Thanks.
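A minimal sketch of two equivalent search-time approaches; the field name myfield is a placeholder:

```
... | eval myfield=mvindex(split(myfield, ","), 0)
```

or

```
... | rex field=myfield mode=sed "s/,.*$//"
```

Either one leaves just CN=Joe Smith for the example value above.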

Convert JSON into Specific Table format

This is what we have in the logs: ```index="xyz" INFO certvalidationtask ``` And it prints a JSON-like object consisting of a list of commonName + expirationDate pairs: > ```Stage.env e401a4ee-1652-48f6-8785-e8536524a317 [APP/PROC/WEB/0] - - 2020-02-10 16:09:01.525 INFO 22 --- [pool-1-thread-1] c.a.c.f.c.task.CertValidationTask : {commonName='tiktok.com', expirationDate='2020-05-21 17:50:20'}{commonName='instagram.com', expirationDate='2020-07-11 16:56:37'}{commonName='blahblah.com', expirationDate='2020-12-08 11:30:42'}{commonName='advantage.com', expirationDate='2020-12-10 11:41:31'}{commonName='GHGHAGHGH', expirationDate='2021-05-19 08:34:03'}{commonName='Apple Google Word Wide exercise', expirationDate='2023-02-07 15:48:47'}{commonName='some internal cert1', expirationDate='2026-06-22 13:02:27'}{commonName='Some internal cert2', expirationDate='2036-06-22 11:23:21'}``` I want a table with two columns, Common Name and Expiration Date, where if the expiration date is less than 30 days from the current date we show it in red, less than 90 days in yellow, and everything else in green. Many thanks in advance.
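A minimal sketch of pulling the pairs out with rex and pivoting them into rows (search terms taken from above); the resulting days_left column is what you would then color with the table's range-based color formatting in the dashboard:

```
index="xyz" INFO certvalidationtask
| rex max_match=0 "commonName='(?<commonName>[^']*)',\s*expirationDate='(?<expirationDate>[^']*)'"
| eval pair=mvzip(commonName, expirationDate, "|")
| mvexpand pair
| eval commonName=mvindex(split(pair, "|"), 0), expirationDate=mvindex(split(pair, "|"), 1)
| eval days_left=round((strptime(expirationDate, "%Y-%m-%d %H:%M:%S") - now()) / 86400)
| table commonName expirationDate days_left
```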

What would be the correct props.conf for this event?

Date=2020-02-10|StrtTime=09:56:08|EndTime=09:56:08|Duration=7|EvntType=MSG|UUID=

The props that I am using:

TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d
MAX_TIMESTAMP_LOOKAHEAD = 40
LINE_BREAKER = Date=\d+-\d+-\d+
TRUNCATE = 9999
SHOULD_LINEMERGE = false
CHARSET = UTF-8
disabled = false

Can I use TIME_FORMAT = %Y-%m-%d, or do I have to use TIME_FORMAT = %Y-%m-%d|StrtTime=%H:%M:%S?
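A minimal sketch of a props.conf that anchors the timestamp after the Date= label and also pulls the clock time from StrtTime, so the event time is not just the date. The stanza name is a placeholder, and note that LINE_BREAKER needs a capturing group for the break text, which the version above is missing:

```
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)Date=\d{4}-\d{2}-\d{2}
TIME_PREFIX = ^Date=
TIME_FORMAT = %Y-%m-%d|StrtTime=%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40
TRUNCATE = 9999
CHARSET = UTF-8
```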