Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

I want to delete fields whose total value is less than the threshold on a timechart.

    index=_internal | eventstats count by sourcetype | where count > 100 | timechart span=1m count by sourcetype

(time range: earliest=-60m)

If the total for a field is less than the threshold, I don't want to display that field. `eventstats` calculates over all events, which is inefficient. Is there another good way?
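One pattern that avoids running `eventstats` over the raw events is to aggregate first with `timechart` and then filter the much smaller result table — a sketch, with the threshold and window taken from the question:

```
index=_internal earliest=-60m
| timechart span=1m count by sourcetype
| untable _time sourcetype count
| eventstats sum(count) as total by sourcetype
| where total > 100
| xyseries _time sourcetype count
```

Here `eventstats` only touches one row per minute per sourcetype, and `xyseries` rebuilds the timechart shape after the low-total fields are dropped.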

Field Extraction for different types of data

Hi Splunkers, Splunk suggests extracting fields at the forwarder for structured data — why? And what if I have field names in the log, or no field names in the log? I am confused about whether my license usage is affected by structured field extraction at index time / at the forwarder. I understand that the Splunk license counts what you index, so if I do indexed field extractions, will those field/value pairs be added to _raw and increase license usage — is that correct? For unstructured data, Splunk suggests doing extraction at search time? I am mostly clear on these points, but sometimes not. Any advice will be appreciated. Pramodh B

Timezone Query

I have a deployment server app that I use to control my syslog server, which receives logs from various other sources. The syslog server has a Splunk UF installed on it, and it sends the data to the configured indexes with the relevant sourcetypes. I have a range of data sources in this app to direct where my data goes; the UF is effectively monitoring for files in a directory structure. For example:

    [monitor:///data/splunkforwarder/myfiles/app1/*/messages*]
    host_segment = 5
    sourcetype = app1_sourcetype
    index = app1

    [monitor:///data/splunkforwarder/myfiles/app2/*/messages*]
    host_segment = 5
    sourcetype = app2_sourcetype
    index = app2

I have a monitor input that uses the standard JSON sourcetype provided by Splunk for another directory:

    [monitor:///data/splunkforwarder/myfiles/BIGAPP/*/messages*]
    host_segment = 5
    sourcetype = json
    index = bigapp

BIGAPP sends its logs via syslog and this works as expected; however, the time that is indexed in Splunk is out by 8 hours. The event arrives at, say, 8:45pm but Splunk indexes it at 12:45 (a difference of 8 hours). I attempted the following:

    [monitor:///data/splunkforwarder/myfiles/BIGAPP/*/messages*]
    host_segment = 5
    sourcetype = json
    index = bigapp
    TZ = Australia/Perth

I reloaded my DS and resent a log, but this made no difference. From reading the articles, it would seem that this must be done in props.conf only? Do I have to create a new sourcetype (effectively duplicating the JSON sourcetype) and then apply this props to my syslog application? I don't want to impact my app, as all of the other monitored files are accurate from a timestamp perspective, so I only need to change this one. The BIGAPP vendor does not support changing the time zone on the syslog output, so I have to resort to having Splunk fix this. Thanks for any assistance.
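A `TZ` override does belong in props.conf rather than inputs.conf, but a props.conf stanza can be scoped by source path, so the built-in json sourcetype would not need to be cloned — a sketch, using the path from the question:

```
# props.conf — a source-scoped stanza leaves the json sourcetype untouched
[source::/data/splunkforwarder/myfiles/BIGAPP/*/messages*]
TZ = Australia/Perth
```

One caveat: if the json sourcetype uses INDEXED_EXTRACTIONS, timestamping happens on the forwarder itself, so this props.conf would have to be deployed to the UF; otherwise it belongs on the indexers.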

Convert time picker values to a readable format?

Based on the time picker & time modifier token, I am displaying the time values in a human-readable format in a label. With this command I get the proper results: ![alt text][1]

    | makeresults
    | eval latest1="1583038799.000", earliest1="1567310400.000"
    | eval latest2=strftime(latest1, "%Y-%m-%d %H:%M:%S"), earliest2=strftime(earliest1, "%Y-%m-%d %H:%M:%S")

But if I try it with the time modifier tokens, I don't get the same result — I am not sure whether it is because of the time zone? The second screenshot shows a panel using:

    | makeresults | addinfo

with the tokens $field1.earliest$ and $field1.latest$, converted via strftime($result.info_min_time$, "%Y-%m-%d %H:%M:%S") and strftime($result.info_max_time$, "%Y-%m-%d %H:%M:%S"). ![alt text][2]

In both places I am using the same code but getting different results. Any thoughts? Thanks in advance. [1]: /storage/temp/284679-pic1.jpg [2]: /storage/temp/284680-pic2.jpg
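One thing worth ruling out, sketched below as a guess: `addinfo` reports `info_min_time` as `0.000` when the time picker is set to "All time", in which case `strftime` renders the 1970 epoch rather than the expected date; `strftime` also formats in the timezone configured for the logged-in user, which can differ between users and panels. A hedged sketch:

```
| makeresults
| addinfo
| eval earliest_readable = if(info_min_time == 0, "all time",
        strftime(info_min_time, "%Y-%m-%d %H:%M:%S"))
| eval latest_readable = strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
```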

Getting Windows logs into Splunk

Hi, I am very new to Splunk. I am looking for a way to get Windows logs into Splunk. I downloaded the Splunk forwarder, but the issue is that it gives me gibberish logs. Example:

    "--splunk-cooked-mode-v3--\x00\x00\x00\x00\x00\x00\x00\x00\"

I understand this is because the data arrives over TCP but is not recognized as forwarder traffic, and the receiving side needs to be configured in Splunk itself as receiving from a Splunk forwarder? But this is not allowed with a free license? If anyone has a link explaining this, that would be a massive help; I would love to understand it better. I apologize up front if this is a really silly question and the answer is obvious.
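If the receiving side is a Splunk Enterprise instance, forwarder traffic has to land on a splunktcp input (Settings > Forwarding and receiving > Configure receiving), not a plain tcp input — otherwise the cooked stream shows up exactly as the gibberish above. A minimal sketch, assuming the conventional receiving port 9997:

```
# inputs.conf on the receiving (indexing) Splunk instance
[splunktcp://9997]
disabled = 0
```

The forwarder then points at that port in its outputs.conf (e.g. server = indexer-host:9997).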

repopulate a csv with data from a search using curl

Hi, what is the best way to repopulate a CSV with data from a search using curl, but without using a username and password, as I want to run the search from cron? Thanks
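One way to avoid embedding credentials in the cron job, sketched under the assumption that a Splunk authentication token has been created (Settings > Tokens, available as of Splunk 7.3): pass it as a Bearer header to the REST export endpoint and let `outputlookup` rewrite the CSV server-side. The host, search, and lookup name below are placeholders:

```
curl -ks https://splunk.example.com:8089/services/search/jobs/export \
  -H "Authorization: Bearer $SPLUNK_TOKEN" \
  --data-urlencode search='search index=web | stats count by clientip | outputlookup top_clients.csv' \
  -d output_mode=json
```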

subtraction: | eval field1=mvfilter(match(field, "OUT$")) | eval field1=mvfilter(match(field, "IN$"))

Hello Community, I evaluate the values of a single field which comes with values such as OUT, IN, and DENIED, and I can get counters for each of those values. Now I want to subtract "OUT" minus "IN" (or maybe even minus "DENIED"):

    index="application-license" sourcetype=application License_User_device=* License_feature_status=* License_user=*
    | fields _time, License_user, License_User_device, License_feature_status, License_feature, tag, eventtype
    | eval User=(License_user)
    | eval LicenseTaken-OUT=mvfilter(match(License_feature_status, "OUT$"))
    | eval LicenseTaken-IN=mvfilter(match(License_feature_status, "IN$"))
    | eval LicenseTaken-DENIED=mvfilter(match(License_feature_status, "DENIED$"))
    | eval LicenseTaken=(License_feature_status)
    | eval LicenseTaken-AVG=mvfilter(match(License_feature_status, "OUT$") OR match(License_feature_status, "IN$"))
    | eval License_feature=(License_feature)
    | eval Time=strftime(_time, "%d-%m-%Y %H:%M:%S")
    | bucket Time span=100d
    | timechart count(LicenseTaken-OUT) as "Application-LicenseTaken(OUT)" count(LicenseTaken-IN) as "Application-LicenseTaken(IN)" count(LicenseTaken-DENIED) as "Application-License-DENIED" count(LicenseTaken) as "Application-License Taken(sum)" count(LicenseTaken-AVG) as "License_avg"
    | predict License_avg algorithm=LLT upper40=high lower40=low future_timespan=45 holdback=3

In the above sample, I would like to implement:

    | eval LicenseTaken=(License_feature_status) - | eval field1=mvfilter(match(field, "IN$"))

or something like:

    mvfilter(match(License_feature_status, "OUT$")) MINUS mvfilter(match(License_feature_status, "IN$"))

Field subtraction wasn't working; it always returned 0 (zero). Any ideas? Thanks in advance, Kai
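A pattern that sidesteps `mvfilter` entirely is to count the matching events directly inside `timechart` and subtract the resulting numeric columns — a sketch based on the fields in the question. (Separately, note that field names containing hyphens, such as LicenseTaken-OUT, must be single-quoted when referenced on the right-hand side of an eval expression, or the hyphen is parsed as arithmetic subtraction — which can easily produce results of 0.)

```
index="application-license" sourcetype=application License_feature_status=*
| timechart span=1d
    count(eval(match(License_feature_status, "OUT$"))) as taken_out
    count(eval(match(License_feature_status, "IN$"))) as taken_in
| eval net_taken = taken_out - taken_in
```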

Splunk forwarder on Linux - ./splunk "commands" just hang

It has been a while since I have worked with Linux, but I am doing my best to refresh my knowledge. I successfully installed the latest forwarder on Ubuntu; it phoned home, and the deployment server pushed config to it. But now it has stopped working. I have configured it to run as user 'splunk', not as root. This has caused some issues; for instance, when I just ran:

    splunk@myserver:~$ ./bin/splunk display deploy-client
    Pid file "/opt/splunkforwarder/var/run/splunk/splunkd.pid" unreadable.: Permission denied
    Pid file "/opt/splunkforwarder/var/run/splunk/splunkd.pid" unreadable.: Permission denied
    Operation "ospath_fopen" failed in /opt/splunk/src/libzero/conf-mutator-locking.c:337, conf_mutator_lock(); Permission denied

I did sudo to root and ran:

    /opt/splunkforwarder# chown -R splunk:splunk *

The error went away, but now when I run the same command (as splunk) nothing happens; I must press CTRL+C to get out of it:

    splunk@myserver:~$ ./bin/splunk display deploy-client
    ^C
    splunk@myserver:~$

Most likely more of a basic Linux question, but still — anyone have an idea of what could be wrong?
*Update* And now I tried:

    splunk@myserver:~$ ./bin/splunk list forward-server
    Cannot initialize: /opt/splunkforwarder/etc/apps/learned/metadata/local.meta: Permission denied
    Cannot initialize: /opt/splunkforwarder/etc/apps/learned/metadata/local.meta: Permission denied
    Cannot initialize: /opt/splunkforwarder/etc/apps/learned/metadata/local.meta: Permission denied

Since I did do the chown, this _should_ not happen, so I am quite sure I did something not totally correct when installing as root and then switching to splunk, as described here: https://docs.splunk.com/Documentation/Splunk/8.0.2/Admin/ConfigureSplunktostartatboottime#Enable_boot-start_as_a_non-root_user — well, it is simply just the chown command, but since:

    splunk@myserver:~$ ls -la /opt/splunkforwarder/etc/apps/learned/metadata/local.meta
    -rw------- 1 root root 531 Mar 8 19:15 /opt/splunkforwarder/etc/apps/learned/metadata/local.meta

Something is not correct on the server.
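The root-owned local.meta in that listing suggests splunkd ran as root again after the chown (for example via a boot-start script), re-creating root-owned files. A sketch of the usual recovery sequence, assuming the standard install path:

```
# stop any splunkd still running as root, then re-own the whole tree
# (chown the directory itself, not "*", so nothing is skipped)
sudo /opt/splunkforwarder/bin/splunk stop
sudo chown -R splunk:splunk /opt/splunkforwarder
# start (and run CLI commands) only as the splunk user from now on
sudo -u splunk /opt/splunkforwarder/bin/splunk start
```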

Display TV guide style UI in dashboard

I am new to Splunk and I need to display my data in a typical TV guide format. The X axis is the list of channels; the Y axis is a timeline with a scroll bar to go left and right of the current time. Each row contains variable-length rectangles that show the programs. What is the best way to achieve this in a Splunk dashboard?

How to merge multiple lookup lines into one

I have a lookup table formatted something like this:

    1 John, Smith, a123, superuser, blah
    2 John, Smith, a123, audit user, blah
    3 Sally, Smith, a234, regular user, blah
    4 Andy, Smith, a345, audit user, blah
    5 Andy, Smith, a345, log user, blah
    6 Andy, Smith, a345, super user, blah

When you run the lookup for a user id (like a123), you get both results on two lines within the same box in the table. I want one single line that has the user types concatenated. So instead of:

    a123, super user
    a123, audit user

I want:

    a123, "super user, audit user"

Is that possible?
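This is a classic `stats values` + `mvjoin` job — a sketch, with the lookup and field names as placeholders since the real ones are not shown:

```
| inputlookup user_lookup
| stats values(user_type) as user_type by user_id
| eval user_type = mvjoin(user_type, ", ")
```

`values()` collapses the duplicate rows per user_id into a multivalue field, and `mvjoin` renders it as the single comma-separated string requested.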

Splunk dashboard can't send email

When I enter the relevant email address into the dashboard, it shows the error message: command="sendemail", 'rootCAPath' while sending mail to: — how do I solve this?

Why can't I see data that I could see one month ago, even though the index retention policy is 3 years?

Notes:
- Our retention policy is 3 years for that abc index.
- When I exported the result of that query one month ago, I was able to see that particular data.
- Today, when I run the exact same query, some data is missing.
- To give you the detail: today I am seeing approx. 20K fewer events out of 1L (100,000) events.

Why is past data missing even though the date range is inside the retention policy of that index?

SPL:

    (index=3y OR index=3mon) (host=x OR host=y) name="RegisteredUserLog" actionType=egg pointGet=true (platform=0 OR platform=1)
    | eval earned_date=strftime(_time, "%Y-%m-%d")
    | stats count by event_id earned_date
    | rename event_id as easy_id
    | table easy_id earned_date

Notes:
- The data I am seeing today is different from when I viewed and exported the same data one month ago, using the same date range.
- To give you an idea, I am seeing 20K fewer results compared to 1L (100,000) events one month ago, for the exact same SPL and time range.
- Retention of the index is not the issue.
- The date range is not the issue.

Please help. Thanks
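One way to check whether the oldest buckets really still cover the date range (rather than having been frozen despite the stated policy) — a sketch using one of the index names from the SPL:

```
| dbinspect index=3y
| stats min(startEpoch) as oldest max(endEpoch) as newest by index
| eval oldest = strftime(oldest, "%Y-%m-%d"), newest = strftime(newest, "%Y-%m-%d")
```

If `oldest` is later than the start of the query's date range, events have aged out (or been deleted) regardless of the configured retention.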

Issues with tab focus after Splunk 7.3.3 upgrade

We created a dashboard in Splunk using tabs as per the blog post below, and it was working perfectly: https://www.splunk.com/en_us/blog/tips-and-tricks/making-a-dashboard-with-tabs-and-searches-that-run-when-clicked.html

The dashboard was created on version 7.0, but Splunk has recently been upgraded to 7.3, and since the upgrade we have been facing an issue with tab focus. When we click on any tab, a blue line is shown under it so the user knows which tab they are on; but when the user clicks another tab, the blue line remains on the previous tab as well as appearing on the new one. So if the user clicks several tabs one by one, each clicked tab keeps its blue line, which becomes confusing. Ideally, when the user clicks a new tab, the blue line (focus) should be removed from the previous tab and should always be on the most recently clicked tab. The issue seems to be in either tabs.css or tabs.js, but we are unable to identify it. Can someone look into these two files from the above link and suggest what can be modified to fix this? Thanks in advance! ![alt text][1] [1]: /storage/temp/284686-2020-03-09-15-14-44-window.png
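Without seeing the exact tabs.js from the blog post, the symptom (every clicked tab keeps its underline) usually means the click handler only adds the active class and never removes it from the sibling tabs. The invariant the handler should maintain — exactly one active tab — can be sketched as a plain function (names here are hypothetical, not taken from the actual tabs.js):

```javascript
// Return a new tab list in which only the clicked tab is active.
// In the real tabs.js this would translate to "remove the active/focus
// class from all tabs, then add it to the clicked one" in the handler.
function setActiveTab(tabs, clickedId) {
  return tabs.map(function (tab) {
    return { id: tab.id, active: tab.id === clickedId };
  });
}

var tabs = [
  { id: "overview", active: true },
  { id: "errors", active: false },
  { id: "latency", active: false },
];
tabs = setActiveTab(tabs, "errors");
```

After the call, only "errors" is active; "overview" loses its focus line instead of keeping it.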

Okta Splunk Data collection error

After configuring the Okta Splunk TA app, I see the following error in _internal:

    HTTPError: 401 Client Error: Unauthorized for url: https://XXXYYZZZ.com/api/v1/logs

I verified that the token used is active.

Sending Alert email to the extracted user field

I have set up alerts in Splunk, and usually I hard-code the recipients' email IDs in the To field, which works flawlessly. But in this case I cannot hardcode the user email ID in the alert's To field, because the user ID has to be extracted from the event that satisfies the alert condition. Example (sample event that will satisfy the alert query):

    40.145.234.438 329x399740x1 **PERSON1** [09/Mar/2020:05:29:23 -0400] "DELETE /rest/api/2/issue/TES1-2/**butchers**?username=**PERSON2** HTTP/1.1" 204 - 40 "https://phutan-dev.mayhem.com/browse/RES1-2" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" "1k35v6f"

If the word "butchers" is identified, the event should be picked up (I can handle it up to this point); then, from the event, I need to extract the PERSON1 and PERSON2 fields and have the alert send email to these two people as PERSON1@mayhem.com and PERSON2@mayhem.com.
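Email alert actions can substitute fields from the first result row using $result.fieldname$ tokens, so one approach is to have the search build a recipients field and reference it in the To setting — a sketch (the field and stanza names are placeholders, not from the original alert):

```
# savedsearches.conf — the alert's search is assumed to end with something like:
#   ... | eval recipients = PERSON1 . "@mayhem.com," . PERSON2 . "@mayhem.com"
[my_butchers_alert]
action.email.to = $result.recipients$
```

Caveat: $result.*$ tokens read only the first result row, so this works best when the alert is set to trigger for each result.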

kv store problem

Hello, I'm running Splunk with Kubernetes and Ansible, and from time to time I get this error:

    Error in 'inputlookup' command: External command based lookup 'kv_alerts_prod' is not available because KV Store initialization has failed. Contact your system administrator.

(The raw REST response, logged twice by argus:aviation-splunk-rest-apis:services:splunkService, carries the same FATAL message with HTTP status 400.) It is followed by:

    KV Store process terminated abnormally (exit code 100, status exited with code 100). See mongod.log and splunkd.log for details.

Removing mongod.lock fixes the problem, but it keeps happening again. I'm wondering if there is another way to solve it. Thanks!

SSL/TLS with requireClientCert in web.conf fails

Hi! I have worked for a while to make Splunk use TLS and PKI as much as possible. At present the system consists of version 8.0.1 components only. I have managed to get the Splunk indexer to require a client certificate from the UFs, and it seems to work. The Splunk serverCert is a file containing the certificate, the private key, the issuer certificate, and the root-CA certificate (the issuer of the issuer certificate). The trusted root-CA certificates are in a separate text file. For this connection, requiring _requireClientCert = true_ works fine.

With the web UI, things are not going as elegantly. My web.conf looks like this:

    [settings]
    enableSplunkWebSSL = 1
    privKeyPath = etc/auth/splunkweb/splunk.pki.key
    serverCert = etc/auth/splunkweb/splunk.pki.txt
    requireClientCert = false
    sslVersions = tls1.2
    loginBackgroundImageOption = none
    login_content = This is a Test installation.

With the above, things are just fine. Changing _requireClientCert_ to _true_ breaks everything and the web GUI does not start. splunkd.log gets populated with lines like:

    03-09-2020 12:45:04.223 +0200 ERROR X509Verify - X509 certificate (CN=Root CA,O=X,C=Y) failed validation; error=19, reason="self signed certificate in certificate chain"

I get exactly the same error message if I connect using openssl to the indexer port where the UFs connect:

    openssl s_client -connect splunk:9998 -state -prexit
    * Certificate chain
      0 s:/C=Y/O=X/CN=Test Splunk Indexer
        i:/C=Y/O=X/CN=TestCA-1
      1 s:/C=Y/O=X/CN=TestCA-1
        i:/C=Y/O=X/CN=Root CA
      2 s:/C=Y/O=X/CN=Root CA
        i:/C=Y/O=X/CN=Root CA
    * SSL-Session:
        Protocol : TLSv1.2
    * Verify return code: 19 (self signed certificate in certificate chain)

My point here is that the connection between the UF and the indexer still works fine. The question is: is _requireClientCert_ simply not supported in the web GUI, or is there something in the documentation I have not understood correctly? If it is possible to require a certificate from the client (i.e. a web browser), is there a way to define the trusted CA certificates, and should any intermediate CA certificates be included as well? Another thing I have been wondering about is certificate validation and CRLs. Is there a way to make Splunk actually validate the certificates it is presented? Best regards, Petri
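Regarding where the trusted CAs are defined: an assumption worth testing is that splunkd validates presented certificates against the CA bundle referenced by sslRootCAPath in server.conf, and that error 19 ("self signed certificate in certificate chain") appears when that file does not contain the complete chain (root plus any intermediate CAs). A sketch, with the path as a placeholder:

```
# server.conf — bundle must contain the root CA and intermediates
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca-bundle.pem
```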

How to enable just one tag name from the CIM model?

How do I enable just one tag name from a CIM data model? E.g., I just want to use the network tag from the Inventory model, but the data model gives an error saying the other tag names are not included.


