After installing, I get this error.
Checking: /opt/splunk/etc/apps/force_directed_viz/default/savedsearches.conf
Invalid key in stanza [default] in /opt/splunk/etc/apps/force_directed_viz/default/savedsearches.conf, line 2: display.visualizations.custom.force_directed_viz.force_directed.theme (value: light).
↧
Why did the alert not trigger for the cron expression below?
The cron schedule 16-59/10 5-6 * * * was set up to alert on more than 0 events.
We had an event at 5:15 AM. Any idea why the alert did not trigger?
The query's time range is -5m@m.
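For reference, a minimal savedsearches.conf sketch of the alert as described (the stanza name is hypothetical; the schedule and time range are taken from the post):
# hypothetical stanza name
[vpn_event_alert]
enableSched = 1
cron_schedule = 16-59/10 5-6 * * *
dispatch.earliest_time = -5m@m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0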
↧
How to input an Excel file from an email automatically
Hello,
Every day I receive an email with an Excel file attached.
To get the data into Splunk, I have to save the file, convert it to CSV, and then add it to Splunk.
Is it possible to do this automatically?
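One possible direction, sketched under assumptions (the app name, script path, and schedule are hypothetical, and the script itself would have to fetch the mail and convert the attachment, e.g. with a mail client plus a spreadsheet library):
# inputs.conf: run a conversion script hourly and index whatever it prints
[script:///opt/splunk/etc/apps/mail_ingest/bin/mail_xlsx_to_csv.sh]
interval = 3600
sourcetype = csv
index = main
disabled = 0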
↧
Use of alarm activation condition
Hi guys,
I have a question about "activare alarm when" (i.e., "trigger alert when") under **Condition of activation** in the alert editing window.
I'm adding an image to explain better (sorry, it's in Italian):
![alt text][1]
[1]: /storage/temp/251078-cattura.png
Do you know where to find a reference guide on Splunk Docs, or do you have any hints?
Many thanks
↧
How to retrieve password from storage/passwords?
I've created a setup page for my app, with text and password fields.
Through this code I'm able to store the password in encrypted form, and I can see passwords.conf in the local folder. But whenever I open the setup page again it comes up blank, and if I enter new credentials it stores them in passwords.conf. After entering and saving credentials, I want the setup page to display the saved username and password (the password masked as *******) whenever a user opens it again.
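For reference, stored credentials can be read back through the storage/passwords REST endpoint, which is one way to pre-populate a setup page; a minimal sketch, with myapp as a hypothetical app name:
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/myapp/storage/passwords
Entries returned by this endpoint include the username and, for sufficiently privileged callers, a clear_password field that the setup page could mask before display.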
↧
len(_raw) vs |dbinspect rawSize
I use a simple query to determine the amount of data I've sent to Splunk:
index=x
|eval esize=len(_raw)
|timechart sum(esize) span=1h
This is pretty expensive when run over long time ranges. I also tried this:
|dbinspect index=x
|eval date=strftime(startEpoch,"%F")
|chart sum(rawSize) over date
|rename sum(*) as *
The results are different, with dbinspect reporting lower values than len(_raw).
Any ideas on a cheap way to get the right results?
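A common lighter-weight alternative, assuming access to the _internal index, is the license usage log (note it reflects what the license master recorded for the index, so it may not match len(_raw) exactly either):
index=_internal source=*license_usage.log type=Usage idx=x
| timechart span=1h sum(b)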
↧
How to use a Splunk Web alert to run a script on the forwarder to restart a service?
Hi, I have a service that reports to Splunk and sometimes falls over. Is there any chance I can automate recovery by telling Splunk to run a scripted input at the forwarder level to restart the service, log it, and feed the event to Splunk?
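As a sketch of the scripted-input direction (the app name, script path, service name, and interval are all hypothetical; note that alert-action scripts run on the search head, which is why the check itself has to live on the forwarder):
# inputs.conf on the forwarder
[script:///opt/splunkforwarder/etc/apps/watchdog/bin/check_service.sh]
interval = 300
sourcetype = service_watchdog
index = main
disabled = 0

#!/bin/sh
# check_service.sh (hypothetical): restart the service if it is down and emit an event.
if ! systemctl is-active --quiet myservice; then
    systemctl restart myservice
    echo "$(date -u +%FT%TZ) action=restart service=myservice"
fi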
Thank you for any answers
↧
Text input not filtering results correctly
So I have a dashboard with 3 different inputs. I noticed something really weird with my text input. If I enter the number 22 into my text input search, results on my dashboard stats table include 21, **22**, 24, 25, 27, 28, 29, 33, etc.
Why is my text input showing me results that don't match the text input exactly?
![alt text][1]
[1]: /storage/temp/251081-test.png
Below is my query for the stats table:
(index=cms_vm) $arrayfield$ $lun$ $datacenter$
| dedup VM
| eval DatastoreName=replace(DatastoreName,".+_(\d+)$","\1")
| eval StorageArray=replace(StorageArray,"^[^_]*_[^_]*\K.*$","")
| eval VM=upper(VM)
| eval StorageArray=upper(StorageArray)
| join type=outer VM [search index="cms_app_server" | fields VM Application]
| table VM OperatingSystem_Code Datacenter StorageArray DatastoreName Application
| rename OperatingSystem_Code AS "Operating System", StorageArray AS "Storage Array", DatastoreName AS "LUN"
Any help would be appreciated as I can't seem to pinpoint why this is occurring.
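In case it helps narrow things down, one way to force an exact field match from a text input in Simple XML is to wrap the token with a prefix and suffix (the field name here is assumed from the query above):
<input type="text" token="lun">
  <label>LUN</label>
  <prefix>DatastoreName="</prefix>
  <suffix>"</suffix>
</input>
Without a prefix/suffix, whatever is typed lands in the search as a bare term, which matches the value anywhere in the raw event rather than in the displayed column.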
↧
Show full x axis labels in chart
Hi, my x-axis labels for a chart are really long.
E.g. 2017-19-18 22:33:22:10247392048 ABSSHEUVCBKSOWNMSKWOKSNKJWK
Because it's long, Splunk truncates it, like 2017-19.......NKJWK.
How do I force Splunk to show the full x-axis labels?
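A chart option that may be relevant, hedged since truncation behavior varies by chart type and label length, is the x-axis label overflow mode in Simple XML:
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>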
Thanks.
↧
Indexed time vs event time odd issue
We have a couple of Splunk environments running in AWS. We rehydrated (deployed a new AMI to) one of the environments last week, and this week I have run into a strange issue with the timing of indexed data. Before the rehydrate, data was typically indexed within 5-10 minutes of the event time. Now it appears that exactly one hour has been added to the time it takes to get the data indexed. I am at a loss to explain this. The events are being forwarded from CloudTrail, and no CloudTrail changes have occurred.
Here is a sample of events prior to the environment rehydration (getting a new AMI):
e_time i_time
06/20/18 08:27:18 06/20/18 08:31:40
06/20/18 08:27:03 06/20/18 08:31:40
06/20/18 08:26:48 06/20/18 08:31:40
06/20/18 08:26:32 06/20/18 08:31:40
06/20/18 05:00:14 06/20/18 05:11:13
06/20/18 04:37:59 06/20/18 04:49:45
06/20/18 03:01:46 06/20/18 03:09:51
06/20/18 02:58:34 06/20/18 03:09:51
06/20/18 03:25:55 06/20/18 03:31:40
06/20/18 03:25:39 06/20/18 03:31:40
06/20/18 03:25:36 06/20/18 03:31:40
06/20/18 03:25:21 06/20/18 03:31:40
06/20/18 03:25:20 06/20/18 03:31:40
06/20/18 00:47:21 06/20/18 00:59:58
06/19/18 23:43:47 06/19/18 23:51:38
06/19/18 23:43:31 06/19/18 23:51:38
06/19/18 23:43:31 06/19/18 23:51:38
06/19/18 21:00:13 06/19/18 21:07:14
06/19/18 20:59:58 06/19/18 21:07:14
06/19/18 20:59:43 06/19/18 21:07:14
06/19/18 20:59:28 06/19/18 21:07:14
06/19/18 20:59:27 06/19/18 21:07:14
06/19/18 19:42:55 06/19/18 19:47:33
After the rehydration, you can see the one-hour delay:
e_time i_time
06/29/18 06:32:10 06/29/18 07:35:49
06/29/18 06:29:23 06/29/18 07:35:49
06/29/18 06:28:48 06/29/18 07:35:49
06/29/18 06:28:38 06/29/18 07:35:49
06/29/18 06:28:26 06/29/18 07:35:49
06/29/18 05:40:20 06/29/18 06:46:07
06/29/18 05:40:05 06/29/18 06:46:07
06/29/18 05:39:50 06/29/18 06:46:07
06/29/18 05:39:34 06/29/18 06:46:07
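A quick way to quantify the lag over time, sketched assuming the CloudTrail data lands in an index named x (adjust the index and sourcetype to match your environment):
index=x sourcetype=aws:cloudtrail
| eval lag=_indextime-_time
| timechart span=1h avg(lag) max(lag)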
↧
Find duplicate events for a pattern that occurred in the same timestamp
My requirement is to find duplicate events for a pattern that occurred within the same second of the timestamp, after stripping the millisecond value.
Queries that I tried that didn't give me 100% success:
search_pattern | timechart span=1s count | where count >1
search_pattern | timechart span=1s count | where count >1 | table _time, _raw
Not sure if 'eventcount summarize=false' or 'eventstats' would be of any help here.
P.S. I've recently started on Splunk, so my knowledge is limited, but I can work from pointers and take a trial-and-error approach.
Any pointers are appreciated.
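A sketch of one approach that keeps the duplicate events themselves rather than just per-second counts (search_pattern stands in for your actual search):
search_pattern
| bin _time span=1s
| eventstats count by _time
| where count > 1
| table _time, _raw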
↧
How do I use DB Connect on Linux to a Win SQL server?
Hi all,
I'm using Splunk Enterprise version 6.5.1 with Splunk DB Connect version 2.4.0 on a heavy forwarder running Linux.
I'm trying to connect to an MS SQL DB on a Microsoft 2014 server with a backslash in the name (name\name:non-default port).
I have tried numerous combinations of driver types and credentials (Windows-authenticated and not). I was using the generic MS driver, having followed previous answers/documentation about downloading the correct version (I've got 4.1 showing in my driver list; there wasn't an older version available on the Microsoft site). I thought I was getting somewhere with an error message saying "Login failed for user 'domain\username'" (despite knowing the creds were valid), until I read an answer here...
https://answers.splunk.com/answers/556315/splunk-db-connect-3-why-am-i-unable-to-login-using.html
...saying that I need to be using the 'MS-SQL Server Using jTDS Driver'. The only problem is that when I do this, it tells me the server host name is "Unknown". If I then change the backslash to a forward slash (thinking Linux would prefer this), it tells me...
com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Network error IOException: Connection refused (Connection refused)
I have also tried the accepted answer here...
https://answers.splunk.com/answers/228878/how-to-connect-splunk-db-connect-2x-to-ms-sql-usin.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
...but to no avail. I did change "MSSQLSERVER12" to my organisation's domain, which might have been a mistake, but I'm running a 2014 server, so would that change things?
I'm really starting to pull my hair out with this one, but I'm determined to get it working. Help me splunk>answers, you're my only hope.
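For what it's worth, jTDS usually takes a named instance as a URL property rather than a backslash in the host name; a hedged sketch of the JDBC URL form, with host, port, database, instance, and domain as placeholders:
jdbc:jtds:sqlserver://HOSTNAME:PORT/DATABASE;instance=INSTANCENAME;domain=YOURDOMAIN;useNTLMv2=true
The instance property replaces the \name part of name\name; domain and useNTLMv2 are only needed for Windows authentication.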
↧
Collect remote files using SFTP
Hi all,
Is there any native way of configuring Splunk or forwarders to periodically collect files using SFTP?
It seems that it does not exist, but I'm very surprised.
Am I wrong?
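In case a scripted-input workaround is acceptable, a minimal sketch (the app name, host, paths, and interval are hypothetical, and key-based auth is assumed to already be set up):
# inputs.conf on the forwarder
[script:///opt/splunkforwarder/etc/apps/sftp_pull/bin/sftp_pull.sh]
interval = 600
disabled = 0

#!/bin/sh
# sftp_pull.sh (hypothetical): fetch new remote files into a directory
# that a separate [monitor://...] input is already watching.
sftp -b /opt/splunkforwarder/etc/apps/sftp_pull/bin/pull.batch user@remote.example.com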
Thanks in advance
↧
Need help optimizing search in Hunk
We are currently using MapR-FS, and with our restrictions on directory structure we are having a hard time getting optimized searches with Hunk.
Basically, the search will find all the events and then just keep searching through all the files.
Our restrictions require us to have a folder called current that our current-hour logs go into; at the top of the hour the file is rolled, and we move the rolled file into subdirectories based on date/time.
Our current directory structure looks like:
/mapr/mapr.oly.cequintecid.com/user/mapr/data/(sourcetype)/(host)/current/(year)/(month)/(day)/(hour)
The current hour goes into a log file in /mapr/mapr.oly.cequintecid.com/user/mapr/data/(sourcetype)/(host)/current
and then is moved at the top of the hour to the corresponding
...(year)/(month)/(day)/(hour)
folder
We had search optimization before, when we were putting the current-hour log file directly into the corresponding hour subdirectory, but we cannot do this anymore due to internal restrictions.
Suggestions are welcome.
Here is our indexes.conf for the virtual index we are using:
[mapr-vol]
vix.description = MapR using volumes instead of subdirectories
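# The et/lt settings below map the (year)/(month)/(day)/(hour) path components
# to an event-time range per file; lt.offset = 3600 sets the latest time one
# hour after the extracted earliest time.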
vix.input.1.et.format = yyyyMMddHH
vix.input.1.et.regex = /user/mapr/.*?/.*?/.*?/(\d+)/(\d+)/(\d+)/(\d+)/.*
vix.input.1.lt.format = yyyyMMddHH
vix.input.1.lt.offset = 3600
vix.input.1.lt.regex = /user/mapr/.*?/.*?/.*?/(\d+)/(\d+)/(\d+)/(\d+)/.*
vix.input.1.path = /user/mapr/data/${sourcetype}/${host}/...
vix.provider = maproly
↧
Guacamole Docker logs in Splunk
Hi,
I would like the Guacamole logs to be forwarded to the Splunk server. I added the log forwarding parameters I found in the Splunk docs and ran: docker inspect -f '{{.HostConfig.LogConfig.Type}}' containerID
The output was: Splunk.
But when I checked on the Splunk server, ran the query for the docker host, and searched for guacamole, it did not return anything.
--log-opt splunk-token=******************* \
--log-opt splunk-url=https://splunk aws server:8089 \
--log-opt splunk-insecureskipverify=true \
--log-opt splunk-caname=SplunkServerDefaultCert \
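For context, a minimal sketch of the full command these options would belong to (the image name is hypothetical, and note the splunk logging driver sends to the HTTP Event Collector, which listens on port 8088 by default):
docker run --log-driver=splunk \
    --log-opt splunk-token=YOUR_HEC_TOKEN \
    --log-opt splunk-url=https://your-splunk-host:8088 \
    --log-opt splunk-insecureskipverify=true \
    guacamole/guacamole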
I did the same thing on another host using the same splunk token and was able to see the docker logs on the splunk server.
Can someone please help me with that?
Thank you
↧
System-wide search concurrency (ad hoc + scheduled)
Hello Splunkers,
I'm trying to understand the concept of search head concurrency.
I have an SHC with three search heads, each with 10 CPUs (running Splunk 6.4).
| rest splunk_server=splunksearchhead101 /services/server/status/limits/search-concurrency gives me
max_hist_searches = 16 (10*1 + 6), max_hist_scheduled_searches = 8 (0.5 * max_hist_searches), and max_auto_summary_searches = 4 (0.5 * max_hist_scheduled_searches).
But sometimes the SH throws the error "system wide concurrency reached limit=24 reached=24".
So my questions are:
Is system-wide concurrency = max_hist_searches + max_hist_scheduled_searches?
I'm of the opinion that system-wide concurrency = max_hist_searches (16).
Also, what would my cluster-wide search concurrency be?
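For what it's worth, the numbers line up if the system-wide limit is the sum of the two (an observation from the figures above, not a confirmed formula):
max_hist_searches           = 1 * 10 + 6 = 16
max_hist_scheduled_searches = 0.5 * 16   = 8
16 + 8 = 24   (matches limit=24 in the error)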
Thanks & Regards,
Ankith
↧
Using values from one search as 'earliest' and 'latest' values in another
I have a sequence of events from a VPN session. The last message in the sequence contains a field for the duration of the session, and I have constructed a search that accurately calculates the start time. I would like to use the start time and the end time as **earliest** and **latest** constraints to search for and display the value of the **src_ip** field, which is found somewhere in the middle of the session. I've tried lots of different things with varying success. In the end, I want a single row that contains the start, end, duration, user, and src_ip fields.
Here's what I expected to work. My methodology was to find the last event in the session, calculate the session start time, and then pass those values along with some other fields to another search to pull out the value of the **src_ip** field. In this particular version of my search, I'm getting errors about the values for **earliest** and **latest**, though I'm pretty sure this entire approach is wrong anyway. I realize dashboards allow you to use tokens like this, but it's unclear to me how to use field values outside of a dashboard.
index=vpn user=ab12345 Cisco_ASA_message_id=113019
| table index, _time, user, duration
| eval earliest=(_time-duration), latest=_time, session_start=strftime(_time-duration,"%Y-%m-%d %H:%M:%S"), session_end=strftime(_time,"%Y-%m-%d %H:%M:%S"), session_duration=tostring(duration,"duration")
| table index, user, earliest, latest, session_start, session_end, session_duration
| append [search index=$index$ user=$user$ Cisco_ASA_message_id=722051 earliest=$earliest$ latest=$latest$ | table src_ip]
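One approach that may fit outside a dashboard, sketched here with the field names from the search above, is the map command, which substitutes field values from each result row into a templated search:
index=vpn user=ab12345 Cisco_ASA_message_id=113019
| eval earliest=_time-duration, latest=_time
| map maxsearches=10 search="search index=vpn user=$user$ Cisco_ASA_message_id=722051 earliest=$earliest$ latest=$latest$ | table src_ip"
map runs one search per input row, so assembling the final row (start, end, duration, user, src_ip) would still take an eval or stats inside the mapped search.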
↧
Quotes around first word in lookup table value
I am using an input lookup to exclude results from a search (e.g. index=main NOT [| inputlookup test_lookup.csv | fields value]). The searches I am trying to exclude contain values with quotes, such as **"foo" bar bat**.
It seems that if the first word in a lookup table value is surrounded by quotes, Splunk takes the quoted word as the value for that field and ignores the rest. A lookup of the example above returns only **foo**. Quotes appear to work fine around words, so long as they are not the first word in the value.
I've cruised around looking for the answer and came across a number of posts suggesting triple quoting, using the hex char value for quotes, etc., and I've also tried a number of things on my own without any success. Thus I have come here.
The lookup result I am trying to get is: **"foo" bar bat**
Here is the contents of my lookup file:
value,comment
"foo" bar bat, double quotes around first word
foo "bar" bat, double quotes around second word
foo bar "bat", double quotes around third word
"""foo""" bar bat, triple-double quotes around first word
\"foo\" bar bat, backslash escaped double quotes around first word
'"foo" bar bat', single quotes around the whole field
and here are the results of the lookup table:
![lookup_results][1]
[1]: /storage/temp/252097-lookup-with-quotes.jpeg
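One variant that doesn't appear in the file above, offered as a hedged suggestion: RFC 4180-style CSV escaping wraps the whole field in double quotes and doubles each embedded quote (whether Splunk's lookup parser honors this for a leading quote is worth testing):
value,comment
"""foo"" bar bat",field wrapped in quotes with embedded quotes doubled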
Thanks in advance for any assistance.
↧
Mimecast for Splunk v2 IP whitelist?
I'm hosting my Splunk instance in AWS, and I'm trying to whitelist the specific IPs that the app will use when it calls api.mimecast.com over HTTPS.
Unfortunately, AWS doesn't allow you to use hostnames in security groups. Does anyone know the specific IPs that need to be whitelisted in order for this app to work?
↧