I have to search for two kinds of events from the same index using different time ranges. For example, one eventtype is "login" and the other eventtype is "breach". In a single search I need to search for both eventtypes, but when I search over the last 4 hours, it should search eventtype "breach" for the last 4 hours and eventtype "login" for the last 8 hours. Can anyone help me with this?
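One hedged way to do this in a single search is to put a per-clause earliest inside each parenthesized group, since time modifiers in the search string apply to the terms they are grouped with; the outer time range picker still has to be at least 8 hours wide (or All time) so it does not cut off the longer window. A minimal sketch, where the index name is a placeholder:
index=your_index ((eventtype=breach earliest=-4h) OR (eventtype=login earliest=-8h))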
↧
How can I search for one eventtype for 4 hours and a second eventtype for 8 hours in the same search?
↧
Palo Alto Networks App for Splunk: How to regenerate the lookup table from disk?
We are getting the following error when we run queries:
The lookup table 'pan_vendor_info_lookup' does not exist. It is referenced by configuration 'pan:newapps'.
It looks like someone deleted the lookup definition in the Splunk instance, but the lookup file still exists on disk. I just do not know how to get Splunk to regenerate the lookup table and its definition.
Note: I tried reinstalling the add-on, but no luck.
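For reference, a file-based lookup definition is just a transforms.conf stanza pointing at the CSV on disk, so a hedged sketch of what seems to be missing would look like the following (the filename below is a guess, not the add-on's real file name; substitute whatever actually exists in the app's lookups directory):
[pan_vendor_info_lookup]
filename = pan_vendor_info.csv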
↧
↧
After upgrading Splunk DB Connect the date fields disappeared. How do I configure a fix?
In DB Connect 2.0, when I created an input, it created fields like date_mday, date_month, date_week, etc.
Now that I've updated to version 3.1.1, it isn't creating these fields anymore. Is this a bug, or is there something I have to configure to fix it?
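For reference, those date_* fields are normally produced when Splunk parses a timestamp out of the raw event text, which the DB Connect 3 inputs may no longer trigger. One hedged workaround, rather than a fix, is to rebuild them at search time from _time; a sketch, with field names mirroring the old ones and strftime formats to adjust as needed:
... your base search ...
| eval date_mday=strftime(_time, "%d"), date_month=lower(strftime(_time, "%B")), date_wday=lower(strftime(_time, "%A")), date_hour=strftime(_time, "%H")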
↧
How can I create a table of my search results with a count of each matching dest_ip value?
I have this search of events:
eventtype=cisco-firewall src_ip="*" (dest_ip="192.168.1.2" OR dest_ip="192.168.2.2" OR dest_ip="10.10.1.1" )
For each src_ip, I'd like to list the matching dest_ip values and the count of events for that src_ip, so it would look like:
src_ip | dest_ip | count
212.123.123.123 | 192.168.1.2, 10.10.1.1 | 123
215.123.123.123 | 192.168.1.2, 10.10.1.1 | 55
214.23.23.23 | 192.168.2.2 | 894
211.45.55.55 | 192.168.1.2, 192.168.2.2, 10.10.1.1 | 235
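A hedged sketch of what I think should produce that table (values() collects the distinct dest_ip values per src_ip, count gives the event count):
eventtype=cisco-firewall src_ip="*" (dest_ip="192.168.1.2" OR dest_ip="192.168.2.2" OR dest_ip="10.10.1.1")
| stats values(dest_ip) AS dest_ip count BY src_ip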
↧
How can I include the sequence sunburst chart in the visualization picker of the Search app?
Hello, I am trying to add the sequence sunburst chart to the visualization picker of the Search app. Could anybody please help me with that?
↧
↧
Has anyone used Palo Alto Networks MineMeld to send logs to Splunk? Can you help with configuration?
Has anyone ever sent logs to Splunk using MineMeld? If so, how? I currently have access to MineMeld, but I am looking for a way to set up the configuration to send the logs to Splunk.
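On the Splunk side, one hedged starting point is just a network input in inputs.conf for MineMeld to send to; the port, sourcetype, and index below are placeholders, not anything MineMeld-specific, and the MineMeld-side configuration is exactly the part I'm unsure about:
[tcp://5514]
sourcetype = minemeld
index = threat_intel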
↧
Problem with the "map" command
Hey guys!
So, I am having issues with the map command and was hoping someone could help me with this.
I have a Choropleth Map that displays the number of events per country according to a search string. What I am trying to do is drill down from the country (plus the user name from a multiselect input used to populate the map) to a statistics table when someone clicks on a country. The problem is that in this table I am also using the map command.
Here is an example of what my search looks like, more or less:
index=myindex | iplocation ip_address | search user="$UserDD$" AND Country="$PEC$" | map search="search index=myindex hash=$$hash$$" maxsearches=100 | stats ...
I keep getting the error message "Error in 'map': Did not find value for required attribute 'hash'."
I already tried "$hash$", but that doesn't work either.
PS: it works just fine if I hard-code a country name (for example, Brazil) and write "hash" with only two $ ("$hash$").
Can anyone please help me?
Thank you very much!!
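From what I understand, map substitutes $hash$ with the value of the hash field from each result that reaches it, so every row feeding map has to carry a non-null hash. A hedged sketch of the pipeline I'm aiming for (it assumes hash is an already-extracted field in myindex):
index=myindex hash=*
| iplocation ip_address
| search user="$UserDD$" AND Country="$PEC$"
| fields hash
| dedup hash
| map search="search index=myindex hash=$$hash$$" maxsearches=100
| stats ...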
↧
Why do our forwarders disappear from the dedicated '/opt/splunk' file system?
Occasionally, our forwarders disappear from the dedicated `/opt/splunk` file system. Where can we find out information about the handling of this file system? re-mounting, etc...
How can we find out when the filesystem got mounted?
↧
Help rebuilding subsearch that keeps timing out
So here's my issue. We are creating a chart that shows each user and which desktops they use. The desktops are divided into two categories. I need counts of users for category 2 who are NOT in category 1. I have created a query that uses a subsearch, and it works great with up to 7 days' worth of data. However, they're asking for 30 days' worth of data, and when I bump it up, the subsearch times out.
I've been trying to rebuild this without a subsearch, but I haven't been able to figure it out yet, so I'm asking for some help from the Splunk world.
Here's my search:
index=apache_logs host="prod" GET ("URL1" OR "URL2")
| rex field=_raw " - (?<UserID>.*?) \?desktop=(?<DesktopName>\w+)"
| search NOT
[ search index=apache_logs host="prod" GET ("URL1" OR "URL2")
| rex field=_raw " - (?<UserID>.*?) \?desktop=(?<DesktopName>\w+)"
| eval DesktopName=upper(DesktopName)
| search DesktopName=*CAT1
| stats count by UserID
| fields - count]
| stats count by UserID DesktopName
| chart count over UserID by DesktopName
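One subsearch-free direction I've sketched out but not validated at 30 days yet (it assumes category 1 desktop names end in CAT1, as in the subsearch above): compute per user how many of their events are category 1 desktops, then keep only users with none.
index=apache_logs host="prod" GET ("URL1" OR "URL2")
| rex field=_raw " - (?<UserID>.*?) \?desktop=(?<DesktopName>\w+)"
| eval DesktopName=upper(DesktopName)
| eventstats sum(eval(if(like(DesktopName, "%CAT1"), 1, 0))) AS cat1_events BY UserID
| where cat1_events=0
| chart count OVER UserID BY DesktopName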
↧
↧
Why am I getting a "failed to fetch data" error under Access controls when trying to map a Splunk role to an LDAP group?
When I go to Access controls and try to map a Splunk role to an LDAP group, I get a "failed to fetch data" error. Why is that happening?
↧
JSON Field Extraction names with curly brackets
Hello
I'm currently searching over a collection of events that contain some JSON structure. When I apply spath over the field containing the JSON, the resulting field names for a specific node of the JSON structure vary according to the arrays in the message. I need to do some arithmetic over this particular node in order to sum all the values from all the events. These are the resulting fields when using spath:
agreementsGroup.agreements.agreementParticipants.proportionalClaimAmount
agreementsGroup.agreements.agreementParticipants{}.proportionalClaimAmount
agreementsGroup.agreements{}.agreementParticipants{}.proportionalClaimAmount
agreementsGroup{}.agreements.agreementParticipants{}.proportionalClaimAmount
As you can see they all have different names but refer to the same data I need to sum. Here is an example of my search and the respective result:
index="idx_cuadre_core_gw" sourcetype="rbt_cuadre_gw_src_type" | spath input=msg_body | stats sum("agreementsGroup.agreements.agreementParticipants{}.proportionalClaimAmount") by "referenceIdSAP" "policy.currencyCode" | rename "referenceIdSAP" as ID_SAP "policy.currencyCode" as MONEDA sum("agreementsGroup.agreements.agreementParticipants{}.proportionalClaimAmount") as CLAIM_AMOUNT
![alt text][1]
How can I treat these differently named fields as one in order to sum them and display the table without missing any data?
[1]: /storage/temp/216696-splunk-search.png
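One hedged approach is to coalesce the variants into a single field before the stats, along these lines (this assumes at most one of the variants is populated per event; if the {} fields are multivalue, an mvexpand may be needed first):
index="idx_cuadre_core_gw" sourcetype="rbt_cuadre_gw_src_type"
| spath input=msg_body
| eval claim_amount=coalesce('agreementsGroup.agreements.agreementParticipants.proportionalClaimAmount', 'agreementsGroup.agreements.agreementParticipants{}.proportionalClaimAmount', 'agreementsGroup.agreements{}.agreementParticipants{}.proportionalClaimAmount', 'agreementsGroup{}.agreements.agreementParticipants{}.proportionalClaimAmount')
| stats sum(claim_amount) AS CLAIM_AMOUNT BY referenceIdSAP, "policy.currencyCode"
| rename referenceIdSAP AS ID_SAP, "policy.currencyCode" AS MONEDA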
↧
How do I sort a statistics table by a date column?
I have a search where one of the columns in the statistics table contains values like the following:
Sat Oct 07 2017 07:30:00 GMT-0400 (EDT)
Sat Oct 07 2017 12:00:00 GMT-0400 (EDT)
Thu Oct 05 2017 08:00:00 GMT-0400 (EDT)
Tue Oct 03 2017 10:00:00 GMT-0400 (EDT)
Tue Oct 03 2017 18:00:00 GMT-0400 (EDT)
Wed Oct 04 2017 13:00:00 GMT-0400 (EDT)
Wed Oct 04 2017 22:30:00 GMT-0400 (EDT)
Wed Oct 04 2017 17:30:00 GMT-0400 (EDT)
Wed Oct 04 2017 16:15:00 GMT-0400 (EDT)
Wed Oct 04 2017 08:00:00 GMT-0400 (EDT)
I am trying to sort the complete table based on the above field, which is the date field, but the sort comes out in alphabetical order of the day names rather than in date order.
So I want to sort the table by the dates above and show only future dates from a given time onward, not past results.
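One hedged approach is to convert the string into epoch time with strptime, then filter and sort on that; a sketch, assuming the column is called event_date (substitute the real field name) and that the trailing "(EDT)" is stripped before parsing:
... | eval event_epoch=strptime(replace(event_date, "\s*\(\w+\)$", ""), "%a %b %d %Y %H:%M:%S GMT%z")
| where event_epoch >= now()
| sort event_epoch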
↧
↧
Feature Suggestion: Panel-local variables for prebuilt panels
Prebuilt panels would be more useful if they allowed local variables. This would parallel the way macros allow arguments.
Local variables at the panel (and possibly row) levels would allow multiple instantiations of the same prebuilt panel to be used on the same page. Right now these must be provided inline.
Suggested syntax:
some-value
This would save many lines of code for me in my current Splunking endeavors. Row-level scope might be nice as well, but panel-level would be enough.
↧
VPN user drop tracking for 5 minute window using delta function
Hi,
Here is the query I have `index=vpn sourcetype=vpn_prod srauserid1=* earliest=-10m |timechart span=5m dc(srauserid1) AS all_user | delta all_user as diffuser | search diffuser < -20 | rename diffuser as "Users Dropped" | table _time,"Users Dropped"`
I want this alert to trigger if the user count drops by 20 in the last 5 minutes.
Time | Usercount | Users Dropped
1:25 PM | 100 | 0
1:30 PM | 50 | -50
In that case, it should trigger at 1:30 PM. Every time the count drops by 20 in a 5-minute window, we need to be alerted. I am not sure if I am using earliest=-10m correctly. The way I think I have it now is to look back over the last 10 minutes, and under "Trigger Conditions" I have "Trigger alert when Number of Results is greater than 0 in 5 minutes, for each result". Is my assumption correct, or is there a better way to do this?
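For reference, here is a hedged variant of the same search; the only change is using <= -20 so a drop of exactly 20 also fires, and it assumes the alert itself is scheduled every 5 minutes with "Number of results greater than 0" as the trigger condition:
index=vpn sourcetype=vpn_prod srauserid1=* earliest=-10m
| timechart span=5m dc(srauserid1) AS all_user
| delta all_user AS diffuser
| where diffuser <= -20
| rename diffuser AS "Users Dropped"
| table _time, "Users Dropped"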
↧
↧
Splunk DB Connect v2 with MongoDB error
I'm having a problem connecting DB Connect v2 to MongoDB.
I'm using the following stanza in db_connection_types.conf:
[mongo]
displayName = Mongo
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:mongodb://*IP*:*port*/events
jdbcDriverClass = mongodb.jdbc.MongoDriver
port = 27017
I have the Unity JDBC driver and the MongoDB Java driver in /opt/splunk/etc/apps/splunk_app_db_connect/bin/lib.
When creating a connection in the web GUI, I receive the following error when validating:
java.lang.NullPointerException: null value in entry: db_identifier_quote_string=null
Any help would be appreciated!
JB
↧
Help with the buckets and hot/cold data settings
We need 12 months of hot data, 3 months of cold, and nothing else.
I put the following in /opt/splunk/etc/system/local/indexes.conf:
[main]
frozenTimePeriodInSecs = 39312000
That setting is supposed to remove anything more than 1.25 years old from my data.
Then I restarted Splunk, but the size of the indexes did not go down, and I still have less than 500 MB remaining in the partition, so that server is not accepting input from forwarders. The files taking up 60% of that space are the /local/splunk/hot/named_application/db* files, and they did not change after restarting the server.
Shouldn't the setting added to indexes.conf have removed anything over 39312000 seconds (1.25 years) old from my indexes? I am using Splunk 6.5.2.
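For what it's worth, my understanding (unverified) is that frozenTimePeriodInSecs only freezes a bucket once the newest event in that bucket is older than the limit, so hot buckets that still contain recent events will not shrink; a size cap can be added so disk pressure also rolls the oldest buckets out. A sketch of what I think the stanza should look like, where the maxTotalDataSizeMB value is just a placeholder for whatever fits the partition:
[main]
frozenTimePeriodInSecs = 39312000
maxTotalDataSizeMB = 400000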
The documentation from Splunk is a convoluted mess. Please don't answer by saying "Read this" and pointing me to a user manual.
Thanks for your help,
George
↧
Can I reduce my common user role configuration stanzas?
I was wondering if there is a clean way to reduce the repetition in my stanzas in authorize.conf. I was hoping that, similar to indexes.conf, I could do some real cleanup by taking something like this:
[role_infomgmtprd_user]
srchIndexesAllowed = app_infomgmtprd
srchIndexesDefault = app_infomgmtprd
importRoles = user
srchJobsQuota = 5
cumulativeSrchJobsQuota = 10
rtsearch = disabled
schedule_rtsearch = disabled
[role_infomgmtprd_power]
srchIndexesAllowed = app_infomgmtprd
srchIndexesDefault = app_infomgmtprd
importRoles = power
srchJobsQuota = 5
cumulativeSrchJobsQuota = 10
rtsearch = disabled
schedule_rtsearch = disabled
[role_owa_power]
srchIndexesAllowed = app_owa
srchIndexesDefault = app_owa
importRoles = power
srchJobsQuota = 5
cumulativeSrchJobsQuota = 10
rtsearch = disabled
schedule_rtsearch = disabled
[role_owa_user]
srchIndexesAllowed = app_owa
srchIndexesDefault = app_owa
importRoles = user
srchJobsQuota = 5
cumulativeSrchJobsQuota = 10
rtsearch = disabled
schedule_rtsearch = disabled
and turning it into something like this:
[role_user]
srchJobsQuota = 5
cumulativeSrchJobsQuota = 10
rtsearch = disabled
schedule_rtsearch = disabled
[role_power]
srchJobsQuota = 5
cumulativeSrchJobsQuota = 10
rtsearch = disabled
schedule_rtsearch = disabled
[role_infomgmtprd_user]
srchIndexesAllowed = app_infomgmtprd
srchIndexesDefault = app_infomgmtprd
importRoles = user
[role_infomgmtprd_power]
srchIndexesAllowed = app_infomgmtprd
srchIndexesDefault = app_infomgmtprd
importRoles = power
[role_owa_power]
srchIndexesAllowed = app_owa
srchIndexesDefault = app_owa
importRoles = power
[role_owa_user]
srchIndexesAllowed = app_owa
srchIndexesDefault = app_owa
importRoles = user
But that didn't seem to work.
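The only other idea I've had (untested, and a [default] stanza would apply to every role stanza the file layers over, including built-in roles like admin, so it may be too blunt) is to push the shared settings into a [default] stanza:
[default]
srchJobsQuota = 5
cumulativeSrchJobsQuota = 10
rtsearch = disabled
schedule_rtsearch = disabled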
↧