I want to make a search that matches an event, then gets the next event.
Example:
Event1 _time event_hash status_label
Event2 _time event_hash status_label
Event3 _time event_hash status_label
Event4 _time event_hash status_label
Match:
Event2 _time event_hash status_label
Event3 _time event_hash status_label
Match:
Event1 _time event_hash status_label
Event2 _time event_hash status_label
↧
How can I get the first event after matching an event?
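One possible sketch, with placeholder index, sourcetype, and match value (substitute your own): search results come back newest-first, so with streamstats the previous row is the chronologically next event, and its fields can be copied onto each matching event:

```spl
index=main sourcetype=my_events
| streamstats window=1 current=f last(event_hash) as next_event_hash last(status_label) as next_status_label
| search status_label="MATCH_VALUE"
| table _time event_hash status_label next_event_hash next_status_label
```

The matching events then carry the next event's hash and status alongside their own fields.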
↧
Calculate % CPU by process on a Windows machine
Hello,
I am trying to calculate the average CPU % utilized by the top 10 processes on a Windows machine. When I run the search I see many values of 100, and I am not clear what they mean. I tried reading many articles but did not find anything related. Can someone please help me with this? I am trying this:
index=perfmon host=$Host$ object=Process counter="% Processor Time" (instance!="_Total" AND instance!="Idle" AND instance!="System") | timechart avg(Value) by instance useother=f limit=0
I see a lot of values of 100 and I am lost; I'm not sure what those values really mean.
Can someone please help me get the top 10 processes with the highest CPU % utilization by process?
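One likely explanation, worth verifying against your data: the Windows Process performance object reports % Processor Time per core, so on a multi-core host a busy process can report up to 100 times the number of cores. A minimal sketch for the top-10 ranking, reusing the search from the question:

```spl
index=perfmon host=$Host$ object=Process counter="% Processor Time" instance!="_Total" instance!="Idle" instance!="System"
| stats avg(Value) as avg_cpu by instance
| sort 10 -avg_cpu
```

If the core count is known, dividing avg_cpu by it would normalize the result to a 0-100 scale.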
↧
Compare two search results and generate mismatched events with all the fields
I have two search results: search1 produces a table with 15 columns, and search2 produces a table with exactly the same column names, but I'm not sure the row values match. I want to compare both search results and output the mismatched rows with all the columns, like below.
search1 output like
c1, c2, c3,.......c15 ---->column
r10, r20, r30,.......r150 ------>row1
r11, r21, r31,.......r151 ------>row2
r12, r22, r33,.......r152 ------>row3
search2 output like
c1, c2, c3,.......c15 ---->column
r10, r20, r30,.......r150 ------>row1
r11, r55, r31,.......r151 ------>row2
r12, r22, r20,.......r152 ------>row3
expected output like
r11, r21, r31,.......r151 ------>row2
r12, r22, r33,.......r152 ------>row3
I tried the query below, but it didn't help me.
| set diff [search eventtype="e1" | fields f1,f2, ...] [search eventtype="e2" | fields f1,f2, ...]
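An alternative sketch, using the illustrative column names c1..c15 from the example (substitute the real field names): combine both searches, group by all columns, and keep rows that appear under only one eventtype:

```spl
(eventtype="e1") OR (eventtype="e2")
| stats dc(eventtype) as source_count by c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15
| where source_count = 1
```

Note that stats drops rows where any by-field is null, so a fillnull step beforehand may be needed.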
any help much appreciated!
Thanks in Advance,
M
↧
Can I trigger a saved search more frequently than once per minute?
Hey guys,
Can I trigger a saved search more frequently than once per minute?
I have a two-server configuration: an indexer and a search head.
The problem is that the most frequent schedule the cron expression allows is * * * * *, if I'm not wrong.
↧
REST API trigger limits for search head
Hey guys,
What are the REST API trigger limits for a search head, e.g. per minute?
I'm going to call my search head from a third-party system and want to know the limits.
Thanks in advance.
↧
How to run a Splunk query for a field with brackets
It might be a very simple answer; however, I have not been able to find it so far.
My Splunk query has a field named "Size(MB)". I cannot get around it with an escape character, eval, or rex to run the query with this type of field.
index=dbx ServerName="bestserver" sourcetype=stats | timechart span=1d **avg(Size(MB))** by DBname
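A sketch of one common workaround: rename the field (quoting the name that contains parentheses) before aggregating, so timechart doesn't try to parse the parentheses as a function call:

```spl
index=dbx ServerName="bestserver" sourcetype=stats
| rename "Size(MB)" as SizeMB
| timechart span=1d avg(SizeMB) by DBname
```

Inside an eval expression, such a field would instead be referenced with single quotes, e.g. 'Size(MB)'.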
↧
Not able to obtain search results via Splunk REST API
Hello,
I am using Splunk Python SDK to connect to Splunk via REST API.
However, the code I have written does not return any results:
import time
import splunklib.results as results

query = 'search index=test | stats sum(bytes) as bytes by ip'
kwargs_oneshot = {"earliest_time": "-2m", "latest_time": "now", "enable_lookups": True, "rf": ["ip"]}
job = splunk_connection_service.jobs.create(query, **kwargs_oneshot)
while not job.is_done():
    time.sleep(.2)
for result in results.ResultsReader(job.results()):
    print(result)
If I remove the "stats" command from the query, then I only get "_raw" data. What should I change in my code to get the "stats" command output?
Thank you.
↧
How can I send a whole dashboard with a bunch of panels as an email alert?
I have different panels in one dashboard; how can I create an alert to send the whole dashboard in an email?
Thanks In Advance.
↧
Non-English characters are not indexed?
Hey guys,
It seems that if a field in a Splunk index contains non-English characters, searching on it is very slow.
I would say it's not indexed.
How so?
Thanks in advance.
↧
Splunk supporting add-on for Active directory
Level=ERROR, Pid=15184, File=search_command.py, Line=373, Abnormal exit: # host: aa.bbb.cc.ddd: Could not access the directory service at ldap://aa.bbb.cc.ddd:389: socket connection error: [Errno 110] Connection timed out
I am getting the above error when I try to connect to AD using the add-on.
↧
Convert ISO8601 to another date time format
Hello Community,
I have certain field values extracted using the rex command. The timestamp format of the field values is ISO 8601.
For example -
> lgk":"2018-09-24T04:41:54Z"
I need to convert it in the following format
> %m-%d-%Y %H:%M:%S %p
I tried using strftime, but that doesn't work; it gives blank results. Can somebody please help me with some pointers?
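A sketch, assuming the extracted field is named lgk as in the example: strftime expects an epoch time, so the string timestamp must first be parsed with strptime, then reformatted:

```spl
... | eval lgk_epoch=strptime(lgk, "%Y-%m-%dT%H:%M:%SZ")
| eval lgk_formatted=strftime(lgk_epoch, "%m-%d-%Y %H:%M:%S %p")
```

One caveat on the target format itself: %H is the 24-hour hour, so if the %p (AM/PM) marker is wanted, %I may be the better hour code.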
Thanks,
-Ameya
↧
Apply coloring based on a column value string to the complete row
Hi All,
Context X Y Z
ABC 98 97 67
DEF 50 45 23
GHI 3 2 1
So, if Context is ABC, I have to apply one color coding to X, Y, and Z; if the context is DEF, another color coding needs to be applied, and so on.
Also, X, Y, and Z are not the only columns; I have n number of columns like that, and sometimes they are dynamic rather than static.
Regards,
BK
↧
Splunk Add-on for NetApp ONTAP
Hello team,
I am in the process of integrating Splunk with NetApp, and I have read both the app and add-on documents. I have some doubts about the DCN setup.
For VMware, Splunk provides an OVA for the DCN setup. For NetApp we also require a DCN: what kind of DCN is required for NetApp?
I have 2 heavy forwarders where I am collecting other logs such as firewall, IPS, AV, etc. Can I use one of these as the DCN, and what steps would be required?
OR
Do we require a separate virtual host acting as the DCN?
Secondly, we collect logs from NetApp using the API. Why do we require syslog as well to get complete filer logs?
A quick response would be helpful. Thanks.
↧
Pie chart labels: rename field
How do I rename the value "other(n)" to "OTHERS" in a pie chart after the stats command?
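The "other(n)" slice is produced by the pie chart visualization itself, not by stats, so renaming it is a chart option rather than an SPL change. If I remember the option name correctly (worth verifying in the Simple XML chart configuration reference), it would be set on the panel like this:

```xml
<option name="charting.chart.sliceCollapsingLabel">OTHERS</option>
```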
↧
How do you add an field/column to an existing kvstore?
We have a KV store that's been used for about a year.
Now we need to add a new field/column to the KV store, but I can't find any info on how to do this or whether it's even possible.
So my question is: is this possible? If so, how?
Or is the only option to create a completely new KV store?
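As far as I know, KV store collections don't enforce a fixed schema by default, so adding a field is usually just a configuration change. A sketch with placeholder names (my_collection, my_collection_lookup, existing_field, and new_field are illustrative):

```ini
# collections.conf -- declare the new field's type; existing records are unaffected
[my_collection]
field.new_field = string

# transforms.conf -- add the field to the lookup definition so searches can use it
[my_collection_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, existing_field, new_field
```

Existing records simply have no value for the new field until something writes one.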
↧
Implementing Condition to set up an alert
Here is my SPL
host=sjk0s0jqw.aprhem.com "Abend Error" | head 3
| rex max_match=0 field=_raw "\'(?<error_name>[\w\s-]+?)\'\s+(?i)IS\serror_Code\sFOR"
| stats values(error_name) as error by host
The output of that command looks like this:
host error
------- --------
host01 SB32
SB33
host02 SB32
SB33
host03 SB32
SB33
SB32 and SB33 are harmless error codes.
So, I want to add a condition so that I am alerted only if there is any error other than SB32 or SB33.
I can't figure out how to proceed with it.
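One sketch building on the search above: filter the multivalue error field down to unexpected codes with mvfilter, and keep only hosts where any remain:

```spl
host=sjk0s0jqw.aprhem.com "Abend Error"
| rex max_match=0 field=_raw "\'(?<error_name>[\w\s-]+?)\'\s+(?i)IS\serror_Code\sFOR"
| stats values(error_name) as error by host
| eval unexpected=mvfilter(NOT match(error, "^(SB32|SB33)$"))
| where mvcount(unexpected) > 0
```

With this as the alert search, triggering on "number of results > 0" fires only on unexpected codes. The | head 3 from the original was dropped so all matching events are considered.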
↧
Sourcetype override based on host is not working
Hi All,
I have some switch logs which are forwarded to Splunk from 3 UFs into one index. Based on host values, I renamed the sourcetype by configuring props and transforms. I can see the new sourcetypes in the index, but the issue is that when I search for a particular sourcetype, I get no results.
index = index1 ----giving results and able to see sourcetypes in the field values as expected
index = index1 sourcetype = sourcetype1 ----- no results
props.conf
[orig_sourcetype]
TRANSFORMS-rename = index1_host1,index1_host2,index1_host3
transforms.conf
[index1_host1]
REGEX = host1
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype1
WRITE_META = true
[index1_host2]
REGEX = host2
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype2
WRITE_META = true
[index1_host3]
REGEX = host3
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype3
WRITE_META = true
Did I miss any configuration? Could anyone please help? Thanks in advance.
↧
Compare weekly values and show percentage difference.
Hi,
I would like to compare one week's tabled data to the previous week's and calculate the percentage difference for each value of the field note_label.
Initial search:
search... | stats count by note_label
note_label count
abc 10
abcd 20
abcde 30
I would like to show the data as:
note_label count (week1) count(week2) %Change
abc 10 20 100%
abcd 20 5 -75%
abcde 40 60 50%
I may be following the wrong route, as I tried the search below with no luck, and may need to use a different method. This search only gives me the note_label field value names, not the counts.
earliest=-1w latest=now my_search | stats earliest(note_label) as e_note_label latest(note_label) as l_note_label | eval 1w=(l_note_label-e_note_label)/e_note_label*100
| appendcols [ search earliest=-2w latest=now my_search | stats earliest(note_label) as e_note_label latest(note_label) as l_note_label | eval 2w=(l_note_label-e_note_label)/e_note_label*100 ]
| fields note_label 1w 2w
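A different sketch (my_search stands for the base search): bucket each event into a week with eval, pivot with chart, then compute the change:

```spl
my_search earliest=-2w@w latest=@w
| eval week=if(_time >= relative_time(now(), "-1w@w"), "this_week", "last_week")
| chart count over note_label by week
| eval pct_change=round((this_week - last_week) / last_week * 100, 1)
```

Snapping to @w compares two complete calendar weeks; the time boundaries can be adjusted if rolling 7-day windows are wanted instead.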
thanks
↧
How to get the count for multiple field results?
Hi All,
I have been trying for some time to figure out how to get the count of events for each individual field, displayed with the events as rows and the fields staying as columns, where each cell holds the count for that event.
I have tried using stats count for each field name but did not get any results.
Please use the worked example below as a reference.
Worked example: how do I get from the first table on the left to the table on the right, as shown?
![alt text][1]
[1]: /storage/temp/256058-test.png
Thanks!
zovin
↧
Windows Security logs - using transaction but unable to get proper results
sourcetype="WinEventLog:Security" host=PC* (EventCode=5059 OR EventCode=4648) | transaction maxspan=5s startswith=eval(EventCode=5059) endswith=eval(EventCode=4648) keeporphans=false | table _time,host,EventCode,Account_Name
I'm trying to query all computers and find event code 5059 followed by event code 4648 within 5 seconds on the same computer; however, the search matches events from 2 different computers into the same transaction. How can I improve this search query?
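One likely fix, sketched from the query above: transaction groups events by any field names listed before its options, so adding host should keep each transaction on a single computer:

```spl
sourcetype="WinEventLog:Security" host=PC* (EventCode=5059 OR EventCode=4648)
| transaction host maxspan=5s startswith=eval(EventCode=5059) endswith=eval(EventCode=4648) keeporphans=false
| table _time, host, EventCode, Account_Name
```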
![alt text][1]
[1]: /storage/temp/256059-capture.jpg
↧