How do you delete a dashboard that is private when the user who owns it has left the company? Please let me know the process.
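A minimal sketch of one way an admin can locate private dashboards left behind by a departed user (the owner name "departed_user" is a placeholder); once identified, they can be reassigned or deleted in Splunk Web, or removed from disk under $SPLUNK_HOME/etc/users/<owner>/<app>/local/data/ui/views/:
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| where 'eai:acl.sharing'="private" AND 'eai:acl.owner'="departed_user"
| table title eai:acl.app eai:acl.owner eai:acl.sharing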
↧
How do you delete a private dashboard?
↧
How do you assign a value to a field if it is missing from the event?
I have sample data which has all the fields, like below:
[11/07/2018 09:59:00] CAUAJM_I_40245 EVENT: ALARM ALARM: JOBFAILURE JOB: HYGIEIA_EC2_LOAD_ROOT **MACHINE: hexx.com** EXITCODE: 110
Below is an event with the machine field missing:
[11/07/2018 09:17:13] CAUAJM_I_40245 EVENT: ALARM ALARM: JOBFAILURE JOB: FADB_OUT_CROSSINVEST_PFX_BOX EXITCODE: 9
Below is the search I am using
index=abc | rex field=_raw "MACHINE\:\s(?<Machine_Name>[^ ]+).*"
| eval time=strftime(_time,"%Y/%m/%d %H:%M:%S")
| eval node=host
| eval resource="Auto"
| eval type="Alarm"
| eval severity=1
| eval Machine_Name=case(isnull(Machine_Name),"NONE",isnotnull(Machine_Name),Machine_Name,1=1,"unknown")
| eval description="CAUAJM:" .CAUAJM ." STATUS:" . STAT . " JOB:" . JOB_Name . " MACHINE:" .Machine_Name. " with ExitCode:" .EXITCODE. " at:" . time . " Environment:AWP"
| table node resource type severity CAUAJM job_event JOB_Name Machine_Name time description
The description shows blank for the second event because there is no MACHINE in it. How can I populate it so that the description is not empty? I have attached a screenshot for better understanding.
![alt text][1]
[1]: /storage/temp/255372-auto.png
Thanks,
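One minimal sketch of the usual fix: string concatenation with `.` returns NULL when any operand is NULL, so default every field used in the description (not only Machine_Name) before building it. Assuming the rex capture group is named Machine_Name as above:
index=abc
| rex field=_raw "MACHINE\:\s(?<Machine_Name>[^ ]+)"
| fillnull value="NONE" Machine_Name
| eval description="MACHINE:" . Machine_Name . " at:" . strftime(_time, "%Y/%m/%d %H:%M:%S")
Other fields in the concatenation (JOB_Name, EXITCODE, etc.) can be wrapped in coalesce(field, "unknown") the same way.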
↧
↧
Sparkline does not show trends
I have the following query; however, the sparkline doesn't show the trend. Instead there is one single value, and everything else is a straight line. What did I miss?
sourcetype=metrics name=http*://* | rename "data.avg_response_time_ms" as avg_response_time_sec , "data.avg_error_rate" as avg_error_rate | eval avg_response_time_sec=avg_response_time_sec/1000 | stats sparkline(avg(avg_response_time_sec),5m) as Trend, by name,avg_response_time_sec ,avg_error_rate
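For what it's worth, one likely cause is that grouping by avg_response_time_sec and avg_error_rate puts every distinct value on its own row, leaving each sparkline a single data point. A sketch that groups only by name (field names taken from the query above):
sourcetype=metrics name=http*://*
| rename "data.avg_response_time_ms" as avg_response_time_sec, "data.avg_error_rate" as avg_error_rate
| eval avg_response_time_sec=avg_response_time_sec/1000
| stats sparkline(avg(avg_response_time_sec),5m) as Trend avg(avg_response_time_sec) as avg_response_time_sec avg(avg_error_rate) as avg_error_rate by name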
↧
How do you convert a month number to a month string?
Some timestamps use month numbers like "11" rather than strings like "Nov".
I'm using this eval to make the conversion:
| eval month=if(isnotnull(MM),if(MM="01","Jan",if(MM="02","Feb",if(MM="03","Mar",if(MM="04","Apr",if(MM="05","May",if(MM="06","Jun",if(MM="07","Jul",if(MM="08","Aug",if(MM="09","Sep",if(MM="10","Oct",if(MM="11","Nov",if(MM="12","Dec","INV")))))))))))),MM)
Is there a better way?
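One shorter alternative, assuming MM is always a two-digit month number, is to round-trip through strptime/strftime; if that proves unreliable for bare month numbers, a flat case() with true() as the default branch is the usual fallback:
| eval month=strftime(strptime(MM, "%m"), "%b")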
↧
Help with Query for alert
Hello experts,
I am new to Splunk. I have a file with the values below (I have indexed time as well). I need to write a query to alert if any id has text=started and 2 subsequent other texts (they can be anything) for the same id in < 5 minutes.
id text
123 started
123 in progress
123 halted
213 started
213 finished
456 started
456 running
456 in progress
Kindly help.
I tried
index=test text="started" | stats count by id
But that shows only the "started" events and does not include the other texts. The other texts are random, so I cannot specify them in a search.
Thanks a lot,
Cheers,
Naomi
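A rough sketch of one stats-based approach, assuming the id and text fields are already extracted; it flags any id whose events include "started" plus at least two more events, all within 5 minutes:
index=test
| stats count values(text) as texts range(_time) as duration_sec by id
| where count>=3 AND duration_sec<300 AND isnotnull(mvfind(texts, "started"))
Measuring the 5 minutes from the "started" event specifically, rather than from the earliest event per id, would need a streamstats refinement, so treat this as a starting point.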
↧
↧
Search Results for User Incorrect Password
Hello, I have a user that occasionally experiences lack of connectivity over VPN into one of my servers. He can connect most of the time, but there are instances where he's unable to remote in with RDP.
How can I search for the user/Active Directory (already set up in Splunk environment) to see if there are any incorrect logins? It's a simple query, but I'm new to the system.
Thank you in advance.
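As a starting point, failed logons are Windows Security EventCode 4625; the index and sourcetype below are assumptions, so adjust them to wherever your Windows/AD security logs land:
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625 Account_Name="username"
| stats count by host, Account_Name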
↧
search requires stitching together two distinct events from a single sourcetype
I require some assistance with my search query: I need to search a mail log to extract the top recipients by message size, based upon a unique common ID.
I am able to search events by the *size* field to see the message size values from the **sender** addresses, but I am unable to search by the recipient address while also showing the unique ID.
So I need to combine these two events, one showing the message size and the other the recipient addresses, based upon a common queue ID. I know the stats command is preferable to the transaction command, as transaction is costly. I also believe I can chart it with xyseries, but I am unsure how to put this together; I have tried various stats commands, but I have a strong feeling I am not executing them correctly.
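A hedged sketch of the usual stats pattern (the field names queue_id, size, and recipient are assumptions; substitute whatever your mail log extractions are called): roll both event types up by the shared queue ID, then aggregate by recipient.
index=mail sourcetype=maillog
| stats max(size) as size values(recipient) as recipient by queue_id
| mvexpand recipient
| stats sum(size) as total_size count as messages by recipient
| sort - total_size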
↧
How to avoid data loss on HF on restart
I have the ServiceNow add-on and DB Connect on a heavy forwarder, so I can't use multiple HF instances because of data duplication and licensing. Both apps (ServiceNow and DB Connect) are in real-time sync, and I also need to change props & transforms frequently, which means frequent restarts. In this case, how do I avoid data loss? Will just using indexer acknowledgment resolve it?
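Indexer acknowledgment protects data that is already in flight to the indexers, while checkpointed inputs (DB Connect rising columns, the ServiceNow add-on's last-run timestamps) should resume from their checkpoints after a clean restart. A hedged outputs.conf sketch with useACK enabled; the group name and indexer hosts are placeholders:
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true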
↧
How to extract multiple fields and values
I have raw information as follows: the Kaspersky output appears twice within one 'section'.
------------------------------------------------------------ snip of one section --------------------------------------------------------------------
08/11/2018
07:43:58.000
kaspersky output:
Scanned objects : 19
Total detected objects : 0
Infected and other objects : 0
Disinfected objects : 0
Moved to backup : 0
Removed objects : 0
Not disinfected objects : 0
Scan errors : 0
Corrupted objects : 0
Password protected objects : 0
Skipped : 0
*Between the above/below output are many lines with all kind of information that is not really relevant*
kaspersky output:
Scanned objects : 1
Total detected objects : 0
Infected and other objects : 0
Disinfected objects : 0
Moved to backup : 0
Removed objects : 0
Not disinfected objects : 0
Scan errors : 0
Corrupted objects : 0
Password protected objects : 0
Skipped : 0
*And then there are many lines in the bottom that is not really relevant as well*
------------------------------------------------------------ snip of one section --------------------------------------------------------------------
The target is to have, e.g., a time table with the values of each line; the field would be, e.g., "Scanned objects" and its values would be 19 and 1 (in this case), *and then a similar approach for all the other lines*.
I tried to extract the fields using a regular expression, but it seems it does not select every value (of, e.g., Scanned objects), meaning I have blanks in the output itself.
Please advise how to actually get this done.
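A minimal sketch, assuming one event contains a whole section (index and sourcetype names are placeholders): rex with max_match=0 captures every occurrence into a multivalue field, which avoids the blanks a single-match extraction leaves behind. The same pattern repeats for the other counters.
index=your_index sourcetype=kaspersky
| rex max_match=0 "Scanned objects\s+:\s+(?<scanned_objects>\d+)"
| rex max_match=0 "Total detected objects\s+:\s+(?<total_detected_objects>\d+)"
| table _time scanned_objects total_detected_objects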
↧
↧
"transaction" command returns different results by search mode(fast,smart,verbose) OR whether using optimization.
My environment : splunk stand-alone ver7.1.4
*I found same phenomenon in ver7.1.3
I executed search below by using two `lookup tables`.(*I attached them to this page.)
| inputlookup test_lookup_2.csv
| lookup test_lookup_1.csv key OUTPUT service as a
| mvexpand a
| eval a=if(a=service, null(), a)
| eval _time=now()
| transaction row_num
| stats count by a
When I run it in `fast mode` or `smart mode`, Splunk returns `No results found.`
But when I run it in `verbose mode`, Splunk returns results normally!
Also, if I add `| noop search_optimization=false` as the last line of this search and run it, results are returned normally regardless of the search mode!
Why does this difference occur?
This behavior is too weird to be intended, so I think it is an issue.
If someone knows about it, please tell me.
[1]: /storage/temp/257571-lookups.zip
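For reference, the opt-out variant described above (optimization disabled for this search only) looks like this:
| inputlookup test_lookup_2.csv
| lookup test_lookup_1.csv key OUTPUT service as a
| mvexpand a
| eval a=if(a=service, null(), a)
| eval _time=now()
| transaction row_num
| stats count by a
| noop search_optimization=false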
↧
ERROR ExecProcessor
Hi there,
I can see this issue in Splunk that looks like this:
ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/SA-Utils/bin/app_imports_update.py" No handlers could be found for logger "splunk.rest"
I suspect it's some kind of auth issue.
Has anyone ever come across this before?
Thanks!
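If it helps while troubleshooting, a quick sketch for seeing how often the message occurs and on which host (component and log_level are standard extractions on index=_internal):
index=_internal sourcetype=splunkd log_level=ERROR component=ExecProcessor "app_imports_update.py"
| stats count by host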
↧
Date Values as Column Names
Hello ,
I am writing a query in Splunk to retrieve events from a JSON log file. I am getting a table as shown in the image capture.png.
But I want to use the date values as column names. Please refer to the capture1 image. Can you please help me as early as possible?
I look forward to hearing from you.
Thank you in advance.
![alt text][2]
[1]: /storage/temp/256573-capture1.png
[2]: /storage/temp/256572-capture.png
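Without the screenshots, only a generic sketch of the usual pattern for turning dates into column names is possible; your_index, your_json, and some_field are placeholders for whatever your JSON events contain. xyseries or transpose can do the same pivot on an existing table.
index=your_index sourcetype=your_json
| eval day=strftime(_time, "%Y-%m-%d")
| chart count over some_field by day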
↧
How to implement future proof customization for Splunk elements using CSS and JS?
Hello.
I am developing an app for Splunk and I am facing an issue that possibly many of you are facing too: changes, between Splunk versions, in the classes and DOM of Splunk elements such as dropdown boxes.
Since I customize Splunk elements in my app via CSS or JS, some of my changes break from version to version (this already happened from 7.0 to 7.1 and from 7.1 to 7.2), forcing me to redo the customizations.
Can someone share how you are addressing this issue? How can I ensure that my customizations will persist when a new Splunk version is released?
Thanks in advance.
Best Regards,
Fábio Lourenço
↧
↧
Not able to modify the alert; getting "Server Error" while updating
I am not able to modify the alert: whenever I click Save to save my changes, it shows a server error. Please suggest.
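As a first troubleshooting step (a sketch, not a diagnosis), the server-side error that accompanies the failed save usually appears in splunkd's own logs around the same time:
index=_internal sourcetype=splunkd log_level=ERROR earliest=-15m
| table _time host component _raw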
↧
Adding a row to a table containing text and eval value
I am creating a table that tallies each type of request per day. The table is as follows:
Day | Assigned | Resolved | Open
Jan 1 | 13 | 2 | 12
Jan 2 | 6 | 2 | 12
My code:
bin _time span=day
| stats count(eval(request="queue")) as Assigned count(eval(request="resolved")) as Resolved count(eval(current_ticket_state="open")) as Open by _time
| eval _time = strftime(_time, "%d-%b-%y")
| rename _time as Day
What I need now is a row that has the text 'Carryover' in the Day column and, in the 'Assigned' column, a value computed as carryover = Resolved - Assigned from the previous month. Here is the expected output:
Day | Assigned | Resolved | Open
Carryover | 5 | |
Jan 1 | 13 | 2 | 12
Jan 2 | 6 | 2 | 12
How should I achieve this?
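A rough sketch of one way to do it with append; the index name and the previous-month time range are assumptions, and the appended row lands at the bottom, so reposition it with a sort if needed:
index=tickets earliest=@mon
| bin _time span=1d
| stats count(eval(request="queue")) as Assigned count(eval(request="resolved")) as Resolved count(eval(current_ticket_state="open")) as Open by _time
| eval Day=strftime(_time, "%d-%b-%y")
| append
    [ search index=tickets earliest=-1mon@mon latest=@mon
      | stats count(eval(request="resolved")) as prev_resolved count(eval(request="queue")) as prev_assigned
      | eval Day="Carryover", Assigned=prev_resolved-prev_assigned
      | fields Day Assigned ]
| table Day Assigned Resolved Open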
↧
How do I configure the Splunk cold data path for separate indexers (peers) in an indexer cluster?
I currently have 4 indexers. I have a new mounted drive that I am trying to send Splunk cold data to.
[volume:cold]
coldpath = /mnt/splunk_cold
Can anyone please explain how I can set this stanza on the cluster master (master app) for each individual indexer, given that the slave-apps configuration has a higher precedence than system/local?
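A hedged sketch of what the pushed bundle could contain; the app and index names are placeholders. Note that a volume stanza takes `path` rather than `coldpath`, and the per-index `coldPath` then references the volume. Placing this in an app under $SPLUNK_HOME/etc/master-apps/ on the cluster master and applying the cluster bundle distributes it to every peer, where it takes precedence over system/local as you describe.
# master-apps/<your_cluster_app>/local/indexes.conf
[volume:cold]
path = /mnt/splunk_cold

[your_index]
homePath   = $SPLUNK_DB/your_index/db
coldPath   = volume:cold/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb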
↧
Search for all fields which have some string in the field name
Hello
How can I get results for only the specific fields whose name is like something?
E.g., get all fields which have "status" in their field name.
I tried this, but it doesn't work:
sta*
I also want to do this later:
sta* OR STA* OR Sta*
Thank you in advance
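Field names are case-sensitive, but the fields and table commands accept wildcards, so a sketch like this keeps only the matching columns (your_index is a placeholder); for operating on such fields rather than just displaying them, foreach accepts the same wildcards:
index=your_index
| fields *status* *Status* *STATUS*
| table *status* *Status* *STATUS*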
↧
↧
Different searches based on the input field value
Hello everybody
In my dashboard I have two input fields:
**Primary_field = \***
**Secondary_field = \***
My current search looks like:
index=\* ip=$primary_filed_value$
I want to extend it with the secondary field.
But if I write my search like:
index=\* ip=$primary_filed_value$ user=$secondary_filed_value$
and **$secondary_filed_value$ = \***,
I get ONLY the results where user != NULL.
But I need everything:
ip user
1.1.1.1 alex
1.1.1.1 bill
1.1.1.1 NULL
Any ideas?
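One common workaround, sketched under the assumption that the secondary input's default token value is \*: fill the missing user values before filtering, so that user=\* also matches events that have no user at all.
index=* ip=$primary_filed_value$
| fillnull value="NULL" user
| search user=$secondary_filed_value$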
↧
How to prevent indexing duplicated events
How can I prevent indexing duplicate events forwarded from different forwarders? The monitored log files record the same events, but on different servers. The requirement exists to maintain the availability of the monitored events even when one of the servers is powered off. Thank you.
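Splunk does not deduplicate at index time out of the box, so both copies will be indexed and both count against the license. A common compromise, sketched below, is to deduplicate at search time on a hash of the raw event; the alternative is to designate one server's forwarder as the active source and enable the other only on failover.
index=your_index sourcetype=your_sourcetype
| eval event_hash=md5(_raw)
| dedup event_hash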
↧
sort command seems to change statistics count
My problem is that I cannot understand why I get a different statistics count depending on where I place the dedup command, before or after the sort command.
Query 1:
host="web_application" status=200 action=purchase* file=succ*
| table JSESSIONID action status
| rename JSESSIONID as "UserSessions"
| sort "UserSessions"
| dedup "UserSessions"
Results:
Statistics: (3569)
Query 2:
host="web_application" status=200 action=purchase* file=succ*
| table JSESSIONID action status
| rename JSESSIONID as "UserSessions"
| dedup "UserSessions"
| sort "UserSessions"
Statistics: (5726)
Why is there a difference between the two queries when the only difference is the location of dedup?
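A likely explanation: sort keeps only 10,000 results by default, so in query 1 the result set is truncated before dedup runs, while in query 2 dedup runs on the full set first. Using sort 0 removes the limit, as in this sketch of query 1:
host="web_application" status=200 action=purchase* file=succ*
| table JSESSIONID action status
| rename JSESSIONID as "UserSessions"
| sort 0 "UserSessions"
| dedup "UserSessions"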
↧