Splunk and Connect:Direct or DMaaP
Does anyone out there have experience with having Splunk send search alert information directly to a ticketing system using Connect:Direct or DMaaP? I've been asked to look into the possibility of doing just this, and I am not aware of any application or process that does it. Any insight would be helpful.
↧
How can I get the data of a record as tabular output in Splunk?
Hello, I am trying to find a way to get the data in a record into a table output in Splunk. The data in the record is as follows:
os_repo_status: last-update-result: [2018-01-25 18:53:14] {success} System update completed successfully
: remote-trigger: 1_per_hour __UCMPROXY__/tomorrowland-patching/os-update-trigger/__FQHOST__
: reboot-eligible: true
: next-update: NO_SCHEDULE
: update-eligible: true
: updates-available: false
I want to get the "updates-available" field from under "os_repo_status" in the list above into a table output. My basic search is:

    | inputlookup mdbstaticlooup where ....... | table os_repo_status

but I am not able to see anything in the os_repo_status column of the table. Can anyone please help me get this solved?
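A hedged sketch of one way to pull that value out, assuming the whole block above is stored in a single multi-line field named os_repo_status in the lookup (lookup name copied from the search above; add your own where clause back in):

    | inputlookup mdbstaticlooup
    | rex field=os_repo_status "updates-available:\s*(?<updates_available>\S+)"
    | table updates_available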
↧
↧
Is there a list of unusable field names?
We recently ran into this case:
- A user logged a message that included the text `tag="some stuff"`
- The user tried to search by that field, but got an error like `unable to find tag "some stuff"`
`tag` appears to be a reserved word, but I was unable to find a list of any other cases like this. It's unfortunate that the tags functionality (which isn't in use) uses the same syntax as field matching here.
We'd like to add some code to warn on this kind of case; is there a list of all such keywords which, when searching `keyword=foo`, would not actually match the field name `keyword`?
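For what it's worth, `tag` gets special treatment in the `search` command rather than being matched as a plain field name. A hedged workaround sketch: move the comparison into `where`, which evaluates the extracted field directly (the index name here is hypothetical):

    index=myindex "some stuff"
    | where tag="some stuff"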
↧
What to do about blocked messages?
I noticed a lot of "blocked" messages coming from one of my heavy forwarders (HFW) today, and I am unsure what to do about it. The HFW in question processes a lot of netflow and stream events.
All events are from host=bos-flow01, source=/opt/splunk/var/log/splunk/metrics.log, sourcetype=splunkd:

    05-15-2018 18:27:23.109 +0000 INFO Metrics - group=queue, name=httpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1234, largest_size=1237, smallest_size=0
    05-15-2018 18:26:52.108 +0000 INFO Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1234, largest_size=1262, smallest_size=199
    05-15-2018 18:26:52.108 +0000 INFO Metrics - group=queue, name=httpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1233, largest_size=1350, smallest_size=0
    05-15-2018 18:26:21.108 +0000 INFO Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1233, largest_size=1243, smallest_size=250
    05-15-2018 18:25:50.108 +0000 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1160, largest_size=1291, smallest_size=175
    05-15-2018 18:25:50.108 +0000 INFO Metrics - group=queue, name=httpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1232, largest_size=1293, smallest_size=0
    05-15-2018 18:25:19.108 +0000 INFO Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1233, largest_size=1241, smallest_size=0
    05-15-2018 18:25:19.108 +0000 INFO Metrics - group=queue, name=httpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1233, largest_size=1238, smallest_size=0
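To see which queues are blocking and how often, the usual approach is to chart the blocked events from the forwarder's own metrics.log (the host name is taken from the events above):

    index=_internal host=bos-flow01 source=*metrics.log group=queue blocked=true
    | timechart span=1m count by name

As a rule of thumb, blocking that starts at indexqueue and backs up into typingqueue usually points at downstream congestion (the indexers or the network path to them) rather than at the queue sizes themselves.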
↧
Single Value Viz - These results may be truncated. This visualization is configured to display a maximum of 1000 results per series, and that limit has been reached.
Hi friends,
I'm facing the error "These results may be truncated. This visualization is configured to display a maximum of 1000 results per series, and that limit has been reached." when trying to use a **Single Value** viz with the **Sparkline** feature.
I've tried some different approaches shared here on answers.splunk.com without success. Do you have any clue?
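A hedged guess at a workaround, assuming the panel's search feeds the sparkline from a timechart: coarsen the span so the series stays under 1000 points for the chosen time range, for example:

    ... | timechart span=1h count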
Regards.
Marcos Freitas
↧
↧
What is the most efficient way to use dashboard tokens for filtering?
I have a dashboard where I am using tokens to filter the results of the individual panels. The use cases for the filter are:
Token=anything (*)
Token=specific_value
Token=anything BUT specific_value
I have the first two tested and working, but can't figure out the best way to handle the third scenario. I have been incorporating the token into my searches like this (the fillnull ensures the field is always equal to something, even when it is missing from an event):

    | fillnull value=NULL field | search field=$token$

This works great for scenarios 1 and 2, but there is obviously no way (I think?) for `field=$token$` to express the opposite (`!=`) that the last case needs. Is there a better way to leverage the token in my search so I can filter on all three scenarios: all values, a specific value, and everything NOT a specific value?
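One common pattern, sketched here, is to have the input's choice values carry the whole comparison instead of just the value (token and field names are hypothetical):

    <input type="dropdown" token="field_filter">
      <label>Filter</label>
      <choice value="field=*">Anything</choice>
      <choice value="field=&quot;specific_value&quot;">Only specific_value</choice>
      <choice value="field!=&quot;specific_value&quot;">Anything but specific_value</choice>
      <default>field=*</default>
    </input>

The panel search then becomes `| fillnull value=NULL field | search $field_filter$`, and the same fillnull trick keeps the `!=` case from silently dropping events that lack the field.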
Thanks!
↧
How to configure a receiver to listen on port 9997 when trying to receive data from a remote machine on the local network?
Hello Team Splunk!
I am trying to receive data from a remote machine on the local network. To do so, I configured a receiver to listen on port 9997, as shown below in Figure 1. However, when I check *netstat*, I see that the port is not actually listening for incoming connections (Figure 2). Does anyone know what is going wrong?
Also, I should mention that I am using *Splunk 6.0* on the Windows 7 operating system.
![alt text][1]
*Figure 1*: Splunk set to listen on 9997
![alt text][2]
*Figure 2*: Ports in the 999x range not open
[1]: /storage/temp/250691-splunk-receivingdata-portnotlistening.png
[2]: /storage/temp/250694-portsin999snotopen.png
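For reference, receiving can also be enabled directly in inputs.conf rather than through the UI; a minimal sketch, assuming a default install:

    # %SPLUNK_HOME%\etc\system\local\inputs.conf
    [splunktcp://9997]
    disabled = 0

After a restart, `netstat -an | findstr 9997` should show the port in a LISTENING state; if it still does not, splunkd.log is worth checking for port-binding errors, along with any local firewall rules.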
↧
How to pivot certain fields in a set of logs?
I need to pivot only certain fields in a set of logs. Sample data below. Can you tell me how to go about doing this?
**Sample Data**
_time Id Server DetailId Name Value
5/15/18 11:00 1 Server1 1 Name1 Value1
5/15/18 11:00 1 Server1 2 Name2 Value2
5/15/18 11:00 1 Server1 3 Name3 Value3
5/15/18 11:01 2 Server1 4 Name1 Value4
5/15/18 11:01 2 Server1 5 Name2 Value5
5/15/18 11:01 2 Server1 6 Name3 Value6
**Desired Results**
_time Id Server Name1 Name2 Name3
5/15/18 11:00 1 Server1 Value1 Value2 Value3
5/15/18 11:01 2 Server1 Value4 Value5 Value6
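A hedged sketch of one way to do this, using the combine-then-split trick so xyseries can keep all three row keys (field names are taken from the sample; the base search is assumed):

    ... base search ...
    | eval row = _time . "##" . Id . "##" . Server
    | xyseries row Name Value
    | eval _time = tonumber(mvindex(split(row, "##"), 0)), Id = mvindex(split(row, "##"), 1), Server = mvindex(split(row, "##"), 2)
    | fields - row
    | table _time Id Server Name1 Name2 Name3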
↧
↧
timechart for multiple, but similar, itemnames
I am attempting to grab data from a set of Items that all have relatively similar names, i.e.:
ItemName = LocX_VarY.DataTypeZ
Where the individual words are descriptors of where the data point was taken from, such as:
Location0001_Windspeed.10M
Now, say that I want to create a timechart that plots multiple different items, like:
Location0001_Windspeed.Below10M
Location0001_Windspeed.10M
Location0001_Windspeed.100M
Location0038_Windspeed.Below10M
etc.
**How can I structure my search** so that I don't have to manually enter all of the locations/datatypes to get every applicable ItemName and the data that corresponds to it?
Note that the examples provided were just examples, not representative of what the data looks like.
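A hedged sketch, assuming each event carries a numeric measurement field (called `value` here, hypothetically) alongside ItemName: wildcards in the base search pick up every matching location and data type, and the `by` clause splits out one series per item automatically:

    index=mydata ItemName="Location*_Windspeed.*"
    | timechart avg(value) by ItemName limit=0

`limit=0` keeps the chart from collapsing the long tail of series into OTHER.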
↧
Does Splunk support 3rd party certs without passwords?
I have a customer with an internal CA. The private key generated for each host does not have a password. Does Splunk support this? If I leave sslPassword blank, it'll pick up the value from /opt/splunk/etc/system/default/server.conf (which is "password").
Can I just use `sslPassword =` in my local server.conf? Will Splunk come up with some kind of encrypted value for a null password?
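A minimal sketch of the override being described, assuming the certificate is referenced under [sslConfig] in a local server.conf (the certificate path is hypothetical); whether Splunk re-encrypts an explicitly empty value on restart is worth verifying on a test instance:

    # $SPLUNK_HOME/etc/system/local/server.conf
    [sslConfig]
    serverCert = /opt/splunk/etc/auth/mycerts/server.pem
    sslPassword =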
Thx.
C
↧
StatsD line metric protocol with dimensions extension
I'm trying to find a StatsD client that supports the "StatsD line metric protocol with dimensions extension".
I don't even see this as a proposed format anywhere else. Is this something Splunk just came up with? All the client libraries I can find use a metric-name prefix instead.
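For context, the dimension-extended line that Splunk's metrics documentation describes looks like this (metric name and dimensions are made-up examples):

    cpu.idle:0.5|g                                plain StatsD line
    cpu.idle:0.5|g|#region:us-west-1,host:web01   with the dimensions extension

The `#key:value,...` trailer resembles the DogStatsD tag syntax, which may be the closest thing to it among existing client libraries.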
↧
Tenant / index on Overview page tanking searches
I am not sure why, but on the main page none of the searches return results unless I remove the index references from the tstats search. I assume this is related to the multi-tenancy support breaking it.
For example:

    | tstats values(nodename) AS nodename count FROM datamodel=Cisco_IOS_Event WHERE Cisco_IOS_Event.product="IOS" index=* BY host index | search nodename=Cisco_IOS_Event | stats sum(count)

returns nothing, but:

    | tstats values(nodename) AS nodename count FROM datamodel=Cisco_IOS_Event WHERE Cisco_IOS_Event.product="IOS" BY host | search nodename=Cisco_IOS_Event | stats sum(count)

works correctly.
The tenant search properly hides itself, as the macro probably comes back with nothing; I have assigned the tenant_index token a value of "" to partially work around the issue.
Is this a known problem, or is it something I have not set up correctly?
↧
↧
How does Duo authentication for Splunk work?
We performed a test this morning with Duo and Splunk, and it worked fine; however, it also forced our local Splunk accounts to use Duo. We are also not sure how this would impact the local accounts that are used in scripts that access the API.
Is there a way around this or is it all or nothing?
Thanks!
↧
How to report how much time a user spends on a PC?
I have a search that captures when a user locks and unlocks his PC:

    index=win* (EventCode=4800 OR EventCode=4801) Account_Name=Batman

The results show the following consecutive events (from top to bottom):
EventCode=4801 The workstation was unlocked.
EventCode=4800 The workstation was locked.
EventCode=4801 The workstation was unlocked.
EventCode=4800 The workstation was locked.
EventCode=4801 The workstation was unlocked.
EventCode=4800 The workstation was locked.
Basically, I want to run a report each day (last 24 hours) that subtracts the _time values within the first, second, and third pairs of events (giving a duration for each pair) and then adds the durations together, so it shows how long a user has not been on the computer.
The current search I have finds the difference between consecutive events. In the results I see the right time-difference values, but it also includes wrong data that I cannot remove:

    | delta _time p=1 | rename delta(_time) AS timeDeltaS | eval timeDeltaS=abs(timeDeltaS) | eval "Duration"=tostring(timeDeltaS,"duration") | table Account_Name, _time, "Duration"
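A hedged sketch of an alternative: pair each lock (4800) with the following unlock (4801) using transaction, so only real lock/unlock pairs contribute to the sum:

    index=win* (EventCode=4800 OR EventCode=4801) Account_Name=Batman
    | transaction Account_Name startswith="EventCode=4800" endswith="EventCode=4801" maxevents=2
    | stats sum(duration) AS away_seconds BY Account_Name
    | eval away_time = tostring(away_seconds, "duration")

The `duration` field is produced by transaction itself (end time minus start time), which avoids the stray deltas that delta computes between unrelated neighbors.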
↧
What is a better way of comparing EPOCH times?
I have the query below that checks for the expiration date of a certificate, converts it to epoch time, and then basically changes the value of the result as it 'degrades' (gets closer to expiration). I have a feeling this is really messy and could be improved on, so I'm just looking for general recommendations on a better way of doing it. It works, but to me it looks excessive.
    index=test host=mycertificateauthority
    | rex field=Line "(?<date>\d{1,2}\/\d{1,2}\/\d{4})"
    | stats count by _time, host, date
    | eval dateepoch=strptime(date,"%m/%d/%Y")
    | eval thirtydays=relative_time(dateepoch,"-30d")
    | eval fifteendays=relative_time(dateepoch,"-15d")
    | eval fivedays=relative_time(dateepoch,"-5d")
    | eval result=case(
        (now()<=thirtydays), "0",
        (now()>=thirtydays) AND (now()<=fifteendays) AND (now()<=fivedays) AND (now()<=dateepoch), "1",
        (now()>=thirtydays) AND (now()>=fifteendays) AND (now()<=fivedays) AND (now()<=dateepoch), "2",
        (now()>=thirtydays) AND (now()>=fifteendays) AND (now()>=fivedays) AND (now()<=dateepoch), "3",
        (now()>=thirtydays) AND (now()>=fifteendays) AND (now()>=fivedays) AND (now()>=dateepoch), "4")
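A hedged sketch of a tighter version: compute days-until-expiry once, then let case() fall through in order (first match wins), with true() as the catch-all:

    index=test host=mycertificateauthority
    | rex field=Line "(?<date>\d{1,2}\/\d{1,2}\/\d{4})"
    | stats count by _time, host, date
    | eval dateepoch=strptime(date,"%m/%d/%Y")
    | eval days_left=floor((dateepoch - now()) / 86400)
    | eval result=case(
        days_left > 30, "0",
        days_left > 15, "1",
        days_left > 5,  "2",
        days_left >= 0, "3",
        true(),         "4")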
↧
How can I detect change using userswithloginprivs?
All,
I have a stock install of Splunk for *Nix running on about 3,000 hosts. What I want to do, in reasonable time, is see whether any users have been added with login privileges locally on the Linux boxes.
The base search is this:

    index=main sourcetype="userswithloginprivs"

I am just not sure how, on a host-by-host basis, to compare the results of this search to find changes.
Any help here?
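A hedged sketch of one approach: track when each host/user pair was first seen, and flag pairs that first appeared in the last day (this assumes the sourcetype extracts a user field; adjust to the actual field name):

    index=main sourcetype="userswithloginprivs" earliest=-7d
    | stats earliest(_time) AS first_seen BY host, user
    | where first_seen >= relative_time(now(), "-1d@d")
    | convert ctime(first_seen)

Run on a schedule, this turns "compare today against yesterday" into a single stats pass, which tends to scale better across 3,000 hosts than per-host joins.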
↧
↧
Error in 'eval' command: The expression is malformed. An unexpected character is reached at
Hi,
I'm getting this error:
> Error in 'eval' command: The expression is malformed. An unexpected character is reached at '“time < 1 sec”, actualTime>1000 AND actualTime<5001, “1 < time < 5”, 1=1, “time > 15”)'.

for the eval function below:
    | convert num(actualTimeExtract) as actualTime
    | eval buckt=case(
        actualTime<1001, “time < 1 sec”,
        actualTime>1000 AND actualTime<5001, “1 < time < 5”,
        1=1, “time > 15”)

Please help me with a solution.
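For reference, the unexpected characters are almost certainly the curly quotes (“ ”), which eval does not accept; the same expression with straight ASCII quotes parses cleanly:

    | convert num(actualTimeExtract) as actualTime
    | eval buckt=case(
        actualTime<1001, "time < 1 sec",
        actualTime>1000 AND actualTime<5001, "1 < time < 5",
        1=1, "time > 15")

Curly quotes typically sneak in when a search is drafted in a word processor or copied from a formatted web page.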
↧
Inject click.value to DateTimeRange from chart drilldown
Hi, I am one week into Splunk, using the web version. Any generous help is appreciated.
I have a chart (chart1) that drills down to a table (table1); clicking passes $click.value$ from chart1.
An example of my goal: from chart1, $click.value$=15, and I want to set the dateTimeRange picker for table1 to 15:00:00 through 16:00:00, so table1 filters down to only the events between 15:00 and 16:00 of the day.
Is the above possible? If yes, could anyone share more information about it?
If not, then we might have to use a query. I checked the example in question id 438520 (sorry, not enough points to post links), but since I do not have input boxes as that example does, how do I convert those fields into a query?
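For reference, a hedged Simple XML sketch of one way to do this without touching the picker itself: the chart's drilldown computes epoch earliest/latest tokens from the clicked hour (token names and the table's search are hypothetical, and this assumes $click.value$ is an hour of the current day), and table1 consumes them:

    <chart>
      ...
      <drilldown>
        <eval token="tok_earliest">relative_time(now(), "@d") + tonumber($click.value$) * 3600</eval>
        <eval token="tok_latest">relative_time(now(), "@d") + (tonumber($click.value$) + 1) * 3600</eval>
      </drilldown>
    </chart>
    <table>
      <search>
        <query>index=main sourcetype=myevents</query>
        <earliest>$tok_earliest$</earliest>
        <latest>$tok_latest$</latest>
      </search>
    </table>

Actually moving the visible dateTimeRange input would instead mean setting that input's own tokens (for example `<set token="picker.earliest">`), which some Splunk versions support from a drilldown.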
Thanks a lot for any help.
↧
Splunk maintenance
Is there a maintenance procedure that a Splunk Enterprise deployment/instance requires periodically to ensure high performance?
Is there an app that can be used for this?
Please share. Thanks!
↧