Splunk integration with ServiceNow case management?
I have integrated Splunk with ServiceNow using the add-on. I have two questions:
1. I'm able to bring the desired case data into Splunk. Records get created, but when I delete a case in ServiceNow, the corresponding record is not deleted in Splunk. What should I do?
2. When pushing data from Splunk to ServiceNow, I can only push to the incident and event tables, not to my desired table. Is there a way to do that?
↧
How do I use a comparison search to find all devices not reporting to Splunk?
I am trying to find all devices not reporting in to Splunk, using a Qualys scan of our DMZ as the source list and searching against all indexes. I want only the hosts that are not reporting in to show up in the results. Here is my search:
index=*
[ inputlookup dmzhosts.csv
| table IP
| rename IP AS host
| format] OR
[ inputlookup dmzhosts.csv
| table hostname
| rename hostname AS host
| format]
| eval host=upper(host)
| stats count by host
| append [inputlookup dmzhosts.csv | eval count=0, hostname=upper(hostname)|rename hostname as host | fields host, count]
| stats sum(count) AS Total by host
| where Total=0
There are only about 160 hosts reported by the Qualys search that generates the dmzhosts.csv output file. I created a search that looks for either the IP or the hostname, counts the number of events for each host, and appends the original lookup with every host assigned a count of 0. Summing the counts and keeping only hosts whose total is 0 should leave just the hosts that never reported in. Does my search look OK?
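I also considered a tstats-based variant like the sketch below (untested, and it assumes the host field in the events matches either the IP or the hostname column of the lookup; adjust the coalesce to your data). Would that be equivalent?
| tstats count where index=* by host
| eval host=upper(host)
| append
    [| inputlookup dmzhosts.csv
     | eval host=upper(coalesce(hostname, IP)), count=0
     | fields host count]
| stats sum(count) AS Total by host
| where Total=0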
Thanks
Ed
↧
Radar chart visualization example with indexed data
Hi!
I am looking for an example of the Custom Radar Chart visualization. The example on its SplunkBase download page shows some evals with static key values. Suppose I have an index with a few fields:
**SourceIP, DestIP, protocol** - pretty much purely non-numerical values.
Can you give an example of how to attach those to the chart visualization as evaluated keys? Obviously I need them as counts (such as the number of SourceIPs and so on) - how would you add them as evaluated keys?
This is the example on SplunkBase:
| makeresults
| eval key="current", "Business Value"=.37, Enablement=8.64, Foundations=2.56, Governance=1.68, "Operational Excellence"=4.992, "Community"=9.66
| untable key,"axis","value"
| eval keyColor="magenta"
Let's take SourceIP - I have 5 of them in the data, 3 DestIPs, and protocol is either UDP or TCP, but the count of each matters. How would you add them to the example to make it work?
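What I've been experimenting with is roughly the sketch below (the index name is a placeholder, and the field names assume my SourceIP/DestIP/protocol extractions), but I'm not sure it's the right way to feed the keys:
index=your_index
| stats dc(SourceIP) AS "Source IPs", dc(DestIP) AS "Dest IPs", count(eval(protocol="TCP")) AS "TCP", count(eval(protocol="UDP")) AS "UDP"
| eval key="current"
| untable key,"axis","value"
| eval keyColor="magenta"
My assumption is that additional series would be appended as extra rows with a different key value before the untable, but I haven't confirmed that.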
In addition, is it possible to have, say, eval key="CURRENT", then "PAST", and then "OBJECTIVE"? From what I've tested, I am only able to have 2 evaluations on the same chart.
Thanks in advance!
↧
How do I display sparklines for each day of the month?
I am running a search over a month and want to display a [sparkline][1] for each day. Any ideas?
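A minimal sketch of one way to get one row per day, each with a sparkline of that day's hourly counts (the base search and time range are placeholders):
index=your_index earliest=-1mon@mon latest=@mon
| eval day=strftime(_time, "%Y-%m-%d")
| stats sparkline(count, 1h) AS trend, count by day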
TIA!
David L. Crooks
[1]: https://docs.splunk.com/Documentation/Splunk/7.1.2/Search/Addsparklinestosearchresults
↧
Why does my Splunk web keep auto-refreshing?
Has anyone experienced continuous automatic refreshing of the Splunk web console, including dashboards, apps, and search?
When I visit the Splunk console and navigate to any app, dashboard, or even the management console, it keeps refreshing every 2-3 seconds. I started noticing this behavior suddenly. I have tried the following:
- Restart Splunk
- Restart VM
- Verify CPU/disk/network usage; everything is well within acceptable range
Splunk deployment:
- Single node on a big, vertically scaled instance (32 GB RAM, 8 vCPU cores)
- Splunk v7.0.4 on Linux x64
- Apps - Splunk App for AWS, AWS add-on
- Daily ingestion ~35GB
- Browser - Chrome
↧
↧
What's the performance impact of high-frequency performance sampling for Windows perfmon?
For the Windows perfmon input, I see that there's a setting that can enable high-frequency performance sampling. From the spec:
> Enables high-frequency performance sampling. The input collects performance data every sampling interval. It then reports averaged data and other statistics at every interval.
> The minimum legal value is 100 ...
> Defaults to not specified (disabled)
I assume it's disabled by default because it generates an additional load on the forwarder, but I can't find anything that talks about what that load might be. Does anyone have an idea what kind of impact enabling this would have?
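For reference, my understanding of what enabling it would look like in inputs.conf is the sketch below (the stanza name, object, counter, and values are placeholders; samplingInterval is in milliseconds):
[perfmon://CPU Load]
object = Processor
counters = % Processor Time
instances = _Total
# report statistics once a minute
interval = 60
# high-frequency sampling: poll the counter every 500 ms and average over each interval
samplingInterval = 500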
↧
Qualys TA error: Request failed with SSLError
I have not enabled any certificate for Qualys, but I still got the error below.
Please help.
8/28/18 11:35:05.000 AM
TA-QualysCloudPlatform: 2018-08-28 11:35:05 PID=21248 [MainThread] INFO: TA-QualysCloudPlatform [host_detection] - Request failed with SSLError, Retrying: https://qualysapi.qualys.com/api/2.0/fo/asset/host/vm/detection/ with params={'action': 'list', 'show_igs': 1, 'vm_processed_after': '1999-01-01T00:00:00Z', 'status': 'New,Active,Fixed,Re-Opened', 'truncation_limit': 5000, 'show_results': 0}. Retry count: 1
8/28/18 11:35:05.000 AM
TA-QualysCloudPlatform: 2018-08-28 11:35:05 PID=21248 [MainThread] ERROR: TA-QualysCloudPlatform [host_detection] - SSLError
↧
Cannot retrieve vnx file data from EMC VNX 5200 when using Splunk Add-on for EMC VNX
I can retrieve VNX Block data, but not VNX File data. I found the errors below in ta_vnx.log.
How do I fix this issue?
2018-08-28 11:18:35,830 INFO 140323441321728 - Start vnx_data_loader://sf_vnx_file. Metric=vnx_file_sys_performance
2018-08-28 11:18:35,831 INFO 140323451811584 - Start vnx_data_loader://sf_vnx_file. Metric=vnx_file_nfs_performance
2018-08-28 11:18:35,844 ERROR 140323441321728 - platform=Vnx File,ip=172.30.114.102,cmd=export NAS_DB=/nas;echo "<>"; /nas/bin/nas_xml -info:; echo "< >";echo "<>"; /nas/bin/nas_storage -query:* -fields:serial -format:'%s,'; echo ""; echo "< >";,reason=ssh_exchange_identification: read: Connection reset by peer
2018-08-28 11:18:35,844 ERROR 140323451811584 - platform=Vnx File,ip=172.30.114.102,cmd=export NAS_DB=/nas;echo "<>"; /nas/bin/nas_xml -info:; echo "< >";echo "<>"; /nas/bin/nas_storage -query:* -fields:serial -format:'%s,'; echo ""; echo "< >";,reason=ssh_exchange_identification: read: Connection reset by peer
2018-08-28 11:18:35,855 ERROR 140323441321728 - platform=Vnx File,ip=172.30.114.103,cmd=export NAS_DB=/nas;echo "<>"; /nas/bin/nas_xml -info:; echo "< >";echo "<>"; /nas/bin/nas_storage -query:* -fields:serial -format:'%s,'; echo ""; echo "< >";,reason=ssh_exchange_identification: read: Connection reset by peer
2018-08-28 11:18:35,855 INFO 140323441321728 - End vnx_data_loader://sf_vnx_file. Metric=vnx_file_sys_performance
2018-08-28 11:18:35,856 ERROR 140323451811584 - platform=Vnx File,ip=172.30.114.103,cmd=export NAS_DB=/nas;echo "<>"; /nas/bin/nas_xml -info:; echo "< >";echo "<>"; /nas/bin/nas_storage -query:* -fields:serial -format:'%s,'; echo ""; echo "< >";,reason=ssh_exchange_identification: read: Connection reset by peer
2018-08-28 11:18:35,856 INFO 140323451811584 - End vnx_data_loader://sf_vnx_file. Metric=vnx_file_nfs_performance
↧
How to use a lookup file as input to pass values to a search?
I have a list of servers in a lookup file, and I want to create an alert.
The lookup file has a column named server containing around 90 server names. I need to pass those 90 values from the lookup file into the main search query.
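What I'm trying to get to is the standard subsearch pattern, something like the sketch below (the lookup file name and sourcetype are examples, and it assumes the events carry the server name in the host field):
index=your_index sourcetype=your_sourcetype
    [| inputlookup servers.csv
     | rename server AS host
     | fields host ]
| stats count by host
The subsearch should expand to host="server1" OR host="server2" OR ... against the main search.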
↧
↧
Bar chart with colors based on external field
I'm trying to create a bar chart based on a table and to color the columns by a field that is not part of the table.
For example:
**my_search... | eval risk_order=case(app_risk=="High",0, app_risk=="Critical",1) | stats count as "Logs" by appi_name ,risk_order | sort 10 -risk_order -"Logs" | table appi_name , "Logs"**
If I visualize it, I see that every bar has the same color (which is based on the "Logs" field).
**I would like to change the color of the bars based on the app_risk field.**
Each value of app_risk should use a different color.
How can I do it?
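One idea I'm considering (just a sketch, not necessarily the right way) is to make app_risk the split-by field, so each risk level becomes its own series with its own color:
my_search...
| chart count over appi_name by app_risk
The series colors could then presumably be pinned per risk value with the charting.fieldColors option in the panel format settings, but I'd rather hear how others do this.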
↧
Need help on TIME_FORMAT and TIME_PREFIX
I have a props.conf that is not working for TIME_FORMAT and TIME_PREFIX with the log structure below. I'm trying to get LINE_BREAKER to break events at the first line. Please help.
Error when I try to upload the log: "Could not use strptime to parse timestamp from Token TOKEN = DD215569A74FB06F5BC0C966CF60AD86:2018-08-27 14:28:06,382", "Failed to parse timestamp, defaulting to file modtime".
Log:
INFO:SESSION TOKEN = DD215569A74FB06F5BC0C966CF60AD86:2018-08-27 14:28:06,382
INFO:REQUEST:2018-08-27 14:28:15,000
INFO:
Props.conf
[ wsa:splunkalert:log ]
CHARSET=UTF-8
LINE_BREAKER=([\r\n]+)(\w+\:\w+\s\w+\s\=\s\w+\:\d+\-\d+\-\d+\s\d+\:\d+\:\d+\,\d+)
MAX_TIMESTAMP_LOOKAHEAD=30
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=false
TIME_FORMAT= %H-%m-%d %H:%M:%S,3N
TIME_PREFIX=\s
disabled=false
pulldown_type=true
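My suspicion is that the TIME_* pair is the problem. My rough sketch of what they should look like is below (anchoring on the timestamp after "TOKEN = <id>:", with %Y-%m-%d for the date and %3N for the milliseconds), but please correct me:
TIME_PREFIX = TOKEN\s=\s\w+:
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30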
↧
How to get the non-abbreviated time zone?
Is there a way to display the full timezone and not just the abbreviation? The SPL I am currently using is:
| eval zone=strftime(time(),"%Z %z")
However, this just gives me the abbreviation (e.g. "AEST +1000"). I would like it to display "Australian Eastern Standard Time +1000".
↧
How to find the difference between two times in different formats and alert if the difference is more than 10 minutes?
Hi,
I have a query which returns two columns: Time1, which is _time, and Time2, a user-calculated time available in the event, as below:
![alt text][1]
Query used
index=data |eval GetdateTime = date + " " + gettime | timechart span=5m last(GetdateTime) as Time2 by server
The query returns the last date logged in an event, by server. I have to identify the difference between the two time fields, _time and Time2. How do I find the difference, given that the time portions use different separators: _time uses ':' while the field in the column uses '.'? How do I replace it and then convert it to epoch time in order to find the difference?
I also need to define a real-time alert that fires when the difference is more than 10 minutes.
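Roughly what I'm after is something like this sketch (assuming the time portion of GetdateTime uses dots, e.g. 2018-08-28 16.53.45; the replace swaps them for colons so strptime can parse it, and the strptime format would need adjusting to the actual date layout):
index=data
| eval GetdateTime = date + " " + gettime
| eval Time2_epoch = strptime(replace(GetdateTime, "(\d+)\.(\d+)\.(\d+)$", "\1:\2:\3"), "%Y-%m-%d %H:%M:%S")
| eval diff_min = abs(_time - Time2_epoch) / 60
| stats max(diff_min) AS diff_min by server
| where diff_min > 10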
Please let me know.
[1]: /storage/temp/255826-screen-shot-2018-08-28-at-45345-pm.png
↧
↧
How to split a field into multiple fields via sourcetype?
I have a huge message field with the format field1=value1,field2=value2,...,fieldn=valuen. This field is not getting extracted by Splunk automatically.
Is there a way to extract this field into multiple fields with these values? I tried to edit the sourcetype for my message field with the regex (\w+)=([^,]+)* but it didn't work.
I want to write a regex that captures value1 and names it field1, and so on for all fields. The field names need to be picked up dynamically, as I do not know all of the field names in advance.
I also looked at whether I can do something in transforms.conf. So far no luck. :(
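What I was picturing for transforms.conf is something like this sketch (the stanza and sourcetype names are placeholders, and SOURCE_KEY assumes the key=value pairs live in a field called message) - is this on the right track?
# transforms.conf
[extract_message_kv]
SOURCE_KEY = message
REGEX = (\w+)=([^,]+)
FORMAT = $1::$2
REPEAT_MATCH = true
# props.conf
[your_sourcetype]
REPORT-message_kv = extract_message_kv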
↧
Syslog UDP to local universal forwarder: host is always 127.0.0.1
Hi all,
I've just stumbled across this issue. I have a Linux host running rsyslogd. When I forward my events to the local non-privileged Splunk universal forwarder via TCP, everything is fine. However, when I change the stream to UDP, the host field in the events is always set to 127.0.0.1 in Splunk, even though I can see the proper hostname in the raw events.
The strange thing is: when I open udp:1514 directly on the indexer and forward the events to that data input, it works just fine.
To make it short:
TCP -> u.forwarder: hostname set properly
UDP -> u.forwarder: hostname 127.0.0.1
TCP -> indexer: hostname set properly
UDP -> indexer: hostname set properly
Here's my inputs.conf on the universal forwarder:
[default]
host = webserver.labcorp.lan
[udp://1514]
sourcetype=syslog
disabled=false
index = unix
The host setting is always ignored until I switch the stanza to TCP.
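For reference, the variant I was planning to try next on the forwarder is below (a sketch; my understanding is that connection_host = none makes the stanza's host value win over the UDP peer address, but I haven't verified that):
[udp://1514]
sourcetype = syslog
index = unix
connection_host = none
host = webserver.labcorp.lan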
This is the inputs.conf on the indexer:
[udp://1514]
connection_host = dns
index = network
sourcetype = pfsense
disabled = 0
[splunktcp://9997]
connection_host = ip
Is there something I'm missing?
Thanks!
↧
How can I change the legend values for timechart to a different format?
How can I change the values in the legend for a timechart? I use:
index=indexone sourcetype=sourceone | timechart count by X usenull=0 span=1h |timewrap 1day
Some of the results look like: X_22days_before. I want to have just the X.
↧