Nessus scan - select last (not closed) vulnerabilities
I have scans (from the Nessus add-on). Some hosts were scanned more than once, so when I select severity="critical" I also see old vulnerabilities. For example:
IP, plugin-id, timestamp
10.0.0.1, 90315, 1537252785
10.0.0.1, 90316, 1537252785
10.0.0.1, 90317, 1537252785
10.0.0.1, 90318, 1537252785
10.0.0.2, 90421, 1537187491
10.0.0.2, 90422, 1537187491
10.0.0.2, 90423, 1537187491
10.0.0.2, 90424, 1537187491
10.0.0.1, 90316, 1537624344
10.0.0.1, 90318, 1537624344
10.0.0.1, 90319, 1537624344
10.0.0.2, 90422, 1537538233
10.0.0.2, 90428, 1537538233
As you can see, for 10.0.0.1 the max timestamp is 1537624344 and for 10.0.0.2 it is 1537538233.
How do I select only the events with the max timestamp per IP:
10.0.0.1, 90316, 1537624344
10.0.0.1, 90318, 1537624344
10.0.0.1, 90319, 1537624344
10.0.0.2, 90422, 1537538233
10.0.0.2, 90428, 1537538233
And how do I select only the plugin-ids that are new as of the max timestamp:
10.0.0.1, 90319, 1537624344
10.0.0.2, 90428, 1537538233
Thanks!
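A minimal sketch for the first part, assuming the fields are called ip, plugin_id, and timestamp and the events live in a placeholder index called nessus: keep only the events whose timestamp equals the per-IP maximum.

index=nessus severity="critical"
| eventstats max(timestamp) AS max_ts BY ip
| where timestamp = max_ts
| table ip, plugin_id, timestamp

For the second part, a plugin-id is "new" when its first appearance for that IP is also the latest scan, so keeping rows where the first and last sighting both equal the per-IP maximum yields the two expected events:

index=nessus severity="critical"
| stats min(timestamp) AS first_seen, max(timestamp) AS last_seen BY ip, plugin_id
| eventstats max(last_seen) AS max_ts BY ip
| where last_seen = max_ts AND first_seen = max_ts
| table ip, plugin_id, last_seen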
↧
I want to see the number of hits from an IP address on a particular URL in a minute
I am looking for a result that shows the number of hits on a URL from a particular IP address in a minute.
For example, the number of hits on google.com from a particular IP address in one minute.
Something like this is what I want to see:
(Screenshot: /storage/temp/256084-splunk-question.png)
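A minimal sketch, assuming Apache-style web data where the client address and requested URL are extracted as clientip and url (index, sourcetype, and field names are placeholders):

index=web sourcetype=access_combined url="*google.com*"
| bin _time span=1m
| stats count AS hits BY _time, clientip, url
| sort - hits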
↧
Regex for URL having different types of parameters
I have the below 2 log sets for different activities, and I want two different regexes, one for Set 1 (1a, 1b) and one for Set 2 (2a, 2b).
Set 1:
1a. index="abc_xyz" activity=GET->/cirrus/v2.0/payloads/96a-d3f-4fb/HELLO_WORLD/fd078jkkj24342kljlce989dadc7abc56c28
1b. index="abc_xyz" activity=GET->/cirrus/v2.0/payloads/f4a-8ef-8cb/abcpayld/thfd078jkkj24342kljlce989dadc7vfc56c28
Set 2:
2a. index="abc_xyz" activity=GET->/cirrus/v2.0/payloads/rt3-v5f-4rw/abcpayld
2b. index="abc_xyz" activity=GET->/cirrus/v2.0/payloads/96a-d3f-4fb/HELLO_WORLD
I have tried the below, but no luck:
index="abc_xyz" |regex "GET->\/cirrus\/v2.0\/payloads\/([[:alnum:]-]{10,40})\/([[:alpha:]_]{10,40})"
Could you please help me resolve this?
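A sketch of one way to split them, assuming the path lives in an extracted field called activity and the third path segment only ever appears in Set 1 (the character classes are guesses from the samples):

Set 1 (three segments after /payloads/):
index="abc_xyz" | regex activity="GET->/cirrus/v2\.0/payloads/[A-Za-z0-9-]+/[A-Za-z_]+/[A-Za-z0-9]+$"

Set 2 (two segments after /payloads/):
index="abc_xyz" | regex activity="GET->/cirrus/v2\.0/payloads/[A-Za-z0-9-]+/[A-Za-z_]+$"

Without the $ anchor, the Set 2 pattern also matches every Set 1 event, which may be why the original attempt returned both. If activity is not an extracted field, regex runs against _raw and the anchors need adjusting.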
↧
Removing gaps in charts
Hi splunkers,
I was able to plot a graph that, whilst it shows all the info I need, also contains massive gaps that make it less appealing.
Is it possible to eliminate those gaps? I'm not concerned about keeping the timeframe consistent.
My search is as follows:
index=crypto CurrencyB="CND" OR CurrencyS="CND"
| timechart sum(eval(if(CurrencyB="CND",Buy,Sell*-1))) as Total, sum(eval(if(CurrencyB="CND",round(Sell/Buy,8),null()))) as UnitPrice span=d cont=FALSE
| streamstats sum(Total) as Gtotal
Cheers
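A sketch of one workaround on top of the search above, assuming the gaps come from empty daily buckets: drop the buckets where Total is null before the running sum, so only days that actually have data are plotted.

index=crypto CurrencyB="CND" OR CurrencyS="CND"
| timechart sum(eval(if(CurrencyB="CND",Buy,Sell*-1))) as Total span=d cont=FALSE
| where isnotnull(Total)
| streamstats sum(Total) as Gtotal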
↧
csv data upload: event summary: no lines and events found
Hi, I'm trying to upload a .csv file and create an index.
I'm afraid I cannot attach the file here due to a lack of karma points.
So here is the start of it:
a002,a003,a005,a006,a007,a008,a101,a001,a009,a010,a012,a034,a046,a054,a055,a058,a072,a073,a075,a077,a078,a098,a100,a102,a103,a104,a105,d003,d004,d005,d006,d041,d042,d043,d044,d046,d047,d048,d049,d050,d051,d052,d053,d059,d115,time,mac_nwpm
46.599998,48.299999,49.400002,32.900002,-999.900024,-999.900024,0,16.6,0,0,0,5,20,31.4,21.4,52,157.100006,156,0,113,0,0,0,0,11,0,0,0,0,0,0,0,1,1,0,0,0,0,1,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:70:ca
39.700001,50,39.099998,15.9,13.2,37.400002,10.8,12.1,32,-999.900024,0,7,20,29.9,22.700001,50,204.300003,0,0,71.099998,0,29.9,17.1,32.700001,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1,1,27.09.2018 10:44,00:0a:5c:1f:86:3b
27.6,51.299999,27.700001,-999.900024,-999.900024,-999.900024,0,13.6,0,0,0,29.299999,20,22.299999,22.299999,50,254.600006,172.199997,7.4,55,15,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,1,0,27.09.2018 10:44,00:0a:5c:10:70:f0
29.9,52.900002,29.1,12.9,12.5,36.900002,10.5,14.8,29.799999,29.9,0,7,20,29.799999,29.799999,50,165,164.800003,26,27.5,44.5,19,7.1,32.700001,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:71:20
25.4,46.700001,25.700001,0,0,10.2,10.2,11.2,25.4,0,0,50,20,26.1,23,49,251.600006,0,6.6,24.4,0,22.200001,11.3,32.900002,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:8d:ca
24.5,48.400002,22.6,19.1,19.299999,-999.900024,0,13.3,0,0,0,99,20,22.299999,22.299999,50,2.7,0,0,0.9,1.6,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,27.09.2018 10:44,00:0a:5c:10:f7:99
22.200001,54.599998,22,58.400002,14.3,43.099998,0,9.7,-999.900024,-999.900024,0,7,20,23.6,23.6,56,255.100006,202.399994,0.6,256.600006,52.599998,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,27.09.2018 10:44,00:0a:5c:10:f6:75
35.200001,48.200001,32.900002,32.5,11.5,32.299999,0,10.1,0,0,0,99,20,31,31,50,222.800003,0,7.1,62.700001,77.5,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:92:3d
Basically, it has a header line with the column names, consisting of register names (a002, d003, and so on), a time field called "time", and an identifier field called "mac_nwpm".
I can upload the file, but in the 2nd step, "set source type", it seems it cannot identify the file's structure.
I also set the parameters so that the header is in line 1, the time format is %d.%m.%Y %H:%M, and the time field is called "time".
But it still seems it cannot identify the structure correctly, because the "event summary" says there are 0 lines and 0 events found.
Are there any other parameters I need to define?
I think the .csv itself is OK, because I can import it into Excel and the columns look fine there.
Best
Florian
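For reference, a props.conf sketch that should describe this layout (the sourcetype name csv_registers is a placeholder you would pick yourself):

[csv_registers]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = time
TIME_FORMAT = %d.%m.%Y %H:%M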
↧
Too many "Invalid key in stanza" errors in db_connections.conf
Good morning,
Splunk reports many "Invalid key" errors for the db_connect app's configuration file. These configurations were made via the web UI, and I have validated that the connections are working correctly.
Does anyone know if this is normal, or if there really is a problem with the database connection definition?
Invalid key in stanza [conex] in /home/splunk/splunk/etc/apps/splunk_app_db_connect/local/db_connections.conf, line 563: jdbcUrlFormat (value: jdbc:oracle:thin:@::).
Invalid key in stanza [conex] in /home/splunk/splunk/etc/apps/splunk_app_db_connect/local/db_connections.conf, line 564: jdbcUrlSSLFormat (value: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=))(CONNECT_DATA=(SERVICE_NAME=)))).
[conex]
connection_type = oracle
cwallet_location = /home/oracle/cwallet.sso
database = database_conex
host = 1.1.1.1
identity = conex_user
jdbcUrlFormat = jdbc:oracle:thin:@::
jdbcUrlSSLFormat = jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=))(CONNECT_DATA=(SERVICE_NAME=)))
port = 1521
↧
Master Node - forward OS logs
Reading OS logs from a cluster indexer node is controlled by the master node's $SPLUNK_HOME/etc/master-apps/_cluster/local/inputs.conf, but that only affects the indexer nodes, not the master node itself.
If I configure outputs.conf in $SPLUNK_HOME/etc/system/local/ on the master node, will it then forward everything from the master node, or only the monitored paths specified in inputs.conf?
The thing is, I only want to forward OS logs (under /var/log or any other specified file), not the internal Splunk data from the master node itself.
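A sketch of the two files on the master node (the output group name and indexer addresses are placeholders):

outputs.conf in $SPLUNK_HOME/etc/system/local/:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997

inputs.conf in $SPLUNK_HOME/etc/system/local/:

[monitor:///var/log]
index = os

Note that once tcpout is enabled, the default forwardedindex filters forward the instance's internal indexes as well, so the _internal traffic would have to be filtered out explicitly if it is really unwanted.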
↧
Regex for URL having different types of parameters
I have the below 2 log sets for different activities, and I want two different regexes, one for Set1 and one for Set2, in 2 different panels.
Set1
log1:
index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/96a-d3f-4fb/HELLO_WORLD|eventEndTime=2018-09-26
log2:
index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/f4a-8ef-8cb/abcpayld|eventEndTime=2018-09-26
Set2
log3:
index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/96a-d3f-4fb/HELLO_WORLD/fd078jkkj24342kljlce989dadc7abc56c28|eventEndTime=2018-09-26
log4:
index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/f4a-8ef-8cb/abcpayld/thfd078jkkj24342kljlce989dadc7vfc56c28|eventEndTime=2018-09-26
I have tried the below, but no luck:
index="abc_xyz" |regex "GET->\/cirrus\/v2.0\/payloads\/([[:alnum:]-]{10,40})\/([[:alpha:]_]{10,40})"
Could you please help me resolve this?
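A sketch, assuming activity is an extracted field that ends before the |eventEndTime delimiter (the character classes are guesses from the samples):

Panel for Set1 (no trailing hash segment):
index="abc_xyz" | regex activity="GET->/cirrus/v2\.0/payloads/[A-Za-z0-9-]+/[A-Za-z_]+$"

Panel for Set2 (with trailing hash segment):
index="abc_xyz" | regex activity="GET->/cirrus/v2\.0/payloads/[A-Za-z0-9-]+/[A-Za-z_]+/[A-Za-z0-9]+$"

The $ anchor is what keeps the Set1 pattern from also matching Set2 events; if regex is run against _raw instead, replace $ with (?=\|) to anchor on the pipe delimiter.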
↧
How do I run a query in Splunk to pull web traffic for an IP address?
Hi,
Can someone show me a query I can run on the search head to pull web traffic for an IP address?
Thanks
Cosmo
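A minimal sketch, assuming Apache-style web data (index, sourcetype, field names, and the address are placeholders):

index=web sourcetype=access_combined clientip="10.1.2.3"
| table _time, clientip, method, uri_path, status, bytes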
↧
Why is my JavaScript file not being picked up by my XML?
I have included a JavaScript file in my XML: I referenced it with script="functions.js" and placed the JavaScript file under appserver/static.
I tested whether the JavaScript file was being linked to the XML by just displaying "hello world" as a pop-up.
I cleared the cache and also restarted my Splunk instance, yet it is still not working.
What could be the reasons for why my JavaScript file is not being linked with my XML?
Thanks!
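For comparison, a minimal sketch of the wiring in a SimpleXML dashboard (the file name functions.js comes from the question; the app name is a placeholder). The root element carries the attribute, e.g. <dashboard script="functions.js">, and the file lives at $SPLUNK_HOME/etc/apps/<your_app>/appserver/static/functions.js:

require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Runs once the dashboard has finished loading.
    alert('hello world');
});

If the file is in the right place, visiting the /_bump endpoint on the Splunk web port (or restarting) forces Splunk to re-read static assets.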
↧
Change message values if the condition matches
I am checking the status code of an HTTP response. In one condition, when the HTTP code is 411, I don't get a message back, so in that case I want to show a default message in the table, but I am unable to set one.
Below is the query I am using:
basic query | eval errMsg=if(status == "411", "Length missing", errMsg) | table errMsg, status
For the other cases, I want errMsg to stay the same as what I get back in the search.
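A sketch of one refinement: coalesce() fills the default only when errMsg is actually missing on the 411 events and leaves any returned message untouched (field names are taken from the question).

basic query
| eval errMsg=if(status == "411", coalesce(errMsg, "Length missing"), errMsg)
| table errMsg, status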
↧
Before a report's scheduled time, can we manually execute a scheduled report and have results available for saved searches in a dashboard?
Can we manually execute a scheduled report before its scheduled time and have the results available for saved searches in a dashboard?
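A sketch of the dashboard side, assuming the panels reference the report by name with loadjob (owner, app, and report name are placeholders): loadjob reads the artifact of the report's most recent run, whether that run was scheduled or dispatched manually from Settings > Searches, reports, and alerts.

| loadjob savedsearch="admin:search:My Scheduled Report"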
↧
Why is memory spiking on our Universal Forwarder on a Domain Controller?
One of our administrators noticed that memory is spiking on the domain controllers and seems to have pinpointed it to the Splunk Universal Forwarders installed on them.
PowerShell is being run, and it is having an impact on memory. This is one line he noticed in the event logs:
C:\Windows\system32\WindowsPowerShell\v.1.\powershell.exe -executionPolicy RemoteSigned -command, 'C:\Program Files\SplunkUniversalForwarder\etc\apps\TA-DomainController-NT6\bin\powershell\ad-health.ps1'
OS: Windows Server 2012 R2
Splunk Universal Forwarder Version: 7.0.3
Has anyone dealt with this? Thanks!
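If the ad-health.ps1 scripted input turns out to be the culprit, a sketch of switching it off; the stanza header below is a guess inferred from the script path, so check the TA's default/inputs.conf for the real one:

# local/inputs.conf inside TA-DomainController-NT6 (stanza name is a guess)
[powershell://ad-health]
disabled = 1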
↧
Will you help me with the following error in the Splunk Add-on for Tenable?
I have a Nessus Pro box (Linux) that I am connecting to, and after having used these forums to get through a certificate issue (thank you), I am getting a new issue in the logs that I haven't been so lucky to figure out how to resolve. Specifically, I am seeing these entries in the log:
2018-09-27 11:50:11,105 INFO pid=6624 tid=MainThread file=nessus.py:main:264 | Start nessus TA
2018-09-27 11:50:11,178 ERROR pid=6624 tid=MainThread file=nessus.py:get_nessus_modinput_configs:160 | Failed to setup config for nessus TA: 'NoneType' object is not iterable
2018-09-27 11:50:11,180 ERROR pid=6624 tid=MainThread file=nessus.py:get_nessus_modinput_configs:161 | Traceback (most recent call last):
File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\nessus.py", line 140, in get_nessus_modinput_configs
config.remove_expired_ckpt()
File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\nessus_config.py", line 149, in remove_expired_ckpt
for data_input in inputs)
TypeError: 'NoneType' object is not iterable
Does anyone know where the issue is or how to fix it?
↧
Can you use a field as a filter in a dashboard with the Sum function?
Hey splunkers,
This problem is haunting me. I created a query to find a percentage based on an RGU value that remains constant for the calculation of error_rate, and hence I wrote this query:
(index=calls sourcetype="tc_detail_enriched") OR (index="calls" sourcetype="RGU" (LoB="CDV" OR LoB = "HSD" OR LoB = "VIDEO" OR LoB = "XH"))
| eventstats sum(RGU) AS RGU_SUM
| bin _time span=1d as day
| convert timeformat="%F" ctime(day)
| eventstats count(ACCOUNT_NUMBER) AS TC_CALLS by day
| eval error_rate = (TC_CALLS/RGU_SUM) * 100
| stats values(error_rate) by day
However, I want to add a filter to the dashboard on the field LoB. The problem is that, since RGU_SUM is computed as the sum of all RGU values before any filtering, I'm unable to filter with LoB.
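A sketch, assuming a dashboard dropdown that sets a token named $lob$: pushing the token into the RGU clause restricts both the events and the eventstats sum to the selected line of business.

(index=calls sourcetype="tc_detail_enriched") OR (index="calls" sourcetype="RGU" LoB="$lob$")
| eventstats sum(RGU) AS RGU_SUM
| bin _time span=1d as day
| convert timeformat="%F" ctime(day)
| eventstats count(ACCOUNT_NUMBER) AS TC_CALLS by day
| eval error_rate = (TC_CALLS/RGU_SUM) * 100
| stats values(error_rate) by day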
↧
Are Juniper QFX series logs manageable in Splunk?
Hi All,
I am not familiar with Juniper network devices. What I want to know is whether the logs of the Juniper QFX series are manageable using Splunk, and whether the Splunk Add-on for Juniper is applicable to them.
Any help is very much appreciated, thanks in advance :)
Regards,
↧