I have installed the Suricata TA on my Splunk box. I am verifying that the data is flowing into the Intrusion Detection data model correctly.
The Suricata TA has the following field alias:
FIELDALIAS-suricata_global = proto AS transport src_ip AS src dest_ip AS dest
The following search shows the values of the "src" field correctly, but the "dest" field has thousands of events where "dest" is "unknown":
| datamodel Intrusion_Detection Network_IDS_Attacks search
But if I run this search on the raw events, I only see events that don't have the "dest" field in them:
sourcetype=suricata NOT dest=*
Can anyone think of a reason why, of two fields defined in the same FIELDALIAS- command, only one would be populated correctly? Both the src_ip and dest_ip fields are in the events, but the data model can't see the values for dest/dest_ip for some reason...
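A minimal diagnostic sketch, assuming the suricata sourcetype shown above: compare how often dest_ip and the aliased dest are actually present on the raw events, to confirm whether the alias or the data model mapping is the problem:
```
sourcetype=suricata
| eval has_dest_ip=if(isnotnull(dest_ip), "yes", "no"), has_dest=if(isnotnull(dest), "yes", "no")
| stats count by has_dest_ip, has_dest
```
If dest_ip is present but dest is not, the alias isn't being applied in the app/context the data model searches run under.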
↧
Data model not picking up field alias
↧
DB Connect does not load my data
Hello,
I connected my Oracle database to Splunk using DB Connect, but it does not retrieve the data.
Can someone help me, please?
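As a first check (a hedged sketch; the connection name my_oracle is a placeholder for whatever connection you configured in DB Connect), run a trivial query directly with dbxquery to confirm the connection itself works:
```
| dbxquery connection="my_oracle" query="SELECT 1 FROM DUAL"
```
If that fails, the problem is the connection or identity; if it succeeds, the issue is more likely in the input's query, rising column, or checkpoint settings.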
↧
Charting how many events greater and less than a field's value
I'm trying to get a chart that displays the number of events where ProcessingTime was less than 1 second, between 1 and 2 seconds, and greater than 2 seconds within a certain time frame, displayed as 3 separate lines on one chart.
I can search for these stats individually:
search command ProcessingTime<1 | timechart span=10s count by _count
search command ProcessingTime>1 ProcessingTime<2 | timechart span=10s count by _count
etc
But what I want is multiple lines in one chart depending on which "bucket" the field value falls into. Unfortunately I'm not sure what I should be searching for to find what I want to do. Any tips?
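One way to get all three lines in a single chart (a sketch assuming ProcessingTime is numeric and expressed in seconds; "your_base_search_here" is a placeholder): label each event with its bucket via eval case() and split the timechart by that label:
```
your_base_search_here
| eval time_bucket=case(ProcessingTime<1, "under 1s", ProcessingTime>=1 AND ProcessingTime<=2, "1-2s", ProcessingTime>2, "over 2s")
| timechart span=10s count by time_bucket
```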
↧
Help regarding Splunk queries?
Hi Splunk members,
How can I get metrics that indicate things like search concurrency, search queue depth, and cancelled/timed-out searches, broken down by search head and by indexer?
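For the concurrency piece, a hedged sketch using the introspection data the Monitoring Console itself relies on (it assumes _introspection is being collected from the search heads and indexers in question):
```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| rename data.search_props.sid AS sid
| bin _time span=1m
| stats dc(sid) AS concurrent_searches by _time, host
```
Queue depth and cancelled or timed-out searches are generally easier to read off the Monitoring Console's Search Activity dashboards than to hand-roll.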
↧
Search Regex Help
I'm unable to create a regex that captures the first 6 characters of a MAC address and removes the hyphens.
Here is the source data 00-2b-73-ab-1e-75
I need to change the source to 002B73
Here's my search so far: | rex field=ClientId "(?)"
I'm getting stuck finding a regex statement that matches.
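A hedged sketch of one way to do it (the field name oui is an arbitrary choice): capture the first three octets, then strip the hyphens and upper-case the result with eval:
```
| rex field=ClientId "^(?<oui>(?:[0-9A-Fa-f]{2}-){2}[0-9A-Fa-f]{2})"
| eval oui=upper(replace(oui, "-", ""))
```
On 00-2b-73-ab-1e-75 this yields 002B73.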
↧
How to change graph legend labels for real time?
I have the same situation as the link below.
https://answers.splunk.com/answers/423906/how-to-change-the-graph-legend-labels-for-a-trendl.html
But I have too many labels to rename them one by one. Is there an easier way to rename the graph legend labels?
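If the legend labels come from aggregation field names (e.g. avg(cpu), avg(mem)), a wildcard rename after the charting command relabels them all at once; a hedged sketch, with field names invented for illustration:
```
... | timechart avg(cpu) avg(mem) avg(disk)
| rename "avg(*)" AS *
```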
↧
How to get XML data into Splunk from an HTTPS URL
I have an HTTPS URL; how can I pull the XML data from it into Splunk?
Below is the sample url
https://10.100.100.100:10000/admin?module=Publisher&method=query_saved_report&format=xml&id=987438264
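One option (a sketch, not the only approach; the script name, interval, index, and sourcetype below are all placeholders) is a scripted input on a forwarder whose script simply fetches the URL, for example with curl, and writes the XML to stdout, which Splunk then indexes:
```
# inputs.conf
[script://./bin/pull_publisher_report.sh]
interval = 3600
index = main
sourcetype = publisher:xml
disabled = 0
```
Alternatively, a REST/HTTP modular input app from Splunkbase can poll the URL without a custom script.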
↧
Where is the TA for the Cisco CDR app (to create dashboards for Cisco Call Manager)?
I could not find the TA for the Cisco CDR app among the base apps.
↧
Splunk_TA_mcafee-wg fields wrong?
Is it me, or are the extractions in Splunk_TA_mcafee-wg almost totally wrong?
To take an example log entry from my own activity, the log looks like this:
Jul 18 08:44:27 xxx_hostname_xxx mwg: McAfeeWG|time_stamp=[18/Jul/2018:08:44:27 +0200]|auth_user=cn=XXXX,ou=XXX,ou=XXXX,ou=XXX,o=XXX|src_ip=10.9.16.6|server_ip=172.217.22.100|host=www.google.com|url_port=443|status_code=200|bytes_from_client=958|bytes_to_client=426|categories=Search Engines|rep_level=Minimal Risk|method=GET|url=https://www.google.com/searchdomaincheck?format=domain&type=chrome|media_type=text/plain|application_name=Google|user_agent=Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36|referer=|block_res=0|block_reason=|virus_name=|hash=|filename=searchdomaincheck|filesize=426|
For src, src_ip, user, and user_agent the value is "unknown".
There is a field auth_user containing the DN, but I think user should contain the CN...
What am I doing wrong?
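As a search-time workaround (a sketch; user_cn is an arbitrary intermediate field name), the CN can be pulled out of auth_user and used to backfill user:
```
... | rex field=auth_user "^cn=(?<user_cn>[^,]+)"
| eval user=coalesce(user_cn, user)
```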
↧
Disk capacity
Hello,
I use the code below to calculate the free disk space, but I also need to know the disk capacity in MB, and I can't find a counter for it. The value has to be displayed in the "x" variable. Could you help me, please?
| join type=outer host [search index="perfmon" sourcetype="perfmon:logicaldisk" instance=c: counter="Free Megabytes" | eval Disk_Available_Space=round(Value, 0)." MBytes /x"]
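Perfmon's LogicalDisk object has no total-capacity counter, but capacity can be derived from "Free Megabytes" and "% Free Space" together; a hedged sketch, assuming both counters are being collected for the c: instance:
```
index="perfmon" sourcetype="perfmon:logicaldisk" instance=c: (counter="Free Megabytes" OR counter="% Free Space")
| stats latest(Value) AS Value by host, counter
| eval free_mb=if(counter="Free Megabytes", Value, null()), pct_free=if(counter="% Free Space", Value, null())
| stats max(free_mb) AS free_mb, max(pct_free) AS pct_free by host
| eval x=round(free_mb / (pct_free / 100), 0)." MBytes"
```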
↧
Issue with received syslog packets
I have the following setup:
- Distributed Splunk Enterprise deployment with 2 clustered indexers, 1 cluster master, 1 search head.
- Separate server configured with an instance of Kiwi Syslog Server (listening on UDP 514). Syslogs are being successfully written to disk based on sending device category (e.g. switch, firewall, etc.). This server also has an instance of Universal Forwarder installed, which is monitoring the log file and forwarding this data on to the index cluster.
The above seems to be working OK. I can see syslogs being received by the syslog server and being written to the log file successfully. I can also log into my Splunk search head, and under the basic "Search & Reporting" app I can search the custom index I am sending these syslogs to and see the syslogs appearing on the indexer.
My issue however is three-fold:
- Firstly, the Splunk indexers don't seem to be getting the host field correct. Without any sourcetype defined on my Universal Forwarder, the host field was being set to the facility and severity level (Local6.Notice) of the syslog message. I changed the sourcetype defined for this syslog file monitor on the Universal Forwarder to cisco:ios; now the host field is set to the hostname of the syslog server rather than the hostname of the device that originated the syslog message. How do I get it to pick the correct hostname out of the syslog message?
- Secondly, in a bid to solve this, I installed the Cisco Networks Add-on (TA-cisco_ios) on my indexers and my search head, and then installed the Cisco Networks App (cisco_ios) on my search head. I believed the add-on would help interpret the incoming Cisco syslog messages so that the syslog fields would be extracted correctly; however, the syslogs are still being displayed with host = the syslog server's hostname.
- Finally, the newly installed Cisco Networks app, although it appears to have installed correctly, is not showing any received data, even though I can see the syslog messages using the basic "Search & Reporting" app. (If my syslogs are being placed in a custom index, do I need to tweak the app to look at the correct index?)
Thanks in advance :-)
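For the host-field issue, a commonly used fix is an index-time host override on the parsing tier (the indexers here, since a Universal Forwarder doesn't parse). A hedged sketch, where the REGEX assumes the hostname is the fourth token of a classic "Mon DD HH:MM:SS host ..." syslog line and must be adjusted to match the format Kiwi actually writes to disk:
```
# props.conf (on the indexers)
[cisco:ios]
TRANSFORMS-override_syslog_host = syslog_host_override

# transforms.conf (on the indexers)
[syslog_host_override]
DEST_KEY = MetaData:Host
REGEX = ^\w{3}\s+\d+\s+\d+:\d+:\d+\s+(\S+)
FORMAT = host::$1
```
For the last point: if the data lives in a custom index, the Cisco Networks app's macros/eventtypes typically need to be pointed at that index, and the searching role must include it in its searched indexes.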
↧
Sourcetype timestamp settings
Hi,
I'm trying to set up a sourcetype that parses the date from an inner field (message.date in the example below); however, the `_time` value is always set to the time the event was indexed. Any guesses as to what I might be doing wrong?
The configuration is as follows:
![alt text][1]
This is a sample event.
```
{"message":{"_id":"some_value","date":"2018-07-18T04:40:58.071Z","type":"fsa","description":"Login required","client_id":"some_value","client_name":"some_value","ip":"some_value","user_agent":"Chrome 67.0.3396 / Windows 10 0.0.0","details":{"body":{"tenant":"some_value"},"qs":{"client_id":"some_value","response_type":"id_token","response_mode":"web_message","redirect_uri":"some_value","scope":"openid email profile","audience":"some_value","leeway":"60","state":"some_value","nonce":"some_value","prompt":"none","auth0Client":"some_value","tenant":"hirer"},"connection":null,"error":{"message":"Login required","oauthError":"login_required","type":"oauth-authorization"}},"hostname":"a","session_connection":null,"session_connection_id":null,"audience":"o","scope":["openid","email","profile"],"isMobile":false},"severity":"info"}
```
[1]: /storage/temp/251212-screen-shot-2018-07-18-at-33605-pm.png
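For comparison, a minimal props.conf sketch that would pull `_time` from the embedded date field (the stanza name is a placeholder, TZ = UTC assumes the trailing Z always means UTC, and these settings must be in place on the indexing tier before the data is ingested, since `_time` is assigned at index time):
```
[your_json_sourcetype]
KV_MODE = json
TIME_PREFIX = "date"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = UTC
```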
↧
Why does the Cisco ISE app dashboard show no results when the indexer has received log data from ISE?
The dashboards of the Cisco ISE app show no results, even though the indexer for the Cisco ISE app has received log files from ISE.
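A quick check (a sketch; the ISE add-on normally assigns a cisco:ise* sourcetype, so adjust the filter if yours differs) to see which index and sourcetype the ISE data actually landed in, since the app's dashboards usually assume a specific combination:
```
index=* sourcetype=cisco:ise* earliest=-24h
| stats count by index, sourcetype
```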
↧
Linux DHCP: new regex for new DHCP log format
We've upgraded our DHCP server to Debian 9, and now the DHCP log is written differently than on our old server.
Log lines now contain dhcpd[1569]: instead of dhcpd:
Jul 18 09:12:43 dhcp-server-name dhcpd[1569]: DHCPACK on ip-address to 00:00:00:00:00:00 (modem-type) via ip-address
The **Linux DHCP** app cannot handle this.
Can we change the regex in transforms.conf
from:
\s(dhcpd)\:\s
to:
\sdhcpd(\:\s|\[\d+\]\:\s)
Or should we change more? I tested it, but this change to transforms.conf does not work.
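For reference, a single pattern that matches both the old and the new prefix (a sketch; the stanza name is a placeholder — the edit belongs in whatever stanza the Linux DHCP app already defines, placed in the app's local/transforms.conf so an upgrade doesn't overwrite it):
```
# transforms.conf (local)
[your_dhcpd_stanza]
REGEX = \sdhcpd(?:\[\d+\])?\:\s
```
If it still doesn't work after a change like this, it's worth checking whether the app keys off the same pattern elsewhere, for example in props.conf extractions or eventtypes.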
↧
Nessus app error
Morning all,
Since replacing the certs on our Nessus box with self-signed ones, we've been getting a weird error. Initially the issue was down to not having added the chain to the cacerts.txt file, but having fixed that, we're now getting another error...
2018-07-18 09:39:45,703 INFO pid=2308 tid=MainThread file=nessus.py:main:264 | Start nessus TA
2018-07-18 09:39:45,769 ERROR pid=2308 tid=MainThread file=nessus.py:get_nessus_modinput_configs:160 | Failed to setup config for nessus TA: 'NoneType' object is not iterable
2018-07-18 09:39:45,770 ERROR pid=2308 tid=MainThread file=nessus.py:get_nessus_modinput_configs:161 | Traceback (most recent call last):
File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\nessus.py", line 140, in get_nessus_modinput_configs
config.remove_expired_ckpt()
File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\nessus_config.py", line 149, in remove_expired_ckpt
for data_input in inputs)
TypeError: 'NoneType' object is not iterable
It's been broken for a while, so it's not certain that this is down to the change in the certs; the app has been updated at least once in that time period.
Details....
Splunk_TA_nessus v5.1.4
Nessus Professional Version 7, Version 7.1.2 (#118) WINDOWS
inputs.conf...
[nessus://Nessus]
access_key = ********
batch_size = 100000
index = nessus
interval = 3600
metric = nessus_scan
secret_key = ********
start_date = 2018/01/01
url = https://our nessus server:8334
Any ideas gratefully received.
Tom
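A hedged first step for narrowing it down: the TA's own log file is typically indexed into _internal, so errors around checkpoint handling and input configuration can be pulled with something like:
```
index=_internal source=*nessus*.log* (ERROR OR CRITICAL)
| sort - _time
```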
↧
[osquery App v1.0] Sending results via TCP (Forwarder) in JSON format but Dashboard has no data for visualization
Following is the sample data:
{"hostIdentifier": "8AF4BC60-D83D-11DD-B08C-10BF487F7CD8", "created": "2018-07-18T10:22:47.767220", "action": "added", "@timestamp": "2018-07-18T10:22:42", "@version": 1, "log_type": "result", "columns": {"uid": "544", "pid": "30176", "resident_size": "28192768", "sgid": "-1", "suid": "-1", "total_size": "2203514335232", "state": "", "gid": "544", "cwd": "c:\\programdata\\osquery\\osqueryd\\osqueryd.exe", "user_time": "1", "nice": "8", "parent": "4504", "start_time": "1531909358", "threads": "26", "euid": "-1", "pgroup": "-1", "path": "c:\\programdata\\osquery\\osqueryd\\osqueryd.exe", "system_time": "0", "name": "osqueryd.exe", "cmdline": "c:\\programdata\\osquery\\osqueryd\\osqueryd.exe --flagfile osquery.flags", "on_disk": "1", "disk_bytes_written": "", "egid": "-1", "wired_size": "15138816", "root": "c:\\programdata\\osquery\\osqueryd\\osqueryd.exe", "disk_bytes_read": ""}, "name": "polylogyx"}
Is there any documentation on what exactly the format should be? For example, the date and time format of the "created" attribute.
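The sample itself pins down the formats; a hedged sketch confirming how the "created" value parses (six fractional digits, hence %6N):
```
| makeresults
| eval created="2018-07-18T10:22:47.767220"
| eval created_epoch=strptime(created, "%Y-%m-%dT%H:%M:%S.%6N")
| eval roundtrip=strftime(created_epoch, "%Y-%m-%dT%H:%M:%S.%6N")
```
"@timestamp" in the same event follows the same pattern without the fractional part ("%Y-%m-%dT%H:%M:%S").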
↧
System Indexes All Disabled
Hi,
I'm not sure exactly when this occurred, but all of the indexes with an _ prefix are currently disabled on my indexer (non-clustered distributed environment, 1 indexer + 1 search head). I did reduce the size of the _internal index a while back, which may be related; I have since changed it back and restarted, to no avail.
splunkd.log does not show any related warnings or errors on restart, as far as I can see. See below for the end of splunkd.log after the restart.
indexes.conf does not specify a disabled parameter on any of the indexes. How can I re-enable these indexes?
07-18-2018 11:48:36.338 +0100 INFO ProcessTracker - (child_12__Fsck) Fsck - (bloomfilter only) Rebuild for bucket='/opt/splunk/var/lib/splunk/_internaldb/db/db_1531911675_1531910532_8926' took 42.81 milliseconds
07-18-2018 11:48:37.213 +0100 INFO DatabaseDirectoryManager - idx=_internal Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/_internaldb/db', pendingBucketUpdates=0 . Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
07-18-2018 11:48:37.214 +0100 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/_internaldb/db
07-18-2018 11:48:38.176 +0100 INFO IndexerIf - Asked to add or update bucket manifest values, bid=_internal~8926~620B4469-3CF8-4AF9-B52F-F77683DD529A
07-18-2018 11:48:38.205 +0100 INFO DatabaseDirectoryManager - idx=_internal Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/_internaldb/db', pendingBucketUpdates=1 . Reason='Updating manifest: bucketUpdates=1'
07-18-2018 11:48:38.205 +0100 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/_internaldb/db
07-18-2018 11:48:40.896 +0100 INFO IndexWriter - Creating hot bucket=hot_v1_8927, idx=_internal, event timestamp=1531910771, reason="suitable bucket not found, number of hot buckets=0, max=3"
07-18-2018 11:48:40.896 +0100 INFO DatabaseDirectoryManager - idx=_internal Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/_internaldb/db', pendingBucketUpdates=0 . Reason='Adding bucket, bid=_internal~8927~620B4469-3CF8-4AF9-B52F-F77683DD529A'
07-18-2018 11:48:40.897 +0100 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/_internaldb/db
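A quick way to see exactly which internal indexes are flagged as disabled and where their paths point (run on the indexer; the standard data/indexes REST endpoint, with a convenient selection of fields):
```
| rest /services/data/indexes splunk_server=local
| search title="_*"
| table title, disabled, homePath
```
From there, `splunk btool indexes list _internal --debug` on the indexer shows which configuration file is actually supplying any disabled = 1 setting.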
↧
How to trigger an alert for the first 3 times and then suppress consecutive alerts
Hi,
I have scheduled a Splunk alert to run every 1 minute; if it matches my search condition over the last 10 minutes of events, it triggers an alert once. Throttle time is set to 8 minutes.
I would like it to trigger 3 consecutive alerts and then throttle for 8 minutes; currently it triggers the alert once and then throttles for 8 minutes. Please let me know if there is a way to achieve this.
↧
How to use mstats with post-processing
Hello,
I'm having trouble using an mstats command with a post-process component on a dynamic web page.
Here is the thing: my base search is:
var basesearch = new SearchManager({
"id": "basesearch",
"earliest_time": "$field9.earliest$",
"latest_time": "$field9.latest$",
"search": "|mstats",
"status_buckets": 0,
"sample_ratio": 1,
"cancelOnUnload": true,
"app": utils.getCurrentApp(),
"auto_cancel": 90,
cache: true,
"runWhenTimeIsUndefined": false
}, {tokens: true, tokenNamespace: "submitted"});
The post-process search is:
var search_metrics=" avg(_value) where metric_name=\"*."+rows[i][4].toString()+"\" span=5s by metric_name";
new PostProcessManager({
"search": mvc.tokenSafe(search_metrics),
"managerid": "basesearch",
"id": searchID
}, {tokens: true, tokenNamespace: "submitted"});
As you can see, there is a dynamic variable rows[i][4] in the search, which prevents me from putting it in the base search. But since Splunk automatically adds a pipe between the base search and the post-process search, this can't work. Do you have any idea how to deal with it? The base search can't be empty.
Thanks in advance
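One workaround (a sketch, assuming the overall set of metric names is small enough to aggregate broadly): put the whole mstats, with a wide metric_name filter, in the base search, and leave only plain post-pipe SPL for the per-row filtering, which is all a post-process search can add anyway.
Base search:
```
| mstats avg(_value) where metric_name="*" span=5s by metric_name
```
Post-process search (with rows[i][4] substituted as before; the placeholder marks where the dynamic value goes):
```
| search metric_name="*.<value of rows[i][4]>"
```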
↧
How do you remove part of a field value?
I am trying to remove the +'s between words for my table (i.e. **stainless+steel** should become just **stainless steel**), and my field name is SearchTerm. I tried the eval replace() method, but it keeps saying *Regex quantifier does not follow repeatable item*, and I do not know what to do. Any help would be appreciated.
My eval command:
| eval SearchTerm=replace(SearchTerm,"+"," ")
Edit: Spelling
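The error comes from replace() treating its pattern argument as a regular expression, where a bare + is a quantifier; escaping it fixes the search:
```
| eval SearchTerm=replace(SearchTerm, "\+", " ")
```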
↧