Adding extra labels to columns in charts
Hi,
I create a chart using the following query, which basically combines three fields and plots their count.
When I hover the mouse over any column I can see the phase name and count (as expected).
index="app_event"
| eval myFan=mvrange(0,3)
| mvexpand myFan
| eval time=case(myFan=0, 'payload.beginVal', myFan=1, 'payload.endVal', myFan=2, 'payload.anotherVal')
| eval phase=case(myFan=0, "Start", myFan=1, "End", myFan=2, "Other")
| eval Time=strftime(time, "%F %T.%9Q")
| chart count by Time phase
I now want to add an extra label (payload.eventID) to every column, such that when I hover over a column I am also able to see this label. How do I do this?
(PS: I first tried concatenating this label to phase, but then the chart starts counting by phase + payload.eventID, which I do not want. I want the chart to look the same, just with the new label added to each column.)
Thanks.
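One workaround sketch (not a confirmed solution): aggregate first, and only then fold the eventID values into the series name, so that the counts are still computed by Time and phase alone. Note that this changes the legend names along with the hover popups:
index="app_event"
| eval myFan=mvrange(0,3)
| mvexpand myFan
| eval time=case(myFan=0, 'payload.beginVal', myFan=1, 'payload.endVal', myFan=2, 'payload.anotherVal')
| eval phase=case(myFan=0, "Start", myFan=1, "End", myFan=2, "Other")
| eval Time=strftime(time, "%F %T.%9Q")
| stats count values(payload.eventID) as eventIDs by Time phase
| eval phase=phase." [".mvjoin(eventIDs, ",")."]"
| xyseries Time phase count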
↧
Eliminate unnecessary values when indexing
Good morning,
I want to ignore certain elements of a log when indexing them, for example:
field0 | x | x | x | x | x | field6 | field7 | field8 | x | x | x | field12 | field13 | field14 | field15 | field16 | field17 | x | field19 | field20 | x | x | x | x | x | x | field27 | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | field48
I have many values in this line of events, and I want only the FIELDXX values to be indexed, not the values marked | x |. I know that a whole event line can be ignored using transforms.conf, but in this case I only want to drop certain values. Is this possible?
regards
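One possible approach (a sketch; the sourcetype name is a placeholder, and the delimiter spacing is assumed to be exactly "| x "): a SEDCMD in props.conf rewrites the raw event at index time, so the placeholder segments are never indexed:
# props.conf on the indexer or heavy forwarder
[my_pipe_delimited]
# strip every "| x " segment from the raw event before it is indexed
SEDCMD-drop_x = s/\| x //g
Applied to the sample above, this would leave field0 | field6 | field7 | field8 | field12 | ... | field48.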
↧
Help with db connect custom jdbc connection
Hi,
I'm trying to add a new custom JDBC connection to DB Connect, version 3. When I go into the SQL Explorer and choose my connection, it says "invalid database connection". I got the JDBC driver class from the vendor, and for serviceClass I entered "com.splunk.dbx2.DefaultDBX2JDBC", based upon another Splunk Answers entry. Is that entry valid? Anything else that I can look for? I am able to retrieve data from this host using curl.
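In case it helps: with a custom driver, DB Connect 3 also needs a connection-type stanza. A rough sketch of db_connection_types.conf (the stanza name, driver class, URL format, and port are placeholders, not confirmed values):
# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_connection_types.conf
[my_custom_db]
displayName = My Custom Database
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.vendor.jdbc.VendorDriver
jdbcUrlFormat = jdbc:vendor://<host>:<port>/<database>
port = 5000
If the driver JAR isn't in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers, the connection will also report as invalid.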
↧
btool app line breaking issues
Anyone else having issues when testing the btool app on a UF where the events come through as single lines and are not merged by stanza? I am having no luck using `BREAK_ONLY_BEFORE = \[`
Current default props.conf:
[source::*/bin/btool.sh*]
DATETIME_CONFIG = CURRENT
BREAK_ONLY_BEFORE = ^.*?\/etc\/(apps|system|slave-apps)\/(?:(.*?)\/)?(default|local)\/(?<conf_file>\w+\.conf)\s+\[(?<stanza>.+?)\]$
[splunk:config:btool:app]
EXTRACT-btool = (?<path_prefix>.*?)/etc/(?<apps_dir>apps|master-apps|slave-apps)/(?<app>[^/]*)/(default|local)/(?<conf_file>\w+\.conf)\s+\[(?<stanza>.+)\]
# hack for sourcetype wildcards
# c.f https://answers.splunk.com/answers/8505/is-it-possible-to-use-wildcards-in-sourcetype-props-conf-stanzas.html
# c.f. SPL-117030
[(?::){0}splunk:config:btool:*]
EXTRACT-btool = etc/((apps|master-apps|slave-apps)/)?[^/]+/(default|local)/(?<conf_file>\w+\.conf)\s+\[(?<stanza>.+?)\]
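For what it's worth, a simpler line-breaking sketch to try (assuming btool's output puts the conf path and [stanza] on the first line of each stanza block, and remembering that line merging happens at the parsing tier, i.e. the indexer or a heavy forwarder, not on the UF):
[source::*/bin/btool.sh*]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = true
# start a new event on any line that ends with "something.conf  [stanza]"
BREAK_ONLY_BEFORE = \w+\.conf\s+\[[^\]]+\]\s*$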
↧
Find time difference between different types of log in and log out events with no shared field values
I have log items that have event messages but no IDs indicating that a login and the corresponding logout belong to the same session. However, a login will obviously happen before its logout, and so on.
The logs look something like this:
{TIME} "eventMessage":"Timeout is detected for Standard user" {userID}
{TIME} "eventMessage":"User login successful" {userID}
{TIME} "eventMessage":"Timeout is detected for SAML user" {userID}
{TIME} "eventMessage":"SSO user login successful" {userID}
{TIME} "eventMessage":"User logged out successfully" {userID}
{TIME} "eventMessage":"SSO user login successful" {userID}
I want to calculate all the time that the user was logged in, but have no shared field values to pair the events by. As you can see, the user will sometimes log in via SSO and sometimes normally. They can also either log out, or the system can time them out. The logs reflect that.
How would I go about calculating the time between logins and logouts/timeouts?
Bonus question: How would I take 60 minutes off each time there was a timeout? (the users have to be inactive for 60 minutes before they are timed out.)
2nd Bonus question: How would I do this for multiple users whose logs might be mixed up with other users (i.e. the login/logout would not be in a direct line because other users logins/ logouts might be in the mix)?
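A streamstats-based sketch (the index name and exact message substrings are assumptions based on the samples above): pair each logout/timeout with the immediately preceding login for the same user, subtract 3600 seconds for timeouts, and sum per user:
index="auth_logs"
| eval action=case(match(eventMessage, "login successful"), "login", match(eventMessage, "logged out|Timeout is detected"), "logout")
| where isnotnull(action)
| sort 0 userID _time
| streamstats current=f window=1 last(_time) as prev_time last(action) as prev_action by userID
| where action="logout" AND prev_action="login"
| eval session=_time - prev_time - if(match(eventMessage, "Timeout is detected"), 3600, 0)
| stats sum(session) as total_seconds_logged_in by userID
The by userID clause is what handles the second bonus question: each user's logout is paired only against that same user's previous event, even when different users' logs are interleaved.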
↧
Why can't I search by Source using HUNK?
We currently use Hunk and have a virtual index to search a MapR-FS filesystem. When I run the search I can clearly see that source values are populated showing where the file is. When I click on one and choose Add to Search, it doesn't find any results, which makes no sense at all.
Anyone else seen this behavior?
↧
7.1.1 Index Clustering
Hello!
I'm having a frustrating time attempting to set up a test environment with Index Clustering and I've reached a tipping point! I've searched online for answers but I'm not finding anything substantial that's been able to fix my problem. The VM network that I set up has one Deployment Server (DS), a Master Node (MN), a Search Head (SH), 3 Indexers, and 2 Forwarders. I set the Replication Factor to 3 and the Search Factor to 2. I followed these steps to set up the network and create the index cluster:
1. Created VMs, installed Splunk on each box, pinged entire network to ensure connectivity between every VM.
2. On the DS I configured some Apps, created some server classes, and organized the forwarders all nice and neat-like.
3. On the MN I enabled indexer clustering via UI and set everything to default values and created a simple password for the cluster.
4. I enabled each indexer as a peer node and connected them to the MN via UI - I received an error saying they couldn't communicate with the MN or the Replication Factor hadn't been met yet.
5. Finally, I enabled the SH via UI.
This is where I'm running into some problems. I haven't begun sending data from my forwarders yet, but the _audit and _internal indexes aren't being replicated fully; there's only one replicated and searchable copy across all three indexers. I've waited over an hour while I worked on other projects, but the replication has stayed the same. There are a few buckets that were replicated to other indexers, but after a brief period of time they stopped, so 4/10 buckets would become 5/11, then 6/12, etc...
So far I have tried:
1. Checked that all relevant ports were being used by Splunk.
2. Navigated to the "Bucket Status" page to try and find a manual solution.
3. Uninstalled and reinstalled Splunk entirely. (yes)
These are some of the error messages I've received on the MN:
**Search peer 'indexer1_name' has the following message: Indexer Clustering: Too many bucket replication errors to target peer='indexer2_ip_address:8080'. Will stop streaming data from hot buckets to this target while errors persist. Check for network connectivity from the cluster peer reporting this issue to the replication port of target peer. If this condition persists, you can temporarily put that peer in manual detention.**
**06-28-2018 14:27:08.061 -0400 INFO CMMaster - event=handleReplicationError bid=_internal~7~9EB230C3-F26E-4110-A543-1C5DBB249AAC tgt=E106836F-8C34-4AAF-8922-8E859E898E62 peer_name='indexer2_name' msg='target doesn't have bucket now. ignoring'**
**06-28-2018 14:27:08.061 -0400 INFO CMMaster - replication error src=A6FBB117-781D-4AD8-B620-8981371DE05F tgt=E106836F-8C34-4AAF-8922-8E859E898E62 failing=tgt bid=_internal~7~9EB230C3-F26E-4110-A543-1C5DBB249AAC**
**06-28-2018 14:27:08.056 -0400 INFO CMMaster - postpone_service for bid=_internal~8~E106836F-8C34-4AAF-8922-8E859E898E62 time=150.000**
I'm wondering if anyone has a hunch about what the happy heck could be going on that I'm overlooking. I've set up a cluster before in a separate Splunk Lab so this is extra weird to me - I thought I had most of the basics down, but apparently not! Any thoughts or advice would be greatly appreciated. Thanks,
-James M
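One thing worth double-checking, given the errors point at the replication port: each peer needs a replication_port stanza and a pass4SymmKey matching the master. A minimal peer-side server.conf sketch (host and secret are placeholders; 8080 is the replication port from the error above):
[replication_port://8080]

[clustering]
mode = slave
master_uri = https://<master_node_ip>:8089
pass4SymmKey = <cluster_secret>
A quick test from each peer to the other peers' replication port (e.g. telnet indexer2 8080) can rule out a firewall silently dropping replication traffic while management traffic on 8089 still works.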
↧
Splunkd Attempting to Terminate McAfee Processes
Hello,
I'm using McAfee VirusScan Enterprise and Host Intrusion Prevention (HIPS), and HIPS is reporting that Splunkd is triggering the following signature: "Prevent termination of McAfee processes".
It's attempting to "open with terminate" and "open with modify" the McAfee Process Validation Service (mfevtps.exe). It does this tens of thousands of times and is creating a lot of noise in the logs.
Is this normal behavior for Splunk? Does anyone know what it's actually trying to do to the McAfee service? Is it possible to make it stop?
Thanks.
↧
Can Splunk read a file in JSON format?
We are trying to pull in Slack data using function1, which is not working since we are using the new API. We had a call with Slack and they suggested creating a custom app. In the interim, what we would like to do is create a script that fetches the Slack events and writes them to a file, and then use a file monitor to retrieve the events.
Slack returns the data in JSON, so how would I set up the file monitor to read JSON? Or would I just reformat the data in the script that retrieves it from Slack?
Thanks!
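Splunk can index JSON events natively; a minimal sketch (the path, index, and sourcetype names are placeholders), assuming the script writes one JSON event per line:
# inputs.conf on the forwarder
[monitor:///opt/scripts/output/slack_events.json]
index = slack
sourcetype = slack:events

# props.conf (KV_MODE is search time, so this stanza belongs on the search head)
[slack:events]
SHOULD_LINEMERGE = false
KV_MODE = json
With KV_MODE = json the fields are extracted automatically at search time, so the script only needs to write the raw JSON, one event per line.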
↧
Different legend name to column name
When we plot a chart like this
| chart count by time phase
Let's say the legend appears as
Foo
Bar
Hey
Day
When I hover over the columns, I see popups like
time: ....
Foo: 1
time: ....
Bar: 3
I want to change the names in the popup while keeping the legend names the same (which means a simple replace would not work). How do I go about this?
Legend names would still stay Foo, Bar, Hey, Day,
but the names in the popup would look like Foo-1, Bar-1, Hey-1, Day-1.
Thanks.
↧
Blacklisting Windows event logs on a deployment server - not working
I tried following the documentation for blacklisting Windows event logs in Splunk 6.3.1 without success. I tried editing Splunk/etc/system/local/inputs.conf as well as Splunk/etc/apps/Splunk_TA_windows/local
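For reference, a sketch of the blacklist syntax the documentation describes, which goes in the inputs.conf deployed to the forwarders rather than on the deployment server itself (the event codes below are arbitrary examples):
# Splunk/etc/apps/Splunk_TA_windows/local/inputs.conf on the forwarder
[WinEventLog://Security]
disabled = 0
# drop events with these EventCodes (example codes only)
blacklist = 4662,5156
A forwarder restart is needed after the deployment server pushes the change.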
↧
Splunk not reading the new file created after 2 months
Hello Splunkers,
I have a situation wherein a log file is created by the application after a long duration of 2 months.
I found no error in splunkd.log for this specific file, nor did I find a "WatchedFile" event for it.
I'm sure that the issue is not due to initCrcLength or crcSalt, as the log file is new and splunkd.log does not have any information on it.
After restarting the agent, I finally get the following splunkd.log entry:
06-28-2018 15:20:24.560 -0400 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='XXX.log'
However, the old data is still not indexed, and I do not have new data flowing into the log file.
Can someone explain this situation?
Regards,
Ankith
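A diagnostic search along these lines (the file name is a placeholder) shows what the tailing processor thinks of the file:
index=_internal sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) "XXX.log"
On the forwarder itself, splunk list inputstatus also reports per-file monitor status.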
↧
Evaluate only certain eventtypes by tag
I am trying to do a search in Splunk that applies only eventtypes that are owned by my account to the events found. The best way I found to do this so far was to tag each eventtype and filter that using `tag::eventtype="my_eventtype"` which appears to successfully apply only what I asked. The problem is that now the search will filter out any events that the eventtypes do not apply to. I still want to see all of the events that the search finds, but only apply the eventtypes I want to improve efficiency (we have a lot of eventtypes at my company). Is there any way to have search only evaluate certain eventtypes but still show all events found for a search?
↧
Network downtime breaks A3Sec app ingestion
I have a different (and mislabeled) post about this that is labeled answered, but the issue is not.
I don't know if I found a bug, or (more likely) I'm really bad at hunting down a simple issue, but I found that if my router/switch goes down (the APC took a dump) for a few hours, then upon restoring the network the A3Sec app (for pfSense logs) will start to get the following type of errors in splunkd.log:
06-27-2018 19:23:58.543 -0700 WARN DateParserVerbose - A possible timestamp match (Sat Setp 8 18:46:43 2001) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=udp:514 | host xxx.xxx.x.x | pfsense_syslog |
06-26-2018 20:51:26.834 -0700 WARN DateParserVerbose - Failed to parse timestamp in the first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to time stamp of previous event (Tue June 26 08:27:00 2018). Context: source=udp:514 | host =xxx.xx.x.x | pfsense_syslog
This is on a CentOS 7 VM, ESXi 6.4. The CentOS 7 box gave itself an IPv6 address when the network went down, and I had to get it back to its IPv4 address when the network came back up.
I reviewed the app's props.conf and transforms.conf, did multiple reboots, deleted the gw_pfsense index and made it again, cleaned indexes (I guess this is clearing the fishbucket on 7.x?), turned pfSense syslog output on and off, rebooted, etc., to no avail.
Ultimately I went back to an earlier snapshot and now gw_pfsense is indexing firewall events again.
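If it recurs, one mitigation sketch (the sourcetype name is taken from the log context above; the limits are arbitrary examples) is to tighten the acceptable timestamp window so that garbage dates like 2001 are rejected and the event falls back to a recent timestamp:
# props.conf for the pfSense sourcetype
[pfsense_syslog]
MAX_DAYS_AGO = 7
MAX_DAYS_HENCE = 1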
↧
Label Encoding in Machine Learning Toolkit
I believe by default the Machine Learning Toolkit utilizes one hot encoding when converting categorical variables to numerical. Is there an easy way to utilize label encoding? For example - I want to assign a risk score based on country. So China may map to a 5 and US may map to a 1, where 5 is riskier than 1.
I imagine I could do this with a bunch of eval commands in the query or alternatively an additional field extract, but is there a "prettier" way to do this?
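One relatively clean pattern (a sketch; the lookup, field, and model names are placeholders) is to do the label encoding with a lookup before the data reaches fit:
| inputlookup my_training_data.csv
| lookup country_risk.csv country OUTPUT risk_score
| fillnull value=0 risk_score
| fit LogisticRegression label from risk_score into my_model
Here country_risk.csv is just a two-column CSV (country,risk_score) uploaded as a lookup file, which keeps the mapping out of the query itself.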
↧
Why is Splunk dbconnect v3 batch input unable to fully ingest all the rows from MSSQL?
I am working on a project and have realised that the DB Connect app (with a batch input setting) is unable to fully ingest all the results queried from MSSQL. The number of rows generated in MSSQL is much larger than what is ingested. The settings in my db_inputs.conf are as below:
[myQuery]
host = sampleHost
connection = connectionName
disabled = 0
index = database
index_time_mode = current
interval = 36 * * * *
mode = batch
query = myQuery
source = dbx
sourcetype = sql_db
fetch_size = 150000
query_timeout = 1800
max_row = 150000
tail_rising_column_number = 1
input_timestamp_column_number = 8
Thanks in advance guys!
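One thing that stands out (a guess, not a confirmed diagnosis): a batch input stops at its row cap on every run, so a cap of 150000 on a larger result set silently truncates it. The setting name may also matter: I've seen it documented as max_rows (plural), so max_row above might simply be ignored. Note too that tail_rising_column_number only applies to rising-column inputs, not batch. A trimmed sketch of the lines to double-check (values are examples):
[myQuery]
mode = batch
# setting name assumed to be max_rows; raise it above the expected result size
max_rows = 1000000
fetch_size = 10000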
↧
Splunk Anomaly Detection for Logs
Team,
Are there any working samples for creating a POC on Splunk anomaly detection using log messages?
In our scenario, we need to send an alert to notify the admin if any login failure or error is received.
Thanks
Uma
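As a starting point (the index and match strings below are placeholders, and this is simple matching rather than true anomaly detection), a scheduled alert like
index=app_logs ("login failure" OR "error")
| stats count by host
saved with a trigger condition of "number of results > 0" will notify the admin of any login failure or error. For volume-based anomalies, the built-in anomalydetection command can flag unusual hourly error counts, e.g. index=app_logs "error" | timechart span=1h count | anomalydetection count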
↧
Graph to populate time duration in a timechart
I have a start time and end time for 5 rows, each with a duration; I need a graph which populates from start_time until the duration ends.
A graph for all 5 rows.
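One common pattern (a sketch, assuming each row carries epoch fields start_time and end_time) is to fan every row out into one event per minute of its duration and then timechart the result:
...
| eval minute=mvrange(start_time, end_time, 60)
| mvexpand minute
| eval _time=minute
| timechart span=1m count
Each row then contributes a count of 1 to every minute between its start and end, so the chart shows how many rows are active over time. The concurrency command is an alternative if counting overlaps is the goal.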
↧
Running Splunkd as a gMSA
**Has anyone had success with running the Splunkd service on a HeavyForwarder using a gMSA (Group Managed Service account)?**
I am already using the gMSA account on the heavy forwarder for other services and this is working, but when I try to run the Splunkd service as the gMSA I get various issues.
*The gMSA is also in the local Administrator group.*
The service starts but there are permission issues... Mongo fails to start, there are KVStore issues, etc...
***splunkd.log***
06-29-2018 10:06:18.557 +0100 ERROR Logger - Failed opening "C:\Program Files\Splunk\var\log\introspection\disk_objects.log": Access is denied.
06-29-2018 10:06:18.557 +0100 ERROR Logger - Failed opening "C:\Program Files\Splunk\var\log\introspection\http_event_collector_metrics.log": Access is denied.
06-29-2018 10:06:18.557 +0100 ERROR Logger - Failed opening "C:\Program Files\Splunk\var\log\introspection\kvstore.log": Access is denied.
06-29-2018 10:06:18.557 +0100 ERROR Logger - Failed opening "C:\Program Files\Splunk\var\log\introspection\resource_usage.log": Access is denied.
06-29-2018 10:08:22.999 +0100 ERROR KVStoreConfigurationProvider - Could not get ping from mongod.
06-29-2018 10:08:22.999 +0100 ERROR KVStoreConfigurationProvider - Could not start mongo instance. Initialization failed.
06-29-2018 10:08:22.999 +0100 ERROR KVStoreBulletinBoardManager - Failed to start KV Store process. See mongod.log and splunkd.log for details.
***mongod.log***
2018-06-29T09:08:12.347Z I CONTROL [initandlisten] options: { net: { port: 8191, ssl: { PEMKeyFile: "C:\Program Files\Splunk\etc\auth\server.pem", PEMKeyPassword: "", allowInvalidHostnames: true, disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireSSL", sslCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." } }, replication: { oplogSizeMB: 200, replSet: "E98BF268-F1CB-4CF1-945B" }, security: { javascriptEnabled: false, keyFile: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0" }, storage: { dbPath: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2018-06-29T09:08:12.348Z I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\mongod.lock errno:5 Access is denied.. Is a mongod instance already running?, terminating
What permissions are necessary for a gMSA?
This [page][1] talks about service accounts but not specifically about gMSAs.
Any help would be appreciated.
[1]: http://docs.splunk.com/Documentation/Splunk/7.1.0/Installation/ChoosetheuserSplunkshouldrunas
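A sketch of granting the gMSA full control of the Splunk directory (the domain and account name are placeholders; note the trailing $ that gMSA accounts use):
icacls "C:\Program Files\Splunk" /grant "DOMAIN\splunk-gmsa$:(OI)(CI)F" /T
(OI)(CI)F applies full control to the folder, its subfolders, and files, which is what the "Access is denied" errors on var\log and the kvstore lock file suggest is missing.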
↧
Is there any software certification for "Splunk Enterprise"?
As the subject says: is there any software certification for the Splunk product "Splunk Enterprise"? e.g., CMMI, CASQ, CAST, ISO, etc.
↧