I have three event types:
eventtype="windows_login_failed"
eventtype="duo_login_failed"
eventtype="sremote_login_failed"
I am trying to run a search in which I rename the event types to a common name:
Windows = eventtype="windows_login_failed"
DUO = eventtype="duo_login_failed"
Sremote = eventtype="sremote_login_failed"
I run the following search, but I keep getting an error message stating, "Error in 'eval' command: The expression is malformed. Expected )."
eventtype="windows_login_failed" OR eventtype="duo_login_failed" OR eventtype="sremote_login_failed" [| inputlookup xxx_xxx ] OR [| inputlookup yyy_yyy] | eval Source = (eventtype == windows_login_failed, "Windows"), (eventtype == sremote_login_failed, "SRemote"), (eventtype == duo_login_failed, "DUO") | stats count by myuser,Source| sort -count
Any help would be greatly appreciated
Thx
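For reference, a conditional like this needs the `case()` eval function, and the comparison values have to be quoted strings (the bare `windows_login_failed` is what triggers the malformed-expression error). A sketch of the corrected search, with the two `inputlookup` subsearches omitted for clarity (if events can carry multiple eventtypes, the `==` comparisons may need adjusting):

```
eventtype="windows_login_failed" OR eventtype="duo_login_failed" OR eventtype="sremote_login_failed"
| eval Source = case(eventtype == "windows_login_failed", "Windows",
                     eventtype == "sremote_login_failed", "SRemote",
                     eventtype == "duo_login_failed", "DUO")
| stats count by myuser, Source
| sort -count
```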
↧
Eval with multiple values
↧
Can we set a time range from today 00:00:00 AM to real time now?
Hello,
I would like to set a search covering the current day: a time range from today at 00:00:00 to real-time now.
Is it possible?
If yes, could you explain to me how to do that?
Thanks,
Chris
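Yes, relative time modifiers handle this, since `@d` snaps to midnight of the current day. One way is to set it in the search itself (the index name is a placeholder):

```
index=your_index earliest=@d latest=now
```

Alternatively, the "Today" preset in the time range picker applies the same `earliest=@d` boundary.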
↧
↧
Is there an operator similar to the SQL 'in' operator?
I've been looking through the search documentation to see if Splunk has an operator similar to the SQL 'in' operator. I'm not seeing anything so my hunch is it does not exist, but I thought I would just ask. I know I can just add on a bunch of 'or' clauses but an 'in' operator would just be a bit more concise. Any thoughts?
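For what it's worth, recent Splunk versions do add an `IN` operator to the search language (check whether your version supports it); on older versions, the `OR` chain remains the way to go. A sketch with an assumed `status` field:

```
index=web status IN (400, 403, 404, 500)
```

which is equivalent to `status=400 OR status=403 OR status=404 OR status=500`.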
↧
Error when gathering "metrics" in Add OPC UA input screen.
Hello, I can browse my local OPC UA server (Siemens SIMATIC NET OPC) using a third-party OPC browser. However, I am unable to connect using the Splunk "Add OPC UA Input" screen. When attempting to browse, I get the error "In handler 'ta_opcua_address_address': Unable to xml-parse the following data: %sSplunk_TA_opcua".
In one of the pre-release versions, I had to enter the "read nodes" manually, but I don't see this option anymore.
↧
lookup failure after upgrade
Hello,
I just upgraded the app to the latest version, and it gives an error when doing an Indicator lookup, as seen in the image: ![alt text][1]
The indicator was obtained from a custom search, so I know it is a valid indicator.
Any help is well appreciated
[1]: /storage/temp/165205-tcerror.png
↧
↧
How to display the 2nd through n-1 values of a field?
I have some Windows event log data that shows the ID when a user logs in and logs out. In addition, it shows me the audited actions taken by the user throughout their session. The generated table always starts with the login and always ends with the logout. Since I already know the login/logout messages, I don't want to see them in the audited actions.
How can I display the `2nd to n-1` values of the audited actions?
**Current search**
index=win user=testcase | transaction user startswith="EventCode=4624" endswith="EventCode=4647" mvlist=t | eval loginid=mvindex(id,0) | eval logoutid=mvindex(id,-1) | eval user=mvdedup(user) | table loginid, logoutid, user, audit_action
**Current output**
loginid logoutid user audit_action
5073518 2519740 testcase An account was successfully logged on
A new process has been created
A new handle to an object was requested
A privileged service was called
An account was logged off
User initiated logoff
I would like to see everything above, except the first and last audit actions. How do I hide/remove them? There is no `mvindex(audit_action, n-1)`
**SOLVED - Final Working Search**
index=win user=testcase | transaction user startswith="EventCode=4624" endswith="EventCode=4647" mvlist=t | eval loginid=mvindex(id,0) | eval logoutid=mvindex(id,-1) | eval user=mvdedup(user) | eval audit_action=mvindex(audit_action,1,mvcount(audit_action)-2) | table loginid, logoutid, user, audit_action
↧
How can I pull and alert on a value found from a search?
I am trying to pull data from Splunk via a search and send it to Netcool OMNIbus. Right now I am just sending it via an Alert Action to my email to figure this out. In doing so, I cannot seem to find a way to lock on to the actual message in the recorded log event itself. I hope this makes sense. It seems like it is difficult to actually pull and send out the actual result of a search. Passing all the information used for the search seems easy. Am I missing something here? I am really new to Splunk.
For example, if you look at the screen below from my search in Splunk, it finds and returns the log event I was looking for but within the Alert Trigger I send out from Splunk via email, I want to actually send the log event which is...
"[2016-10-14T13:14:57]:WARNING:HEMDP0173W:[WebContainer : 3]:No translation for severity 'P3-Low' could be found. Using the data source conversion instead."
Is this possible?
![alt text][1]
I do see that you can pass the following arguments...
| Arg | Environment Variable | Value |
|-----|----------------------|-------|
| 0 | SPLUNK_ARG_0 | Script name |
| 1 | SPLUNK_ARG_1 | Number of events returned |
| 2 | SPLUNK_ARG_2 | Search terms |
| 3 | SPLUNK_ARG_3 | Fully qualified query string |
| 4 | SPLUNK_ARG_4 | Name of report |
| 5 | SPLUNK_ARG_5 | Trigger reason (for example, "The number of events was greater than 1.") |
| 6 | SPLUNK_ARG_6 | Browser URL to view the report |
| 7 | SPLUNK_ARG_7 | Not used for historical reasons |
| 8 | SPLUNK_ARG_8 | File in which the results for the search are stored |
But none of these contain the actual value of the search result, i.e. the log entry itself, which is what I want to send from Splunk via an alert. So basically I am looking for a way to send the returned data of the search result.
[1]: /storage/temp/165206-screen.png
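One approach, if your Splunk version supports it (result tokens in email alert actions arrived around 6.3, if I recall correctly): reference fields of the first result directly in the email subject or message body, e.g. `$result._raw$` for the full raw event. Making sure the search exposes the event as a result field helps:

```
your_base_search_here
| table _raw
```

For scripted or custom alert actions, `SPLUNK_ARG_8` points at a gzipped CSV of the search results, which a script can read to extract `_raw` and forward to Netcool.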
↧
How to get Linux OS logs off a Splunk server, where Splunk is started as a non root account, to index in an indexer cluster?
I have a Splunk indexer cluster that is using a service account (non-root) to start Splunk. How do I get the OS logs, like /var/log/messages, /var/log/secure, etc., into the cluster indexes? I know that I could stream this to a syslog server and grab it there, but is there an easier way?
Any thoughts are welcome!
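One common approach (a sketch, assuming the service account is named `splunk` and a forwarder or indexer instance runs locally on each host): grant the account read access via POSIX ACLs, then monitor the files normally. The index and sourcetype values below are assumptions to adjust:

```
# as root, once per host (re-apply after log rotation, e.g. via a
# logrotate postrotate hook, since rotation recreates the files):
#   setfacl -m u:splunk:r /var/log/messages /var/log/secure

# inputs.conf
[monitor:///var/log/messages]
index = os
sourcetype = syslog

[monitor:///var/log/secure]
index = os
sourcetype = linux_secure
```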
↧
How do I validate the connection again after disabling and re-enabling Splunk DB Connect?
Splunk DB Connect was disabled in an effort to find out why we were exceeding our license based indexing quota. When DB Connect was enabled again and Splunk was restarted, the ID and Connection were present, but the Connection reported an Internal Service Error. I am unable to create a new connection or delete the old one. Settings reports:
In handler 'rpcserver': Unexpected error "" from python handler: "[HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8089/servicesNS/-/splunk_app_db_connect/data/inputs/rpcstart/default". See splunkd.log for more details.
Any help would be greatly appreciated.
↧
↧
Splunk's bin/.cache is growing out of proportions. Is there a configuration setting to limit the size?
I have a python script which returns all kinds of images via REST interface by going for some external sites to fetch them first. Apparently, the results of all such requests to the external sites are cached in `$SPLUNK_HOME/bin/.cache/`. Unfortunately, our requests are constantly changing, so, on one hand, we don't really need that cache much, and, on the other hand, that folder is growing because each request is a new one.
At some point, Splunk stops all searches because the root partition has less than 5 GB of free space. That's how we discovered `bin/.cache`: by running around the whole system with `du` and looking for the offending folder.
Are there any configuration settings allowing us to cap the size of bin/.cache somehow? I couldn't find anything in Splunk Admin manual.
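I haven't found a documented cap for this cache either. As a workaround sketch (the path and retention window are assumptions to adjust), a cron job can prune stale entries:

```
# crontab entry: every night at 03:00, delete cache files untouched for 7+ days
0 3 * * * find /opt/splunk/bin/.cache -type f -mtime +7 -delete
```

If the caching library used by the script allows configuring its cache location, pointing it at a directory on a larger partition would avoid filling the root partition entirely.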
↧
How to write a search to list roles and their capabilities in a Splunk environment?
Hello Guys,
Can someone help me with a search to list the roles and their capabilities in a Splunk environment?
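A common approach is the `rest` search command against the authorization endpoint (your role needs permission to run `rest`):

```
| rest /services/authorization/roles splunk_server=local
| table title imported_roles capabilities imported_capabilities
```

`capabilities` lists what the role grants directly; `imported_capabilities` shows what it inherits from the roles in `imported_roles`.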
↧
Can I extend the purpose of the Deployment Server for general software distribution, such as making a Splunk app perform a scripted install?
Can I piggy-back (insert) a Win32 setup.exe (windows program) onto a Splunk App, and use Splunk Deployment Server to deploy the Splunk app, and have the deployed Splunk app run a script that performs an unattended installation of the setup.exe on the target system, using SYSTEM privileges?
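This can technically be made to work, with caveats: the Deployment Server is not designed for general software distribution, and the script runs with the privileges of the forwarder service (SYSTEM only if the service itself runs as SYSTEM). A sketch of such an app (the app and file names are hypothetical): ship `setup.exe` and a batch wrapper in the app's `bin` directory, and trigger the wrapper once via a scripted input:

```
# my_installer_app/default/inputs.conf
[script://$SPLUNK_HOME\etc\apps\my_installer_app\bin\install.bat]
disabled = false
interval = -1
```

`interval = -1` runs the script once at startup, i.e. after the app is deployed and the forwarder restarts; the batch file would invoke `setup.exe` with its unattended/silent flags.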
↧
Are there any Splunk training materials for new users?
I've been tasked with creating training sessions for new Splunk users in our organization. The training will need to include recorded classes that will be hosted on our SharePoint site. I will also need to produce slides and real-life exercises with solutions, along with cheat sheets of the commands. I have until 11/15 to make this happen.
Obviously this is a pretty time consuming thing to do and would rather use something that's established as a framework and build something around that rather than developing something from scratch. So my question is, does anyone know of anything I described which can help me meet my deadline?
↧
↧
Filtering on lookup field values using multiple values on a few field
How do I search for events that match any of multiple values for the same field, across several fields in a lookup, using either a subsearch filter or the `mvappend` eval function?
↧
How to get a token to only display on a label if set via a drilldown on a form?
I have a dashboard which includes a grid; when a row is clicked, it displays info in graphs below based on what was clicked. What I want to do is label those graphs with the token value being passed in, so they have a label for what they are looking at. Let's say the token is $Name_tok$. I set the label to include $Name_tok$, and while that works fine once I click a drilldown option, initially the label literally displays as $Name_tok$, which is not what I'd like to see; I'd rather have no text in that space. I've tried doing the following at the top of the page, but it doesn't appear to work:
Does anyone know of a way to accomplish this?
I'm on Splunk 6
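One pattern that works in SimpleXML (assuming the token is named `$Name_tok$` and is set by the drilldown): make the title, or the whole panel, depend on the token, so nothing renders until the token has a value:

```
<panel depends="$Name_tok$">
  <title>Details for $Name_tok$</title>
  <!-- charts go here -->
</panel>
```

With `depends`, the panel stays hidden on initial load and appears only after the drilldown populates the token.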
↧
Report to monitor logon/logoff time and duration on Windows
I was using the following question/answer:
How can I use windows events to monitor logon sessions
https://answers.splunk.com/answers/127012/how-can-i-use-windows-events-to-monitor-logon-sessions.html
But I need to create a report that lists Logon time, Logoff time, and Duration by User and Computer. Do you know how to alter this search string to achieve this?
Thank you.
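A sketch adapting that answer (event codes 4624/4634 and the field names `user`/`ComputerName` are assumptions; adjust for your data). `transaction` sets `_time` to the first event in the session and computes `duration` in seconds:

```
index=wineventlog sourcetype=WinEventLog:Security (EventCode=4624 OR EventCode=4634)
| transaction user, ComputerName startswith="EventCode=4624" endswith="EventCode=4634"
| eval Logon_Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval Logoff_Time=strftime(_time + duration, "%Y-%m-%d %H:%M:%S")
| eval Duration=tostring(duration, "duration")
| table user, ComputerName, Logon_Time, Logoff_Time, Duration
```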
↧
Forwarding text file to destination TCP or syslog server
Requirement:
Have a log file that is constantly appended with data. I wish to send the log file details, as they are appended, to a destination server run as either a typical TCP server or a syslog server. The Universal Forwarder only sends raw data, which is not what I desire: if the log file is appended with "date: ipaddress", for example, then my TCP server will just receive the details as "date:ipaddress". Hence I am looking into installing a full Splunk instance (i.e., Splunk Enterprise) so that I can have control over the data I send to my TCP server. However, do I need to create an indexer at my destination? My purpose is only to forward the appended data, and my destination will not run any Splunk instance. Also, if it is possible, how should I configure my config files, i.e. inputs.conf and outputs.conf?
thanks.
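For the syslog case, no indexer is needed at the destination; a full Splunk Enterprise instance acting as a heavy forwarder can read the file via inputs.conf and emit syslog-formatted output via outputs.conf. A sketch (the server address, sourcetype, and group name are placeholders):

```
# inputs.conf
[monitor:///var/log/myapp/app.log]
sourcetype = myapp

# outputs.conf
[syslog]
defaultGroup = my_syslog_group

[syslog:my_syslog_group]
server = 192.0.2.10:514
type = udp
```

For a plain TCP server instead, a `[tcpout:<group>]` stanza with `sendCookedData = false` sends the raw text over TCP. Leaving `indexAndForward` disabled (the default) means the heavy forwarder keeps no local copy.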
↧
↧
How to get existing KV Store to initialize after replacing one of the three (3) members with a new instance?
Splunkers,
Having trouble getting the kvstore to indicate that it is ready on any of the three members of the shcluster running Splunk 6.4.0 on CentOS 6.7.
There are 5 existing KV Stores and none of them can be accessed.
The trouble began when an overzealous admin accidentally deleted directories in one of our shcluster members while it was running.
Attempted to use CLI commands to remove the corrupted member from one of the other members, which seemed to work.
Then killed the Splunk-related zombie processes left behind; because the pid file and bin directory had been deleted, the corrupted instance's CLI could not be used.
Deleted the corrupted /opt/splunk instance, then un-tarred another instance of Splunk 6.4.0 into a new /opt/splunk to replace it.
Followed the Splunk docs for "init" and "add new" to the shcluster. Once the new instance started, issued CLI commands to make sure it was properly configured.
The shcluster status is good and searches are possible from any shcluster member.
Attempting a simple search such as `| inputlookup` yields errors indicating that the KV Store was not properly initialized.
If we had backups of the activity store data from the original three-member shcluster, a clean restart would make sense, i.e. rebuild all the stores from scratch. We have files in folders contained in two of the three members, but cannot access them via Splunk to create backup CSV files. We are hoping someone can guide us through getting the activity store initialized.
The mongod.log on the two remaining original shcluster members contain events such as:
Error in heartbeat request to ------------ InvalidReplicaSetConfig Our replica set configuration is invalid or does not include us
We have tried most of the non-destructive suggestions provided in Answers and Google searches.
THX
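For diagnosis, the CLI can report each member's KV Store replication state (run it on every member); on 6.4 this should show whether the replacement member ever joined the replica set:

```
splunk show kvstore-status
```

If the replica set configuration is genuinely invalid on the surviving members, a resync or rebuild is usually needed; treat any `clean` operation as destructive and take file-level backups of `$SPLUNK_DB/kvstore` on the two intact members first.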
↧
What is the status of Splunk for Change Management?
What's the status of Splunk for Change Management? I don't see any app for that on Splunkbase.
Thanks!
↧
How to create a time chart of HTTP error codes as a percentage of a total using rangemap, excluding httpcode=200 from the chart?
I am trying to display the percentage of a rangemap as related to the total events while excluding the httpcode=200 from the chart.
I don't have to use a rangemap, but it would help to make the chart a little cleaner. Basically, I want to do this, without the 200's, in a timechart.
![alt text][1]
So far, what I have is this.
index=application (host=TTAPPPEGACC*) sourcetype="apollo:prod:tomcat_access"
| rangemap field=httpcode 200=200-299,300=300-399,400=400-499,500=500-599
| bucket _time span=1m
| eventstats count(httpcode) as Total by _time
[1]: /storage/temp/165220-capture.jpg
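One way to finish this without `rangemap` (the index, host, and field names are taken from the question): bucket by minute, compute the per-minute total before filtering, drop the 2xx class, then chart each remaining class as a percentage of the total:

```
index=application (host=TTAPPPEGACC*) sourcetype="apollo:prod:tomcat_access"
| eval class=floor(httpcode/100)*100
| bucket _time span=1m
| eventstats count as Total by _time
| search class!=200
| stats count as Count, latest(Total) as Total by _time, class
| eval percent=round(Count * 100 / Total, 2)
| xyseries _time class percent
```

Rendered as a line or column chart, this shows the 300/400/500 classes as percentages of all requests, with the 200s still included in the denominator.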
↧