For example, I have the field "received_files" with 3 values: 1, 2, and 3.
I already ran "convert num(received_files)", as the values are always numeric. I want to take those 3 numbers (1, 2, and 3), multiply each by its count, and then add them all up (sum).
How do I do this?
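Something like this is what I'm picturing, if a stats-then-eval approach makes sense here (untested guesswork):

| stats count by received_files
| eval product = received_files * count
| stats sum(product) AS total

With one occurrence each of 1, 2, and 3, that should give 1*1 + 2*1 + 3*1 = 6.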
I am attempting to ingest data from a remote host (**Linux**) to my Search Head/Indexer host (**Windows**) via Splunk Web. I am unable to install a Splunk instance on the remote host, so a forwarder is not a feasible solution. I have seen it suggested in other Splunk Answers threads that one can mount the filesystem of the remote server, although it is not ideal. I mounted the remote server and can successfully ingest the data using the **Add Data > Upload** option, but that same data is not visible if I attempt to use **Add Data > Monitor > Files & Directories** for real-time ingestion. Why is the data only visible for *Upload* and not for real-time *Monitor*? What changes should I implement to enable this?
Splunk version: 7.0.3
Directory to ingest: mapped to a network drive (S:\)
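For reference, the monitor stanza I would expect to end up with looks roughly like this (the path, index, and sourcetype are just examples):

[monitor://S:\logs\remote]
disabled = false
index = main
sourcetype = remote_linux_logs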
Splunk_TA_Windows renames the sourcetypes for the Windows logs; WinEventLog:Security, for example, is renamed to wineventlog.
This breaks Security Essentials searches, such as:
| metasearch earliest=-2h latest=now sourcetype="*WinEventLog:Security" index=* | head 100 | stats count
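For comparison, I would expect a variant like this to match the renamed data (assuming the rename really is just wineventlog):

| metasearch earliest=-2h latest=now sourcetype=wineventlog index=* | head 100 | stats count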
Is a fix planned, or should I remove Splunk_TA_Windows in order to use Security Essentials?
I'm trying to use a search that looks like:
index= sourcetype=
| eval site=
| lookup host_and_site_coords site OUTPUT host AS siteHost
| search host=siteHost
That first 'eval' for 'site' is there because the value is normally passed in as a token; it's not necessary in the normal search, just needed for troubleshooting this.
My problem is that everything works as expected up to the final 'search' command. That is, the lookup works and creates siteHost as I'd expect. The search command doesn't seem to get the value, however, resulting in what looks like ' search host="" '.
I know that with subsearches you can't pass in an eval'd value, but I didn't think that applied here.
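One thing I've considered is swapping the final search command for a where, since where compares two fields rather than treating the right-hand side as a literal string:

| where host=siteHost

but I'd still like to understand why the search form fails.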
Or maybe I'm missing something really obvious...
Thanks
Hi guys.
In my Splunk cluster I have distributed search indexers, and on one of them I see the message below. How can I fix this?
Unable to distribute to peer named splunk-idx31 at uri https://splunk-idx31.xxx:8089 because replication was unsuccessful. replicationStatus Failed failure info: extra info missing Please verify connectivity to the search peer, that the search peer is up, and an adequate level of system resources are available. See the Troubleshooting Manual for more information.
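As a basic first check, I can reach the peer's management port from the search head (hostname taken from the message above), which should return the splunkd services banner if connectivity is fine:

curl -k https://splunk-idx31.xxx:8089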
Hello,
I am trying to build a role that would allow users to access two indexes (index1 and index2). index1 has a field called `parameter`, and I want the role's search filter to restrict searches to `parameter=value`. But when I do this (see code below), I no longer have access to index2. How can I avoid this?
Thanks !
[role_test]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = user
srchIndexesAllowed = index1, index2
srchIndexesDefault = index1
srchFilter = parameter=value
srchMaxTime = 0
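One idea I've had, but not yet verified, is to scope the filter per index inside srchFilter itself, so the parameter restriction only applies to index1:

srchFilter = (index=index2) OR (index=index1 parameter=value)

I'm not sure whether this is the intended use of srchFilter, though.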
Hi Splunker,
Originally I have an output like this as a raw event in Splunk:
2018-07-17 14:56:08 MIR="TUE, 17-JUL-2018", D_0="-", D_1="2", D_2="4", D_3="-", D_4="-", D_5="-", D_6="2", D_7="-", D_8="-", D_9="2", D_10="-", D_11="-", D_12="-", D_13="-", D_14="-", D_15="-", D_16="-", D_17="-", D_18="-", D_19="-", D_20="-", D_21="-", D_22="-", D_23="-", TOTAL="10"
where D_0 is 00:00, D_1 is 01:00, D_2 is 02:00, and so on up to D_23, which is 23:00.
I would like to change it to the format below:
TIME VALUE
2018-07-17 00:00 -
2018-07-17 01:00 2
2018-07-17 02:00 4
2018-07-17 03:00 -
2018-07-17 04:00 -
Similarly, it goes on until 23:00.
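The closest I've come to a sketch is something with untable, assuming D_0 through D_23 are already extracted as fields (untested):

| table _time D_*
| untable _time field value
| eval hour = tonumber(replace(field, "D_", ""))
| eval TIME = strftime(relative_time(_time, "@d") + hour * 3600, "%Y-%m-%d %H:%M")
| sort hour
| table TIME value

but I'm not sure this is the right direction.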
Thanks in advance for looking into it
I need to update the file with the following content, which will configure messages received from the OCP source to split correctly:
[source::OCP]
SHOULD_LINEMERGE = false
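In case it matters, my understanding is that SHOULD_LINEMERGE = false is usually paired with a LINE_BREAKER, so the fuller version might be (the regex shown is just the default):

[source::OCP]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)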
I'm ingesting logs that have both event timestamps and timestamps within the contents of the logs. My props.conf contains BREAK_ONLY_BEFORE = <[A-Z], but it's breaking on CONTENTDATE as well. The events do not exceed the 10K default maximum event character length. Does anyone have any suggestions? Sample event:
<V ts="2018-07-16 22:14:28" >
...
...
CONTENTDATE=2017-11-30 10:48:11
...
...
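One fix I'm considering is anchoring the pattern to the actual event opener instead of any < followed by a capital letter, something like (untested):

BREAK_ONLY_BEFORE = <V\s+ts=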
index=applogsprd "/www.financialengines.com/api/v1/"
| regex status="^[0-9]{3}$"
| chart count by api, status
| addtotals
| where '200' > 10
I have created the alert and want it to trigger when the 200 count is greater than 10; when it triggers, an alert should be sent to the HipChat room with all the results. I am using $result$, but it only displays the first result. How can I display all the results?
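The only workaround I can think of is collapsing all the rows into a single field so that the first-result token carries everything, roughly (untested):

| where '200' > 10
| eval row = api . ": " . '200'
| stats list(row) AS rows
| eval all_results = mvjoin(rows, "; ")

and then referencing $result.all_results$ in the message. Is there a cleaner way?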
A user has a dashboard made of multiple searches all based on the last 24 hours of a single very large index.
Some panels should show stats based on the full 24 hours; others should only show stats based on the last 5 or 15 minutes.
To save resources and speed it up, I'd like to run a single search that returns events for the past 24 hours, then run a sub-search on that result to retrieve the most recent 5 minutes, or 15, or....
How can I do that?
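What I have in mind is something like a base search with post-process panels, sketched below (index and field names are placeholders, and I'm not sure post-processing is the right way to slice by time):

<search id="base24">
  <query>index=big_index | fields _time host status</query>
  <earliest>-24h</earliest>
  <latest>now</latest>
</search>
...
<search base="base24">
  <query>| where _time >= relative_time(now(), "-5m") | stats count by status</query>
</search>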
Thanks,
-Rob
Where I work, we use Nessus to scan for vulnerabilities weekly. I'm in the process of building a dashboard and making it all pretty for management. What I want to be able to do is compare the last two scans and get the difference in total vulnerabilities between this week's scan and last week's.
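The shape I'm imagining is something like this, assuming one event per finding and weekly scans (the index name and span are guesses):

index=nessus earliest=-14d@d
| timechart span=1w count AS total_vulns
| delta total_vulns AS change_from_last_week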
Beyond that I don't have anything to provide, and I couldn't find anything else on the Googles.
Thanks!
I would like to convert an event similar to the one below into a single event when sending it out to an external syslog server:
****************************************
time: 20180717112345
dn: uid=123,ou=employees,ou=ddd,ou=ddd,o=ddd,dc=ddd,dc=ddd
changetype: modify
replace: userPassword
userPassword: #####
-
replace: modifiersName
modifiersName: uid=ddd,ou=ddd,ou=ddd,ou=ddd,o=ddd,dc=ddd,
dc=ddd
-
replace: modifyTimestamp
modifyTimestamp: 20180717112345Z
-
replace: accountUnlockTime
-
replace: passwordRetryCount
passwordRetryCount: 0
-
replace: retryCountResetTime
-
replace: pwdFailureTime
-
replace: pwdAccountLockedTime
-
******************************
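One approach I've seen mentioned is flattening the newlines at parse time with a SEDCMD in props.conf, something like this (the sourcetype name is hypothetical):

[ldif_changes]
SEDCMD-flatten_newlines = s/[\r\n]+/ /g

though I'm not sure how that interacts with the syslog output picking up the event.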
I am getting the following error:
07-17-2018 10:58:21.699 -0700 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py"" HTTP Request error: 403 Client Error: Forbidden.
Need help to resolve this issue.
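To narrow it down, I plan to call the same endpoint outside of Splunk and compare the responses (the URL and auth header are placeholders):

curl -v -H "Authorization: Bearer <token>" https://<rest-endpoint>/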
Hi guys
Can you help me with this?
I have this extra search in the XML, just for evaluating tokens.
I am trying this, but it's not working (the time range is -15m to now, and the token condition is roughly if('result.Field2' == "*", "*", 'result.Field4')):
|inputlookup abc.csv | search Field1="$token1$" Field2="$token2$" Field3="$token3$"
When run with the tokens passed in, the above search lists out values for Field1, Field2, Field3, and Field4.
The tokens come from inputs in the XML.
Requirement (rough sketch below):
1) If (Field2 or token2) and (Field3 or token3) are not *, set token4 to the Field4 value.
2) If Field2 or token2 is *, set token4 to *.
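Here is the rough sketch of how I imagine the <done> handler could set token4, based on my possibly wrong understanding of <condition> and $result.*$ tokens (the | head 1 is my addition):

<search>
  <query>| inputlookup abc.csv | search Field1="$token1$" Field2="$token2$" Field3="$token3$" | head 1</query>
  <done>
    <condition match="&quot;$token2$&quot; == &quot;*&quot;">
      <set token="token4">*</set>
    </condition>
    <condition>
      <set token="token4">$result.Field4$</set>
    </condition>
  </done>
</search>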
Thanks
I have a JSON log file that I'm attempting to ingest (Splunk v6.6.5). The events parse correctly, but the epoch time isn't being used as the event timestamp. Splunk is using the file modified date for the event timestamp.
Here's a sample record and my props config (which lives on the Indexers):
{"time":1531405028,"name":"PSIKD01.BOOT","appl":"@ABCVDIF","server":"SERVER1","user":"LSRVID","HandleCount":792,"KernelModeTime":66875000,"OtherOperationCount":18498,"OtherTransferCount":630163,"PageFaults":320216,"PageFileUsage":1349924,"PrivatePageCount":1382322176,"ReadOperationCount":36716,"ReadTransferCount":38844376,"ThreadCount":34,"UserModeTime":363281250,"VirtualSize":2207380942848,"WorkingSetSize":672907264,"WriteOperationCount":205,"WriteTransferCount":63855}
[apm_json]
KV_MODE = none
INDEXED_EXTRACTIONS = json
TIME_PREFIX = "time":
TIME_FORMAT = %s
SHOULD_LINEMERGE = false
TRUNCATE = 100000
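A variant I'm tempted to try, based on my understanding that structured inputs use TIMESTAMP_FIELDS rather than TIME_PREFIX, and that INDEXED_EXTRACTIONS props need to live where the file is actually read (possibly the forwarder rather than the indexers):

[apm_json]
KV_MODE = none
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = time
TIME_FORMAT = %s
SHOULD_LINEMERGE = false
TRUNCATE = 100000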
Any help would be appreciated. Thanks!
Hello,
I am looking for the equivalent of performing SQL like such:
SELECT transaction_id, vendor
FROM orders
WHERE transaction_id IN (SELECT transaction_id FROM events)
I am aware there is a way to do this through a lookup, but I don't think it would be a good fit in this situation because new transaction_ids are constantly generated, several thousand of them within a small timeframe, and my goal is to create a timechart report.
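For concreteness, the shape I've been trying looks like this (the index names are placeholders):

index=orders [ search index=events | dedup transaction_id | fields transaction_id ]
| table transaction_id vendor

though I'm worried about subsearch result limits given the volume.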
As of right now I can construct a list of transaction_ids for orders in one search query and a list of transaction_ids for events in another, but my ultimate goal is to return order logs whose transaction_ids are shared with the events log. Any help is greatly appreciated, thanks!
I need to add to the PATH environment variable used by Splunk Universal Forwarder.
I have tried editing splunk-launch.conf, and I have checked the environment variables using `splunk envvars`, but this doesn't seem to set the path.
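What I tried in $SPLUNK_HOME/etc/splunk-launch.conf looks like this (the directory is just an example):

PATH = /opt/custom/bin:/usr/bin:/bin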
Is it possible to set the PATH variable that Splunk uses?
(Note: I only want to set up this specialized PATH for the splunkd instance. I do not want it to be system-wide.)
I'm trying to debug why a saved search alert we have started skipping recently. Splunk says that there is another instance of the alert running, and it cannot kick off a second run of the search until that original run finishes. Or, more specifically, the error says: "The maximum number of concurrent running jobs for this historical scheduled search on this instance has been reached".
The documentation (see below) says to either increase the quota for a saved search or delete the job from the job queue. Unfortunately, neither works for me: I don't want to increase the quota for saved searches (I only want one instance of the search running at a time), and I don't see any instance of the search in the job queue, so I can't kill it.
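I also tried looking for the stuck job via REST, along these lines (the label value is a placeholder, and I'm not certain these are the right fields):

| rest /services/search/jobs
| search isDone=0 label="my_alert_name"
| table sid label dispatchState runDuration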
I want to kill the run that is stuck so that future alerts can run without hitting this error. Does anyone know why it might not show up in the job queue? What steps can I take to fix this? Thanks!
I've already reviewed the following documentation:
* https://answers.splunk.com/answers/54674/how-to-increase-the-maximum-number-of-concurrent-historical-searches.html
* http://wiki.splunk.com/Community:TroubleshootingSearchQuotas
How do I set user and password in inputs.conf?
This is my file:
[monitor://\\myServer\myFolder]
disabled = false
index = myIndex
sourcetype = mySourceType
crcSalt =
I already tried adding "user" and "password", but it always returns the same error in the log:
FilesystemChangeWatcher- error getting attributes of path - "\\myServer\myFolder": The user name or password is incorrect.
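The only workaround I've found hints of is running the Splunk service itself as an account that can read the share, e.g. (the service name and account are placeholders):

sc config SplunkForwarder obj= "DOMAIN\svc_splunk" password= "secret"

Is there a supported way to do this directly in inputs.conf?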