Channel: Questions in topic: "splunk-enterprise"

Why do I get error message "Unknown search command" for my custom search command?

Hello All, I am using Splunk Enterprise 6.6.3 on Windows 10 and am trying to get a custom search command to work. I've followed this manual http://docs.splunk.com/Documentation/Splunk/6.6.3/Search/Writeasearchcommand along with various answer threads, but I cannot get it to work. I have a script that calculates business hours between two timestamps. My Splunk setup is as follows:

- The custom Python script totalbusinesshours.py is located in `$SPLUNK_HOME/etc/apps/<app_name>/bin/`
- commands.conf is located in `$SPLUNK_HOME/etc/apps/<app_name>/local/`, encoded as `UTF-8-BOM` (like the other conf files), and contains the following stanza:

```
[totalbusinesshours]
filename = totalbusinesshours.py
```

After restarting the server I run the following search:

```
* | totalbusinesshours StartTime EndTime
```

which produces this error: `Search Factory: Unknown search command 'totalbusinesshours'.`

I've also tried the following search:

```
* | script totalbusinesshours StartTime EndTime
```

but this produces a different error: `Error in 'script' command: The external search command 'totalbusinesshours' does not exist in commands.conf.`

According to the documentation everything is set up correctly, but nothing works. Am I missing something? Maybe some flag somewhere to enable running external search commands? Any help would be greatly appreciated!

Thank you and best regards,
Andrew
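For reference, here is a minimal sketch of the layout Splunk expects for a custom search command (`<app_name>` is a placeholder for the actual app directory, and the extra attributes are standard commands.conf settings rather than anything taken from the setup above):

```
$SPLUNK_HOME/etc/apps/<app_name>/bin/totalbusinesshours.py
$SPLUNK_HOME/etc/apps/<app_name>/local/commands.conf

# commands.conf
[totalbusinesshours]
filename = totalbusinesshours.py
type = python
```

The commands.conf file must live in the same app as the `bin/` directory, and that app must be visible to the user running the search. It may also be worth re-saving commands.conf as plain UTF-8 without a byte-order mark; Splunk's conf parser has been reported to skip the first stanza of a file that begins with a BOM.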

How to enrich the "index" field in any data model?

I have a data model in which I want to enrich the "index" field. I'm very new to data models and am reading the docs to gain some knowledge. Any sort of help or reference will be appreciated. Thanks & Regards.

Regular Expression to split a String into multiple strings based on a delimiter.

In my search, I have a field that contains a string like the one below. I want to split this string into multiple strings based on the delimiter "#@#@". Please help me write a correct regular expression for this.

```
12/23/2017 12:37:06 PM#@#@Copying to removable media#@#@DEFAULT#@#@RUR90M4417#@#@File Copy#@#@20_xiamen wingtas_wk2017381_ci.pdf#@#@2.7314186096191406#@#@pdf#@#@c:\users\ichemiakin001\desktop\???????????? ???????? ?? ????????????? ?????? ???????? ??????????? test cost of sales transactions - trade entity\??????? ???????\??\#@#@c:\users\ichemiakin001\desktop\???????????? ???????? ?? ????????????? ?????? ???????? ??????????? test cost of sales transactions - trade entity\??????? ???????\??\20_xiamen wingtas_wk2017381_ci.pdf#@#@g:\assurance\clients\mm\sportmaster group\2017\sportmaster ifrs audit\office file\???????????? ???????? ?? ????????????? ?????? ???????? ??????????? test cost of sales transactions - trade entity\??????? ???????\??\#@#@g:\assurance\clients\mm\sportmaster group\2017\sportmaster ifrs audit\office file\???????????? ???????? ?? ????????????? ?????? ???????? ??????????? test cost of sales transactions - trade entity\??????? ???????\??\#@#@False#@#@False#@#@explorer.exe#@#@Operation monitored, File not saved
```

I have tried the regex below, but it's not working properly.

```
| rex field=allRequiredFields "^(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)#@#@(?.*)"
```
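Since the delimiter is a fixed string rather than a pattern, eval's split() with mvindex() may be simpler than a regex. A sketch, assuming the source field is `allRequiredFields` and using made-up field names for the first few positions (the remaining positions follow the same pattern):

```
| eval parts=split(allRequiredFields, "#@#@")
| eval event_time=mvindex(parts, 0),
       activity=mvindex(parts, 1),
       policy=mvindex(parts, 2),
       device=mvindex(parts, 3),
       operation=mvindex(parts, 4),
       file_name=mvindex(parts, 5)
| table event_time activity policy device operation file_name
```

If a regex is still preferred, replacing each greedy `(?.*)` with a named, non-greedy group such as `(?<field1>.*?)` keeps the groups from swallowing the delimiters.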

How to use stats range(_time) and pass the results to timechart

I have data where every line has a timestamp and a CorrelationID. I can find the time elapsed for each correlation ID using the following query:

```
index=yyy sourcetype=mysource CorrelationID=*
| stats range(_time) as timeperCID by CorrelationID, date_hour
| stats count avg(timeperCID) as ATC by date_hour
| sort num(date_hour)
```

I want to use timechart and timewrap on this data to eventually get a week-over-week comparison of the output. I tried adding a timechart at the end, but it does not return any results.

1)
```
index=yyy sourcetype=mysource CorrelationID=*
| stats range(_time) as timeperCID by CorrelationID, date_hour
| stats count avg(timeperCID) as ATC by date_hour
| sort num(date_hour)
| timechart values(ATC)
```

2)
```
index=yyy sourcetype=mysource CorrelationID=*
| stats range(_time) as timeperCID by CorrelationID, date_hour
| timechart count avg(timeperCID) as ATC
```

I've also tried to add a _time value, or recreate it using strptime before the timechart, with no luck. Please help.
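timechart needs a `_time` field, and `stats ... by date_hour` discards it, which is why the charts come back empty. A sketch that keeps `_time` by bucketing it instead of grouping by date_hour (index and field names taken from the question; the one-hour span is an assumption):

```
index=yyy sourcetype=mysource CorrelationID=*
| eval hour_bucket=floor(_time/3600)*3600
| stats range(_time) as timeperCID by CorrelationID hour_bucket
| eval _time=hour_bucket
| timechart span=1h avg(timeperCID) as ATC
| timewrap 1week
```

The range() is still computed over the raw event times within each hour and CorrelationID, and setting `_time` back to the bucket value gives timechart (and then timewrap) something to chart over for the week-over-week view.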

Splunk takes the wrong timestamp from the log

I have a log that contains multiple timestamps like the line below, but not all lines have such a date entry:

```
NOTE: 24DEC17:09:05:53.121 start executig macro main() syscc=0
```

The log creation date is 2017-12-24 9:05. Some of the lines in the log are indexed with today's date (Splunk seems to take the creation date of the file), and some are indexed as if they were yesterday, and at 17:09 instead of 9:05 a.m.:

```
12/23/17 5:09:05.570 PM
```

How can I make sure that Splunk takes the correct date?
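One way to pin the timestamp extraction down is an explicit time format in props.conf on the indexer or heavy forwarder. A sketch, assuming a placeholder sourcetype name `sas_log` (substitute the real sourcetype) and that the timestamped lines always start with `NOTE:` as in the sample:

```
# props.conf
[sas_log]
TIME_PREFIX = NOTE:\s+
TIME_FORMAT = %d%b%y:%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Lines that carry no timestamp of their own are assigned the most recently seen timestamp in the file, so with the format locked down they should follow the preceding NOTE line rather than the file's creation date.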

Recovering an Index cluster that we have no access to!?!?!?

Good morning. I am in a situation where I have no CLI (Linux) access to my index cluster. I do have the Splunk secret and have been able to introduce new index peer nodes to the cluster to [hopefully] keep the data. My plan is this:

1. Let the new indexers sync.
2. Shut down the inaccessible index peer nodes one site at a time and delete them (this is in AWS). This will hopefully make sure that everything is replicated.
3. Shut down the cluster master.
4. Rebuild the master.
5. Reconnect the index peers to the new master.

I do plan on changing the pass4SymKey. The documentation states that I need to back up the server.conf file. Do I really need to do this if I want to rebuild the master? Please share any thoughts/ideas that may help me out, I am in a tough spot. Thanks!
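For reference, the clustering settings that a rebuilt master and the reconnected peers need to agree on live in server.conf; a minimal sketch with placeholder values (the replication/search factors, hostname, and key below are assumptions, not taken from the environment above):

```
# server.conf on the new cluster master
[clustering]
mode = master
replication_factor = 3
search_factor = 2
pass4SymKey = <new_key>

# server.conf on each index peer
[clustering]
mode = slave
master_uri = https://new-master.example.com:8089
pass4SymKey = <new_key>

[replication_port://9887]
```

Backing up the old master's server.conf mainly preserves these values so the rebuilt master matches what the peers expect; if the plan is to re-point every peer at a freshly configured master with a new pass4SymKey anyway, the backup is more of a safety net than a hard requirement.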

F5-ASM Pre-processing Messages

Hello, I'm trying to pre-process my F5-ASM syslog messages, but no luck so far. What I'm trying to achieve is line breaking, but the message comes in as a single string and I don't see any special line-break characters such as \r\n or CRLF. (Screenshot: /storage/temp/225592-logasm1.png)
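If the incoming data does contain CR/LF sequences and the goal is one event per line, explicit line-breaking settings in props.conf are the usual lever. A sketch, with `f5:asm:syslog` standing in for whatever sourcetype the data actually uses:

```
# props.conf
[f5:asm:syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
```

If the message is genuinely a single syslog line with no embedded newline characters at all (as the screenshot suggests), LINE_BREAKER has nothing to split on, and the breaking would instead have to key off some delimiter that is actually present in the text.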

How to search anything from yesterday at 8 pm (20:00)?

How to search anything from yesterday at 8 pm (20:00)?
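A sketch using Splunk's relative time modifiers (replace the search terms with whatever is being looked for):

```
index=* your_search_terms earliest=-1d@d+20h latest=@d
```

`-1d@d` snaps to midnight at the start of yesterday and `+20h` moves that to 20:00, while `latest=@d` stops at midnight today; dropping `latest` (or using `latest=now`) instead returns everything from yesterday 8 pm up to the present.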

Calculated Data Model Field Value Inaccessible

I created a data model called "Process_Creation" with a calculated field, "command_line_length", that represents the length of a specific string in the modeled events. I can display the correct values for each event using a table command with "Process_Creation.command_line_length"; however, that seems to be all I can do with the data model field. When I attempt to compare the value to any numerical value, I get zero results no matter the comparison type. The calculated field is stored as a number and the values are correct, so I suspect the "where" command is not referencing the actual stored value. Any ideas?

```
| datamodel Process_Monitoring Process_Creation search
| eval threshold = [ | search index=summary "search_name=pm_command_line_length_stats" earliest=-90d@d latest=-1d@d
    | stats avg(command_line_length) AS command_line_average stdev(command_line_length) AS command_line_stdev
    | eval threshold = round(command_line_average + ( command_line_stdev * 6 ))
    | return $threshold ]
| where Process_Creation.command_line_length > threshold
```
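In `where` and `eval`, a bare field name containing a dot is parsed as an expression (the dot is the concatenation operator), so `Process_Creation.command_line_length` ends up evaluating to null rather than the stored value. Wrapping it in single quotes, or renaming it first, should make the comparison work. A minimal sketch (the fixed threshold of 500 is just a stand-in for the subsearch above):

```
| datamodel Process_Monitoring Process_Creation search
| rename Process_Creation.command_line_length AS command_line_length
| eval threshold = 500
| where command_line_length > threshold
```

Equivalently, `| where 'Process_Creation.command_line_length' > threshold` keeps the original field name, since single quotes tell eval/where to treat the token as a field reference.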

Compare data in different sourcetypes with no common field

My situation is as follows:

- I have 2 different sourcetypes, "logs" and "range".
- "logs" contains log events that have a field named "num".
- "range" contains 2 different fields named "lowerlimit" and "upperlimit".
- I need to create a search that takes the "num" field from sourcetype "logs", compares it against sourcetype "range", and displays the lowerlimit and upperlimit for which num>=lowerlimit AND num<=upperlimit.

I created a main search to get "lowerlimit" and "upperlimit" and a subsearch to get "num", but after that I do not know how to perform the comparison, since there is no common field between the two searches. Thank you, and looking forward to a solution.
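One sketch is to manufacture a common field and join on it, which effectively produces a cross product that can then be filtered (this assumes the number of range rows is small, since subsearch and join limits apply):

```
sourcetype=logs
| eval joiner=1
| join type=inner max=0 joiner
    [ search sourcetype=range
      | eval joiner=1
      | fields joiner lowerlimit upperlimit ]
| where num>=lowerlimit AND num<=upperlimit
| table num lowerlimit upperlimit
```

`max=0` lets each log event pair with every range row, so an event is kept once per range it falls into.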

How to make a continuous date search query?

Hi everyone, I just want to ask if you know how to make this search query work continuously:

```
| search Month>=09 AND Year>=2017
```

The months should always be filtered starting from September, as it is the start of our fiscal year. However, the data changes monthly, so the filter no longer returns the right months once the year changes to 2018.
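A sketch that derives the current fiscal-year start from the wall clock and compares against it, assuming `Month` and `Year` are numeric (or can be converted with tonumber):

```
... base search ...
| eval fy_start_year = if(tonumber(strftime(now(), "%m")) >= 9,
                          tonumber(strftime(now(), "%Y")),
                          tonumber(strftime(now(), "%Y")) - 1)
| where (tonumber(Year)*12 + tonumber(Month)) >= (fy_start_year*12 + 9)
```

In December 2017 this keeps everything from September 2017 onward; in January 2018 `fy_start_year` evaluates to 2017, so the same window (September 2017 through the current month) is kept without editing the search.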

How to get NetApp logs to a Splunk instance WITHOUT using the app "Splunk App for NetApp Data ONTAP"

I'm trying to get the logs from NetApp storage without using the app. Can we get that data without the app? Can someone help me with this problem?

Run a command while Splunk is running in a container

Dear Splunkers, I am a beginner in Splunk administration and I am struggling to run commands on the command line, since I am running Splunk in Docker. Your help is highly appreciated. Thank you.

Successful brute force logins

Is there a better way to check for successful brute force logins?

Raw events (this is a Microsoft Exchange web access log): the first field is the source IP, the second is the login name, the third is the date, the fourth is the time, ..., the eighth is the response length, the ninth is the status code, the tenth is the HTTP method, and the eleventh is the access link. If the response length is greater than 1000 (usually 1989), it means a successful login; if the response length is less than 1000 (usually 613), it means a failed login.

```
1.1.1.1, annie, 12/26/2017, 11:15:44, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Baron, 12/26/2017, 11:15:44, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Bill, 12/26/2017, 11:15:44, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Christ, 12/26/2017, 11:15:44, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Bob, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 1989, 302, POST, /owa/auth.owa
1.1.1.1, Burke, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Burton, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Barton, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Beacher, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Beck, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, annie, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Benson, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Curitis, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
1.1.1.1, Corey, 12/26/2017, 11:15:43, W3SVC1, TestExchangeSvr, 10.10.20.10, 613, 302, POST, /owa/auth.owa
```

I created an alert to monitor brute force attempts with the following search:

```
index=exchange sourcetype=exchange_web_log "/owa/auth.owa"
| stats count, values(_time) as _time, values(user) as user by sip
| search count>=25
| table sip _time user count
```

It runs once every two minutes with a search span of -2m@m to @m, and it works properly.

Now I want to detect *successful* brute force logins, and my search is as follows:

```
index=exchange sourcetype=exchange_web_log "/owa/auth.owa" length>1000
    [search index=exchange sourcetype=exchange_web_log "/owa/auth.owa" length<1000
     | stats count by sip
     | search count>=25
     | table sip]
| table _time sip user
```

It also runs once every two minutes with a search span of -2m@m to @m, but it sometimes detects successful brute force logins and sometimes misses them, so I think this search is a failure. Is there a better way to check for successful brute force logins? Can these two searches be merged? I would like a search that tells me a brute-force attack has taken place and also which account was successfully logged into via the brute force attack.
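The two searches can be merged into a single stats pass by classifying each attempt first; a sketch using the field names from the question (the 25-failure threshold is kept from the original alert):

```
index=exchange sourcetype=exchange_web_log "/owa/auth.owa"
| eval outcome=if(length>1000, "success", "failure")
| stats count(eval(outcome="failure")) AS failed_attempts,
        values(eval(if(outcome="success", user, null()))) AS users_logged_in
        by sip
| where failed_attempts>=25 AND mvcount(users_logged_in)>0
| table sip failed_attempts users_logged_in
```

A source IP only shows up when it has at least 25 failed attempts and at least one successful login in the same window, and `users_logged_in` lists which accounts were successfully accessed.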

Transaction by field not grouping correctly when the amount of data is huge

Hi there,

I have an index storing information about network connections, which receives information about such connections every five (5) minutes. Each event has an identifier (`id`), which states the connection that the event belongs to. I need to group the events by `id` so I can compute traffic differences and other stats per connection.

When I run the command for a single device (by filtering on `src` before the transaction command), all connections for the given device are properly extracted. This is the command:

```
index=xxx event_type=detailed_connections earliest=11/24/2017:13:00:0 latest=11/24/2017:15:00:0 src=/P1zWkJszeaoJTZBVDI8ow
| transaction id mvlist=true keepevicted=true maxspan=-1 maxpause=-1 maxevents=-1
| table src id bytes_in diff_bytes_in eventcount closed_txn
```

And these are the results. Focusing, for instance, on the connection with `id = 49529754583063`, we can see that the transaction is composed of 13 events with increasing traffic (`bytes_in`). This is perfectly fine. (Screenshot: /storage/temp/225598-transaction-single-device.png)

However, when running the same command BUT WITHOUT SPECIFYING ANY DEVICE (no `src` filtering before the transaction command):

```
index=nexthink event_type=detailed_connections earliest=11/24/2017:13:00:0 latest=11/24/2017:15:00:0
| transaction id mvlist=true keepevicted=true maxspan=-1 maxpause=-1 maxevents=-1
| table src id bytes_in diff_bytes_in eventcount closed_txn
```

I realized that some events are not grouped as they should be. Focusing on the same connection as before, I can see several different transactions with the same `id` but with `eventcount` equal to 1 (the screenshot shows only some of them, not all). (Screenshot: /storage/temp/225599-transaction-all-devices.png)

As a result, I cannot compute trustworthy stats for all devices, and running the same command over and over again, device by device, is not acceptable. As you can see, I've removed the limits for `maxspan`, `maxevents`, and `maxpause`, and I'm keeping the evicted transactions. What am I doing wrong? Is this actually a bug (I don't think so)? How can I get transactions properly grouped when working with thousands of events?

Thanks in advance for the support!

Regards,
Leo
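transaction keeps only a bounded number of open transactions in memory (the `maxopentxn` and `maxopenevents` settings under the `[transactions]` stanza in limits.conf); once those limits are hit, open groups get evicted, which is consistent with seeing many one-event "transactions" for the same `id` only when the search covers every device. If the goal is per-connection stats rather than true transaction semantics, stats is not subject to those limits. A sketch along those lines (field names taken from the question):

```
index=nexthink event_type=detailed_connections earliest=11/24/2017:13:00:0 latest=11/24/2017:15:00:0
| sort 0 _time
| stats list(bytes_in) AS bytes_in, list(diff_bytes_in) AS diff_bytes_in,
        count AS eventcount, values(src) AS src
        by id
```

`sort 0 _time` keeps the list() values in chronological order; raising the limits.conf values is the alternative if transaction itself is required.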

Can someone please help me write a regular expression for field extractions?

Splunk is collecting logs from a security solution, DefensePro, and I want to extract fields.

Log samples:

```
Dec 26 15:59:00 10.18.18.2 DefensePro: 26-12-2017 15:59:00 WARNING 105 Anomalies "TTL Less Than or Equal to 1" UDP 59.150.19.252 1985 224.0.0.2 1985 13 Regular "Packet Anomalies" sampled 1 0 N/A 0 N/A medium forward
Dec 26 15:38:38 10.18.18.1 DefensePro: 26-12-2017 15:38:38 WARNING 113 Anomalies "Invalid TCP Flags" TCP 220.77.181.118 1497 220.64.16.210 7795 13 Regular "Packet Anomalies" sampled 1 11 N/A 0 N/A low drop
Dec 26 14:37:21 172.21.160.236 DefensePro: 26-12-2017 14:37:09 WARNING 125 Anomalies "L4 Source or Dest Port Zero" TCP 84.15.56.252 0 203.239.57.127 23 13 Regular "Packet Anomalies" sampled 1 0 N/A 0 N/A low drop
Dec 26 14:36:10 10.18.18.2 DefensePro: 26-12-2017 14:36:10 WARNING 104 Anomalies "Invalid IP Header or Total Length" TCP 180.135.189.234 0 220.64.16.250 0 13 Regular "Packet Anomalies" sampled 1 0 N/A 0 N/A low drop
```

The fields I want to select (screenshot: /storage/temp/226590-splunk.jpg), in order:

1. Date : 26-12-2017 16:20:58
2. Severity : Warning
3. Category : Anomalies
4. AttackName : "TTL Less Than or Equal to 1"
5. Protocol : UDP
6. SrcIP : 59.150.19.252
7. DstIP : 224.0.0.2
8. DstPort : 1985
9. PolicyName : "Packet Anomalies"
10. AttackStatus : sampled
11. Risk : medium
12. Action : forward

Splunk's automatic field extraction does not handle these logs. Can you create a regular expression to extract the fields above from these logs?
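A sketch of a rex-based extraction that matches the four sample lines above; it assumes the trailing columns (the counts and N/A placeholders between AttackStatus and Risk) always appear in this order, so it should be validated against a larger sample:

```
| rex field=_raw "DefensePro:\s+(?<Date>\d{2}-\d{2}-\d{4}\s\d{2}:\d{2}:\d{2})\s+(?<Severity>\S+)\s+\d+\s+(?<Category>\S+)\s+\"(?<AttackName>[^\"]+)\"\s+(?<Protocol>\S+)\s+(?<SrcIP>\S+)\s+(?<SrcPort>\d+)\s+(?<DstIP>\S+)\s+(?<DstPort>\d+)\s+\d+\s+\S+\s+\"(?<PolicyName>[^\"]+)\"\s+(?<AttackStatus>\S+)\s+\d+\s+\d+\s+\S+\s+\d+\s+\S+\s+(?<Risk>\S+)\s+(?<Action>\S+)"
| table Date Severity Category AttackName Protocol SrcIP SrcPort DstIP DstPort PolicyName AttackStatus Risk Action
```

Once the pattern checks out in the search bar it can be moved into props.conf/transforms.conf (EXTRACT- or REPORT-) for automatic extraction; SrcPort is captured as a bonus even though it is not in the list above.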

Why doesn't my base search work?

I am playing with my base search and wondering why it is not working for me. My Simple XML is a pretty simple one: the base search is just `index=xyz` over the last 60 minutes (`-60m` to `now`), the data has a field called `action`, and the panel's post-process search charts a count by that action (`stats count by action`). I want a timechart on that action, but the result just shows a timechart for action (NULL) and not all values. If I open the same search in another window, I get the proper result. Why such behavior?

PS: If I run `stats count` instead of the timechart, it shows "No results found", but the same query works well in Search.
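When a base search is non-transforming (it just returns raw events, like `index=xyz`), post-process searches only see a limited set of fields from it, which commonly produces exactly this NULL-only / no-results behavior. A sketch of the usual fix (a hedged example, not the asker's exact XML): have the base search explicitly pass the fields the panels need, then let each panel do the aggregation.

```
Base search (earliest=-60m, latest=now):
index=xyz | fields _time action

Panel post-process search:
timechart count by action
```

Making the base search itself transforming (for example `index=xyz | bin _time span=1m | stats count by _time action`) is the other documented pattern, with the panel then charting from those pre-aggregated rows.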

Please explain this search from the «Meta Woot! Search» dashboard

I don't understand the part with the evals (what we calculate, and why):

```
| inputlookup meta_woot where index=* sourcetype=* host=*
| where recentTime<(now()-3600)
| eval latency= round((recentTime-lastTime)/60,2)
| eval latency_type=if(latency<0,"Logging Ahead","Logging Behind")
| eval latency=abs(latency)
| eval latency_type=if(latency="0.00","No Latency",latency_type)
| where latency>=0
| convert ctime(recentTime) ctime(firstTime) ctime(lastTime) ctime(lastUpdated)
| rename latency AS "latency (mins)"
| table index, sourcetype, host, firstTime, lastTime, recentTime, "latency (mins)",latency_type, lastUpdated
| sort - "latency (mins)"
```

And what does this mean:

```
| rest splunk_server=* /services/server/info
```

From savedsearches.conf:

```
[Generate Meta Woot Server GUID Lookup]
disabled = 1
action.email.useNSSubject = 1
alert.digest_mode = True
alert.suppress = 0
alert.track = 0
auto_summarize.dispatch.earliest_time = -1d@h
cron_schedule = 0 0 * * *
enableSched = 1
search = | rest splunk_server=* /services/server/info | fields splunk_server, guid\
| outputlookup meta_woot_server_guid
```

Why do we need `fields splunk_server, guid`?

How to modify my query to get the stats count by a field in a lookup?

I have a lookup file "hosts.csv" like the one below, with multiple fields:

```
category  my_hostname
...       ...
A         abc.com
B         DEF.com
```

I have the following query, which gives the total count of hosts from the lookup table that never reported to Splunk:

```
| metadata type=hosts
| search [| inputlookup hosts.csv | eval host=lower(my_hostname) | fields host ]
| eval host=lower(host)
| append [| inputlookup hosts.csv | eval host=lower(my_hostname) | eval recentTime=0, lastTime=0, host=lower(host) | fields host recentTime lastTime ]
| dedup host
| where recentTime=0
| stats dc(host) AS total_hosts
```

Now I want to see those host counts broken down by the "category" field in the lookup file, like below (for example, assuming the total dc(host) is 50):

```
Category  count
A         30
B         20
```

Could anyone please suggest a modified query to get the above result?
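One sketch is to carry `category` along from the lookup and aggregate at the end; it assumes each my_hostname maps to a single category and that hostnames are unique after lowercasing:

```
| metadata type=hosts
| eval host=lower(host)
| append
    [| inputlookup hosts.csv
     | eval host=lower(my_hostname), recentTime=0
     | fields host recentTime category]
| stats max(recentTime) AS recentTime, values(category) AS category by host
| where recentTime=0
| stats count by category
```

Hosts that have reported keep a non-zero recentTime from the metadata rows, so only the never-reporting lookup hosts survive the `where`, and the final stats counts them per category.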

How do I get my table right?

Hello, I have a search I am having an issue with: I am trying to get the table to be correctly formatted, but I can't seem to get it right.

My search that works is:

```
index=json_data
| spath output=WF_Label path=wf.steps{}.label
| spath output=WF_Step_Status_Date path=wf.steps{}.status{}.dates{}.ts.$date
| spath output=WF_Step_Days_Allowed path=wf.steps{}.status{}.daysAllowed
| spath output=WF_Step_Status path=wf.steps{}.status{}.dates{}.type
| spath output=WF_Name path=wf.label
| spath output=AssessmentName path=info.name
| table AssessmentName WF_Label WF_Name WF_Step_Status_Date WF_Step_Days_Allowed WF_Step_Status
```

What I am trying to do is eval the fields, mvzip the data, mvexpand that, and then table it. I tried:

```
index=json_data
| spath output=WF_Label path=wf.steps{}.label
| spath output=WF_Step_Status_Date path=wf.steps{}.status{}.dates{}.ts.$date
| spath output=WF_Step_Days_Allowed path=wf.steps{}.status{}.daysAllowed
| spath output=WF_Step_Status path=wf.steps{}.status{}.dates{}.type
| spath output=WF_Name path=wf.label
| spath output=AssessmentName path=info.name
| eval wf_process=mvzip(WF_Step_Status_Date,WF_Step_Status)
| eval wf_process2=mvzip(wf_process,WF_Step_Days_Allowed)
| eval wf_process3=mvzip(wf_process2,AssessmentName)
| eval wf_process4=mvzip(wf_process3,WF_Name)
| eval wf_process5=mvzip(wf_process4,WF_Label)
| table AssessmentName WF_Name WF_Label WF_Step_Days_Allowed WF_Step_Status_Date WF_Step_Status
```

but I just get a table of all the fields. What I need is rows of AssessmentName and WF_Name, with the columns WF_Label, WF_Step_Days_Allowed, WF_Step_Status_Date, and WF_Step_Status.

I attempted this below, but it's not quite right since I'm not actually counting anything:

```
index=json_data
| spath output=WF_Name path=wf.label
| spath output=AssessmentName path=info.name
| table AssessmentName WF_Name
| appendcols
    [search index=mongo_assessmentworkflows md.type=assessments
     | spath output=WF_Label path=wf.steps{}.label
     | spath output=WF_Step_Status_Date path=wf.steps{}.status{}.dates{}.ts.$date
     | spath output=WF_Step_Days_Allowed path=wf.steps{}.status{}.daysAllowed
     | spath output=WF_Step_Status path=wf.steps{}.status{}.dates{}.type
     | eval wf_process=mvzip(WF_Step_Status_Date,WF_Step_Status)
     | eval wf_process2=mvzip(wf_process,WF_Step_Days_Allowed)
     | eval wf_process3=mvzip(wf_process2,AssessmentName)
     | table WF_Label WF_Step_Days_Allowed WF_Step_Status_Date WF_Step_Status]
```

Any ideas? Thanks a bunch!
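A sketch of the mvzip/mvexpand pattern that keeps the per-event fields (AssessmentName, WF_Name) and expands only the step-level arrays into rows; it assumes the four step-level fields line up one-to-one within each event and that their values never contain the "|" delimiter:

```
index=json_data
| spath output=AssessmentName path=info.name
| spath output=WF_Name path=wf.label
| spath output=WF_Label path=wf.steps{}.label
| spath output=WF_Step_Days_Allowed path=wf.steps{}.status{}.daysAllowed
| spath output=WF_Step_Status_Date path=wf.steps{}.status{}.dates{}.ts.$date
| spath output=WF_Step_Status path=wf.steps{}.status{}.dates{}.type
| eval step=mvzip(mvzip(mvzip(WF_Label, WF_Step_Days_Allowed, "|"), WF_Step_Status_Date, "|"), WF_Step_Status, "|")
| mvexpand step
| eval parts=split(step, "|"),
       WF_Label=mvindex(parts, 0),
       WF_Step_Days_Allowed=mvindex(parts, 1),
       WF_Step_Status_Date=mvindex(parts, 2),
       WF_Step_Status=mvindex(parts, 3)
| table AssessmentName WF_Name WF_Label WF_Step_Days_Allowed WF_Step_Status_Date WF_Step_Status
```

Only the zipped step fields are combined before mvexpand; AssessmentName and WF_Name are single-valued per event, so they repeat naturally on every expanded row.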

