
Function startSearch() is not a function.

I am having trouble running a search using SearchManager. The error says that startSearch() is not a function. I am seeing this in Splunk version 6.6.2. Does anyone know why this function is no longer available in version 6.6.2?

Why does specifying indexed fields with "field"::"value" result in faster and more efficient searches?

The "Write better searches" Splunk manual contains the following recommendation: specify indexed fields with "field"::"value". "You can also run efficient searches for fields that have been indexed from structured data such as CSV files and JSON data sources. When you do this, replace the equal sign with double colons, like this: "field"::"value"." This is the link to the manual: http://docs.splunk.com/Documentation/SplunkCloud/6.6.3/Search/Writebettersearches I have tried this recommendation myself, and the searches indeed execute much faster. My question is: why does specifying indexed fields with "field"::"value" instead of "field"="value" result in faster searches? What exactly happens when the search is executed?
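
A plausible explanation, based on how indexed fields work (hedged, since the manual page doesn't spell out the mechanism): with "field"="value", the field often exists only at search time, so Splunk has to fetch candidate events off disk and run search-time field extraction before it can test the value. With "field"::"value", the field/value pair was written into the index-time lexicon (the .tsidx files), so Splunk resolves the filter directly against the index and never reads the non-matching raw events. A minimal sketch, assuming a status field indexed from structured data:

    (slower: may require search-time field extraction on every candidate event)
    index=web status=404

    (faster: matches the indexed token directly in the tsidx lexicon)
    index=web status::404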

searching for matches during specific times

My search is something like:

    index=foo "get /foo/bar" | eval a=_time+1 | eval b=_time+600 | table a, b, ip, field1, field2

(eval time arithmetic is in seconds, so +1 second and +10 minutes become +1 and +600). How would I search these results for events between times a and b, and where field1 and field2 match?
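
One approach, sketched under the assumption that each "get /foo/bar" event should spawn a follow-up search over its own [a, b] window for matching field1/field2 values: map runs a secondary search per input row, substituting that row's field values for the $...$ tokens. map launches one search per row, so cap it with maxsearches:

    index=foo "get /foo/bar"
    | eval a=_time+1, b=_time+600
    | map maxsearches=100 search="search index=foo earliest=$a$ latest=$b$ field1=$field1$ field2=$field2$ | table _time ip field1 field2"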

How to find the min and max per hour during the day, by host?

If I use SPL such as:

    index=_internal | timechart span=1h count by host | stats max(*) AS *."max", min(*) as *."min" | transpose

this produces min and max mixed in one column, but I would like separate max and min columns.
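
One way to get genuinely separate columns, assuming the goal is one row per host: unwind the timechart output with untable, then aggregate per host:

    index=_internal
    | timechart span=1h count by host
    | untable _time host count
    | stats min(count) AS min max(count) AS max by host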

add sum of events in a separate column

This is my search:

    host="splunk.local" | bucket _time span=1mon | stats count by event

![alt text][1]

My question is: how do I add the total number of events per month as a separate field? When I use this query:

    host="splunk.local" | bucket _time span=1mon | stats count by event | stats sum(count) as total

![alt text][2]

the event field disappears; I want to have the event, count, and total fields in my results. I also tried this:

    host="splunk.local" | bucket _time span=1mon | stats count by event | eventstats sum(count) as total | table event total

but it shows the total in every row of the column, not just one row. How can I solve my problem? Thanks

[1]: /storage/temp/216811-2017-10-15-12-19-31.png
[2]: /storage/temp/216812-2017-10-15-12-56-36.png
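
A sketch of one way to keep event and count while printing the grand total only on the first row: compute the total with eventstats, then blank it out everywhere except row 1 with streamstats:

    host="splunk.local"
    | bucket _time span=1mon
    | stats count by event
    | eventstats sum(count) AS total
    | streamstats count AS row
    | eval total=if(row==1, total, null())
    | fields - row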

"--splunk-cooked-mode-v3-- " in the indexer

Splunkers, I am facing this cooked-data issue. I know there are many answers about it and it has been a real pain for many; I have gone through them and none of them are working. Below are my configurations; perhaps one of you can point out where the error is.

Forwarder - outputs.conf:

    [tcpout]
    defaultGroup = dmc
    indexAndForward = false
    disabled = false
    # sendCookedData = false
    # (when I uncomment the line above I don't get any data at all, not even the cooked data)
    forwardedindex.2.whitelist = test_index

    [tcpout:dmc]
    server = xx.xx.xx.xx:9997
    autoLB = true

Indexer - inputs.conf:

    [splunktcp://9996]
    connection_host = ip

    [splunktcp://9997]
    disabled = 0

    [tcp://8097]
    connection_host = dns
    index = test_index
    sourcetype = generic_single_line

On the indexer I am receiving "--splunk-cooked-mode-v3--" junk data. Also, if anyone can, please explain a bit about cooked mode.
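
For context (hedged, but this is the usual cause of this symptom): "cooked" data is Splunk's own splunk-to-splunk (S2S) wire format, and "--splunk-cooked-mode-v3--" is its protocol header. Seeing that header inside indexed events normally means a Splunk forwarder is sending S2S traffic to a port configured as a raw [tcp://] input, which stores the protocol bytes as if they were log data. Cooked traffic must land on a [splunktcp://] input; raw [tcp://] ports are only for plain TCP senders such as syslog. A sketch of the split, using the ports from the question:

    # inputs.conf on the indexer
    # Splunk forwarders point here (S2S / cooked data)
    [splunktcp://9997]
    disabled = 0

    # only non-Splunk, plain-TCP senders point here
    [tcp://8097]
    connection_host = dns
    index = test_index
    sourcetype = generic_single_line

If any forwarder's outputs.conf points at port 8097 instead of 9997, that would produce exactly this junk in test_index.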

How to install SA_plaso-app-for-splunk and TA_plaso-add-on-for-splunk in Windows

I am fairly new to Splunk and am attempting to pull timelines into Splunk that were created by log2timeline.py and that I converted to a .csv file using psort with l2tcsv. I am able to do this, however it seems to be pretty messy. I have looked at the apps SA_plaso-app-for-splunk and TA_plaso-add-on-for-splunk, which are supposed to help clean up the data, but I am not sure how they are installed on a Windows machine. Any help would be greatly appreciated.
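
A hedged note on the usual manual-install procedure, which is the same on Windows as on other platforms (assuming a default install path): download each app package, extract it into %SPLUNK_HOME%\etc\apps (for example C:\Program Files\Splunk\etc\apps\SA_plaso-app-for-splunk), and restart Splunk. Alternatively, in Splunk Web use Apps > Manage Apps > Install app from file to upload the .tgz/.spl package without extracting it by hand.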

Fill a multiselect input by clicking a table element (drilldown)

Hi folks, I have tried to create a table drilldown that inserts elements into a multiselect input as already-selected values. The workflow is: the user searches something using a keyword, then selects tokens, which are added to the multiselect input. The multiselect input in turn is used as a filter for a new search. Is there a simple way to add selected elements to a multiselect input via a drilldown, or do I have to use JavaScript instead? I've already created a dashboard which has a drilldown on a multivalue field. The panel's search, time range, and drilldown token are:
    $Token$

    | makeresults
    | eval _raw="2017-10-15 | INFO | NOTES=\"My app has ID APP-01234 and yours APP-56789\". The dedicated host is Test0815.de."
    | rex field=_raw max_match=5 "(?P<appid>APP-\d{5})"
    | rex field=_raw max_match=5 "(?P<host>Test0815.de)"
    | eval Tokens=mvappend(appid, host)
    | table _time, _raw, Tokens

    (time range: earliest=-24h@h, latest=now; drilldown uses $click.value2$)
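
A sketch of a pure Simple XML approach, with the multiselect token name ms_tok assumed and the exact quoting around $click.value2$ likely to need adjustment for your data: a drilldown <eval> can append the clicked value to the form token that backs the multiselect, so no JavaScript is required:

    <drilldown>
      <eval token="form.ms_tok">if(isnull($form.ms_tok$), "$click.value2$", mvappend($form.ms_tok$, "$click.value2$"))</eval>
    </drilldown>

Each click then adds the clicked cell value to the multiselect's selection; clearing the input resets the token. If the token substitution proves too fragile, the fallback is the JavaScript route you mention.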

return a custom table when the base search has no results

Hi, I have the following search, and sometimes it doesn't get any results. When there are no values to return, I want to return a table with the fields _time | sloc_type | upload_id, to show the user that there are no results. My search:

    index=testeda_p groupID=sloc_data | search project=Periph core=pcie core_ver=1.4 sloc_type="verif" | dedup _time | sort -_time | head 1 | table _time sloc_type upload_id

Thanks
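
A common idiom for this, sketched with hypothetical placeholder values for the empty-result row: appendpipe runs its subpipeline over the current result set and appends the output, and since stats count over zero rows yields a single row with count=0, the placeholder survives only when the main search returned nothing:

    index=testeda_p groupID=sloc_data project=Periph core=pcie core_ver=1.4 sloc_type="verif"
    | dedup _time | sort -_time | head 1
    | table _time sloc_type upload_id
    | appendpipe
        [ stats count
        | where count==0
        | eval _time=now(), sloc_type="verif", upload_id="no results"
        | fields - count ]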

How to compare the number of events in an hour of the current day with the average number of events in the same hour on the same day of the week over the past 6 weeks?

I want to show the count of events for each hour of the current day in one column, plus the min, max, and avg count of events in the same hour on the same weekday over the past 4 weeks. How can I do this?
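
A sketch of one approach, with index=foo standing in for your base search: count events per hour per day over the window, keep only days matching today's weekday, then split today from the history and aggregate the history per hour:

    index=foo earliest=-4w@d latest=now
    | bin _time span=1h
    | eval wday=strftime(_time, "%a"), hour=strftime(_time, "%H"), day=strftime(_time, "%Y-%m-%d")
    | where wday==strftime(now(), "%a")
    | stats count by day hour
    | eval slice=if(day==strftime(now(), "%Y-%m-%d"), "today", "history")
    | chart max(eval(if(slice=="today", count, null()))) AS today
        min(eval(if(slice=="history", count, null()))) AS hist_min
        max(eval(if(slice=="history", count, null()))) AS hist_max
        avg(eval(if(slice=="history", count, null()))) AS hist_avg
        over hour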

How do I optimize filtering of an accelerated report?

I am trying to track user/machine logons. To help with this, I created the following query as an accelerated report:

    (index=windows) EventCode IN (4624,4625,4648) TargetAccountName!="-" ComputerName=*.mydomain
    | eval acctN=mvindex(Account_Name,1)
    | search acctN=*
    | bin _time span=1d as date
    | eval ComputerName=replace(ComputerName,".mydomain","")
    | eval user=upper(acctN)
    | eval domain=upper(TargetAccountDomain)
    | stats values(EventCode) as EventCodes values(date) as DaysSeen earliest(_time) as earliest latest(_time) as latest by ComputerName user Logon_Type
    | sort 0 user ComputerName

As an accelerated report this runs quite quickly for most time ranges: a month gives me 23K stats rows in 12 seconds, and 90 days gives me 55K rows in 50 seconds. A YTD run, however, is brutal.

I figured I could use this report to do quick research on users/logons that I might see in a new computer/logon alert (to be created). So I built a dashboard with inputs for time, user, and ComputerName, and tried this:

    (index=windows) EventCode IN (4624,4625,4648) TargetAccountName!="-" ComputerName=*.mydomain TargetAccountName=$user$ ComputerName=$computer$
    | eval acctN=mvindex(Account_Name,1)
    | bin _time span=1d as date
    | eval ComputerName=replace(ComputerName,".mydomain","")
    | eval user=upper(acctN)
    | eval domain=upper(TargetAccountDomain)
    | stats values(EventCode) as EventCodes values(date) as DaysSeen earliest(_time) as earliest latest(_time) as latest by ComputerName user Logon_Type
    | sort 0 user ComputerName

But that runs slower: the one-month query goes to 45 seconds. So it looks like the acceleration statistics sit at a higher level than the windows index. I then tried moving my filter terms to the end:

    (index=windows) EventCode IN (4624,4625,4648) TargetAccountName!="-" ComputerName=*.mydomain
    | eval acctN=mvindex(Account_Name,1)
    | bin _time span=1d as date
    | eval ComputerName=replace(ComputerName,".mydomain","")
    | eval user=upper(acctN)
    | eval domain=upper(TargetAccountDomain)
    | stats values(EventCode) as EventCodes values(date) as DaysSeen earliest(_time) as earliest latest(_time) as latest by ComputerName user Logon_Type
    | sort 0 user ComputerName
    | search user=$user$ ComputerName=$computer$

This runs much better: one month in 5 seconds, which is faster than reporting on a month of everything. But that's even more confusing, since according to the query I had to summarize a month of everything before I could filter for user and ComputerName. So how are accelerated reports indexing their summarized data? And what is the best way to filter that data, and why? (Also, would this have been a better case for a summary index?)
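
A hedged note on what is probably happening here: report acceleration summaries are tied to the exact search string of the saved report. The dashboard variant that adds TargetAccountName=$user$ to the base search is a different search, so it cannot use the summaries and falls back to scanning raw events, which is why it got slower. The variant that filters after the stats leaves the accelerated portion of the pipeline unchanged, so Splunk can serve the stats rows from the prebuilt summaries and then apply the cheap | search filter on top, which is why one month drops to 5 seconds. A summary index would behave similarly in speed but gives explicit control over what is stored and can be searched and filtered directly.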

How to match values within a multi-value column

I'm putting together a search that lists all of the IP addresses scanning my firewall. Because hundreds of IP addresses scan my firewall every day, I'd like to be able to focus on the ones that found my remote-access port. I have a search that correctly lists all scanner IP addresses, but I'm not sure how to then filter on the distinct values returned within a multivalue column. Can you let me know what to add to this search to keep only source_ips that hit a destination_port equal to some arbitrary number? `index=physical_defenses sourcetype=pfsense | stats dc(destination_port) AS distinct_destination_port_count values(destination_port) AS destination_ports by source_ip destination_ip | where distinct_destination_port_count>2 | table source_ip destination_ports distinct_destination_port_count` Thanks
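
One way to keep only rows whose multivalue destination_ports contains a specific port, sketched with a hypothetical remote-access port of 3389: mvfind returns the index of the first value matching a regex, or null when nothing matches, so isnotnull() turns it into a membership test:

    index=physical_defenses sourcetype=pfsense
    | stats dc(destination_port) AS distinct_destination_port_count values(destination_port) AS destination_ports by source_ip destination_ip
    | where distinct_destination_port_count>2 AND isnotnull(mvfind(destination_ports, "^3389$"))
    | table source_ip destination_ports distinct_destination_port_count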

Field extraction from a log file where each line has a different format: how can I cover all formats in one regex?

I am doing field extraction for a log file with the format below:

    line 1: field1, field2, field3, field4
    line 2: field1, field2, field3, field5, field4
    line 3: field1, field2, field3, field4

I can write a separate regex1 for the line 1 format and a regex2 for the line 2 format, but when I do the field extraction I can only use one regex. How can I combine both regexes to cover all log formats? Any suggestions? Cheers, Sam
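
One sketch, assuming the fields are comma-separated and field5 only ever appears as an extra column before the last field: make that capture group optional, so a single regex matches both layouts:

    | rex field=_raw "^(?<field1>[^,]+),\s*(?<field2>[^,]+),\s*(?<field3>[^,]+),\s*(?:(?<field5>[^,]+),\s*)?(?<field4>[^,]+)$"

The (?: ... )? group consumes the extra column when it is present and is skipped otherwise, so field5 is simply empty on four-column lines.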

How can I extract this pattern from my raw data using the rex command? \"ProductAccountNumber\":\"5342534253425342\"

Hi, can someone help me please? I'm very new to Splunk, and most certainly to the rex command and regular expressions, so please bear with me. I'm trying to extract a ProductAccountNumber field from my raw data, which is in the following format:

    { \"ProductAccountNumber\":\"5342534253425342\" }

Could someone tell me how I can strip the actual ProductAccountNumber out of this line? Many thanks and kind regards, Tanvi
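
A sketch that sidesteps the awkward escaping of the literal \" sequences: \W+ matches the run of backslashes, quotes, and the colon between the field name and the digits, so none of them have to be escaped individually:

    ... | rex field=_raw "ProductAccountNumber\W+(?<ProductAccountNumber>\d+)"

This assumes the account number is always purely numeric; if it can contain other characters, \d+ needs a more permissive pattern.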

Find users that have the latest login

I have a lookup table, xxx. When I run the search below, it shows no results:

    inputlookup xxxx | fields USERNAME | search index=main sourcetype=oracle_aud user=*CONN* | stats count(user) by source, user

What could the issue be? I have checked that there is no issue with the lookup table xxx itself; | inputlookup xxxx returns rows. Can anyone provide some good advice?
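
A hedged sketch of the likely fix: piping inputlookup into | search only filters the lookup rows; it does not start a new event search, which is why nothing comes back. Run the event search first and use the lookup as a subsearch filter instead (assuming the lookup's USERNAME column corresponds to the event field user):

    index=main sourcetype=oracle_aud user=*CONN*
        [| inputlookup xxxx | fields USERNAME | rename USERNAME AS user ]
    | stats count by source, user

For each user's latest login specifically, | stats latest(_time) AS last_login by user (followed by | convert ctime(last_login)) may be closer to the title's intent.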

Add data to the Cisco Networks app from local directories

Can I add data to the Cisco Networks app from local directories? For example, /var/log/alert.log?
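
A sketch of the usual approach, with the sourcetype left as an assumption (cisco:ios below is a guess; use whatever sourcetype the Cisco Networks app expects for your device logs): define a file monitor input so the app's extractions and eventtypes apply to the local file:

    # inputs.conf on the forwarder or indexer
    [monitor:///var/log/alert.log]
    sourcetype = cisco:ios
    index = main
    disabled = 0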

About log rotation.

I previously asked the following question, and I vaguely understood that the delaycompress option is recommended: https://answers.splunk.com/answers/577144/about-log-rotation-best-practices.html However, I want to understand it in a bit more detail. I think delaycompress is recommended for the following reason; did I get that right? With compress alone, the inode changes when the file is compressed, so if the file is compressed before Splunk finishes reading it, the remaining log data is lost from Splunk's point of view. With delaycompress, the first-generation file is only renamed, so the inode does not change; even if Splunk has not finished reading the file, it can keep reading the renamed file, and therefore no log loss occurs.
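
For reference, a sketch of a logrotate stanza using this pattern (the path and retention counts are placeholders): compress plus delaycompress means the newest rotated file (app.log.1) stays uncompressed for one rotation cycle, which is exactly what keeps its inode stable for readers like Splunk:

    /var/log/myapp/app.log {
        daily
        rotate 7
        compress
        delaycompress
        missingok
        notifempty
    }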

Are time range pickers valid in reports? Also, when running saved reports, do they run on new, updated data or return the same results as when saved?

For clarification on the second half of my question: I've had problems running saved reports and having to adjust settings. Does this mean a saved report does not run a fresh search?

changing bar colors by the string value of a field

Hi, I have a simple bar chart that sums a number ("SLOC") by another field ("file"). Each file has another field that describes it, "sloc_type", and I want to change the color of each file's bar according to the "sloc_type" field. Here is an example of the chart now: ![alt text][1] The "sloc_type" field has only 2 options: rtl and verif. I need the file bars to be in a specific color, in order to separate them by their "sloc_type". Thanks [1]: /storage/temp/217852-capture.png
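
One sketch, with the base search left as an assumption: split the series by sloc_type so each type becomes its own series and automatically gets its own color (each file then carries its value in either the rtl or the verif series):

    ... | chart sum(SLOC) over file by sloc_type

With only two series, the colors can then be pinned per type via the chart's charting.fieldColors option, e.g. {"rtl": 0x1F77B4, "verif": 0xFF7F0E}.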

Splunk_TA_nessus stalls on collection.

Running the Splunk_TA_nessus (5.1.1) against SecurityCenter works fine and collects event data correctly; however, it frequently (approximately weekly) stalls and requires that either the input be disabled/enabled or the heavy forwarder restarted. It appears the Python process is still running, but it just stops trying to connect to SecurityCenter. It feels like the script is getting stuck somewhere. Has anyone else experienced the same?