Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Help understanding eventtype search

I have the Splunk Windows Infrastructure app installed, and when I run the search below:

```
eventtype=msad-failed-user-logons host="*"
```

I get the event below, but I don't understand how the search result is associated with `eventtype=msad-failed-user-logons`. The event shows `EventType=0`. What does `msad-failed-user-logons` mean, and why doesn't it show up in the search result?

```
09/19/2017 03:42:13 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4776
EventType=0
Type=Information
ComputerName=xxxxx.domain.local
TaskCategory=Credential Validation
OpCode=Info
RecordNumber=9555000
Keywords=Audit Failure
Message=The computer attempted to validate the credentials for an account.
Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon Account: someuser1
Source Workstation: WORKSTATION
Error Code: 0xC0000071

host=somehost   source=WinEventLog:Security   sourcetype=WinEventLog:Security
```
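For context: an eventtype is a saved search expression defined in `eventtypes.conf`. The eventtype name never appears in the raw event text (the `EventType=0` in the event is an unrelated Windows field); it only populates the search-time `eventtype` field. A sketch of the kind of stanza the app might ship — the search string here is an assumption for illustration, not the app's actual definition:

```
# eventtypes.conf -- illustrative only; see the app's own eventtypes.conf
# for the real definition of msad-failed-user-logons
[msad-failed-user-logons]
search = sourcetype=WinEventLog:Security "Audit Failure" (EventCode=4625 OR EventCode=4771 OR EventCode=4776)
```

Running `eventtype=msad-failed-user-logons | table eventtype, _raw` shows the `eventtype` field alongside each matching event.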

Json file getting truncated

Below is my input file:

```
{ "Count": 2, "Items": [
    { "total_time": { "S": "0.000s" }, "start_date_time": { "S": "2017-09-19 05:00:43" },
      "bad_records": { "N": "0" }, "successful_records": { "N": "0" },
      "source": { "S": "mps_dnc" }, "end_date_time": { "S": "2017-09-19 05:00:43" },
      "file_name": { "S": "No File" }, "total_records": { "N": "0" },
      "job_name": { "S": "mps_dnc_out" } },
    { "total_time": { "S": "12.783s" }, "start_date_time": { "S": "2017-09-19 11:42:21" },
      "bad_records": { "N": "0" }, "successful_records": { "N": "12094" },
      "source": { "S": "mps_dnc" }, "end_date_time": { "S": "2017-09-19 11:42:34" },
      "file_name": { "S": "do_not_contact_list_2017-09-19T11_42_20.581Z.txt" },
      "total_records": { "N": "12094" }, "job_name": { "S": "mps_dnc_out" } }
  ],
  "ScannedCount": 2, "ConsumedCapacity": null }
```

Below are my limits.conf and props.conf:

```
# limits.conf
[spath]
# number of characters to read from an XML or JSON event when auto extracting
extraction_cutoff = 10000

# props.conf
[dynamoout]
TRUNCATE = 0
KV_MODE = json
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]*)
DATETIME_CONFIG = CURRENT

[source::/script_logs_mps/*.*]
CHECK_METHOD = entire_md5
```

Still, in Splunk I can only see 8 lines.
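One thing worth checking, offered as an assumption about the cause: with `SHOULD_LINEMERGE = false`, `LINE_BREAKER = ([\r\n]*)` breaks at every newline, so a pretty-printed JSON file gets split into many small events rather than ingested as one. An illustrative alternative that keeps each top-level object as a single event (the breaker regex assumes every record starts with `{"Count"`):

```
# props.conf -- illustrative sketch
[dynamoout]
KV_MODE = json
TRUNCATE = 0
SHOULD_LINEMERGE = false
# break only before a new top-level JSON object, not at every newline
LINE_BREAKER = ([\r\n]+)(?=\{\s*"Count")
DATETIME_CONFIG = CURRENT
```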

Splunk counting duplicate events for failed logon

When the search below is run, it counts duplicate failed logons for all users. How do I exclude duplicates from the count?

```
eventtype=msad-failed-user-logons host="*"
| fields _time, signature, src_ip, src_host, src_nt_host, src_nt_domain, user, Logon_Type
| stats count by user, src_nt_domain, src_ip
| sort -count
| rename user as "Username", src_nt_domain as "Domain", src_ip as "IP Address"
```
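One sketch that drops duplicates before counting, assuming a "duplicate" means the same user and source IP at the same timestamp (adjust the `dedup` field list to whatever defines a duplicate in your data):

```
eventtype=msad-failed-user-logons host="*"
| dedup _time, user, src_ip
| stats count by user, src_nt_domain, src_ip
| sort -count
| rename user as "Username", src_nt_domain as "Domain", src_ip as "IP Address"
```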

Get single value panel to display a "date"

Hi, I have a search for a dashboard which produces a graph and does predictions. I want to display the date when we expect a certain threshold to be crossed. I have added some smarts to the search so it now ends with:

```
| eval DATE=if('high(future)' > "3.9", _time, null())
| search DATE!=null()
| head 1
| fields _time
```

This gives me the date I require, but in a stats table. How can I display it bigger, such as in a single value panel? Thanks, Mark
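A sketch of one approach: format the epoch time as a readable date and emit a single field, then set the panel's visualization to Single Value (the field name `crossing_date` is illustrative):

```
| eval DATE=if('high(future)' > "3.9", _time, null())
| search DATE!=null()
| head 1
| eval crossing_date=strftime(_time, "%Y-%m-%d")
| fields crossing_date
```

A single value panel displays the first value of the first field returned.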

rangeColors showing wrong color for rangeValues

I am working on a single value dashboard panel where I show results as a percentage, and I want to show different ranges in different colors. So I have defined the ranges below:

- min to 30.99 -> green
- 30.99 to 60.99 -> yellow
- 60.99 to 89.99 -> amber
- 89.99 to max -> red

In the XML there are 4 values inside rangeColors (green, yellow, amber, red) and 3 values (30.99, 60.99, 89.99) inside rangeValues. With this definition, 0.00 is shown in red (!), when it should be shown in green. Could you advise me on this, please?
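For reference, a Simple XML sketch of how those options are typically declared (the hex colors are illustrative). One possible cause, offered as an assumption: if the search hands the panel the percentage as a string (for example with a `%` sign appended), the numeric range comparison can fail and fall through to the last color, so the field driving the panel should be a plain number:

```
<option name="rangeValues">[30.99,60.99,89.99]</option>
<option name="rangeColors">["0x65a637","0xf7bc38","0xf58f39","0xd93f3c"]</option>
<option name="useColors">1</option>
```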

numberPrecision for single value dashboard

I am working on a single value dashboard panel where I show output as a percentage with precision up to 2 decimal places (e.g. 60.25%). However, I want 0 and 100 shown as whole numbers (no decimal places for those two values). Please advise whether this is possible.
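One sketch, assuming the value is computed in the search as a numeric field (here hypothetically named `pct`): format it conditionally with `eval` rather than relying on the panel's numberPrecision option, which applies uniformly to every value:

```
| eval pct_display=if(pct==0 OR pct==100,
                      tostring(round(pct, 0)) . "%",
                      tostring(round(pct, 2)) . "%")
| fields pct_display
```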

Match Lookup Table to Summary Index

Hi, I wonder whether someone could help me, please. I'm using the following query to interrogate a summary index, matching it to a lookup table:

```
index=summary_dg_nmo report=ddcops3148V5
| lookup ddcops3148.csv telno OUTPUT telno as "Matched"
| eval Matched=if(isnotnull(Matched), "Y", "N")
| dedup telno
| table telno Matched Registered
```

The lookup table has 10 records and the summary index has 100 records, and as you can see I extract the fields "telno", "Matched" and "Registered". The problem is that in its current form I'm extracting all 100 records, but I would only like to extract the 10 records from the lookup table, then the "Registered" field from the summary index, and then the "Matched" field. I know that the lookup table can't filter, so it has to be at the beginning of the search, but I'm struggling to get this to work. I just wondered whether someone could look at this and offer some guidance on how I can make the changes. Many thanks and kind regards, Chris
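One sketch: use the lookup as a subsearch filter, so only summary-index events whose `telno` appears in the CSV survive; every surviving row is then by definition matched:

```
index=summary_dg_nmo report=ddcops3148V5
    [| inputlookup ddcops3148.csv | fields telno ]
| dedup telno
| eval Matched="Y"
| table telno Matched Registered
```

The subsearch expands into an `(telno=... OR telno=...)` filter, which restricts the result set to the 10 lookup records before the table is built.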

Has anyone successfully configured Bro logs from Security Onion to be searchable in Splunk?

I have managed to get Bro logs into Splunk, but even with the App/TA the data is still clumped together and not very searchable. I've seen a few props.conf files here and there, but has anyone had success with any?

Does the auto_high_volume setting recommendation apply to a single indexer?

Hi, we typically say that if we index more than 10GB per day per index, we should set **maxDataSize = auto_high_volume**. But does that apply to one indexer or the whole cluster? In other words, if I receive 15GB per day for index "main" but have 4 clustered indexers (**3.75GB per indexer**), should I still set maxDataSize = auto_high_volume? Thanks!

JIRA index

To access JIRA data from Splunk, is indexing necessary? Or can we fetch the necessary details from JIRA using only JQL? If indexing is necessary, could I know the procedure for configuring the index, please?
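If indexing turns out to be the route, creating an index is a standard step. A minimal sketch (the index name `jira` is illustrative; a JIRA add-on's input would then be pointed at it):

```
# indexes.conf on the indexer -- minimal illustrative index definition
[jira]
homePath   = $SPLUNK_DB/jira/db
coldPath   = $SPLUNK_DB/jira/colddb
thawedPath = $SPLUNK_DB/jira/thaweddb
```

The same index can also be created in Splunk Web under Settings -> Indexes -> New Index.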

Splunk search for keyword match

Hi, fellow Splunkers, noob question. I would like to ask for help with my search. This is the case: the client gave a CSV of keywords, and the search should be filtered based on the keywords matched. For example, if the keywords are "Apple, Banana, Car", the output should contain only events matching 2 or more of the keywords. What would my search be? Is there an `if match.count > 1` style condition in Splunk? Thanks,
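There is no built-in match counter, but one can be built with `eval`. A sketch with the three example keywords hardcoded (a production version would generate the expression from the CSV, e.g. via a macro):

```
index=main
| eval matches = if(match(_raw, "Apple"), 1, 0)
                 + if(match(_raw, "Banana"), 1, 0)
                 + if(match(_raw, "Car"), 1, 0)
| where matches >= 2
```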

Regex Help

I am trying to do a field extraction but am running into problems. Here are example events. I am trying to build a regex to extract the signatures field (IP Fragmentation, DNS Amplification). The signature can be different for each event, so I need to extract everything between the parentheses after the word "signatures". Can someone help me with a regex? My attempts are only returning partial matches.

```
Sep 19 23:32:49 10.201.1.79 [pfsp] emerg: Host Detection alert #13630, start 2017-09-19 23:31:45 UTC, duration 64, direction incoming, host 1.2.3.4, signatures (IP Fragmentation, DNS Amplification), impact 1.10 Gbps/117.80 Kpps, importance 2, managed_objects ("ARIN-Allocated Prefixes"), (parent managed object "nil")
Sep 20 04:56:50 10.201.1.79 [pfsp] emerg: Host Detection alert #13631, start 2017-09-20 04:56:45 UTC, duration 5, direction incoming, host 1.2.3.4, signatures (IP Fragmentation), impact 133.45 Mbps/21.82 Kpps, importance 1, managed_objects ("ARIN-Allocated Prefixes"), (parent managed object "nil")
```
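A pattern along these lines captures everything between the parentheses that follow `signatures` (demonstrated in Python here so it can be tested standalone; the sample events are shortened to the relevant portion):

```python
import re

# Capture everything between the parentheses that follow "signatures":
# [^)]+ matches any run of characters up to the closing parenthesis.
pattern = re.compile(r"signatures\s+\(([^)]+)\)")

events = [
    "host 1.2.3.4, signatures (IP Fragmentation, DNS Amplification), impact 1.10 Gbps/117.80 Kpps",
    "host 1.2.3.4, signatures (IP Fragmentation), impact 133.45 Mbps/21.82 Kpps",
]

for e in events:
    m = pattern.search(e)
    print(m.group(1))
```

In SPL the equivalent extraction would be `| rex "signatures\s+\((?<signatures>[^\)]+)\)"`.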

Unable to send same source data to two different logical indexes and two different indexer groups.

Hi all, I'm facing a few challenges; mine is playing around with the same transforms. I'm trying to forward the same source data to two different logical indexes and two different indexer groups. Below is my scenario.

In props.conf I used:

```
[source::Dual_Data_Testing]
TRANSFORMS-source = Stan1, Stan2
```

In transforms.conf:

```
[Stan1]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = Index1
DEST_KEY = _TCP_ROUTING
FORMAT = IndexerGroup1

[Stan2]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = Index2
DEST_KEY = _TCP_ROUTING
FORMAT = IndexerGroup2
```

Currently the above conf is not working. Any suggestions for a workaround? Thanks, Arun Sunny
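One likely issue, offered as an assumption: a transforms.conf stanza honours a single DEST_KEY, so the second DEST_KEY/FORMAT pair in each stanza overrides the first. A sketch that gives every key its own transform (stanza names are illustrative); note also that a single event only carries one `_MetaData:Index` value, so truly duplicating the same events into two indexes generally requires cloning them first (e.g. with `CLONE_SOURCETYPE`):

```
# props.conf
[source::Dual_Data_Testing]
TRANSFORMS-source = Stan1_index, Stan1_route, Stan2_index, Stan2_route

# transforms.conf
[Stan1_index]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = Index1

[Stan1_route]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = IndexerGroup1

[Stan2_index]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = Index2

[Stan2_route]
SOURCE_KEY = MetaData:Source
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = IndexerGroup2
```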

null events while using spath

JSON:

```
"mainArray": [
  {"name":"MS","value":20},
  {"name":"MC","value":20},
  {"name":"CF","value":20},
  {"name":"ST"},
  {"name":"CMR","value":20}
]
```

I am currently using the search:

```
| spath output=code path=mainArray{}.name
| spath output=cnt path=mainArray{}.value
| table code, cnt
```

and the output I see is:

```
code  cnt
MS    20
MC    20
CF    20
ST    20
CMR
```

The expected output is:

```
code  cnt
MS    20
MC    20
CF    20
ST
CMR   20
```
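The misalignment happens because the two `spath` calls produce independent multivalue fields of different lengths, which the table then zips positionally. One sketch that keeps each name/value pair together by expanding the array elements first:

```
| spath output=items path=mainArray{}
| mvexpand items
| spath input=items output=code path=name
| spath input=items output=cnt path=value
| table code, cnt
```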

How to include additional field from inputlookup in results?

Hi, I have a lookup table errors.csv which contains Error and Source columns. I have a query that returns log entries containing Error column values:

```
[| inputlookup errors.csv | rename Error AS query | fields query ]
```

How do I add the Source column to the results? Thanks, Luc
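One sketch: keep the subsearch for filtering, then run a `lookup` against the same CSV to pull Source back in. This assumes the matched error string is available in an event field; `error_text` and `your_index` are hypothetical names to substitute:

```
index=your_index
    [| inputlookup errors.csv | rename Error AS query | fields query ]
| lookup errors.csv Error AS error_text OUTPUT Source
| table _time, error_text, Source
```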

Evicted transaction duration: strptime() unable to process token from form input

I'm building a form with a time picker, and the output should go to a timerange visualizer based on events grouped into transactions. I'm trying to include transactions on the chart which were evicted (`keepevicted=true`). I'm trying to achieve this by adding:

```
| eval durationms = (if(closed_txn == 0, strptime($timerange.latest$, "%s") - strptime(_time, "%s"), duration) * 1000)
```

However, `$timerange.latest$` may acquire really odd values, like `now`, which is not recognized by `strptime()`. So it basically works with fixed times only. Any suggestions?
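One sketch that sidesteps parsing the token at all: `addinfo` attaches the search's resolved time bounds as fields, and `info_max_time` (epoch seconds) already accounts for relative values like `now`. Note also that `_time` is already in epoch seconds, so no `strptime` is needed on it:

```
| addinfo
| eval durationms = if(closed_txn == 0,
                       (info_max_time - _time) * 1000,
                       duration * 1000)
```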

How to display time, host, and sourcetype in Splunk when the statement is as follows:

I have a stack trace for one particular error, like this:

```
[9/20/17 5:40:13:428 EDT] 000000e0 SystemOut O 20 Sep 2017 05:40:13:428 [INFO] [DMAXP01_MIF2] [] BMXAA6372I - Host name: 139.46.95.92. Server name: DMAXP01_MIF2. Cron task name: JMSQSEQCONSUMER.SEQQIN. Last run: 2017-09-20 05:40:00.0
host=cltismx1waslp07   sourcetype=WebSphere:SystemOutLog   source=/logs/websphere/DMAXP01_MIF2/SystemOut.log
```

I want to view the fields in tabular format. My search string is `Cron task name: JMSQSEQCONSUMER.SEQQIN9. Last run: | table host, sourcetype, source`. I want to display the time that comes after the keywords "Last run:" in the above statement.
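A regex anchored on "Last run:" can pull that trailing timestamp out (demonstrated in Python here so it can be tested standalone; the event string is shortened to the relevant portion):

```python
import re

# Capture the timestamp that follows "Last run:", e.g. 2017-09-20 05:40:00.0
pattern = re.compile(r"Last run:\s+(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)")

event = ("Cron task name: JMSQSEQCONSUMER.SEQQIN. "
         "Last run: 2017-09-20 05:40:00.0")

m = pattern.search(event)
print(m.group(1))  # 2017-09-20 05:40:00.0
```

In SPL the equivalent would be `| rex "Last run:\s+(?<last_run>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)" | table _time, host, sourcetype, source, last_run`.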

Token for a field containing spaces and special characters

How do I return the value of a field which contains spaces and special characters using a token? The field name is `License quota used (%)`. I tried the following combinations, but none appear to work:

```
$result."License quota used (%)"$
$"result.License quota used (%)"$
$result.License quota used (%)$
```

I tried single quotes as well, but no luck.
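One sketch that avoids the problem entirely: rename the field to a token-friendly name at the end of the search, then reference that instead (`license_quota_pct` is an illustrative name):

```
| rename "License quota used (%)" AS license_quota_pct
```

The token then becomes `$result.license_quota_pct$`.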

External search head performs searches on separate cluster?

Is it possible to have a cluster (1 master, 2 indexers, 1 search head, 1 deployer) and have an external search head connect to the indexer cluster and perform searches on it? And is it possible to limit that search head to searching only specific information, like a specific index?

I want to see who has disabled and enabled the default demo lookup files under Splunk ES -> Data Enrichment -> Identity Management. Is there any search query which can help me?

I want to see who has disabled and enabled the default demo lookup files under Splunk ES -> Data Enrichment -> Identity Management. Is there any search query which can help me?