Channel: Questions in topic: "splunk-enterprise"

Identifying Windows SSO Application logins

Hi, I am currently working on a search that is supposed to tell me whether users took the prescribed CyberArk route for system access or bypassed it. So in theory I should look at events 4624 and 4648 and check whether the connections come from CyberArk or not. But I found plenty of login events from the Citrix servers where our users do their work. Following up on this, it turns out that users on Citrix use a web browser to access an application on the target system that uses SSO for the user login. This also shows up as a 4624, which for my purpose would be a false positive. Looking closer at the generated 4624 events, the key difference is the LogonProcessName and AuthenticationPackageName in the event. If AuthenticationPackageName=NTLM or LogonProcessName=NtLmSsp, this seems to indicate an SSO login, and AuthenticationPackageName=Kerberos or LogonProcessName=Kerberos seems to indicate an RDP session (via CyberArk). Excluding the NTLM events seems to be the way to go, but as my Windows background is practically nil after years of AIX/Linux, I wonder whether someone could confirm my hypothesis. Unfortunately I do not have a lab for checking this with a control case. thx afx
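A minimal sketch of that exclusion, assuming the Windows Security events live in an index called `wineventlog` and use the field names from the Splunk Add-on for Microsoft Windows (adjust both to your environment):

```
index=wineventlog (EventCode=4624 OR EventCode=4648)
    NOT (AuthenticationPackageName=NTLM OR LogonProcessName=NtLmSsp)
| stats count by host, LogonProcessName, AuthenticationPackageName
```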

How to see only earliest and latest values in a field.

I'm having an issue because I need to show in a report only the first ticket received by an agent and the latest one; all the tickets in between have to be left out. Here is the evidence: ![tickets per agent][1] [1]: /storage/temp/285673-pic-sa.png Of all the tickets assigned to user1 or user2, how can I capture only the oldest and newest one? Thanks in advance
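One possible approach is `stats` with the `earliest`/`latest` functions, which return the field value from the chronologically first and last event in each group. A sketch, where the index and the `ticket_id`/`agent` field names are assumptions:

```
index=tickets
| stats earliest(ticket_id) as first_ticket latest(ticket_id) as last_ticket
        min(_time) as first_time max(_time) as last_time by agent
| fieldformat first_time=strftime(first_time, "%F %T")
| fieldformat last_time=strftime(last_time, "%F %T")
```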

help on stats by

hi, I need to understand why, when I execute the first search, I get a much higher "Number of CPU alerts" count than in the second search. As you can see, the first search aggregates the data by host and SITE, while the second aggregates only by host. What I don't understand is that every host has its own SITE, so since I am doing the same kind of count, shouldn't I get the same result? Thanks for your help

```
`CPU` | stats count(process_cpu_used_percent) as "Number of CPU alerts" by host SITE | search host=TUTU
```
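Two guesses worth testing: if SITE is multivalued on some events, `by host SITE` counts those events once per SITE value, which inflates the first search; and if SITE is null on some events, those events are silently dropped from it. A quick way to make the null case visible is to fill the field before aggregating:

```
`CPU`
| fillnull value="N/A" SITE
| stats count(process_cpu_used_percent) as "Number of CPU alerts" by host SITE
| search host=TUTU
```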

Average page views at a user level for a given page

I have a table: PageID, UserName, Date, count of hits to that page. I would like to find the average daily page hits per article at a UserID level (for the top 100 most frequently viewed pages). So, for example: person xyz, on average, views page x n times per day over the last week. This is the start of the query...

```
| bucket span=1d _time | stats count by PageID, UserName, _time | sort - count | head 100
```

Any help much appreciated.
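A sketch of one way to finish it: aggregate to daily counts first, then average those per page/user pair and keep the busiest pairs. The index name and the 7-day window are assumptions:

```
index=web earliest=-7d
| bucket span=1d _time
| stats count as daily_hits by PageID, UserName, _time
| stats avg(daily_hits) as avg_daily_hits sum(daily_hits) as total_hits by PageID, UserName
| sort - total_hits
| head 100
```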

help on a count which is different in a subsearch versus a search

hi, The search below returns 558 events:

```
`CPU`
| stats values(SITE) as SITE count(process_cpu_used_percent) as "Number of CPU alerts" by host
| rename host as Hostname, SITE as Site
| search Hostname=9831
```

I am doing the same stats in a subsearch, and in this case I get 4389 events!

```
`wire` earliest=-7d latest=now
| stats last(AP_NAME) as "Access point", last(Building) as "Geolocation building" by host
| join host type=outer
    [| `CPU` earliest=-7d latest=now
     | stats values(SITE) as Site, count(process_cpu_used_percent) as "Number of CPU alerts" by host ]
| rename host as Hostname
| search Hostname=9831
```

What explains such a difference even though I use the same stats count? What do I have to do in order to get the same number of events in the search and in the subsearch? Or is it simply not possible to get the same number in the subsearch? Thanks for your help
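Two usual suspects, offered as guesses: the standalone search inherits the time picker while the subsearch pins earliest=-7d, so the two may not cover the same window; and a join subsearch is silently truncated by limits.conf (by default roughly 50,000 results and a 60-second runtime). A first step is to pin the standalone search to the same window and compare again:

```
`CPU` earliest=-7d latest=now
| stats values(SITE) as Site count(process_cpu_used_percent) as "Number of CPU alerts" by host
| rename host as Hostname
| search Hostname=9831
```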

Is it possible to configure more than 1 cron for one alert?

Is it possible to configure more than one cron schedule for one alert? Something like `*/2 9-11,11-13 * * 1-4,5-1`. I think the answer is no, but I wanted to reconfirm. The reason I want to know is that the alert condition is the same but the triggering times differ based on day and hour.
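A saved search takes a single `cron_schedule`, so the usual workaround is to clone the alert, one clone per schedule. A sketch in savedsearches.conf, where the stanza names, the search, and the schedules are all illustrative:

```
[My alert - weekday window 1]
search = index=main error
cron_schedule = */2 9-11 * * 1-4
enableSched = 1

[My alert - weekday window 2]
search = index=main error
cron_schedule = */2 11-13 * * 5
enableSched = 1
```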

"Addon Metadata - Summarize AWS Inputs" is not enabled on Add-on instance?

Hello, I am running Splunk Add-on for AWS 4.6.1 and Splunk App for AWS 6.0.0. The majority of the app panels populate with data, but I also receive this error message on the dashboard: *Some panels may not be displayed correctly because the following inputs have not been configured: CloudTrail. Or, the saved search "Addon Metadata - Summarize AWS Inputs" is not enabled on Add-on instance.* I have tried to look for this saved search and enable it, but I could not find it. Has anyone had the same issue, and how did you resolve it? Thank you,

New field defined by time ranges

I'm trying to create the below search with the following dimensions. I'm struggling to create the 'timephase' column. The 'timephase' field would take the same logic as the date range pickers in the global search, but only summon the data applicable in that timephase (i.e. "1 day" would reflect the data of the subsequent columns for 1 day ago, etc.). I tried to approach it with an `eval case`, but ran into a mutual exclusion problem (the data captured in "1 day" would be excluded from "1 week", even though it should be counted). Does anyone have a recommendation for approaches to this? ![time categories][1] [1]: /storage/temp/285674-time-categories.png
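Since the windows overlap, one way around the mutual exclusion of `case` is to compute each window as its own aggregate with `count(eval(...))`, so a single event can land in several columns. A sketch with assumed index and field names:

```
index=myindex
| stats count(eval(_time >= relative_time(now(), "-1d")))  as "1 day"
        count(eval(_time >= relative_time(now(), "-7d")))  as "1 week"
        count(eval(_time >= relative_time(now(), "-30d"))) as "1 month"
  by category
```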

Need to extract a JSON value based on a condition

I have a payload like the one below, and I need the StartTime and EndTime values from the first entry where IsAvailable is equal to true:

```
"StatusList": [
  {
    "date": "2020-03-13T00:00:00Z",
    "status": [
      { "StartTime": "2020-03-13T06:30:00Z", "EndTime": "2020-03-13T08:30:00Z", "IsAvailable": false, "score": 91.05 },
      { "StartTime": "2020-03-13T08:30:00Z", "EndTime": "2020-03-13T10:30:00Z", "IsAvailable": false, "score": 94.29 },
      { "StartTime": "2020-03-13T10:30:00Z", "EndTime": "2020-03-13T12:30:00Z", "IsAvailable": true,  "score": 100 },
      { "StartTime": "2020-03-13T12:30:00Z", "EndTime": "2020-03-13T14:30:00Z", "IsAvailable": true,  "score": 96.1 },
      { "StartTime": "2020-03-13T14:30:00Z", "EndTime": "2020-03-13T16:30:00Z", "IsAvailable": true,  "score": 90.39 },
      { "StartTime": "2020-03-13T16:30:00Z", "EndTime": "2020-03-13T18:30:00Z", "IsAvailable": false, "score": 0 }
    ],
```

How can I achieve this?
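A sketch of one approach: explode the status array with `spath` and `mvexpand`, then keep the first available slot (prepend your own base search):

```
... | spath path=StatusList{}.status{} output=status
| mvexpand status
| eval StartTime=spath(status, "StartTime"),
       EndTime=spath(status, "EndTime"),
       IsAvailable=spath(status, "IsAvailable")
| where IsAvailable="true"
| head 1
| table StartTime EndTime
```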

Time Conversion

Hi, I have a time format like: 2019-10-08 15:24:40.132 UTC. I used eval to strip it to: 2019-10-08 15:24:40. I need to calculate the age. My eval is below, but it is not working. Can someone assist, please?

```
| eval age=ceiling((now()-strptime(Event_Created_Time_Date,"%F %H:%M:%S"))/86400)
| eval Event_Age=case(
      age<1,"1_Less than 1 Days",
      age>=30,"6_Older than 30 Days",
      age>=20,"5_Older than 20 Days",
      age>=10,"4_Older than 10 Days",
      age>=5,"3_Older than 5 Days",
      age>=2,"2_Older than 2 Days",
      0==0,"7_No Age Data")
```
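A guess at the failure mode: if `Event_Created_Time_Date` still carries the `.132 UTC` tail when `strptime` runs, `%F %H:%M:%S` will not match and `strptime` returns null. One defensive sketch cuts the string down inside the eval instead of relying on an earlier stripping step (note also that an age between 1 and 2 days falls through to the default bucket in the original case statement):

```
| eval age=ceiling((now() - strptime(substr(Event_Created_Time_Date, 1, 19), "%Y-%m-%d %H:%M:%S")) / 86400)
| eval Event_Age=case(
      age<1,  "1_Less than 1 Days",
      age>=30,"6_Older than 30 Days",
      age>=20,"5_Older than 20 Days",
      age>=10,"4_Older than 10 Days",
      age>=5, "3_Older than 5 Days",
      age>=2, "2_Older than 2 Days",
      0==0,   "7_No Age Data")
```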

Help with Advanced Source Type

I'm trying to create a custom source type which reads a TSV log file where the third column is a JSON payload wrapped in quotes. I can't figure out how to get the source type to parse the third column as JSON in Splunk. Here's an example of a line entry:

```
6680 "2020-03-06 13:50:13.254" "{"date":"3/6/2020 1:50:13 PM","received":"from FooServer (Unknown [172.20.36.5]) by smtp-dev.foo.com with ESMTP ; Fri, 6 Mar 2020 13:50:13 -0500","message-id":"","from":"foo@thisMachine.com","recipients":"John.Smith@example.com","cc":"","subject":"Test Email"}"
```

Any advice would be helpful, thank you.
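One possible approach is to leave index-time parsing alone and pull the JSON out at search time: a props.conf extraction that captures the third column, followed by `spath`. A sketch, where the sourcetype name, the tab delimiter, and the regex are assumptions to adapt:

```
# props.conf
[custom:tsv:mail]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# capture everything between the quote after the second delimiter
# and the trailing quote as the JSON payload
EXTRACT-json_payload = ^[^\t]+\t"[^"]+"\t"(?<json_payload>.+)"$
```

Then at search time: `sourcetype=custom:tsv:mail | spath input=json_payload`.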

After upgrading from version 7.0.1 to 8.0.2, the errors below appear.

After upgrading from version 7.0.1 to 8.0.2, the errors below appear. Splunk is not indexing some internal logs like license_usage.log, and license consumption has increased a lot, though I think that is Splunk's own logging.

```
BatchReader-0 Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
03-05-2020 09:32:47.238 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:45.582 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.971 -0300 INFO TailReader - tailreader0 waiting to be un-paused
03-05-2020 09:32:37.971 -0300 INFO TailReader - Starting tailreader0 thread
03-05-2020 09:32:37.968 -0300 INFO TailReader - Registering metrics callback for: tailreader0
03-05-2020 09:32:37.969 -0300 INFO TailReader - batchreader0 waiting to be un-paused
03-05-2020 09:32:37.969 -0300 INFO TailReader - Starting batchreader0 thread
03-05-2020 09:32:37.969 -0300 INFO TailReader - Registering metrics callback for: batchreader0

TailReader-0 Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages: (the same messages as above)
```

I am running an import script at a 5-minute interval to collect data from all sourcetypes and put it into a summary index.

I have a situation where, in the span of 10 minutes, there is a possibility that we get no data from one of the sourcetypes for one interval but start getting data in the next interval; this way I am losing data in the summary index. Any suggestion would be helpful. Here's a part of my query:

```
| metadata type=sources index=abc
| search source=random
| eval earliest=lastTime - 300
| eval latest=now()
| fields earliest latest
```

So this random source is collecting data from all the sourcetypes.
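One hedged idea: instead of anchoring `earliest` to the source's lastTime, anchor it to the newest event already written to the summary index, so an empty interval just widens the next run's window instead of skipping it. This sketch assumes the summary index is called `summary` and keeps the original event timestamps:

```
index=summary source=random
| stats max(_time) as earliest
| eval latest=now()
| fields earliest latest
```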

Is there a sort option for the transaction command

I'm working with ForeScout Audit Policy events. Some of them have this in the message: Part (1/n), Part (2/n), and so on. I'm using the transaction command below to join the parts.

```
index=network sourcetype="forescout:audit" partOf=*
| transaction fields=partOf maxspan=1s
| search eventtype=fs_policy_change
| append [search index=network sourcetype=forescout:audit NOT partOf=* eventtype=fs_policy_change]
| sort - _time
```

The field partOf is set in default/transforms.conf:

```
[fs_get_parts]
REGEX = \|\sPart\s\((?\d{1,3})\/(?\d{1,3})\)\s\|
```

The append adds the single-event policy changes. The issue is that the order is sometimes correct and other times not. For example, I will get Part (4/4), Part (2/4), Part (1/4), and Part (3/4) for some of the transactions, and others in the correct order. I didn't see anything in the transaction command to allow me to sort on the part number. Any ideas? Splunk Enterprise 7.2.5.1 TIA, Joe
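transaction has no sort option, and since all the parts share nearly the same timestamp their relative order inside the merged event is effectively arbitrary. One possible workaround is to skip transaction and rebuild the message with stats and mvsort, using a zero-padded part number as the sort key. A sketch with hypothetical capture-group names (printf needs Splunk 7.1+):

```
index=network sourcetype="forescout:audit" partOf=*
| rex "\|\sPart\s\((?<part_num>\d{1,3})/(?<part_total>\d{1,3})\)\s\|"
| eval part=printf("%03d", tonumber(part_num)) . "::" . _raw
| stats min(_time) as _time list(part) as parts by partOf
| eval parts=mvsort(parts)
| eval message=mvjoin(parts, " ")
```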

Reroute events to a different index at the indexer

Hello, I'm trying to reroute certain events from a particular source as they hit my indexer. In the inputs.conf on the UF, the index is set to index=tokens for my source path, but I want to catch certain events from this source and route them to a different index at the indexer. So far three events have gotten past my transform, and I'm trying to figure out why and what I'm doing wrong. Below are my original props and transforms.

props.conf:

```
[source::...redacted]
TRANSFORMs-mbox_token_reroute = reroute
```

transforms.conf:

```
[reroute]
REGEX=reg
FORMAT=mbox_tokens
SOURCE_KEY=MetaData:Source
DEST_KEY=_MetaData:Index
```

This is what I just changed it to, waiting to see whether events are rerouted once the trigger action happens.

props.conf:

```
[source::...redacted]
TRANSFORMs-mbox_token_reroute = reroute
```

transforms.conf:

```
[reroute]
REGEX=reg
FORMAT=mbox_tokens
SOURCE_KEY=MetaData:Source
DEST_KEY=_MetaData:Index
WRITE_META=true
```

What should I do to make sure that the events are getting rerouted?
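For comparison, the index-rerouting pattern from the Splunk docs looks like the sketch below. Two guesses about the problem: WRITE_META is for writing new indexed fields, not for rerouting; and with SOURCE_KEY=MetaData:Source the REGEX is tested against the source metadata (a string like `source::/your/path`), not against the event text, so `reg` has to match that string. Also, the transform must live on the first parsing instance (indexer or heavy forwarder), not on the UF.

```
# props.conf
[source::...redacted]
TRANSFORMS-mbox_token_reroute = reroute

# transforms.conf
[reroute]
# SOURCE_KEY defaults to _raw; keep MetaData:Source only if the
# pattern should match the source path rather than the event text
REGEX = reg
DEST_KEY = _MetaData:Index
FORMAT = mbox_tokens
```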

How can I include another field into the visual?

I'm working with dashboards, and the goal is to show a bar graph panel that displays the counts for two different fields separately (2 bars per timespan), if possible. The data is from the same index: the action field (action=blocked) and the category field (category=221). I can build a visual for each individual field but am having trouble combining the two.

```
index=url_filter action=blocked login_id="$user$" | stats count by _time | bucket _time span=1h
```

I have another field, not exclusive to action=blocked, that I'd like to display as well. Any tips appreciated.
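One way to get two series from a single search is `timechart` with one `count(eval(...))` per condition; note that the time bucketing has to happen before the stats, not after. A sketch:

```
index=url_filter login_id="$user$" (action=blocked OR category=221)
| timechart span=1h count(eval(action="blocked")) as Blocked
                    count(eval(category=221)) as "Category 221"
```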

Why does service.export REST API fail when _raw is not excluded?

Using the Java API and requesting a streaming export from Splunk, a search like this:

```
search index="client_ndx" sourcetype="client_source" (field1 = "*")
| regex field1 != "val1|val2|val3"
| fields field1, field2, field3, field4, _time
| fields - _raw
```

(NOTE: ending with `| fields - _raw`) returns the labeled fields, but ending it without that exclusion fails with the following error:

```
java.lang.RuntimeException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[124683119,213]
Message: JAXP00010004: The accumulated size of entities is "50,000,001" that exceeded the "50,000,000" limit set by "FEATURE_SECURE_PROCESSING".
	at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:128)
	at com.splunk.ResultsReader.getNextElement(ResultsReader.java:87)
	at com.splunk.ResultsReader.getNextElement(ResultsReader.java:29)
	at com.splunk.StreamIterableBase.cacheNextElement(StreamIterableBase.java:87)
	at com.splunk.StreamIterableBase.access$000(StreamIterableBase.java:28)
	at com.splunk.StreamIterableBase$1.hasNext(StreamIterableBase.java:37)
	at com.insightrocket.summaryloaders.splunk.SplunkParser.run(SplunkParser.java:112)
	at java.lang.Thread.run(Thread.java:745)
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[124683119,213]
Message: JAXP00010004: The accumulated size of entities is "50,000,001" that exceeded the "50,000,000" limit set by "FEATURE_SECURE_PROCESSING".
	at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:596)
	at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(XMLEventReaderImpl.java:83)
	at com.splunk.ResultsReaderXml.readSubtree(ResultsReaderXml.java:423)
	at com.splunk.ResultsReaderXml.getResultKVPairs(ResultsReaderXml.java:325)
	at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:124)
	... 7 more
```

I specifically used service.export to get a stream and bypass the maximum record count, but a change in the system now requires the use of the _raw field.
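The limit being hit is the JDK's JAXP secure-processing cap on accumulated entity size, not a Splunk-side setting, so two hedged workarounds suggest themselves: raise or disable the JDK limit via a system property, or request JSON output for the export and parse it with the SDK's `ResultsReaderJson` so the XML parser never sees the payload. The system-property approach:

```
# disable the JAXP accumulated-entity-size cap (0 = unlimited);
# must be set on the JVM that runs the export client
java -Djdk.xml.totalEntitySizeLimit=0 -cp ... YourExporterMainClass
```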

Problem loading data into aws app panels.

I've got both the AWS add-on and the AWS app installed. They're both installed on the heavy forwarder and on the search head, though the add-on is not visible on the search head. I've verified that a bunch of data is making it into the index. The overview panels populate, but not many of the more specific panels do. I'm particularly interested in the security VPC reports; while I see flow log data in the index and can run the saved searches, which DO generate tables of data, if I view the index they "collect" into, it's empty. I'm not sure how to track down what's going on from this point. I inherited this Splunk setup fairly recently.

Excluding weekend from alerts

I have created a few alerts which need to run only from Monday to Friday, but I have not been able to find a way to exclude Saturday and Sunday. Can anyone assist with this, please?
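The fifth field of the alert's cron schedule is the day of the week, so restricting it to 1-5 covers Monday through Friday. A sketch (the every-15-minutes part is illustrative):

```
# savedsearches.conf, or the "Run on Cron Schedule" option in the alert UI
cron_schedule = */15 * * * 1-5
```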
