Channel: Questions in topic: "splunk-enterprise"

How would I configure my regex to also include Windows data?

I have a query that identifies all the logs in my instance for a certain index, but it lists everything running except Windows. What am I missing? Thanks in advance.

    index="source" | rex field=source "^.*\/(?=[^/])(?.*?)($|\s|\-|\_)"
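Windows sources use backslashes as path separators, so a pattern anchored on "/" alone never matches them. Below is a rough sketch that keys on either separator; the index comes from the question, while the capture-group name and the trailing stats are illustrative, and backslash escaping inside rex may need tweaking for your environment:

    index="source"
    | rex field=source "^.*[/\\\\](?<source_name>[^/\\\\]*?)($|\s|-|_)"
    | stats count by source_name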

Help me with a search for my use case: alerting when the daily count is zero

I need to set up an alert if my count is zero on a given day. My query is:

    index=abc | timechart span=1d count

and I am running it over the last 7 days. If the count is 0 on a day, I want to trigger an alert. Please help me with the search query.
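A minimal sketch for the alert search, assuming the index is literally abc: keep only the days whose count is zero, then set the alert to trigger when the number of results is greater than 0.

    index=abc
    | timechart span=1d count
    | where count=0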

Why doesn't my REST query to /services/authentication/users work anymore all of a sudden?

Hi, I use this query almost every day:

    | rest /services/authentication/users

But today it doesn't work; I get this error message: Failed to parse XML Body:
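As a hedged diagnostic, restricting the rest command to the local instance can show whether the malformed XML is coming from a particular search peer, since by default | rest is dispatched to all configured peers (splunk_server is a standard argument of the rest command):

    | rest /services/authentication/users splunk_server=local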

Cisco eStreamer eNcore Add-on for Splunk: App is grouping events -- Is this normal behavior?

Cisco eStreamer eNcore is grouping events, as seen on the indexer when searched. The old eStreamer client did not do this. Is it normal behavior for certain events to be grouped together? Any help would be appreciated. Splunk v6.5.2; Cisco devices are v6.

Creating dashboards based on field names rather than field values in nested JSON

Hi Splunkers, I have events coming into Splunk Enterprise in the following JSON format:

    {
      "ip": "1.1.1.1",
      "mac": "010203040506",
      "policies": {
        "policy_name_1": {
          "rule_name_in_policy1": { "status": "Unmatched", "timestamp": 15012456757 }
        },
        "policy_name_2": {
          "rule_name_in_policy2": { "status": "Matched", "timestamp": 15012446751 }
        },
        "policy_name_3": {
          "rule_name_in_policy3": { "status": "Matched", "timestamp": 15012456487 }
        }
      },
      "username": "abstract"
    }

I want to create a "matched" dashboard with a pie chart conveying "rule_name_in_policy1 is matched by 25 hosts, rule_name_in_policy2 is matched by 3 hosts, ... and so on". To achieve this, I can roughly imagine a search that stores the rule names in a variable_a and then does a "timechart count by variable_a", but I don't know how to write it. I also can't figure out how to filter down to all instances of policies.policy_name_x.rule_name_in_policyx.status=Matched. I'm new to SPL. Can someone please help me with the correct search string?
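Here is one possible sketch, assuming the events live in a placeholder index and that spath extracts the nested keys as policies.<policy_name>.<rule_name>.status; the foreach wildcard handling and the mvappend trick may need adjusting for your data:

    index=your_index
    | spath
    | foreach policies.*.*.status
        [ eval matched_rules=if('<<FIELD>>'=="Matched", mvappend(matched_rules, "<<MATCHSEG2>>"), matched_rules) ]
    | mvexpand matched_rules
    | stats dc(ip) AS matched_hosts BY matched_rules

The result (one row per rule name with a distinct count of hosts) can then drive a pie chart directly.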

How can we extract the data from a special format file?

We have the following:

    2017-10-17 13:07:30,617 INFO [stdout] (ajp-/0.0.0.0:8009-81) ]
    2017-10-17 13:07:31,694 INFO [stdout] (ajp-/0.0.0.0:8009-37) 2017-10-17 13:07:31,691 ERROR LoggerAspect.doLogging(83)> *** Method Exit ***
    2017-10-17 13:07:31,694 INFO [stdout] (ajp-/0.0.0.0:8009-37) LogData[
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) className=KanaMessagingServiceClient
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) methodName=KanaMessagingServiceClient.retrieveKanaMessage(..)
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) corrId=081fbf60-b7bb-4228-8ff9-edc98a172afc
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) userId=383e6abe-e4ba-46f9-b8b1-047f1981c508
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) argsArray=[ {
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) "lastName" : "xxxx",
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) "firstName" : "yyyy",
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) "groupId" : "0000000000",
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) "dateOfBirth" : "19901118",
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) "memberId" : "zzzzzz",
    2017-10-17 13:07:31,695 INFO [stdout] (ajp-/0.0.0.0:8009-37) "middleName" : ""
    2017-10-17 13:07:31,696 INFO [stdout] (ajp-/0.0.0.0:8009-37) }, null, null, "INBOX", "N" ]
    2017-10-17 13:07:31,696 INFO [stdout] (ajp-/0.0.0.0:8009-37) methodEntryTime=2017-10-17T13:07:28.354-05:00
    2017-10-17 13:07:31,696 INFO [stdout] (ajp-/0.0.0.0:8009-37) methodExitTime=2017-10-17T13:07:31.691-05:00
    2017-10-17 13:07:31,696 INFO [stdout] (ajp-/0.0.0.0:8009-37) throwable=
    2017-10-17 13:07:31,696 INFO [stdout] (ajp-/0.0.0.0:8009-37) com.uhc.kanasecuremessaging.FaultType_Exception:
    2017-10-17 13:07:31,712 INFO [stdout] (ajp-/0.0.0.0:8009-37) at java.lang.Thread.run(Thread.java:745)
    2017-10-17 13:07:31,712 INFO [stdout] (ajp-/0.0.0.0:8009-37)
    2017-10-17 13:07:31,712 INFO [stdout] (ajp-/0.0.0.0:8009-37) ]

How can we extract the data from this special format?
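Assuming a whole LogData[ ... ] block is indexed as a single multi-line event (the index and sourcetype below are placeholders), a rough field-extraction sketch with rex could look like this, keying on the key=value pairs visible in the sample:

    index=your_index sourcetype=your_sourcetype "LogData["
    | rex "className=(?<className>\S+)"
    | rex "methodName=(?<methodName>\S+)"
    | rex "corrId=(?<corrId>\S+)"
    | rex "userId=(?<userId>\S+)"
    | rex "methodEntryTime=(?<methodEntryTime>\S+)"
    | rex "methodExitTime=(?<methodExitTime>\S+)"
    | table _time className methodName corrId userId methodEntryTime methodExitTime

If each physical line is currently landing as its own event, the line-breaking for this sourcetype would need to be fixed first so that a whole block is one event.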

Saved searches not working in C# SDK 2.x example

Here's what I needed to do in order to get saved searches to work in the C# SDK 2.x. Edit the following class: **splunk-sdk-csharp-pcl-2.2.6\src\Splunk.Client\Splunk\Client\AtomEntry.cs**

**Replace the following code, starting at line 401:**

    for (int i = 0; i < names.Length - 1; i++)
    {
        propertyName = NormalizePropertyName(names[i]);

        if (dictionary.TryGetValue(propertyName, out propertyValue))
        {
            if (!(propertyValue is ExpandoObject))
            {
                throw new InvalidDataException(name); // TODO: Diagnostics : conversion error
            }
        }
        else
        {
            propertyValue = new ExpandoObject();
            dictionary.Add(propertyName, propertyValue);
        }

        dictionary = (IDictionary)propertyValue;
    }

    propertyName = NormalizePropertyName(names[names.Length - 1]);
    propertyValue = await ParsePropertyValueAsync(reader, level + 1).ConfigureAwait(false);
    dictionary.Add(propertyName, propertyValue);

**...with this code:**

    // Walk the intermediate name segments, but only descend into the nested
    // dictionary when the existing value really is an ExpandoObject.
    bool addDictionary = false;

    for (int i = 0; i < names.Length - 1; i++)
    {
        addDictionary = true;
        propertyName = NormalizePropertyName(names[i]);

        if (dictionary.TryGetValue(propertyName, out propertyValue))
        {
            if (!(propertyValue is ExpandoObject))
            {
                // throw new InvalidDataException(name); // TODO: Diagnostics : conversion error
                addDictionary = false;
            }
        }
        else
        {
            propertyValue = new ExpandoObject();
            dictionary.Add(propertyName, propertyValue);
        }

        if (addDictionary)
        {
            dictionary = (IDictionary)propertyValue;
        }
    }

    // Add the leaf property only if it is not already present.
    try
    {
        propertyName = NormalizePropertyName(names[names.Length - 1]);
        propertyValue = await ParsePropertyValueAsync(reader, level + 1).ConfigureAwait(false);

        if (!dictionary.ContainsKey(propertyName))
        {
            dictionary.Add(propertyName, propertyValue);
        }
    }
    catch (Exception er)
    {
    }

**I recompiled, and then the saved search example worked!** Just wondering if there's a better fix.

Deployment server only showing 1 client at a time

I have only a deployment server at the current time, and to get ahead of the game we are going to roll the UF out to our Windows servers, as this can take months. My deployment server has no apps, so it is just the clients reporting in. I currently have 2 clients configured, but only 1 shows up at a time. If one is showing and I bounce the other client's Splunk service, that client shows up, but the first one disappears. Why would that happen?

Extract text from logs

Below is my log:

    CustomItemContainerGenerator.GenerateNextLocalContainer: Node is not the current one. in Xceed.Wpf.DataGrid.v4.5
    Stack trace:
       at Xceed.Wpf.DataGrid.CustomItemContainerGenerator.GenerateNextLocalContainer(Boolean& isNewlyRealized)
       at Xceed.Wpf.DataGrid.CustomItemContainerGenerator.System.Windows.Controls.Primitives.IItemContainerGenerator.GenerateNext(Boolean& isNewlyRealized)
       at Xceed.Wpf.DataGrid.Views.TableflowViewItemsHost.GenerateContainer(ICustomItemContainerGenerator generator, Int32 index, Boolean measureInvalidated, Boolean delayDataContext)
       at Xceed.Wpf.DataGrid.Views.TableflowViewItemsHost.GenerateContainers(I

How can I extract only "Node is not the current one" from the log and display it?
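Assuming these messages are in an index of your own (placeholder below), one sketch is a rex that captures the text between the method name and the "in Xceed..." suffix:

    index=your_index "GenerateNextLocalContainer"
    | rex "GenerateNextLocalContainer:\s+(?<message>.+?)\.\s+in\s+Xceed"
    | table _time message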

How to pass a different search query based on the token value from a text field

I have a text field with the default/initial value set to "*". I want to use different search queries based on the value from the text field, which is mainly either "*" or not "*". Any suggestions? Thanks.
Search No Star (time range -15m@m to now):

    | makeresults | eval nonstar="$tok_text$" | stats values(nonstar)

Search Star (time range -1d@d to now):

    | makeresults | eval star="$tok_text$" | stats values(star)
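If the goal is simply "show everything when the token is * and filter otherwise", one way to avoid maintaining two separate searches is to fold the condition into a single query; the index and field name below are placeholders:

    index=your_index
    | where "$tok_text$"="*" OR fieldname="$tok_text$"
    | stats count by fieldname

If you really need two entirely different queries, the usual pattern is a <change>/<condition> handler on the text input in the dashboard XML that sets a secondary token, which the panel search then references.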

Has anyone seen this error message: Monotonic time source didn't increase; is it stuck?

Since we upgraded to 7.0, we're seeing this particular error show up in the logs:

    10-17-2017 11:30:30.772 -0600 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?

We weren't able to find much information regarding this error online and wanted to poll the audience to see if anyone else has encountered it.

MAC spoofing: how can I find systems sharing the same MAC address?

I think I'm close, just need a little help. Here is my current search:

    index=windows sourcetype=dhcpsrvlog
    | stats dc(raw_mac) as macCount values(raw_mac) as mac by dest_nt_host
    | eventstats count by raw_mac
    | where count = 2

I'm trying to get results for any 2 systems sharing the same MAC address.
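Note that after stats ... by dest_nt_host, the raw_mac field no longer exists, so the eventstats count by raw_mac has nothing to group on. A sketch that flips the grouping around (hosts per MAC) might look like this, reusing the field names from the search above:

    index=windows sourcetype=dhcpsrvlog
    | stats dc(dest_nt_host) AS host_count values(dest_nt_host) AS hosts BY raw_mac
    | where host_count > 1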

Overlapping datapoints

I have two different series on a single chart: a column chart for one series is overlaid by a line chart for the other. In some places the data value of the column chart is the same as the data value of the line chart (overlapping datapoints), which makes the chart difficult to read. Is there any property for the placement of datapoints, or any other way to fix this overlapping?

Sophos Central app for Splunk: Data is not being pulled by the API

Anyone with the same issue? I'm not seeing anything being pulled by the API; I have put the API info into the Splunk add-on. ![alt text][1] [1]: /storage/temp/216829-sophos1.jpg

Indexing stops before MaxSize for that index

Hi. I have a single-server Splunk architecture. I created an index, let's call it "IT", with storage pointed at 500GB of separate high-performance SSD (not the default drive). No other index is stored on this storage. Initially I set the size of the IT index to 200GB. Now we need to index more than 200GB of data, so I increased the size to 450GB from the Indexes web UI and started ingesting data into the IT index. Indexing seems to stop, or at least the current size doesn't grow beyond 300GB, even though there is 150GB of space available.

Max size of entire index = 500 GB
Max size of hot/warm/cold bucket = auto

I haven't set up a frozen path or a tsidx retention policy. I can't figure out why it's not indexing data. I'd appreciate the community's help or suggestions.
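One hedged way to see which limit is actually being hit is to pull the effective settings for the index over REST; the field names below are the usual indexes.conf settings exposed by the data/indexes endpoint, but they may vary by version:

    | rest /services/data/indexes splunk_server=local
    | search title="IT"
    | table title currentDBSizeMB maxTotalDataSizeMB maxDataSize homePath.maxDataSizeMB coldPath.maxDataSizeMB frozenTimePeriodInSecs

If homePath.maxDataSizeMB or coldPath.maxDataSizeMB is lower than the overall limit, buckets can roll or freeze well before maxTotalDataSizeMB is reached.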

Return results from ALL sub-searches into a table

Each search below gathers data via SQL queries from 3 different databases on 3 different servers. I have combined them into one, hoping to return the totalerrors and the ClientID associated with those errors in one final table at the bottom. So it would look like:

    totalerrors | ClientID
    0           | abc
    3           | def
    5           | ghi

As of right now, the search below only returns the first search's data, so I only retrieve:

    totalerrors | ClientID
    0           | abc

Any suggestions on how to return the data from each sub-search into one final table, all together?

    | dbxquery query="SELECT count(*) as totalerrors, 'abc' as ClientID FROM \"CM126abc\".\"dbo\".\"table1\" (NOLOCK) WHERE PROCESSEDFLAG = 'N' and ADDEDDT>DATEADD(hour, -24, GETDATE())" connection="126"
    | fields totalerrors, ClientID
    | appendcols
        [dbxquery query="SELECT count(*) as totalerrors, 'def' as ClientID FROM \"CM126def\".\"dbo\".\"table2\" WHERE PROCESSEDFLAG = 'N' and ADDEDDT >DATEADD(hour, -24, GETDATE())" connection="126"
        | fields totalerrors, ClientID ]
    | appendcols
        [dbxquery query="SELECT count(*) as totalerrors, 'ghi' as ClientID FROM \"CM126ghi\".\"dbo\".\"table3\" WHERE PROCESSEDFLAG = 'N' and ADDEDDT >DATEADD(hour, -24, GETDATE())" connection="126"
        | fields totalerrors, ClientID]
    | table UnprocessedVendorCAs, ClientID
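Because every dbxquery returns a single row with the same field names, appendcols tries to merge identically named columns into that same row, so you never end up with one row per ClientID. Appending rows instead (one row per sub-search) produces the table sketched above. A rough sketch, with the SQL strings abbreviated as "..." since they are the same queries already shown:

    | dbxquery query="SELECT count(*) as totalerrors, 'abc' as ClientID FROM ..." connection="126"
    | fields totalerrors, ClientID
    | append
        [| dbxquery query="SELECT count(*) as totalerrors, 'def' as ClientID FROM ..." connection="126"
         | fields totalerrors, ClientID]
    | append
        [| dbxquery query="SELECT count(*) as totalerrors, 'ghi' as ClientID FROM ..." connection="126"
         | fields totalerrors, ClientID]
    | table totalerrors, ClientID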