Channel: Questions in topic: "splunk-enterprise"

conditional search

I've read other answers about conditional searches, but I still can't find an answer to my problem. The situation is as follows: I have one search (S1, which runs on index1) that provides values for another search (S2, which runs on index2). Something like:

```
index=index2 [ search index=index1 | stats count by src_ip | where count > N | fields src_ip ]
| stats count by src_ip, dest_ip
```

Usually S1 has non-empty results. In most cases S2 has no results, and I'm interested in the cases where S2 does return results. In those cases I'd like to append the results from S1, so my final query basically has this structure:

```
index=index2 [ search index=index1 | stats count by src_ip | where count > N | fields src_ip ]
| stats count by src_ip, dest_ip
| append [ search index=index1 | stats count by src_ip | where count > N | fields src_ip ]
```

The problem is that, as I've mentioned, the S1 query has results most of the time, so the append (and therefore my final query) does too. I'd like to run the append only when there are non-empty results from the first part.
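
One possible workaround, sketched rather than tested: let the append run unconditionally, tag every row with its origin, and then drop the appended rows whenever the main search contributed nothing. The `origin` helper field is mine, and `N` stays a placeholder:

```
index=index2 [ search index=index1 | stats count by src_ip | where count > N | fields src_ip ]
| stats count by src_ip, dest_ip
| eval origin="s2"
| append
    [ search index=index1 | stats count by src_ip | where count > N | fields src_ip
    | eval origin="s1" ]
| eventstats sum(eval(if(origin="s2",1,0))) as s2_rows
| where origin="s2" OR s2_rows > 0
| fields - origin, s2_rows
```

If S2 returned nothing, s2_rows is 0 on every appended row and the final where clause empties the result set.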

Timezone in props.conf doesn't have any effect

I am working on a demo using Splunk. I have a tool which uploads JSON data to the Windows Event Log, and a Splunk universal forwarder forwards the data to a Splunk instance (on the same machine). The JSON event has a field called timestamp which I plan to use as the event time in Splunk. I also want the timestamp interpreted as being in a different timezone (Europe/Lisbon). I have changed the file $SPLUNK_HOME/etc/system/local/props.conf and added:

```
[source::WinEventLog*]
TIME_PREFIX = timestamp
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = Europe/Lisbon
MAX_TIMESTAMP_LOOKAHEAD = 30
MAX_DAYS_AGO = 1000
```

I expect the event time in Splunk to be taken from the timestamp field, shifted by 8 hours since my computer's region is PST. But the settings don't seem to have any effect: the event time is shown as the time the event was posted to the Windows Event Log. Here is a sample event:

```
11/14/2019 07:39:41 PM
LogName=CustomLog
SourceName=ECEventLogProvider
EventCode=256
EventType=4
Type=Information
ComputerName=CHECHI
TaskCategory=Network Events
OpCode=None
RecordNumber=40498
Keywords=Classic
Message={ "country" : "United Kingdom", "description" : "Sample", "deviceId" : "Computer748", "event_id" : "34", "id" : "29", "logtype" : "Info", "msgqnum" : "0", "severity" : "High", "source" : "Sample", "system_state" : "S4/S5", "timestamp" : "2019-11-12 23:43:06", "timestamp_accuracy" : "Accurate" }
```

The event time shown in Splunk search is 11/14/2019 07:39:41 PM. I would expect it to be 2019-11-12 15:43:06.
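
Two things worth checking, offered as a sketch rather than a confirmed fix: timestamp settings are applied at parse time, i.e. on the indexer or a heavy forwarder rather than on a universal forwarder, and TIME_PREFIX is a regular expression that must match the text immediately before the timestamp value (MAX_TIMESTAMP_LOOKAHEAD then counts from the end of that match). On the parsing instance, something like:

```
[source::WinEventLog*]
# Match up to the opening quote of the JSON timestamp value (assumption:
# the literal text  "timestamp" : "  precedes it, as in the sample event)
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = Europe/Lisbon
MAX_TIMESTAMP_LOOKAHEAD = 30
MAX_DAYS_AGO = 1000
```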

Timetable/schedule is given in a lookup table; how to use it in a Splunk query?

Hi Splunkers, I am stuck in a situation where I have been given an input lookup file containing the operational hours of a train:

```
        9-10  10-11  11-12  12-13  13-14  14-15  15-16  16-17  ...  23-24
Today    1     2      3      4      5
T-1      1     2      3      4      5
T-2      1     2      3      4      5
T-3      1     2      3      4      5
```

The bin size is 1 hour in this case, and the schedule of the same train for the previous 3 days is provided with the same bin size.

Scenario: Today's schedule is that the train's 1st hour of operation is 9-10, its 2nd hour of operation is 10-11, and so on. Every day the train runs for 5 hours, so 5 hours of operation are given in the table. Say the current time falls in the 1st hour of operation: I then need to take the 1st hour of operation for each of the last 3 days, count the alarms opened in those hours, and divide by 3 to get the average. If the number of alarms opened in today's 1st hour of operation is more than that average, it should raise an alert.

Question: How can I mark the hours of operation of previous days? If I am currently in the 2nd hour of operation, how do I get the count of alarms opened in the 2nd hour of operation on each of the previous 3 days? I understand the scenario logically but can't see how to implement it in Splunk. Please guide. Hope my question is clear. TIA
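
A rough sketch of one approach, with everything hypothetical: assume the lookup can be reshaped to one row per (day, hour) pair with columns day, hour and op_hour, exposed through a lookup definition called train_schedule_lookup, and that alarms live in an alarms index. Each alarm is mapped to its hour of operation, and today's count per operation hour is compared against the 3-day average:

```
index=alarms earliest=-3d@d
| eval day=case(_time >= relative_time(now(),"@d"), "Today",
                _time >= relative_time(now(),"-1d@d"), "T-1",
                _time >= relative_time(now(),"-2d@d"), "T-2",
                true(), "T-3")
| eval h=tonumber(strftime(_time,"%H"))
| eval hour=h . "-" . (h+1)
| lookup train_schedule_lookup day hour OUTPUT op_hour
| where isnotnull(op_hour)
| stats count(eval(day!="Today")) as past_alarms, count(eval(day="Today")) as today_alarms by op_hour
| eval avg_past=past_alarms/3
| where today_alarms > avg_past
```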

Deployment of Universal Forwarder to Apple Mac fleet

Our company operates a fleet of Apple Macs. We would like to automate the deployment and configuration of the universal forwarder agent on these Macs via our MDM platform, but there is very little information from Splunk on how to automatically configure the macOS universal forwarder to communicate with our Splunk infrastructure. Given the size of the Mac fleet, we would rather not have a technician install and configure the universal forwarder on every machine manually.

The only documentation we've been able to locate is this Splunk page: "docs.splunk.com/Documentation/Forwarder/8.0.0/Forwarder/Installanixuniversalforwarder#Install_the_universal_forwarder_on_Mac_OS_X", which unfortunately does not provide any guidance on automatically applying custom configuration settings during the install.

For the MSI (Windows) version of the universal forwarder installer there are a number of parameters available, such as 'DEPLOYMENT_SERVER', 'AGREETOLICENSE', 'SPLUNKUSERNAME' and 'SPLUNKPASSWORD' (ref: "docs.splunk.com/Documentation/Forwarder/latest/Forwarder/InstallaWindowsuniversalforwarderfromthecommandline"). Does anyone know if these parameters are also available for the macOS version of the universal forwarder installer?

If anyone has experience with deploying the universal forwarder to a large Mac fleet, we'd be keen to hear how you've automated the process, if indeed it is possible to do so.
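
As far as I know, the MSI-style installer properties are Windows-only, but the same outcome can usually be reached on macOS by having the MDM lay down configuration files under $SPLUNK_HOME/etc/system/local before first start and then running `./splunk start --accept-license --answer-yes --no-prompt` from a post-install script. A sketch, with a hypothetical deployment server host:

```
# deploymentclient.conf: point the forwarder at a deployment server
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

```
# user-seed.conf: seed admin credentials non-interactively on first start
[user_info]
USERNAME = admin
PASSWORD = example-only-change-me
```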

dashboard with multiple dropdown menus not working

I have created a simple dashboard with two dropdown menus. Selecting an item from the second menu appears to work, but no results are returned. If I open the query in search, the token value is corrupted: rather than "text" it appears in the search as "t e x t". No idea where these spaces are coming from.

Splunk license pools and indexer details

Team, we are managing the license manager enterprise-wide, so we need to know:

1. How can we get the list of license pools along with their GUIDs?
2. Where do we see this data on the server, and in which logs?
3. We are in the process of automating things, where we need to add indexers to particular license pools. How can we achieve this?
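
A sketch of where I'd start (field names from memory, so treat them as assumptions): the licenser endpoints on the license master expose the pools, and the `slaves` attribute should hold the GUIDs of the indexers assigned to each pool. Per-indexer usage is logged to license_usage.log on the license master, searchable via the _internal index. Pool membership lives in server.conf ([lmpool:...] stanzas) on the license master, which is probably the hook for automation.

```
| rest /services/licenser/pools splunk_server=local
| table title, description, quota, slaves, stack_id
```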

Combine values into one event, then search whether one of the values is contained

Hi, thanks in advance. This is a hard one to put well in the title. Basically I have sets of data containing students' scores for tests. Students can take these tests multiple times. I need a search that will show only students who have never scored 80 or higher.

**Sample data (fields "Display_Name" and "result"):**

```
Display_Name  result
John_Doe      20
John_Doe      60
John_Doe      80
Jane_Doe      95
Jack_Doe      20
```

**The result I need (as Jack_Doe is the only person not to have scored 80 or higher):**

```
Jack_Doe
```

I can't simply use "where result < 80", because then John_Doe would still be included. I need something that excludes anyone who has ever scored 80 or higher. I have tried all manner of combinations, which I won't list, as I find it's sometimes best for people to approach a problem without preconceptions. Thank you all
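
One straightforward way, sketched with the fields as shown and a placeholder index name: take each student's best score and keep only those whose best is below 80.

```
index=scores
| stats max(result) as best_score by Display_Name
| where best_score < 80
| table Display_Name
```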

Basic question about scheduled search

Hello. In my dashboard I use a scheduled search with a filter token, because I have a dropdown list which lets me filter by SITE. But I need to execute the stats command after the loadjob, because I need to pick up the top 10 events (head 10) for a specific site. If I do the stats command directly in the saved search, I pick up the top 10 events (head 10) but across different sites. Is there a solution to solve the problem directly in the saved search? Because if I do the stats command after the loadjob, the scheduled search isn't very useful.

```
| loadjob savedsearch="admin:SA_Monitoring_sh:Performances - Compliance host"
| search SITE=$tok_filtersite|s$
| stats values(SITE) as SITE, count by host flag
| where isnotnull(flag)
| rename host as Hostname, flag_patch_version as "Patch level", SITE as Site
| fields - count
| table Hostname Site "Patch level"
| sort +"Patch level"
| head 10
```

thanks
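
A possible restructuring, sketched under the assumption that flag really is the patch-level field: compute the top 10 per site inside the saved search with streamstats, so the dashboard only needs to filter by the SITE token after loadjob.

```
... base search ...
| stats values(SITE) as Site, count by host, flag
| where isnotnull(flag)
| sort Site, +flag
| streamstats count as rank by Site
| where rank <= 10
| rename host as Hostname, flag as "Patch level"
```

The dashboard side would then be just `| loadjob savedsearch="..." | search Site=$tok_filtersite|s$ | fields - rank`.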

Data model misses events when using a calculated field constraint

I have a data model in Splunk with a root event and two child events. The child events have a constraint that uses a calculated field. When I search the child events, only recent data is returned. This only happens when data model acceleration is enabled. The child constraint:

```
latency>0
```

**Example.** When I count the total number of events, the total is always correct:

```
| tstats count from datamodel=dmdemo.rooteventdemo where nodename=rooteventdemo
```

Results: 580,220. Which is roughly the same as:

```
| datamodel dmdemo rooteventdemo search
```

Results: 580,704. However, when I search the child:

```
| tstats count from datamodel=dmdemo.rooteventdemo where nodename=rooteventdemo.child1
```

Results: 0. Which is the same as:

```
| datamodel dmdemo child1 search
```

Results: 0. Note: these values change continuously when I search over the latest 15 minutes. When I disable report acceleration, tstats obviously doesn't work, but the search works fine again:

```
| tstats count from datamodel=dmdemo.rooteventdemo where nodename=rooteventdemo.child1
```

Results: 0.

```
| datamodel dmdemo child1 search
```

Results: 474,045. The total result is as expected:

```
| datamodel dmdemo rooteventdemo search | search rooteventdemo.field1>0
```

Results: 474,045.

```
| datamodel dmdemo rooteventdemo search | where 'rooteventdemo.field1'>0
```

Results: 474,045.

I've also created a new data model where the calculated field is used in the root event. This still causes the same issues, so the problem is not specific to child data sets. Does anyone know what is causing this and how to fix it? I've thought of simply sending the calculated field's value to Splunk, or perhaps creating the field at index time, but I think Splunk data models should be able to cope with calculated fields.
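
A diagnostic worth trying, offered as a sketch: by default tstats on an accelerated model reads only the acceleration summaries, so comparing summariesonly=true against summariesonly=false per day shows whether the summaries themselves are missing the events:

```
| tstats summariesonly=false count from datamodel=dmdemo.rooteventdemo
    where nodename=rooteventdemo.child1 by _time span=1d
```

If the counts come back with summariesonly=false but not with summariesonly=true, the summary-building searches are dropping the calculated field, which would point at the acceleration process rather than the model definition.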

How can we control the count in Maps+?

Hi Splunkers, please help me find a solution to this problem. **My task is to show bus locations and service-center locations on a single map.** Bus locations are stored in **index A** and service-center locations are stored in a **lookup file**. To achieve this I am using **Maps+**. Initially, when the map loads, it shows only bus locations; when the user clicks "show service centers", the service-center locations are loaded onto the same map. To achieve this I am using an append query to combine the bus locations and service-center locations. Both sets of locations are visible on the map, *but the problem is that the marker count includes the service-center locations along with the bus locations.* Please help me restrict it so the service-center locations are not counted.

**My query:**

```
index="A"
| append [| inputlookup Service_lookup ]
| eval markerColor=case(like(Status, "%1%"), "orange", like(Status, "%0%"), "red", 1=1, "lightblue"),
       icon=case(like(Status, "%1%"), "building", like(Status, "%0%"), "hand-lizard-o", 1=1, "car"),
       layerDescription=case(like(Status, "%1%"), "building", like(Status, "%0%"), "hand-lizard-o", 1=1, "car")
| table latitude, longitude, BUS_ID, markerColor, icon, iconColor, layerDescription, ServiceCenterName, Status, Address
```

(The attached screenshot showed that the count inside the rectangular box is the sum of both buses and service centers.)

Server error while logging in

When I try to log in, whether with correct or incorrect login information, I always get the message "server error".

recommended way to rename a KV store collection that is not empty?

Dear all, I am pretty new to the KV store, the REST API and the Python SDK, so my question might be trivial for an expert, but after some hours spent on answers.splunk.com I still haven't found a real solution. We are using Splunk Enterprise 8.0.0, and reading the "Endpoints reference list" (https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTlist) I see that the REST API allows for collection creation, adding items, updating items (in the sense of full updates), and collection deletion. I couldn't find anything about renaming collections, however. How should this work? The direct solution of gathering the data, creating another collection, pushing the data into the new collection, and then deleting the old collection does not seem a good choice when working with large collections (>= 10,000,000 items). So the question is: what is the Splunk way to rename an existing collection? I simply refuse to believe that Splunk does not offer an interface method for this. thx, marius
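
For what it's worth, I'm not aware of a rename endpoint either; a copy seems to be the usual route. If lookup definitions exist for both the old and the new collection, the copy can at least be done in a single SPL pass rather than item-by-item over REST (both lookup names below are hypothetical):

```
| inputlookup old_collection_lookup
| outputlookup new_collection_lookup
```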

Splunk eval if/else or case

Hi all, I'm working on Windows AD data and gathering info from various event IDs. I have grouped the event IDs, and each group should produce a specific value in an Action field in the output table, based on the fields related to those event IDs. For example:

- (eventId=1234 OR eventId=2345 OR eventId=3456) => the Action field should have the value Action1 (which is also a field created from the values related to these 3 event IDs)
- (eventId=9876 OR eventId=8765 OR eventId=7654 OR eventId=5432) => the Action field should have the value Action2 (which is also a field created from the values related to these 4 event IDs)
- (eventId=1122 OR eventId=2233 OR eventId=3344) => the Action field should have the value Action3 (which is also a field created from the values related to these 3 event IDs)

I tried this logic in my SPL using eval if and eval case but didn't get the expected result. Can someone please look into it and help me with a solution? Thanks in advance.
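
A sketch of the case() form, using the groups from the question. One thing worth double-checking first: field names in SPL are case-sensitive, and the question mixes eventId and eventid, which alone can make every branch fail.

```
... base search ...
| eval Action=case(
    eventId=1234 OR eventId=2345 OR eventId=3456, "Action1",
    eventId=9876 OR eventId=8765 OR eventId=7654 OR eventId=5432, "Action2",
    eventId=1122 OR eventId=2233 OR eventId=3344, "Action3",
    true(), "Other")
```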

How do I break a multi-line event with regex, so that the event breaks only when the date is at the beginning of the line?

Hello, how can I break this multi-line event so that it breaks only on a date at the beginning of a line? This log has dates in the middle of lines, but the event must not be broken there; it has to break only at a leading date.

```
2019-11-12T09:51:28.291 Dbg 23058 [MsgIn] Ended defined Clients :
2019-11-12T09:51:28.338 Dbg 23055 [MsgIn] None.
2019-11-12T09:51:28.338 Dbg 23056 [MsgIn] Scheduled Clients :
2019-11-12T09:51:28.338 Dbg 23055 [MsgIn] None.
2019-11-12T09:51:36.154 Trc 29998 [PSDK.Timer] -AP[8802]->-65331 @09:51:36.0154
2019-11-12T09:51:36.154 Trc 29998 [O worker #4] -Ap[8802]-<-65331 @09:51:36.0154
2019-11-12T09:51:51.145 Trc 29998 [PSDK.Timer] -AP[4563]->-58089 @09:51:51.0145
2019-11-12T09:51:51.145 Trc 29998 [O worker #4] -Ap[4563]-<-58089 @09:51:51.0145
2019-11-12T09:51:53.657 Trc 29998 [PSDK.Timer] -AP[5040]->-59427 @09:51:53.0657
2019-11-12T09:51:53.657 Trc 29998 [O worker #3] -Ap[5040]-<-59427 @09:51:53.0657
Timezone UTC offset: 03:00:00
UTC Start Time: 2019-11-09T05:25:11.154
Running Time (DDD:HH:MM:SS): 003:07:26:17
UTC Time: 2019-11-12T12:51:28.338
2019-11-12T09:52:28.367 Dbg 23053 [MsgIn] Clients defined in ConfigServer :
2019-11-12T09:52:28.367 Dbg 23054 [MsgIn] enabled.
2019-11-12T09:52:28.367 Dbg 23054 [MsgIn] enabled.
2019-11-12T09:52:28.367 Dbg 23054 [MsgIn] enabled.
```
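
A props.conf sketch for the indexer or heavy forwarder that parses this source (the sourcetype name is a placeholder): break only where a new line is followed by the ISO timestamp, so lines without a leading timestamp glue onto the previous event.

```
[my_custom_log]
SHOULD_LINEMERGE = false
# Break before lines that start with YYYY-MM-DDTHH:MM:SS.mmm
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}\s)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
```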

How to break a multi-line event with regex, so that it breaks only on a date and time (with milliseconds) at the beginning of the line

Hi, I have the following log format. How can I break this multi-line event so that it breaks only when a date containing a time is at the beginning of the line? Example: 2019-11-12T12:51:28.338

```
2019-11-12T09:51:28.291 Dbg 23058 [MsgIn] Ended defined Clients :
2019-11-12T09:51:28.338 Dbg 23055 [MsgIn] None.
2019-11-12T09:51:28.338 Dbg 23056 [MsgIn] Scheduled Clients :
2019-11-12T09:51:28.338 Dbg 23055 [MsgIn] None.
2019-11-12T09:51:36.154 Trc 29998 [PSDK.Timer] -AP[8802]->-65331 @09:51:36.0154
2019-11-12T09:51:36.154 Trc 29998 [O worker #4] -Ap[8802]-<-65331 @09:51:36.0154
2019-11-12T09:51:51.145 Trc 29998 [PSDK.Timer] -AP[4563]->-58089 @09:51:51.0145
2019-11-12T09:51:51.145 Trc 29998 [O worker #4] -Ap[4563]-<-58089 @09:51:51.0145
2019-11-12T09:51:53.657 Trc 29998 [PSDK.Timer] -AP[5040]->-59427 @09:51:53.0657
2019-11-12T09:51:53.657 Trc 29998 [O worker #3] -Ap[5040]-<-59427 @09:51:53.0657
Timezone UTC offset: 03:00:00
UTC Start Time: 2019-11-09T05:25:11.154
Running Time (DDD:HH:MM:SS): 003:07:26:17
UTC Time: 2019-11-12T12:51:28.338
2019-11-12T09:51:58.353 Dbg 23053 [MsgIn] Clients defined in ConfigServer :
-Ap[4564]-<-58089 @09:52:21.0160
2019-11-12T09:52:28.367 Dbg 23053 [MsgIn] Clients defined in ConfigServer :
2019-11-12T09:52:28.367 Dbg 23054 [MsgIn] enabled.
2019-11-12T09:52:28.367 Dbg 23054 [MsgIn] enabled.
2019-11-12T09:52:28.367 Dbg 23054 [MsgIn] enabled.
```
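
An alternative line-merge sketch (sourcetype name is a placeholder; the LINE_BREAKER lookahead approach sketched under the previous question would also apply here): with SHOULD_LINEMERGE enabled, BREAK_ONLY_BEFORE starts a new event only at lines matching the leading timestamp.

```
[my_custom_log]
SHOULD_LINEMERGE = true
# New event only where a line begins with YYYY-MM-DDTHH:MM:SS.mmm
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
```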

How can I run a script (Python or PowerShell) when I receive a particular log?

How can I run a script (Python or PowerShell) when I receive a particular log event, either from a search or from an alert?
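
One way to do this, sketched with placeholder names: schedule a search for the log pattern and attach the legacy script alert action, which runs a script from $SPLUNK_HOME/bin/scripts when the alert fires. Custom alert actions are the newer, more flexible mechanism, but the conf below is the quickest to sketch:

```
# savedsearches.conf
[run_script_on_particular_log]
search = index=main "particular log pattern"
cron_schedule = */5 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
action.script = 1
action.script.filename = my_handler.py
```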

Charting a percentage from multiple files based on a field value

Hello, I'm working on a monthly chart. Every month I receive three files like the following:

```
01/10/2019  63  7,821428776  1  59,000000000
02/10/2019  57  5,666666508  0  0
03/10/2019  77  5,640625000  2  3,000000000
...
31/10/2019  42  7,025000095  0  0
```

The fourth file has this format:

```
01/10/2019  1337
```

I have to chart, month by month, a value obtained with the following rule:

1. Get the value from the fourth file (1337).
2. From the first three files, if the value in column 5 is greater than 15, sum the value in column 4.
3. Calculate the percentage: (total-of-column-4 / 1337) * 100.

I was able to get the value using this query, by setting the time picker on the search (previous month, or an advanced range):

```
index=rl_ivr
| eval A=if(like(source,"%HD%"), call_offered, 0)
| eval nn=tonumber(replace(replace(avg_aban_time,"\.",""),",","."))
| stats sum(eval(if((nn > 15), num_call_aban, 0))) as abbandonate, sum(A) as chiamate
| eval sla11 = ((abbandonate / chiamate) * 100)
| table sla11
```

How can I build a search to get the value for every month? Many thanks, G.
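
A sketch of the monthly version, keeping the same field assumptions as the query above: bucket events by month before the stats, so every month gets its own row.

```
index=rl_ivr
| eval A=if(like(source,"%HD%"), call_offered, 0)
| eval nn=tonumber(replace(replace(avg_aban_time,"\.",""),",","."))
| bin _time span=1mon
| stats sum(eval(if(nn > 15, num_call_aban, 0))) as abbandonate, sum(A) as chiamate by _time
| eval sla11 = round((abbandonate / chiamate) * 100, 2)
| table _time, sla11
```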

Querying auth failures using ldapsearch and inputlookup

Hello there, there are a couple of queries that I use to search for authentication failures by members of high-privileged groups. After testing, I noticed that the query is hit-and-miss; specifically, if I reduce the number of groups in the search, it is more accurate. The structure of the query is as follows:

```
source="wineventlog:security" EventCode=4625 AND (dest_nt_domain="SC-MIDHURST")
    [| ldapsearch domain=Domain_Name search="(objectClass=group)"
     | search cn="Domain Admins" OR cn="Administrators" OR cn="Print Operators"
     | ldapgroup
     | rename member_name AS Account_Name
     | table Account_Name
     | format ]
| stats count by user
```

Note: the number of groups is around 200 or so. My approach has been to place all of the groups in a CSV file to be used as a lookup. However, I am having trouble combining the inputlookup command and the ldapsearch command: they are both required to be the first command in a search. Any ideas are appreciated.
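
Two thoughts, both hedged: the hit-and-miss behaviour smells like subsearch limits (subsearches are truncated at a result cap and a runtime cap, and 200 groups expanded to members could easily blow past them). As for combining the commands, inputlookup only needs to be the first command within its own pipeline, so it can sit inside a nested subsearch (CSV name and column are hypothetical):

```
source="wineventlog:security" EventCode=4625 dest_nt_domain="SC-MIDHURST"
    [| ldapsearch domain=Domain_Name search="(objectClass=group)"
     | search [| inputlookup privileged_groups.csv | fields cn | format ]
     | ldapgroup
     | rename member_name AS Account_Name
     | fields Account_Name
     | format ]
| stats count by user
```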

Using a greater-than comparison on a property is not working

I am trying to filter my results on a property being greater than a certain value, and the search returns no results. If I do an equals comparison, it works. Below are my filter criteria and the property outline.

Query:

```
index="lab" source="*-test"
| eval isGood=if('line.message.space-document.elements{}.y'>="1664","true","false")
| where isGood="true"
| stats count
```

Below is the format of the event I'm trying to capture (the y: 1664 element is the one of interest):

```
line: {
    message: {
        space-document: {
            elements: [
                {
                    x: 38
                    y: 1664
                }
                {
                    id: ac5q3ghn
                    x: 38
                    y: 708
                }
            ]
        }
    }
}
```
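
A guess at the cause, plus a sketch: comparing against the quoted string "1664" pushes eval into a lexicographic string comparison, and the field is also multivalue (one y per array element). Comparing numerically against any element might look like this, with `y` and `big` as my own helper fields:

```
index="lab" source="*-test"
| eval y='line.message.space-document.elements{}.y'
| eval big=mvfilter(tonumber(y) >= 1664)
| where mvcount(big) > 0
| stats count
```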

Why is my KV store not initialized after a new app install?

After migrating from OSSEC to Wazuh, I installed the Wazuh app ver. 3.10.2. When starting the app, the API screen comes up with the message "KV store is being initialized, please wait some seconds and try again later." It has been a few days and the KV store is still not there. What do I need to do to get the KV store initialized? System details: single instance running Splunk Enterprise 7.3.0. Regards, Scott
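
A first diagnostic step, offered as a sketch (the field name is from memory): check whether the KV store process itself is healthy before suspecting the app. The server/info endpoint reports its status, and $SPLUNK_HOME/var/log/splunk/mongod.log usually says why it failed to start.

```
| rest /services/server/info splunk_server=local
| fields serverName, kvStoreStatus
```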