Channel: Questions in topic: "splunk-enterprise"

How do I merge the following searches together?

Hi Experts, I have a confusing situation in terms of handling two searches. The situation is like this:

1) We get a Windows event log (TYPE=1) when a server goes down, and we have one saved search which runs every 15 minutes. This search looks at the last 15 minutes of data, and if it finds an alert, it sets the status to RED, e.g. `eval health="RED"`.
2) We have another search which runs every 16th minute and looks for the resolved event (TYPE=2), which says the server has come back up. This saved search also looks at the last 15 minutes of data and sets the status to GREEN, e.g. `eval health="GREEN"`.

I have two questions here:

1) How do I merge these two searches together?
2) Also, here is another scenario: the first search runs every 15 minutes, looks at the last 15 minutes of data, and turns the server RED. The next time it runs, if the event is no longer in the window, it will turn the server GREEN. But the down event is still there, further back in time, and there is no corresponding resolved (TYPE=2) event yet, so ideally the first search should keep it RED. One way of avoiding this situation is to have the first search look at the last 1-2 hours of data, so it finds that event further back in time and keeps the server RED for a good 2 hours rather than just 15 minutes.

Here are my two searches:

    index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" TYPE=1 NODE="ISP*" NOT NODE=ISP9*
    | rename LOCATION as loc NODE as host
    | stats latest(TYPE) as TYPE, latest(_time) as _time by loc host
    | rex field=host "ISP(?<num>\d+)(?<hostType>\w)$"
    | eval health=if(hostType="F","YELLOW","RED")
    | append
        [| inputlookup host_list.csv
         | search NOT host=ISP9*
         | rex field=host "ISP(?<num>\d+)(?<hostType>\w)$"
         | table loc host hostType ]
    | eventstats count as occurence_count by host
    | fillnull value=0 TYPE
    | where NOT (occurence_count=2 AND TYPE=0)
    | fillnull value="GREEN" health
    | eventstats values(eval(case(hostType="A",health))) as A_Health by loc
    | eval A_Health=if(hostType="B",A_Health,"NA")
    | eval health=if(hostType="B" AND health="RED" AND A_Health="RED","RED",
                  if(hostType="B" AND health="RED" AND A_Health="GREEN","YELLOW",health))

Below is the search that turns the store GREEN when it finds a resolved alert for that server:

    index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" TYPE=2 NODE="ISP*" NOT NODE=ISP9*
    | rename LOCATION as loc NODE as host
    | stats latest(TYPE) as TYPE, latest(_time) as _time by loc host
    | eval health="GREEN"

As you can see, in the first search I am appending the list of servers because I want to consider all the other servers as GREEN which don't have any alert.
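One possible way to merge the two saved searches (an untested sketch using only field names from the searches above): search both event types over the longer window mentioned in question 2, keep the latest TYPE per host, and derive health from it. Because the most recent event per host wins, a host stays RED until its TYPE=2 resolved event actually arrives, which also covers the second scenario:

    index="netcool" sourcetype="netcool_alerts" ALERTKEY="Failed to Connect to Computer" (TYPE=1 OR TYPE=2) NODE="ISP*" NOT NODE=ISP9* earliest=-2h
    | rename LOCATION as loc NODE as host
    | stats latest(TYPE) as TYPE, latest(_time) as _time by loc host
    | eval health=if(TYPE=2,"GREEN","RED")

The append/eventstats logic for the host_list.csv lookup and the A/B hostType rules could then be layered on top of this, exactly as in the first search.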

Comparison and condition function help: multiple if, case, or like statements in a search

Right now my search is:

    index=foo
    | eval Compliant=case(like(AppVersion,"14.12%"),"OK", like(AppVersion,"14.11%"),"OK", like(AppVersion,"14.10%"),"OK", like(AppVersion,"14.9%"),"OK", like(AppVersion,"14.8%"),"OK"...)
    | table User, Platform, AppVersion, Compliant

and the table looks like this; I have only checked whether an AppVersion is on the compliant list:

    User  | Platform | AppVersion | Compliant
    12345 | Windows  | 14.8       | Ok
    56789 | Mac      | 12.8       |
    03468 | iOS      | 18.0       |
    97621 | Android  | 18.8       |

However, I need to check certain AppVersions against the Platform. I imagine it would need multiple if statements and multiple cases, but I'm not sure how to do this. One of my failures looked something like:

    index=foo
    | eval Compliant=if(Platform=Windows, case(like(AppVersion,"14.12%"),"OK", like(AppVersion,"14.11%"),"OK", like(AppVersion,"14.10%"),"OK", like(AppVersion,"14.9%"),"OK", like(AppVersion,"14.8%"),"OK"...), "NO")
    | table foo

The goal would be to show something like this:

    User  | Platform | AppVersion | Compliant
    12345 | Windows  | 14.8       | Ok
    56789 | Mac      | 12.8       | Ok
    03468 | iOS      | 18.0       | Ok
    97621 | Android  | 18.8       | Ok
    97423 | Windows  | 13.8       | No
    32638 | Mac      | 11.0       | No
    08346 | iOS      | 17.0       | No
    43835 | Android  | 18.2       | No

Thank you in advance if you can help.
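One possible approach (an untested sketch; the non-Windows version patterns are placeholders, since only the Windows versions appear above): put the platform test and the version test in the same case() branch, with a catch-all at the end. Note that Platform="Windows" needs quotes, otherwise Splunk compares against a field named Windows:

    index=foo
    | eval Compliant=case(
        Platform="Windows" AND (like(AppVersion,"14.12%") OR like(AppVersion,"14.11%") OR like(AppVersion,"14.10%") OR like(AppVersion,"14.9%") OR like(AppVersion,"14.8%")), "Ok",
        Platform="Mac"     AND like(AppVersion,"12.8%"), "Ok",
        Platform="iOS"     AND like(AppVersion,"18.0%"), "Ok",
        Platform="Android" AND like(AppVersion,"18.8%"), "Ok",
        true(), "No")
    | table User, Platform, AppVersion, Compliant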

Why are events not sorted in chronological order with a basic search?

Today I noticed that, when performing a basic search, the events are not sorted chronologically. Additionally, not all events 'match up' correctly to the timeline. I have not found any other posts which document this strange behavior. With a simple `| sort _time`, the events sort as expected and correlate to the timeline accurately. The deployment was upgraded from 7.0.2 to 7.1.2 one week ago. Here are some screenshots that show the behavior:

![Events not in Chronological Order][1]
![Events not Correlated with Timeline][2]

Does anyone have any ideas how to fix this issue?

[1]: /storage/temp/255882-searcheventsoutoforder.png
[2]: /storage/temp/255884-eventsnotmatchingtimeline.png

Combining fields from two log entries which have a common ID that is named differently

How can I combine two log entries that share a common ID when the field name of the ID is different between the two entries? Currently I'm using rename to change my field names into strings that don't contain "-" (eval seems to hate "-"):

    | rename v.my-very-long-field-name.rid AS rid

then eval to give the two differently named fields a single name, and transaction:

    | eval request_id=if(isnull(rid), req, rid)
    | transaction request_id

Last thing: I table values from both log entries. Seems like it should work great... but it doesn't. I end up with table rows containing values from one log entry or the other, not both. Help me Obi Wan...
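A possible alternative sketch (field names as in the question): normalize the two differently named ID fields with coalesce() and aggregate with stats instead of transaction, so both entries collapse into one row per ID:

    ... base search over both log entries ...
    | eval request_id=coalesce(rid, req)
    | stats values(*) as * by request_id

stats is usually cheaper than transaction; if rows still show values from only one entry, it may mean the two events end up with different request_id values (e.g. one of rid/req is not being extracted as expected).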

Multiple Emails From Real Time Alerts

I configured an alert to send an email every time a user is added to the Domain Admins group. I have this alert triggering on eventcode 4728, 4755, etc. The problem is that adding a single account will trigger multiple emails. I want the first event to trigger an email, but all subsequent events not to trigger an email. How do I accomplish this?
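One possible way (a sketch of Splunk's built-in alert throttling; the stanza name and the field name `member_dn` are placeholders for your saved search and whatever field identifies the added account): suppress re-triggering for a window after the first match, per account:

    # savedsearches.conf -- equivalent to Trigger Conditions > Throttle in the UI
    [Domain Admins Group Change Alert]
    alert.suppress = 1
    alert.suppress.period = 1h
    alert.suppress.fields = member_dn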

How to use datamodel field values in tstats to filter resultant data

I'm trying to search my Intrusion Detection datamodel when the src_ip is in a specific CIDR to limit the results, but can't seem to get the search right. Is this possible?

    | tstats count from datamodel=Intrusion_Detection where (nodename IDS_Attacks.src="1.2.3.4/30" OR IDS_Attacks.dest="5.6.7.8/30")
    | `drop_dm_object_name("Intrusion_Detection")`
    | fields src, dest, dest_port

**WORKING QUERY**

    | tstats count from datamodel=Intrusion_Detection where (nodename = IDS_Attacks (IDS_Attacks.src="1.2.3.4/30" OR IDS_Attacks.dest="5.6.7.8/30")) groupby IDS_Attacks.src, IDS_Attacks.dest, IDS_Attacks.SrcPort, IDS_Attacks.dest_port
    | `drop_dm_object_name("IDS_Attacks")`

Splunk App for AWS: using one index per client (multi-tenancy)

Dear Splunk community members,

I want to configure the Splunk App for AWS for multi-tenancy. For a new customer AWS account I:

- created a dedicated index for this customer
- configured CloudTrail and Config inputs (SQS-based S3) as well as Description and CloudWatch inputs to write their data into the new index
- created a new user and role in Splunk that can only access the new index

Since this Splunk cluster is only used for the AWS App, I removed the index filters from several search macros mentioned here: https://docs.splunk.com/Documentation/AWS/5.1.1/Installation/Useacustomindex

Then I could execute the Addon Metadata searches of the add-on. After that, I could use most functionality with the new user, and what I see is indeed restricted to that specific account. However, I failed to get the topology view. From what I analyzed, there are several specific indexes for the topology handling (aws_topology_history, aws_topology_daily_snapshot, aws_topology_monthly_snapshot, aws_topology_playback). I do not want to give the user access to these indexes, because then he could also see data/topologies about other clients.

Do you have any ideas or advice on how I can have multi-tenancy and still provide the users access to their topology? Any help with that is greatly appreciated!

Brgds and thanks
Steffen

How to change table height when no results are found

How do I change a Simple XML table's height when no data is present? The table should be much smaller when no alerts are triggered. The table need not be hidden in order to show some other custom message/panel; the table itself should be reduced in height. Attached is a screenshot of a table with "No Results Found", which consumes a lot of space.

![alt text][1]

PS: Documenting the answer from Slack on Splunk Answers for future reference.

[1]: /storage/temp/255889-no-results-found-table.png

Adding a license to Splunk

Hi all! I have a license. How do I add the license to my account? Thanks!
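If this means applying the license file to a Splunk Enterprise instance, one option (a sketch; the file path is a placeholder) is Settings > Licensing > Add license in Splunk Web, or the CLI:

    # Run on the license master / standalone instance
    $SPLUNK_HOME/bin/splunk add licenses /path/to/Splunk.License.lic

A restart is typically prompted after the license is installed.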

Creating a static column in a table

I would like to create one column with fixed labels that should not change. For example:

    column title:     my_own
    first row value:  A
    second row value: B
    third row value:  C

Is it possible to do this?
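One possible sketch (assuming the labels depend only on row position): number the rows with streamstats, then map each row number to its fixed label:

    ... your search ...
    | streamstats count as row
    | eval my_own=case(row=1,"A", row=2,"B", row=3,"C")
    | fields - row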

Sending AWS data from heavy forwarder to indexer

Our Splunk environment consists of a Universal Forwarder, a Heavy Forwarder, and an Indexer. We are importing our AWS CloudTrail data from an S3 bucket using SQS via the AWS Add-on. I have configured this on the HF, which has created a config entry under `{SplunkApp}/etc/apps/Splunk_TA_aws/local/inputs.conf`:

    [aws_sqs_based_s3://CloudTrail]
    aws_account = MY-EC2-ROLE
    aws_iam_role = Splunk
    index = aws_fwd
    interval = 300
    s3_file_decoder = CloudTrail
    sourcetype =
    sqs_batch_size = 100
    sqs_queue_region = eu-west-2
    sqs_queue_url = https://account/queuename
    disabled = 0

When you create an input it requires an index to send the data to (here it's aws_fwd). However, I want to send this on to the indexer in a separate index. How can I specify this so the data goes from AWS into the HF and then on to the indexer? The HF > Indexer output is configured on port 9997. Any help would be great.
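A possible sketch (the server name is a placeholder): the `index = aws_fwd` setting in inputs.conf is metadata that travels with the events, so as long as the HF forwards everything to the indexer and the `aws_fwd` index is defined on the indexer, the events are stored there rather than on the HF:

    # outputs.conf on the heavy forwarder
    [tcpout]
    defaultGroup = primary_indexers
    # optionally keep the HF from also indexing a local copy
    indexAndForward = false

    [tcpout:primary_indexers]
    server = indexer.example.com:9997

If a different index name is wanted, changing `index =` in the input stanza (and creating that index on the indexer) should be enough; the HF itself does not need the index to exist locally when it only forwards.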

FortiWeb app

We have FortiWeb, but Splunk doesn't have an app for FortiWeb. Can someone help me build reports from the logs of my FortiWeb? Thank you.

Show one instance of each distinct error (out of many errors coming repeatedly) from a given time

I get many errors when I just search for the keyword "error" in a single log file, such as "Retrying connecting ES", AutoReconnect, AttributeError, etc. I want to pull out the distinct errors. Every time, I have to go and list all the known errors in the search bar with "AND NOT" just to figure out whether a new error has appeared apart from those listed above. Is there a way to output these distinct errors, like a SELECT DISTINCT query in SQL? I tried different queries from other threads on this forum, but none of them seemed to work (transaction, dedup, etc.).
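One possible sketch (the rex pattern is an assumption about the log format and will need adjusting to the real messages): extract an error type per event, then aggregate by it, which behaves like SQL's SELECT DISTINCT with counts:

    index=my_index source=my_log_file "error"
    | rex field=_raw "(?<error_type>\w+Error|AutoReconnect|Retrying connecting ES)"
    | stats count latest(_time) as last_seen by error_type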

Custom search command displays only 1000 events

The following custom search command displays only 1000 events in Splunk, while it should return 100,000; the rest of the events seem to be accounted for, but are not displayed (Splunk 6.x and 7.x):

    import splunk.clilib.cli_common as spcli
    import splunk.Intersplunk
    import sys
    import time

    keywords, options = splunk.Intersplunk.getKeywordsAndOptions()

    def main(args):
        results = []
        # Generate 100,000 synthetic events
        for i in range(0, 100000):
            record = {}
            record['_time'] = time.time()
            record['_raw'] = "{" + str(i) + "}"
            results.append(record)
        splunk.Intersplunk.outputStreamResults(results)
        exit()

    main(sys.argv)

commands.conf:

    [test]
    filename = test.py
    local = true
    overrides_timeorder = true
    streaming = true
    supports_multivalues = true
    generating = stream

![alt text][1]

[1]: /storage/temp/255894-a.png

Compare Device_ID field values

I have a log with the format **date time USER User_IP Device_ID**:

    02.09.2018 18:01:34 user1 ip1 2C5DFVG78930R7JOAHP19S8USO
    02.09.2018 18:02:34 user2 ip2 androidc78697991
    02.09.2018 18:03:33 user3 ip3 QUBSCJ6AM94NPCNSPIL3H4N4HC
    02.09.2018 18:04:33 user4 ip4 ITqHKJMNOqwM5q5AB1QCF1C9MOJMO8
    02.09.2018 18:05:32 user5 ip5 4B88FFF650C950CE
    02.09.2018 18:06:32 user6 ip6 9GB9021P5wsw2927D0A3CJ55KKD89S
    02.09.2018 18:07:31 user7 ip7 SEC1EE05FBA56984
    02.09.2018 18:08:30 user8 ip8 QUBSCJ6AMqsw94NPCNSPIL3H4N4HC
    02.09.2018 18:09:30 user1 ip1 SV863D5OL539F94wFUI7E41O8JS

Note that user1 appears at 18:01:34 and again at 18:09:30 with a different Device_ID. How do I get a notification about a change in the value of the Device_ID field? This search does not give the relevant values:

    | bucket time span=10m
    | stats values(time) values(User_IP) values(Device_ID) dc(Device_ID) AS countDevice_ID by USER
    | search countDevice_ID>1
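A possible sketch (field names as in the question): put each user's events in time order, carry the previous Device_ID forward with streamstats, and keep only the events where it changed:

    ... your base search ...
    | sort 0 USER _time
    | streamstats current=f last(Device_ID) as prev_device by USER
    | where isnotnull(prev_device) AND Device_ID!=prev_device

This would flag user1's 18:09:30 event, where the Device_ID differs from the one seen at 18:01:34.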

props/transforms not taking effect

Hello, I have a single Splunk Enterprise instance with a 9997 listener. I have a single Windows server with a UF forwarding data to the Splunk Enterprise instance. This is all good; data is being forwarded as expected.

I am now trying to make a few props.conf changes to the data, but none of my configuration seems to make any difference when I look in the Splunk Enterprise search app. Here in `props.conf` I am trying to transform the host, set the timezone to Sydney, and set the event time:

    [WinEventLog:*]
    TRANSFORMS-change_host = WinEventHostOverride
    TZ = Australia/Sydney
    DATETIME_CONFIG = CURRENT

Here in `transforms.conf` is my host override block:

    [WinEventHostOverride]
    DEST_KEY = MetaData:Host
    REGEX = (?m)^ComputerName=([\S]*)
    FORMAT = host::$1

On every change I make, I have performed a `splunk.exe restart` on the UF host; however, nothing appears to change in my index. Here is a sample from my index:

- As you can see, the `Time` field is UTC, but I want the time in the actual event to be the Time.
- The `host` field is not transforming to the correct ComputerName field from the event.

![alt text][1]

[1]: /storage/temp/255897-screen-shot-2018-09-03-at-80106-am.png

Using Answers from other questions, I used the following search query to "test" the regex, and it appears to work, so I am confused why it doesn't work:

    index=* | head 1
    | eval testdata="ComputerName=ahslc01p"
    | regex testdata="(?m)^ComputerName=([\S]*)"
    | stats count
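One way to check where these settings are actually being picked up (a sketch): index-time settings like TRANSFORMS-, TZ, and DATETIME_CONFIG are applied where the data is parsed, which for a UF-to-indexer pipeline is the Splunk Enterprise instance rather than the Universal Forwarder. btool on that instance shows which config files win:

    # On the Splunk Enterprise instance (not the UF); restart it after changes
    $SPLUNK_HOME/bin/splunk btool props list "WinEventLog:*" --debug
    $SPLUNK_HOME/bin/splunk btool transforms list WinEventHostOverride --debug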

Splunk and external JavaScript

Hi guys, I want to display in the browser console the number of the page selected from my dashboard panel's pagination. I included this script in my dashboard:

    require(['splunkjs/mvc/simplexml/ready!'], function(){
        require(['splunkjs/ready!'], function(){
            console.log("hello from Script");
            $(document).ready(function(){
                $("#panel4 .splunk-paginator a").click(function (){
                    var value = $(this).text();
                    console.log("value: ", value);
                });
            });
        });
    });

The script can display **hello from Script** in the console, but there is no action from the click on the paginator.

PS: My jQuery selector is correct; I tested the following block from my browser console first:

    $(document).ready(function(){
        $("#panel4 .splunk-paginator a").click(function (){
            var value = $(this).text();
            console.log("value: ", value);
        });
    });

It works once: when I click on page 1, I get **value: 1**, but when I click on 2, nothing happens.
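A possible explanation (a sketch, not a confirmed fix): the paginator links are re-rendered when the table changes pages, which discards handlers bound directly to them. A delegated handler bound to a stable ancestor keeps working after re-renders:

    // Delegate from the document so the handler survives DOM re-renders
    $(document).on("click", "#panel4 .splunk-paginator a", function () {
        var value = $(this).text();
        console.log("value: ", value);
    });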

Add-on Builder Blank Screen and Argument validation for scheme=validation_mi

Hello, I'm having issues validating any add-ons within the Splunk Add-on Builder app. While looking in splunkd.log I discovered the following errors and was hoping that someone has a fix:

    09-02-2018 16:31:10.943 -0700 ERROR ModularInputs - Argument validation for scheme=validation_mi: killing process, because executing it took too long (over 30000 msecs).
    09-02-2018 16:31:11.437 -0700 INFO ModularInputs - Argument validation for scheme=validation_mi: script running failed (exited with code 255).

![alt text][1]

[1]: /storage/temp/255900-add-onbuilderissues.png

Error after upgrading Splunk version

Hi all! Every minute I receive this error:

    msg="A script exited abnormally" input="./bin/instrumentation.py" stanza="default" status="exited with code 114"

I started getting this error after upgrading to Splunk 7.1.2. Thanks!

Why are my searches hitting only one indexer in a cluster?

Hello everyone,

I have two indexers, IDX01 and IDX02, in a cluster connected to a search head cluster. What I observed is that IDX01 has high CPU usage (around 100%) many times a day, but IDX02 does not trigger any alerts. When I looked in the DMC, IDX01 has many scheduled searches running on it, whereas IDX02 shows very few. I can clearly see that searches are running only on IDX01 and not on IDX02. What can be the problem? The Cluster Master shows the indexers' health is fine. How can I troubleshoot this? Any suggestions, please.

In the DMC, under these sections (and others above them), I can clearly see that IDX01 has high usage compared to IDX02:

- Median CPU Usage by Process Class
- Maximum Search Concurrency
- Maximum Resource Usage of Searches

Please help me.
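One possible first check (a sketch): a search only runs on peers that hold searchable buckets for the searched indexes, so it is worth confirming that events are actually distributed across both indexers:

    | tstats count where index=* by splunk_server

If IDX02 returns few or no events, the forwarders may not be load-balancing their outputs across both peers, which would explain both the search skew and the CPU skew.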