Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles
Browse latest View live

Problem with AND operator in CASE and IF statement

I have one lookup in which there is a field that contains team members: A1 A2 A3 A4 A5 A6 A7. Now, if TeamMember = (A1 OR A2) AND A4 AND A7, then print "Aseries"; if TeamMember = (A1 OR A2) AND A5 AND A6, then print "Bseries". I tried: |eval Team=if((con1=="A1 OR con1=A2)"AND con1=="A4" AND con1=A7,Aseries,Other) and I used case as well, but no luck.
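One thing worth noting: a single-valued field can never equal A1 and A4 at the same time, so the if() as written can't match. If TeamMember is a multivalue field, one hedged sketch along these lines might work (the field and series names are taken from the question; mvjoin(), match(), and case() are standard SPL):

```
| eval members=mvjoin(TeamMember, ",")
| eval Team=case(
    match(members, "A1|A2") AND match(members, "A4") AND match(members, "A7"), "Aseries",
    match(members, "A1|A2") AND match(members, "A5") AND match(members, "A6"), "Bseries",
    true(), "Other")
```

The case() falls through in order, with true() as the catch-all default.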

Index Cluster Setup

Can anyone provide the steps to get an index cluster set up? Splunk Docs seem to jump around a lot and don't provide an instructional setup. From what I gather, what I have done is: built out 3 Splunk servers, set up the first server as my master with my RF as 2 and my SF as 2, and set up Splunk boxes 2 and 3 as peers. When viewing the "Indexer clustering" page on my master, I see green checks, 2 searchable peers, and 3 searchable indexes (_audit, _telemetry, and _internal). I think this is the correct way. I have a couple of questions:
1. How do I go about adding another index to be searchable, e.g. if I want to monitor /var/log/messages?
2. Should my Universal Forwarder on Linux point to the master node, or to my 2 peer nodes?
3. Do I have to go to each Splunk server and navigate to "Settings > Indexes" to create my "messages" index on each one?
Thanks!
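For question 1 (and by extension 3), the usual approach is to define the index once on the master and push it to all peers with the configuration bundle, rather than creating it per-peer in the UI. A sketch, with illustrative paths:

```
# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf on the master
[messages]
homePath   = $SPLUNK_DB/messages/db
coldPath   = $SPLUNK_DB/messages/colddb
thawedPath = $SPLUNK_DB/messages/thaweddb
# repFactor = auto makes the index participate in cluster replication
repFactor  = auto

# then, on the master, push the bundle to the peers:
#   splunk apply cluster-bundle
```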

Can the Azure Monitor Add-on for Splunk run on a HF in AWS

We currently have the majority of our infrastructure either on-prem or in AWS, with more and more moving to AWS. We do use Azure DevOps, though, and are looking to get the data from the Azure Monitor Add-on for Splunk into our AWS Splunk environment, where our index cluster, search heads, deployment servers, etc. reside. I would prefer not to put the HF in Azure but rather have it in AWS along with the rest of our Splunk environment, but I am not seeing any examples or documentation of anyone doing this. I don't see why it would matter where the HF resides, as it is polling the event space to get the data. Can this be done? What are the cons of doing so? Also, regardless of whether the HF is in Azure or AWS, to run this add-on the HF is a single point of failure. What options are there to mitigate or eliminate that single point of failure (i.e., make it HA-ish) at the HF with the Azure add-on in play, without duplicating data at the index clusters?

Help with Query to list current status buckets

Hi. Can somebody help me with a query to list the current status of buckets? For example:
Bucket Name: _internal~1321~D5156C32-9B6F
Current Status: Cannot fix search count as the bucket hasn't rolled yet
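One possible starting point, assuming the goal is to see per-bucket state, is the dbinspect command, which reports each bucket's id and state (the field names below are the ones dbinspect emits; adjust the index as needed):

```
| dbinspect index=_internal
| table bucketId, index, state, splunk_server
```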

DuoAdmin API Pagination Requirement

It would appear that come March 15, 2019, Duo will be requiring all API endpoints to support pagination. [https://help.duo.com/s/article/4744][1] Duo has released an updated python client: [https://github.com/duosecurity/duo_client_python][2] I have also posted this on [GitHub][3]. [1]: https://help.duo.com/s/article/4744 [2]: https://github.com/duosecurity/duo_client_python [3]: https://github.com/bawood/TA-DUOSecurity2FA/issues/5

How to alert if a certain number of consecutive events exceeds a threshold?

I see lots of variants of this question but have yet to encounter this specific case. I have thousands of incoming events over time, e.g.:

        disk  mem
eventX  10    80
eventX  10    80
eventX  10    80
eventX  10    80
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    20
eventX  10    80

I want to alert ONLY if 10 consecutive events have a value that falls below the threshold, consecutive being the key word there. For example, the data above would alert since 10 consecutive events have a mem value <= 20. I'm hoping this is enough detail to get my intent across.
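One common pattern for "N consecutive events" is a sliding window with streamstats: flag each event that is below the threshold, sum the flags over the last 10 events, and alert when all 10 were flagged. A sketch, assuming the field is named mem and the threshold is 20 as in the example:

```
| eval below=if(mem <= 20, 1, 0)
| streamstats window=10 sum(below) AS below_count
| where below_count = 10
```

Any row surviving the where means the 10 most recent events at that point were all at or below the threshold, so the alert can simply trigger on result count > 0.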

Why do some of the fields in the SH start with #?

Why do some of the fields in the SH start with #, and others not?

Search Head Cluster - Indexes are missing

We have a search head cluster. The three search head cluster members have the indexers listed as search peers. Everything looks good configuration-wise, but none of our existing indexes (we had a standalone SH we are using while we set up the SHC) are available to select when accessing a role or creating a new role. I thought I saw an article about this in Splunk Answers yesterday but could not find it again. Any help is appreciated. Thx

Load balancing data to a group of forwarders

I have a group of 3 forwarding servers behind a load balancer. When I direct syslog messages to the VIP, I get the "host" of the load balancer, not of the source server. Here is an example of the raw data.

Output from sending to the VIP:
Jan 23 21:47:59 LOAD_BALANCER 1 2019-01-23T21:47:59.313639+00:00 vcenter101 - - - This is a diagnostic syslog test message from vCenter Server.

Output from sending straight to the forwarder:
Jan 23 21:50:20 VCENTER_SERVER 1 2019-01-23T21:50:20.239883+00:00 vcenter101 - - - This is a diagnostic syslog test message from vCenter Server.

The indexer is saying the source host is "LOAD_BALANCER" and not "VCENTER_SERVER".
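Since the real hostname (vcenter101) is still present in the payload, one possible workaround is to rewrite the host metadata at index time with a props/transforms pair. A hedged sketch; the sourcetype stanza name is illustrative, and the regex assumes the RFC 5424-style timestamp with fractional seconds and offset seen above, so it skips the load balancer's plain timestamp:

```
# props.conf (on the indexer or heavy forwarder)
[your_syslog_sourcetype]
TRANSFORMS-set_host = set_host_from_payload

# transforms.conf
[set_host_from_payload]
# capture the token after "HH:MM:SS.ffffff+HH:MM"
REGEX    = \d{2}:\d{2}:\d{2}\.\d+[+-]\d{2}:\d{2}\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT   = host::$1
```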

Values Shared by Search output and Lookup

TIA. This has probably been asked and answered dozens of times, but my brain is now mush. The following search gives me a column named "Account_Name":

eventtype=wineventlog_security EventCode=4768 (Result_Code=0x12 OR Result_Code=0x17) Client_Address!="*123.456.789.4*" | regex Account_Name="^[^\\$]+$" | stats count by Account_Name dest_nt_host dest_nt_domain Client_Address | dedup Account_Name keepevents=true | where count>7 | sort -count

I have a lookup search that produces "DisabledUsers.csv", where the first column is "sAMAccountName". I want to output the matches, in other words the users that are common to both sources: the accounts that are identical between "Account_Name" and "sAMAccountName". Suggestions?
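One way to keep only the accounts that also appear in the lookup is to use the lookup command with OUTPUT and filter on whether a match came back. A sketch, assuming a lookup definition named DisabledUsers exists over the CSV (the disabled_match field name is just an illustration):

```
eventtype=wineventlog_security EventCode=4768 (Result_Code=0x12 OR Result_Code=0x17) Client_Address!="*123.456.789.4*"
| regex Account_Name="^[^\\$]+$"
| stats count by Account_Name dest_nt_host dest_nt_domain Client_Address
| where count>7
| lookup DisabledUsers sAMAccountName AS Account_Name OUTPUT sAMAccountName AS disabled_match
| where isnotnull(disabled_match)
| sort -count
```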

How do you use an eval command to calculate 'latest' for use in a search?

I have crafted the following search that calculates a value for the 'latest' field relative to 'earliest' and uses it in a search. But the time window in the search result shows those values are ignored. Can you help me understand why this technique doesn't work?

| makeresults
| fields - _time
| eval earliest=strptime("01/15/2019:20:00:00","%m/%d/%Y:%H:%M:%S")
| eval latest=relative_time(earliest,"+2d@d")
| eval earliest=strftime(earliest,"%m/%d/%Y:%H:%M:%S"), latest=strftime(latest,"%m/%d/%Y:%H:%M:%S")
| search index=aws sourcetype=aws:guardduty earliest=earliest latest=latest
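As far as I can tell, the inline earliest=earliest doesn't do what it looks like: earliest/latest in a search clause are taken as literal time modifiers, not as references to fields, so the computed values are never substituted. One pattern that may work instead is to have a subsearch emit the computed bounds into the outer search (a sketch using the same values as above; return emits them as earliest=... latest=... pairs, and the time modifiers accept epoch values):

```
index=aws sourcetype=aws:guardduty
    [ | makeresults
      | eval earliest=strptime("01/15/2019:20:00:00","%m/%d/%Y:%H:%M:%S")
      | eval latest=relative_time(earliest, "+2d@d")
      | return earliest latest ]
```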


Modifying Permissions for Lookup Files viewed in the App

Is it possible to hide lookups that users do not have access to? I don't think it makes sense to display lookups and KV stores that users don't have permission to edit. Also, they're able to open them in the viewer, which may be a bit of a security breach. We would like our users to only see what they can edit.

MQ Modular input not working

I cannot get the MQ modular input to work at all. When I execute mqinput.py manually I receive the following:

root@dabcmiqm1:/opt/splunkforwarder/bin:> ./splunk cmd /usr/bin/python "/opt/splunkforwarder/etc/apps/mq_ta/bin/mqinput.py"
Traceback (most recent call last):
  File "/opt/splunkforwarder/etc/apps/mq_ta/bin/mqinput.py", line 20, in <module>
    import xml.sax.saxutils
  File "/opt/freeware/lib/python2.7/xml/sax/saxutils.py", line 6, in <module>
    import os, urlparse, urllib, types
  File "/opt/freeware/lib/python2.7/urllib.py", line 30, in <module>
    import base64
  File "/opt/freeware/lib/python2.7/base64.py", line 10, in <module>
    import binascii
ImportError: Could not load module /opt/freeware/lib/python2.7/lib-dynload/binascii.so. Dependent module /opt/splunkforwarder/lib/libz.a(libz.so.1) could not be loaded. The module has an invalid magic number. Could not load module /opt/freeware/lib/python2.7/lib-dynload/binascii.so. Dependent module /opt/freeware/lib/python2.7/lib-dynload/binascii.so could not be loaded.

Any help with this would be greatly appreciated!

Use same datamodel on multiple Search Heads

Hi, I have 2 independent search heads (no clustering) that use the same indexers. On the first SH I have a data model, and I want users on the 2nd SH to be able to query it, but it seems impossible to share it in the settings. Is it possible to do this? (In my mind, a data model stores its statistics in new .tsidx files on the indexers, which are available to both of my SHs.) Thanks

Comments "macro" not working

I must be out of my mind. The built-in comment macro, available since version 6.5.0, gives me an error that the macro can't be found. I'm using the syntax found in the docs here, with my version of Splunk in the URL so it shows the page for my version: https://docs.splunk.com/Documentation/Splunk/6.6.10/Search/Addcommentstosearches

index=* sourcetype=* `comment("THIS IS A COMMENT")`

This gives me an error: > Error in 'SearchParser': The search specifies a macro 'comment' that cannot be found. What could I be doing incorrectly? Chris

What are some good Splunk tips & tricks you know?

I write a monthly tips & tricks blog for Splunk users/consumers at my company but have steadily been running out of ideas. Anyone have anything they think is worth calling out? It can be as simple as a niche command, the idea of macros, alternatives to joins, really anything, fire away! The more the merrier. Thanks!

Filtering Custom Events in Windows Security log using regular expression in blacklist

Hello, we have Splunk Enterprise 7.2 with the Deployment Server role, and a Splunk Universal Forwarder on a Windows SQL Server. The SQL Server writes a custom event to the Windows Security log; below is a portion of the event message. I need to create a blacklist entry in the inputs.conf file to filter out events where two patterns match at the same time: "class_type:LX" AND "server_principal_name:DOMAIN1\". The second pattern is 3 lines below the first pattern. Any help will be greatly appreciated. Thank you, Joseph

session_id:174
server_principal_id:274
database_principal_id:0
target_server_principal_id:0
target_database_principal_id:0
object_id:0
user_defined_event_id:0
transaction_id:0
class_type:LX
permission_bitmask:00000000000000000000000000000000
sequence_group_id:A842D899-40A5-491E-886C-A8E7F7682BDD
session_server_principal_name:DOMAIN1\sqlservice
server_principal_name:DOMAIN1\sqlservice
server_principal_sid:010500000000000515000000093a2a243fad146207e53b2b2f0a0000
database_principal_name:
target_server_principal_name:
target_server_principal_sid:
target_database_principal_name:
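Windows event log inputs support key=regex blacklists, and a (?s) flag lets . span the intervening lines, so both patterns can be matched in a single Message regex. A hedged, untested sketch (the literal backslash after DOMAIN1 is doubled for the regex):

```
# inputs.conf on the forwarder (illustrative)
[WinEventLog://Security]
blacklist1 = Message="(?s)class_type:LX.*?server_principal_name:DOMAIN1\\"
```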

Lookup Table Behavior Question

I have a lookup table that is giving me strange search results I can't figure out. The table is a list of names and the team they are on:

person1,team1
person2,team1
person3,team2

However, there are people in the data that may not be assigned to a team. I want to classify them as "Other" so I can create searches for them without using NOTs. So, in my lookup definition I have Minimum Matches set to 1 and Default Matches set to Other, and automatic lookups are turned on.

When I search like:
index=myindex
and drill into interesting fields, it shows a count of 239,824 in team Other. If I click on team Other, or search like:
index=myindex team=Other
then it shows a count of 86,495. Why would it show 239,824 on the more general search, and 86,495 when searched for specifically, with everything else (including the time picker) being the same?

After a bit more testing, to rephrase the question: with the automatic lookup, a minimum match of 1, and the default match=Other set, I get a different count than running:
index=myindex | fillnull value=Other Team | search Team=Other
Shouldn't they be the same?

