Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Troubleshoot a MonitorNoHandle input

I have a UF (7.3.1) configured with the Splunk TA for Windows Inf. 6.0. It is a domain controller with about 16 different inputs configured, running Windows Server 2016 Core. This morning it was working fine. I started work on sending the DNS queries for the local AD domain to the nullQueue and mistakenly put the props and transforms sections into the deployment app without thinking, then reloaded the deployment server. After the UF restarted, the DNS logs stopped coming in altogether. I reversed the changes to the deployment app's local props and transforms, but after numerous restarts of every component, the DNS logs still don't appear in the index.

I've turned on debug logging on the UF and couldn't see anything relevant. I used btool to confirm that the DNS log file monitor is enabled in the running config on the UF and that the props/transforms config is absent. At this point I've run out of ideas. For some reason the "MonitorNoHandle" stanza doesn't put much in splunkd.log except when it is broken. Can anyone help with some systematic steps to troubleshoot this?

Things I know are:
1. The DNS log entries are being created.
2. All other configured inputs on the UF are being forwarded and indexed.

Here is the config of the input in question, as shown by btool:

    [MonitorNoHandle://C:\Windows\System32\Dns\dns.log]
    disabled = 0
    host = HOSTNAME_REDACTED
    index = msad
    interval = 60
    sourcetype = MSAD:NT6:DNS

And here it is from the inputs.conf file:

    [MonitorNoHandle://C:\Windows\System32\Dns\dns.log]
    sourcetype = MSAD:NT6:DNS
    disabled = 0
    index = msad

Splunk DB Connect wont return results from Views

Hi all, I have successfully connected to the SQL Server and tried to run a query against views, but it doesn't return results. When I run a query against tables, it successfully returns results. Is it possible that Splunk DB Connect doesn't support views? Thanks in advance, guys! Happy Splunking! :)

How to use token in javascript file

I have a multiselect input on my dashboard and a JS file attached to the dashboard as well. How do I pass the token value from the multiselect into the JavaScript file? For example, the multiselect (token abc, with a default of All/*) is populated by a search like:

    index="index" | stats count by abc

What do I need to change in xyz.js so that it can read the $abc$ token?

Hover on an image inside the number viz customization panel

How do I display text when hovering over an image in a single value visualization? ![alt text][1] [1]: /storage/temp/274552-dig.jpg Below is the screenshot of my panel; I want to display text on hovering over the image. Please help.

View Indexer config with only access to the cluster master & search head GUI

I have administrator access to the GUI of the cluster master and the search head, but not the indexers. I am troubleshooting why data isn't coming into Splunk and need to see the following through the GUI of either the search head or the cluster master:

- indexes configured on each indexer
- inputs configured on each indexer

How can I do this? I can't seem to find an easy way. I am running Splunk 6.6.2. I know this information is held within the configuration bundle on the cluster master, but I can't view it from the GUI; I can only deploy it from the cluster master console. Thanks, Paul

Splunk cluster indexers are consuming high memory

Splunk cluster indexers are consuming high memory. Memory usage on the indexer servers is always at 99%; after restarting Splunk it comes down, but within one minute it reaches 99% again. Nothing in the logs indicates what might be causing this. On the same indexers, internal_db is also filling up quickly; are the two issues related? Any suggestions? We have 23 GB of memory allocated to each indexer (5 total in the cluster) and we are ingesting around 400-500 GB of data in this environment, on Splunk version 7.2.3. One more thing: is this a known issue after upgrading from 6.x to 7.x.x? While the environment was on 6.5.3 we didn't face memory issues, but at that time we were ingesting around 300 GB with 12 GB of memory per indexer.

300 events are seen with the same Source IP and different Destination IP in 1 hour

I am translating QRadar rules to SPL and stuck on setting thresholds: "300 events are seen with the same source IP and different destination IPs in 1 hour." No idea which parameters to use. Any hints?
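One common reading of this rule is "a source IP contacting at least 300 distinct destination IPs within the hour", which in SPL would be along the lines of a stats dc(dest_ip) by src_ip over a 1-hour window followed by a where clause. A minimal Python sketch of that grouping logic, with hypothetical field names src_ip and dest_ip:

```python
from collections import defaultdict

def flag_sources(events, threshold=300):
    """Group events by source IP and flag any source that reaches
    `threshold` distinct destination IPs (the QRadar-style rule)."""
    dests = defaultdict(set)
    for ev in events:
        dests[ev["src_ip"]].add(ev["dest_ip"])
    # Return the offending sources with their distinct-destination count.
    return {src: len(d) for src, d in dests.items() if len(d) >= threshold}
```

If the rule instead means 300 raw events (regardless of how many destinations are distinct), swap the distinct-destination set for a plain event counter per source.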

help to transform an original search with fields in csv file to the same fields in an index

Hello, I need to transform the search below because the fields from tutu.csv and titi.csv are now in the index "tata". I want to do an identical search based on the fields in the index "tata". That is, the field "flag" (currently in tutu.csv) and the field "SITE" (currently in titi.csv) are now in the index "tata". Could you help me match the SITE and flag fields of my new index against the host list in host.csv?

    | inputlookup host.csv
    | lookup tutu.csv "Computer" as host
    | lookup titi.csv HOSTNAME as host output SITE
    | search SITE=$tok_filtersite|s$
    | stats count by flag
    | stats sum(count) as NbNonCompliantPatchesIndHost
    | appendcols
        [| inputlookup host.csv
         | lookup titi.csv HOSTNAME as host output SITE
         | search SITE=$tok_filtersite|s$
         | stats count(host) as NbIndHost]
    | eval Perc=round((NbNonCompliantPatchesIndHost/NbIndHost)*100,2)
    | table Perc, NbIndHost

For example, for the first part of the search (before appendcols), I tried something like the following, but I don't know how to do the "stats count by flag" part, because in my index I have many different events per host while the CSV file had just one flag per host:

    index=tata sourcetype="test"
    | rename HOSTNAME as host
    | lookup host.csv host as host output host
    | search SITE=$tok_filtersite|s$
    | stats XXXXXXXX

Thanks
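Since each host now appears in many index events but carries only one flag, the usual trick is to deduplicate by host before counting flags (in SPL, something like a stats count by host, flag followed by a stats count by flag). A Python sketch of that dedup-then-count logic, with hypothetical data shapes:

```python
def count_by_flag(events, host_flags):
    """Count hosts per flag. `events` are index events (many per host);
    `host_flags` maps each host to its single flag (the old CSV role).
    Deduplicate hosts first so each host is counted exactly once."""
    hosts = {ev["host"] for ev in events}
    counts = {}
    for h in hosts:
        flag = host_flags.get(h)
        if flag is not None:
            counts[flag] = counts.get(flag, 0) + 1
    return counts
```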

help on a field renaming in a subsearch

Hello, in my CSV file I have a field called "host" and in my index a field called "HOSTNAME". They are the same field, and I have to rename one in order to match the events. I don't understand why it works when I do this:

    [| inputlookup host.csv | rename host as HOSTNAME ]
    index=master-data-lookups sourcetype="itop:view_splunk_assets"
    | stats count by HOSTNAME

but doesn't work when I do this:

    [| inputlookup host.csv]
    index=master-data-lookups sourcetype="itop:view_splunk_assets"
    | rename HOSTNAME as host
    | stats count by host

Thanks for your help
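The explanation is ordering: a subsearch returns its rows as an OR of field=value terms, and those terms are matched against the raw events before any later | rename in the outer pipeline runs. A Python sketch of that ordering, with hypothetical events:

```python
def apply_subsearch_filter(events, pairs):
    """Mimic how a subsearch's output filters the outer search:
    each (field, value) pair is matched against the RAW events,
    before any later '| rename' in the pipeline executes."""
    return [e for e in events if any(e.get(f) == v for f, v in pairs)]

raw = [{"HOSTNAME": "web01"}, {"HOSTNAME": "db01"}]

# Renaming inside the subsearch makes it emit HOSTNAME=web01,
# which matches the raw events:
matched = apply_subsearch_filter(raw, [("HOSTNAME", "web01")])

# Without the rename, the subsearch emits host=web01; no raw event
# has a "host" field yet, and the outer rename runs too late to help:
unmatched = apply_subsearch_filter(raw, [("host", "web01")])
```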

License Pool Violation - After Search is disabled on a license pool due to 5 violations, and event generation issue is fixed, how long do I have to wait for Search re-enabling ?

Hello, I had an issue with one of our applications generating too many events, so I have been in license violation for 5 days. Searches are disabled, as I am on a pre-6.5 Splunk Enterprise license. I fixed the application issue, so my daily event volume is back below my licensed GB/day.

- Do I have to wait 26 days (so that the number of violations over the last 30 days drops below 5), or is there another way of recovering access to searches/reports/dashboards?
- Will the service be available again tomorrow?
- Is it possible to "reset" the count?

Thank you

eval command help

Hi all, I need help getting values from a multivalue field. We have a field named "properties.targetResources{}.displayName" which holds multiple values. When "operationName"="Add member to role completed (PIM activation)", a new field (let's say "dest") should pick the 3rd value from "properties.targetResources{}.displayName". And when operationName="Add member to role request denied (PIM activation)", the "dest" field should pick the 4th value.

The search for a single operation with mvindex works fine:

    sourcetype="amdl:aadal:audit" operationName="Add member to role completed (PIM activation)"
    | eval dest = case(operationName=="Add member to role completed (PIM activation)", mvindex('properties.targetResources{}.displayName',3))
    | table dest

The search covering both operations is not working:

    sourcetype="amdl:aadal:audit" operationName=*
    | eval dest = if(case(operationName=="Add member to role completed (PIM activation)", mvindex('properties.targetResources{}.displayName',3)), case(operationName = Add member to role request denied (PIM activation) , mvindex('properties.targetResources{}.displayName',4))
    | table dest

The eval here is written wrong and needs fixing. Thanks in advance
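The usual SPL fix is a single case() containing both condition/value pairs (with each operationName comparison quoted), rather than if() wrapped around two case() calls. The selection logic itself, sketched in Python with mvindex-style 0-based indexing:

```python
def pick_dest(operation_name, display_names):
    """Mimic SPL case()+mvindex(): choose which element of the
    multivalue displayName field becomes 'dest' per operation."""
    index_by_op = {
        "Add member to role completed (PIM activation)": 3,
        "Add member to role request denied (PIM activation)": 4,
    }
    i = index_by_op.get(operation_name)
    if i is None or i >= len(display_names):
        return None  # like case() with no matching branch: null
    return display_names[i]
```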

Splunk HEC - AWS VPC Flow Logs - Timeout

Hi, I've been trying, unsuccessfully, to configure a Splunk HEC endpoint to consume AWS VPC Flow Logs via Firehose. Having slowly worked through various errors, including HEC acknowledgement being disabled and SSL certificate issues, I thought I had beaten the last of them. However, I am now getting a rather unhelpful error in my Firehose failed-events log:

    "attemptsMade":34,"arrivalTimestamp":1567429559545,"errorCode":"Splunk.ConnectionTimeout","errorMessage":"The connection to Splunk timed out. This might be a transient error and the request will be retried. Kinesis Firehose backs up the data to Amazon S3 if all retries fail."

Having had previous errors stating that HEC indexer acknowledgement was disabled and that ELB stickiness was not enabled, I'm fairly certain traffic is getting to and from my Splunk instances, so I am not sure why this is timing out. Is there any way to understand what is causing it? The HEC acknowledgement timeout is set to 600 seconds, so I don't believe it is that (plus that has its own error and corresponding code). Any help gratefully received, as I've been through all the documentation I can find and am now stumped!

How can I use btool to find where a specific index was created?

I've been tasked with using btool (in debug mode) to find where the settings for the "onboarding" index were written by the GUI, and can't seem to figure out exactly how to do so. Any help is much appreciated!

How to calculate the average duration of each steps within a transaction?

Hi, I have events indexed in the following format:

    type=a transactionID=xxxxxxxxxxx status=Created lastUpdateTime=_time
    type=a transactionID=xxxxxxxxxxx status=Processing lastUpdateTime=_time
    type=a transactionID=xxxxxxxxxxx status=Held lastUpdateTime=_time
    type=a transactionID=xxxxxxxxxxx status=Completed lastUpdateTime=_time
    type=b transactionID=yyyyyyyyyyy status=Created lastUpdateTime=_time
    type=b transactionID=yyyyyyyyyyy status=Processing lastUpdateTime=_time
    type=b transactionID=yyyyyyyyyyy status=Held lastUpdateTime=_time
    type=b transactionID=yyyyyyyyyyy status=Completed lastUpdateTime=_time

Although it's easy to calculate the duration of each step (status change) for one transaction (I can use delta or autoregress on lastUpdateTime with an eval'ed duration), how can I calculate the average duration of each step per type for a given day, so I can plot an average line on a chart against a particular transaction?
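A sketch of the computation in Python, assuming epoch timestamps in lastUpdateTime: group events per (type, transactionID), sort by time, take the gap between consecutive status updates, then average each step across all transactions of the same type.

```python
from collections import defaultdict
from statistics import mean

def avg_step_durations(events):
    """events: dicts with keys type, transactionID, status,
    lastUpdateTime (epoch seconds). Returns {(type, step): avg_seconds}
    where each step is one status transition, e.g. 'Created->Processing'."""
    by_txn = defaultdict(list)
    for ev in events:
        by_txn[(ev["type"], ev["transactionID"])].append(ev)

    durations = defaultdict(list)
    for (typ, _txn), evs in by_txn.items():
        evs.sort(key=lambda e: e["lastUpdateTime"])
        # Pair each event with its successor to get per-step gaps.
        for prev, cur in zip(evs, evs[1:]):
            step = f'{prev["status"]}->{cur["status"]}'
            durations[(typ, step)].append(
                cur["lastUpdateTime"] - prev["lastUpdateTime"])

    return {key: mean(vals) for key, vals in durations.items()}
```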

Use a Python module in a custom alert action

I have a custom alert action that I wrote following the documentation: https://docs.splunk.com/Documentation/Splunk/7.3.1/AdvancedDev/ModAlertsIntro I need to import a Python module (boto3) in my action's script. How can I do that? Where and how do I install the module?
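One common approach (not the only one) is to bundle the package inside the app itself, since Splunk runs its own Python interpreter and does not see system-wide pip installs. For example, install with pip install boto3 -t pointed at a directory inside the app, then extend sys.path at the top of the alert script. A sketch, where the bin/lib location is an assumption about the app layout:

```python
import os
import sys

# Assumed layout (hypothetical): the packages were installed with
#   pip install boto3 -t $SPLUNK_HOME/etc/apps/<your_app>/bin/lib
# and this script lives in the app's bin/ directory.
APP_BIN = os.path.dirname(os.path.abspath(__file__))
LIB_DIR = os.path.join(APP_BIN, "lib")
if LIB_DIR not in sys.path:
    sys.path.insert(0, LIB_DIR)  # make the bundled packages importable

try:
    import boto3  # resolved from the bundled lib directory
except ImportError:
    boto3 = None  # not bundled in this sketch environment
```

Matching the bundled packages to the Python version Splunk ships with matters; pure-Python libraries like boto3 generally travel well, while packages with compiled extensions may not.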

Regex not working as expected

For one of our security use cases, we need to extract group memberships from the Domain\ values. The trickier part is that some of the group memberships don't have the domain name in front of them. I am attaching the regex link, which works fine on regex101 (https://regex101.com/r/X2YAAd/1), but for some strange reason, when I use the same regex in Splunk it doesn't work. This is to extract group membership from EventCode=4627. Could anyone help me here?
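Two flavor differences frequently bite here: Splunk's rex uses PCRE syntax with named groups written (?<name>...), while regex101 may be set to a different flavor, and backslashes often need extra escaping once the pattern lands in a search string or .conf file. The actual regex sits behind the link and isn't reproduced in the question, so the pattern below is a hypothetical one handling an optional DOMAIN\ prefix, shown in Python's re (which writes named groups as (?P<name>...)):

```python
import re

# Hypothetical pattern: an optional DOMAIN\ prefix, then the group name
# (letters, digits, underscores, spaces, hyphens).
GROUP_RE = re.compile(r"(?:(?P<domain>[^\\\s]+)\\)?(?P<group>[A-Za-z][\w -]+)")

def extract_groups(membership_field):
    """Pull group names out of a 4627-style membership listing,
    whether or not each entry carries a domain prefix."""
    return [m.group("group") for m in GROUP_RE.finditer(membership_field)]
```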

Extract multiple values from a single field into multiple unique fields

Hello, is there a way to split the unique values of a field into separate fields returned after a search? For example, my search returns the following syslog messages:

    Login Success from 1.1.1.1
    Login Failed from 2.2.2.2

Splunk has extracted the field "field1", which contains the "Success" and "Failed" string values. Is there a way (preferably with the eval command) to extract these values into their own unique fields, i.e. field2=Failed, field3=Success? This is so I can use a table command like the following:

    | table ip, field1, field2, field3

Thank you
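In SPL this is typically an eval per outcome, e.g. field2=if(field1=="Failed", field1, null()). The reshaping itself, sketched in Python with the field names from the question (field2 for Failed, field3 for Success):

```python
def split_outcomes(rows):
    """Copy field1's value into an outcome-specific field so a table
    can show ip, field1, field2, field3 side by side."""
    out = []
    for row in rows:
        new = dict(row)
        new["field2"] = row["field1"] if row["field1"] == "Failed" else None
        new["field3"] = row["field1"] if row["field1"] == "Success" else None
        out.append(new)
    return out
```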

How to write throttle alert?

Hi all, I have a question about how to write a throttle alert. I want to throttle on two fields, but I cannot find documentation for this. My fields are "name" and "region". Should it be written as "name AND region", or "name, region"? If you know, please help me. Thank you. ![alt text][1] [1]: /storage/temp/274631-picture.png

Custom audit path with rlog.sh

Hi, I have audit data coming in on a UDP port to a heavy forwarder (via syslog) and have to apply rlog.sh to it. To start, I tried to monitor a custom path rather than /var/log/audit/audit.log, using the rlog.sh script, something like this:

    [monitor:///vf/home/splunk/Audit_new.log]

    [script:///opt/splunk/splunkforwarder/etc/apps/Splunk_TA_nix/bin/rlog.sh]
    sourcetype = auditd_nix
    interval = 1
    index = vf_os
    disabled = 0
    passAuth = splunk

Instead of indexing /vf/home/splunk/Audit_new.log, Splunk indexed /var/log/audit/audit.log with index=main, sourcetype=auditd_nix, and source=/vf/home/splunk/Audit_new.log. I want to index the sample file I placed under the custom path /vf/home/splunk/Audit_new.log with rlog.sh applied. Thanks, Payal

unable to get pdf of a splunk dashboard after hitting curl command via splunk rest api

Hi all, I am trying to get a dashboard screenshot/PDF by hitting the Splunk REST API with curl, as below:

    curl -u usr:pwd -sk 'https://splunk-localhost:8089/services/pdfgen/render?input-dashboard=dashboardname&namespace=name&paper-size=a4-landscape' > test.pdf

Can anyone please advise on this and let me know what other options are available to accomplish the same?