Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Comparing raw data volume vs. indexed data volume

How do I compare my raw data volume to the indexed data volume for a specific sourcetype? Can someone help with the query? We have index clustering, a deployment server, and the Distributed Management Console. I want to make sure the same data is not indexed more than once (dual or triple indexing of the same data).
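As a starting point, the sketch below (assuming the `_internal` license usage logs are searchable from your search head; `<your_sourcetype>` is a placeholder) shows what the license master recorded as indexed volume per sourcetype and index:

```
index=_internal source=*license_usage.log type=Usage st=<your_sourcetype>
| stats sum(b) AS bytes_indexed BY st, idx
| eval MB_indexed=round(bytes_indexed/1024/1024, 2)
```

For duplicate detection specifically, a quick (but heavy) check is `index=<your_index> sourcetype=<your_sourcetype> | stats count BY _raw | where count > 1`, which surfaces events whose raw text was indexed more than once.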

AMQP Modular Input Setup

Hello, we are trying to pull in the JSON message from a RabbitMQ server; however, each event includes all the queue metadata before the message body. Is there a way to pull in just the `msg_body`? Here is an example of the events in Splunk:

Fri Sep 28 12:55:36 BST 2018 name=amqp_msg_received event_id=null msg_queue=ES_queue msg_exchange=BMISG msg_body={"TIMESTAMP":"2018-09-27-18:14:26.727","MESSAGETYPE":"INFO","SYSTEM":"BMI","MODULE":"Prep Step","SUBMODULE":"unionData","MESSAGE":"Testing Data.","RUNID":"TEST_201806_064"}
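One search-time approach (a sketch, assuming the JSON always arrives as a `msg_body={...}` key/value pair and `<your_index>` is a placeholder) is to extract the JSON into its own field and expand it with `spath`:

```
index=<your_index> name=amqp_msg_received
| rex field=_raw "msg_body=(?<msg_body>\{.*\})"
| spath input=msg_body
```

If you want the metadata gone at index time instead, a `SEDCMD` in props.conf that strips everything before `msg_body=` is another option.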

REST API to create an incident in SNOW without the SNOW add-on

I have an alert configured in Splunk. Whenever the alert gets triggered, I need to call an API to create an incident in SNOW (ServiceNow). I am aware of the SNOW add-on, but we are not supposed to use it, so please help with another way to create an incident. We have an endpoint URL for creating it.
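One add-on-free option (a sketch, assuming Splunk's built-in webhook alert action is acceptable; the stanza name and URL path are placeholders) is to attach a webhook to the saved alert in savedsearches.conf:

```
[My SNOW Alert]
action.webhook = 1
action.webhook.param.url = https://<your_snow_endpoint>/api/now/table/incident
```

Note that the webhook action POSTs a fixed JSON payload describing the alert; if ServiceNow requires specific incident fields or authentication headers, a custom (scripted) alert action that builds the request itself would be needed instead.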

How to redirect logs from the default main index to a new index

We are trying to set up logging of container logs into Splunk 6.6.3. We have set up an index and an application with an inputs.conf file, and successfully updated the Splunk forwarder inside the container with this information. However, the files coming in appear to ignore the app settings, which include our index, and the ingested logs are sent to the main index instead. How do I redirect them to the correct index rather than main?
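For reference, a minimal monitor stanza that pins the destination index looks like this (a sketch; the path and index name are placeholders):

```
[monitor:///<path_to_container_logs>/*.log]
index = <your_index>
disabled = 0
```

If events still land in main, the index may not exist on the indexers (it must be created there, not just on the forwarder), or another app's stanza may take precedence; running `splunk btool inputs list --debug` on the forwarder shows which app's settings actually win.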

Disable the vertical scrollbar for a chart

I have a panel with 2 charts. One of the charts, with trellis enabled, has a vertical scrollbar and a pager. I would like to disable the scrollbar and use only the pager with multiple pages. Is there an option to disable the vertical scrollbar in charts and use only the pager?

How to make a field value a column heading

I would like to display the weekday in the column heading.

|Search.... | eval weekday=strftime(now(),"%A")

Output:

S.no | Daily | weekday
1 | 101 | Thursday
2 | 210 | Thursday

Desired output:

S.no | Daily (Thursday)
1 | 101
2 | 210

I tried xyseries and transpose, but couldn't find a way to flip only one column instead of the whole table.
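One way to get a dynamic heading without transposing the whole table (a sketch using `eval`'s `{field}` dynamic field-name syntax) is to build the heading as a value, then promote it to a field name:

```
| eval weekday=strftime(now(), "%A")
| eval heading="Daily (".weekday.")"
| eval {heading}=Daily
| fields - Daily, weekday, heading
```

This creates a column literally named `Daily (Thursday)` (or whatever today's weekday is) holding the original `Daily` values.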

Getting 'call not properly authenticated' when using the Splunk SDK for JavaScript

Whenever I try to run a search query using the Splunk SDK for JavaScript (using Node), I get the error message { messages: [ { type: 'WARN', text: 'call not properly authenticated' } ] }. I checked multiple forums, but none helped. I am always able to log in, but as soon as I call the search() function I get this error. Here is my function call (CoffeeScript):

    splunkService.login (err, success) ->
      if err
        console.log err.data
      else
        console.log "Login is successful"   # this is always successful
        query = 'search index=a sourcetype=b application=*abc* | top 1 error'
        params =
          earliest_time: '-15m'
          latest_time: 'now'
          exec_mode: 'normal'
        searchSplunk(query, params)

    searchSplunk = (query, params) ->
      splunkService.search query, params, (err, job) ->
        if err
          console.log 'Error Encountered:'
          console.log err.data
        else
          console.log 'Job ID: ' + job.sid
          # console.log job
          job.track {period: 200},
            done: (job) ->
              console.log 'Done, here!'
              job.results {}, (err, results, job) ->
                if err
                  console.log err
                else
                  results.fields.forEach (field) -> console.log field
                  results.rows.forEach (row) -> console.log row
            failed: (job) ->
              console.log 'Job failed'
            error: (err) ->
              console.log err.data

Please note: oneshotSearch() always works, but search() only works about 2 times out of 10.

View last event time in inputlookup

I have an inputlookup that searches a CSV which looks like:

Field_A | Field_B
A | test1
B | test2

I want to run a search whose output also has a column showing the last event observed for each Field_A value in Splunk, like:

Field_A | Field_B | Last_event
A | test1 | 9/22/18 7:28:16
B | test2 | 9/25/18 7:28:16

I have written a query, but it does not seem to work:

| inputlookup excel.csv | table Field_A Field_B | appendcols [search index=my_index src=Field_A | stats head 1 | stats first(_time) | eval Last_Seen_Event=strftime(Last_Seen_Event,"%+")]
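A sketch that avoids `appendcols` (which pairs rows by position, not by key): compute the last event time per value first, then join on `Field_A` (this assumes the `src` field in `my_index` holds the same values as `Field_A`):

```
| inputlookup excel.csv
| join type=left Field_A
    [ search index=my_index
      | stats max(_time) AS Last_event BY src
      | rename src AS Field_A ]
| eval Last_event=strftime(Last_event, "%m/%d/%y %H:%M:%S")
```

Rows with no matching events keep an empty `Last_event` because of the left join.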

Monitor input on a Windows machine with a wildcard character

I want to monitor a log file in the following location on a Windows server:

D:\Program Files\Apache Software Foundation\Tomcat 8.5\webapps\config\

However, the folder name changes based on the version of Tomcat. It could be Tomcat 6.0, Tomcat 7.5, etc. on some servers. So I tried different input stanzas in the UF's inputs.conf file:

[monitor://d:\Program Files\Apache Software Foundation\Tomcat*\webapps\config\audit.log]
[monitor://d:\Program Files\Apache Software Foundation\\*\webapps\config\audit.log]
[monitor://d:\Program Files\Apache Software Foundation\\...\webapps\config\audit.log]

None of the above options works, and in splunkd.log I can see the following entry:

09-28-2018 13:44:24.422 +0000 INFO TailingProcessor - Adding watch on path: d:\Program Files\Apache Software Foundation.

which means it is not recognizing the folder structure mentioned in the input stanza. Please suggest a solution.
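One commonly suggested workaround (a sketch; mid-path wildcards in monitor stanzas are internally converted to whitelists and can behave unexpectedly on Windows) is to watch the parent directory recursively and restrict matches with an explicit `whitelist` regex:

```
[monitor://D:\Program Files\Apache Software Foundation]
whitelist = webapps\\config\\audit\.log$
recursive = true
disabled = 0
```

The "Adding watch on path: ... Apache Software Foundation" log line you see is consistent with this mechanism: Splunk watches the fixed prefix and filters files below it.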

Dedup events in data model

Hi everyone. How do I keep only unique events, by a specified field, in an accelerated data model? My base search looks like:

index=main source=transactions tx_type=purchase | `registration_time` | `type_user`

I'm trying to add a child dataset with just one constraint: dedup transaction_id. But it doesn't work. Duplicated transaction_id values cannot be deleted from the original source, because they are a feature of some transactions. But in the data model we need just one occurrence of each, e.g., to sum revenue. With this query we get a larger result than we actually have, because transactions with the same transaction_id are counted several times:

| tstats sum(transactions.price) AS Revenue from datamodel=transact.transactions where (nodename = transactions) groupby _time span=1month | rename transactions.* as * | timechart span=1month first(Revenue) as revenue

How can we get the correct result? Thanks in advance!
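Since a data model constraint cannot deduplicate, one sketch is to collapse duplicates at search time: group the tstats results by `transaction_id` first (taking one price per transaction), then sum (field and datamodel names taken from the question; `max()` assumes duplicates share the same price):

```
| tstats max(transactions.price) AS price
    from datamodel=transact.transactions
    where nodename=transactions
    groupby transactions.transaction_id _time span=1mon
| rename transactions.transaction_id AS transaction_id
| timechart span=1mon sum(price) AS Revenue
```

This counts each `transaction_id` once per month regardless of how many duplicate events exist.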

Help on WMI entry

Hi, I have created a WMI entry in wmi.conf:

wql = SELECT Model FROM Win32_ComputerSystem

When I execute it with WMI Explorer I get results, but there are no results in my Splunk query, even if I play with the time token. What is the problem?
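For comparison, a `wql` line on its own is not collected; it has to live inside a complete collection stanza. A sketch (stanza name, interval, and server are assumptions):

```
[WMI:ComputerModel]
server = localhost
interval = 300
wql = SELECT Model FROM Win32_ComputerSystem
disabled = 0
```

Also check which index the WMI input writes to and search that index explicitly; a missing `interval` or a disabled stanza would also explain zero results.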

Field not showing color in table format

Hello Splunkers. I have data in the following format:

/dev/mapper/splunkcisvg-auditlv 8.0G 353M 7.7G 5% /var/log/audit
/dev/sda1 509M 164M 346M 33% /boot

which I am parsing using a regex expression in the query:

index=* sourcetype="disk_load" | rex "(?<FileSystem>^[\w\/-]*)[ ]*(?<Size>[\d\w.]*)[ ]*(?<Used>[\d\w.]*)[ ]*(?<Avail>[\d\w.]*)[ ]*(?<Use>[\d]*)%[ ]*(?<Mounted>[\w\/]*)" | stats list(FileSystem), list(Use), values(Mounted) by host

The issue I am having is that I need to color the values red wherever the "Use" field has a value greater than 70. I tried table formatting, but it's not showing any color. I have used the range option, but it's not resulting in any color. ![alt text][1] Please help. [1]: /storage/temp/256093-splunk-test.png
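If this is a Simple XML dashboard table, cell coloring is configured with a `<format>` element inside the `<table>` element. A sketch (note that after your stats the column is literally named `list(Use)`, and `list()` produces multivalue cells, which may be exactly why no color is applied — expression-based coloring expects a single numeric value per cell):

```
<format type="color" field="list(Use)">
  <colorPalette type="expression">if(value > 70, "#DC4E41", "#FFFFFF")</colorPalette>
</format>
```

If the multivalue cells block coloring, restructuring the stats to one row per filesystem (e.g. `by host, FileSystem`) gives single-valued `Use` cells that the palette can evaluate.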


How to turn string value into a time value to calculate the difference between two fields?

Hello, I have a log that, when uploaded to Splunk, appears as a string even though it should be in time format. So I have to convert this string into a time in order to calculate the difference between an end time and a start time. This is my code:

| eval StartTimestamp=strptime(StartTime,"%d/%m/%Y %H:%M:%S"), ActualTimestamp=strptime(ActualAlarmTime,"%d/%m/%Y %H:%M:%S") | eval ResponseTimestamp=(StartTimestamp-ActualTimestamp) | stats avg(ResponseTimestamp) as ResponseTime

It is not producing any errors, but I am not sure whether the answer is in seconds or in epoch format. I would like to see ResponseTime in %H:%M:%S format. Thanks!
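The difference of two `strptime` results is a duration in seconds (not an epoch). To render it as hours:minutes:seconds, a sketch using `tostring(..., "duration")`:

```
| eval StartTimestamp=strptime(StartTime,"%d/%m/%Y %H:%M:%S"),
       ActualTimestamp=strptime(ActualAlarmTime,"%d/%m/%Y %H:%M:%S")
| eval ResponseTimestamp=StartTimestamp-ActualTimestamp
| stats avg(ResponseTimestamp) AS avg_seconds
| eval ResponseTime=tostring(round(avg_seconds), "duration")
```

For example, an average of 3661 seconds would display as 01:01:01.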

sum of counts

I'm trying to get the sum of spam folders and where they are quarantined, by user. Is there a better way to do this, especially since the number of folders to count may start to grow? (Also, the eval(count(x) + count(y)) below doesn't work.)

| chart eval(count(inbound_phish) + count(inbound_spam_definite)) AS "Quarantine1", eval(count(inbound_bulk) + count(inbound_bulk_50)) AS "Quarantine2", count(recipient) by recipient
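Aggregation functions like `count()` can't be added inside an `eval` within `chart`. One sketch that scales as folders grow (assuming each folder name is a field present only on matching events) is to classify each event first, then count:

```
| eval quarantine=case(
    isnotnull(inbound_phish) OR isnotnull(inbound_spam_definite), "Quarantine1",
    isnotnull(inbound_bulk)  OR isnotnull(inbound_bulk_50),       "Quarantine2")
| chart count BY recipient, quarantine
```

Adding a new folder then means adding one condition to the `case()`, rather than rewriting the chart clause.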

Is there a way to run a PowerShell script locally from Splunk?

$
0
0
I'm trying to hit an API with a PowerShell command through Splunk without needing to ingest the logs on a regular cadence (and setting up a VM to forward the logs, etc). Is there a way to do this with a command similar to the `|jira` command?
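There is no built-in search command like `|jira` for arbitrary PowerShell, but on a Windows Splunk instance the PowerShell scripted input can run a script locally on a schedule. A sketch (stanza name, app name, script path, and cron schedule are all placeholders):

```
[powershell://CallMyApi]
script = . "$SplunkHome\etc\apps\<your_app>\bin\call_api.ps1"
schedule = */5 * * * *
disabled = 0
```

For truly on-demand runs from the search bar, a custom search command or alert action wrapping the script would be needed instead.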



Why do I see "Invalid credentials" when creating an LDAP strategy with the "ssl start_tls" config?

This happens if I add the strategy in authentication.conf manually and edit ldap.conf.

authentication.conf:

[test_ldap]
SSLEnabled = 1
host = ldap.myldap.com
port = 636
anonymous_referrals = 1
bindDN = xxxx
bindDNpassword = xxxx
emailAttribute = mail
groupBaseDN = xxxx
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
nestedGroups = 0
network_timeout = 20
realNameAttribute = displayname
sizelimit = 1000
timelimit = 15
userBaseDN = dc=xxxx
userNameAttribute = uid

ldap.conf:

ssl start_tls
TLS_REQCERT never
TLS_CERT /auth/mycert.pem
TLS_KEY auth/myprivatekey.pem


