
On which Splunk Enterprise servers does the FixDatetimexml2020 patch need to be applied?

Hi,

A question regarding the "Timestamp recognition of dates with two-digit years fails beginning January 1, 2020" post. As of 25 Nov 2019, the page https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/FixDatetimexml2020 asks us to patch all Splunk instances, whereas a day earlier it listed only "Splunk Cloud, Splunk Enterprise indexer, Splunk Enterprise heavy forwarder, and Splunk Light instances".

Does this mean that in a clustered Splunk Enterprise setup this file must be patched on the **cluster master, SH deployer, license master, indexers, search heads, and all heavy and universal forwarder instances**, OR **just the indexers and heavy forwarder instances?**

Also, on all-in-one Splunk Enterprise setups this file must be patched too, correct?

Please clarify. Thanks in advance, Prashant Badhe
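For reference, a rough sketch of the remediation as described on that docs page, which swaps in an updated datetime.xml on each instance (default paths assumed; follow the release-note page for the authoritative steps and the download location of the patched file):

```
# Back up the existing file, copy in the patched datetime.xml, restart Splunk.
# $SPLUNK_HOME is assumed to be /opt/splunk; /tmp/datetime.xml stands in for
# wherever you downloaded the patched copy.
cp $SPLUNK_HOME/etc/datetime.xml $SPLUNK_HOME/etc/datetime.xml.bak
cp /tmp/datetime.xml $SPLUNK_HOME/etc/datetime.xml
$SPLUNK_HOME/bin/splunk restart
```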

How to run a multi-search with a dynamically calculated time frame

I have an existing search that finds "RunDate", "StartTime", and "EndTime" stored as part of test run summaries, then converts those time values into usable Unix epochs via strptime:

```
index="IDX1" sourcetype="SRC" ProjectName="PRJ"
| eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p")
| eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
| table RunDate stime etime
| sort RunDate desc
```

Now for the tricky part: I would like a fourth column that uses the time frame in each row to perform a calculation on values coming from a different source:

```
index="IDX2" "HOST" "data.metricId" IN (1234)
| stats avg("data.metricValues{}.value") as average
| eval total=average/100
```

Somehow this needs to be time-constrained by `earliest=stime` and `latest=etime` for each RunDate (the results should be a series). Is this possible? Can I run a secondary search/eval using calculated values from the primary search as the earliest and latest time constraints?
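One possible approach is the `map` command, which runs a secondary search once per row of the primary search and substitutes each row's field values via `$field$` tokens; `earliest` and `latest` accept the epoch values that strptime produces. A sketch reusing the names from the question (the `maxsearches` cap is an arbitrary safety limit, and the trailing eval re-attaches each row's RunDate to its result):

```
index="IDX1" sourcetype="SRC" ProjectName="PRJ"
| eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p")
| eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
| table RunDate stime etime
| map maxsearches=100 search="search index=IDX2 \"HOST\" \"data.metricId\" IN (1234)
    earliest=$stime$ latest=$etime$
    | stats avg(\"data.metricValues{}.value\") as average
    | eval total=average/100, RunDate=\"$RunDate$\""
```

One caveat worth knowing: map runs one search per row, so it gets expensive if the primary search returns many RunDates.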

How to table/chart over a period of time

I'm trying to calculate capacity growth over time for groupings of VMs; a chart or table looks like the best answer if you need to report on 100 VMs. A simplified data set:

```
Date,       Name, Capacity_Used
5/1/2019,   VM1,  100
5/1/2019,   VM2,  100
5/1/2019,   VM4,  450
6/1/2019,   VM1,  100
6/1/2019,   VM2,  140
6/1/2019,   VM4,  450
7/1/2019,   VM1,  105
7/1/2019,   VM2,  200
8/1/2019,   VM1,  110
8/1/2019,   VM2,  200
9/1/2019,   VM1,  110
9/1/2019,   VM2,  200
10/1/2019,  VM1,  110
10/1/2019,  VM2,  200
10/1/2019,  VM3,  100
11/1/2019,  VM1,  110
11/1/2019,  VM2,  200
11/1/2019,  VM3,  200
```

How can I search it so that, for a window of 7/1/2019 through 11/1/2019, the result is tabled as:

```
VM1  5GB
VM2  0GB
VM3  200GB
```
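A possible sketch, assuming each row above is an event whose timestamp is the Date column and whose fields are Name and Capacity_Used (index and field names assumed from the sample): take each VM's first and last reported capacity inside the window and subtract. Getting VM3 to show 200 rather than 100 means treating a VM that first appears mid-window as pure growth from a zero baseline; the `addinfo` comparison below approximates that, with the one-day tolerance being an assumption you'd widen for a monthly reporting cadence:

```
index="vm_capacity" earliest="07/01/2019:00:00:00" latest="11/02/2019:00:00:00"
| stats earliest(_time) as first_seen
        earliest(Capacity_Used) as start_cap
        latest(Capacity_Used) as end_cap
    by Name
| addinfo
| eval start_cap=if(first_seen > info_min_time + 86400, 0, start_cap)
| eval growth=tostring(end_cap - start_cap) . "GB"
| table Name growth
```

`addinfo` attaches the search window boundaries (`info_min_time`) to each result row, so a VM whose first event arrives well after the window start gets its baseline zeroed.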

7.0.X versions of Splunk Docker (health check, or running on splunk/splunk:latest)

Loving the Splunk docker image, but having issues with some of the older versions that we support. Versions on the 7.0.X line don't seem to have the same health check as anything newer (where I could use `docker inspect --format "{{json .State.Health.Status}}"`). Is there a health check in the older versions I can call to see whether the image is fully up? It was suggested that I use the latest image and change the SPLUNK_BUILD_URL arg, but when I tried that with 7.0.1 it just ended up not being able to install. Any suggestions?
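In case it helps frame answers: for images without a built-in HEALTHCHECK, one workaround is to poll splunkd from outside the container. A sketch under stated assumptions, not a definitive implementation:

```
# Rough poll-until-up loop (assumes the container is named "splunk", Splunk
# lives at /opt/splunk inside it, and "splunk status" exits nonzero while
# splunkd is down -- all assumptions to verify for your image):
for i in $(seq 1 60); do
  docker exec splunk /opt/splunk/bin/splunk status && break
  sleep 5
done
```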

Any way to break this row into two columns?

I am setting up a dashboard that monitors the count of events on a daily basis and a previous-30-day average by customer. I have the search built, and each customer has a row of data. For the dashboard, I want to display each customer in its own table, but I am struggling with how to convert the row of data into the table format I'd like to display. Here is the row of data after I query a customer from the saved search: ![alt text][1] Here's what I'd like it to look like: ![alt text][2] Any thoughts on how to achieve this?

[1]: /storage/temp/277651-before.png
[2]: /storage/temp/277652-after.png
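One way to turn a single result row into a two-column (field, value) table is the `transpose` command, which flips columns into rows. A sketch, assuming the saved search is filtered down to one customer's row first (the saved-search name, customer field, and customer value are placeholders):

```
| savedsearch "customer_daily_counts"
| search customer="ACME"
| fields - customer
| transpose column_name="Metric"
| rename "row 1" as Value
```

`transpose` emits its output under the default column names `column` and `row 1`; the `column_name` option and the `rename` give the panel friendlier headers.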

How to add two strings as line breaker options

Hi, in my data I have two kinds of XML events: Request and Response. I want to break the log both when an event starts and ends with the Request tag pair and when it starts and ends with the Response tag pair. Can someone help me achieve this?
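A minimal props.conf sketch, assuming the stream should break immediately before each opening tag and that the tags are literally `<Request>` and `<Response>` (the stanza name and tag names are placeholders to substitute with your real sourcetype and element names):

```
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<(?:Request|Response)>)
```

LINE_BREAKER discards whatever the first capture group matches, so the newline run is consumed while the lookahead keeps each opening tag at the start of its own event.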

Is it possible to use MySQL prepared statements in dbxquery?

Splunk Enterprise 6.5.1 - yes, an upgrade is planned!

I am trying to improve a dashboard panel that runs the same set of queries on multiple database connections, against two flavours of schema. The existing (working) search uses a series of appends of dbxquery searches of this basic form:

```
| dbxquery query="SELECT 'GAMMA_JETSONS' as CLIENT,
    (SELECT GROUP_CONCAT(DISTINCT SOURCE order by SOURCE asc , ' , ' ) as source from gamma_jetsons.live_data) AS FEEDS_TODAY,
    (SELECT COUNT(DISTINCT SOURCE) from gamma_jetsons.live_data) AS CURRENT,
    (SELECT COUNT(DISTINCT SOURCE) from gamma_jetsons.live_data_archived) AS YESTERDAY,
    (SELECT GROUP_CONCAT(DISTINCT SOURCE order by SOURCE asc , ' , ' ) as source from gamma_jetsons.live_data_archived) AS FEEDS_YESTERDAY
    FROM DUAL;" connection="PROD_GAMMA_JETSONS" shortnames=1
| fields - _raw, _time
```

To make it easier to change schema and add new rows for new databases, I have tried to parameterise this using MySQL prepared statements. Escaped for quotation marks, semicolons, and less-than symbols, the search is:

```
| dbxquery query="SET @my_cust:='GAMMA_JETSONS';
    SET @my_custid:='jetsons';
    SET @t1s:=CONCAT("SELECT IF((SELECT COUNT(SCHEMA_NAME) from information_schema.schemata WHERE SCHEMA_NAME='",@my_custid,"_engine_persist_service') < 1,'gamma_",@my_custid,".live_data','",@my_custid,"_engine_persist_service.live_data') INTO @t1;");
    SET @t2s:=CONCAT("SELECT IF((SELECT COUNT(SCHEMA_NAME) from information_schema.schemata WHERE SCHEMA_NAME='",@my_custid,"_engine_persist_service') < 1,'gamma_",@my_custid,".live_data_archived','",@my_custid,"_engine_persist_service.live_data_archived') INTO @t2;");
    PREPARE stmt FROM @t1s; EXECUTE stmt; DEALLOCATE PREPARE stmt;
    PREPARE stmt FROM @t2s; EXECUTE stmt; DEALLOCATE PREPARE stmt;
    SET @my_sql:= CONCAT("SELECT '",@my_cust,"' as CLIENT, (SELECT GROUP_CONCAT(DISTINCT source ORDER BY source ASC) AS source FROM ",@t1,") AS FEEDS_TODAY, (SELECT COUNT(DISTINCT SOURCE) FROM ",@t1,") AS CURRENT, (SELECT COUNT(DISTINCT SOURCE) FROM ",@t2,") AS YESTERDAY, (SELECT GROUP_CONCAT(DISTINCT source ORDER BY source ASC) AS source FROM ",@t2,") AS FEEDS_YESTERDAY FROM DUAL;");
    PREPARE stmt FROM @my_sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;" connection="PROD_GAMMA_JETSONS" shortnames=1
| fields - _raw, _time
```

The unescaped SQL works fine when run against the database directly using e.g. HeidiSQL. In Splunk, all I get is an error *in the panel's header row*:

**error_message=A value for dbxquery command option connection is required**

I can't see what the issue might be. Am I missing something obvious, or are prepared statements and multi-statement queries of this kind not supported by dbxquery? Thanks.

How to monitor and alert when the Splunk universal forwarder service has been stopped or modified?

On my universal forwarders, I want the ability to monitor and alert when the Splunk Universal Forwarder service has been stopped or modified. Any options on how to do this? I am already looking into basic Windows event monitoring of Windows services, but I didn't know whether there is a Splunk-native way to do this, possibly some Splunk app?
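One commonly suggested Splunk-side approach (rather than a specific app) is to alert on forwarders that have stopped phoning home, since every forwarder ships its own _internal logs to the indexers. A sketch you could save as an alert; the 10-minute threshold is an arbitrary assumption to tune:

```
| metadata type=hosts index=_internal
| eval minutes_since_last_event = round((now() - recentTime) / 60)
| where minutes_since_last_event > 10
| table host minutes_since_last_event
```

This catches a stopped service but not a modified one, so it complements, rather than replaces, Windows service-change event monitoring.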

Input paths with wildcards for a sub-directory tree

Hi, I haven't dealt much with wildcards in input paths, so I would appreciate your help. We need to monitor logs in the SyslogLog sub-directories:

/opt/our-application/var/log/our-processor/message_logging/dev/&lt;env&gt;/&lt;proxy_name&gt;/&lt;instance&gt;/SyslogLog/name_of_log.log

For example, from the following available files we need to pick up only 1), 4), 5), 7), 8), and 10):

1) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_1/1/SyslogLog/name_of_1_log.log
2) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_1/1/SyslogLog/name_of_1_log.log.1
3) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_1/1/SyslogLog/name_of_1_log.log.2
4) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_1/2/SyslogLog/name_of_2_log.log
5) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_1/3/SyslogLog/name_of_3_log.log
6) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_1/3/FastLog/name_of_4_log.log
7) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_2/1/SyslogLog/name_of_5_log.log
8) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_2/2/SyslogLog/name_of_6_log.log
9) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_2/3/FastLog/name_of_7_log.log
10) /opt/our-application/var/log/our-processor/message_logging/dev/dev/company_proxy_name_2/4/SyslogLog/name_of_8_log.log

Will the following monitor stanza with "*" wildcards work?

```
[monitor:///opt/our-application/var/log/our-processor/message_logging/dev/*/*/SyslogLog/*.log]
index = our_index
sourcetype = our_sourcetype
```
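For what it's worth, counting the directory levels in the example paths (dev/dev/&lt;proxy&gt;/&lt;instance&gt;/SyslogLog) suggests three wildcarded levels are needed after the first `dev`, not two. A sketch under that assumption; in monitor stanzas `*` matches a single path component only, and `*.log` will not match the rotated `.log.1`/`.log.2` files:

```
[monitor:///opt/our-application/var/log/our-processor/message_logging/dev/*/*/*/SyslogLog/*.log]
index = our_index
sourcetype = our_sourcetype
```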

When the main query returns 0 results, how do I stop the subquery from erroring?

In my subquery I'm using results returned from the main query. When the main query has results it works, but when the main query returns 0 results it gives the following error: "Error in 'map': Did not find value for required attribute 'id'." How can I make it just return 0 results instead of an error?

```
| dbxquery connection="oracle_test" query="SELECT 1 id FROM dual where 1=0"
| map search="dbxquery connection=\"oracle_test_1\" query=\"select dummy col_text from dual where 1 in ('$id$')\""
| table col_text
```
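One workaround sketch: `map` fails when a `$token$` has no backing row, so inject a sentinel row only when the main query comes back empty. `appendpipe` with `stats count | where count==0` adds the dummy row in the zero-result case only; the sentinel id of '-1' here is an assumption chosen to match nothing in the subquery:

```
| dbxquery connection="oracle_test" query="SELECT 1 id FROM dual where 1=0"
| appendpipe [ stats count | where count==0 | eval id="-1" | fields id ]
| map search="dbxquery connection=\"oracle_test_1\" query=\"select dummy col_text from dual where 1 in ('$id$')\""
| table col_text
```

With the sentinel, the subquery's `where 1 in ('-1')` returns no rows, so the pipeline ends with 0 results instead of an error.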

Regex to extract an ID and remove duplicates

Hello everyone. I have the log data below, where each event is delimited by a line break. I want to take the value of the "InteractionId" parameter and check that there are no duplicates. I believe it could be a regex that filters only on `'InteractionId' [str] = "value"`, but I'm not sure.

```
2019-11-23T18:08:04.990 Trc 24102 Sending to Universal Routing Server: urs_ad_ucl_ctmm_p: 'EventRouteRequest' (71) message: AttributeCustomerID [str] = "Resources" AttributeConnID [long] = 093902ed259a99fc AttributeMediaType [int] = -1 AttributeCallID [int] = 543269 AttributeCallType [int] = 0 'InteractionId' [str] = "00052aEWU1VF525" 'TenantId' [int] = 101 'MediaType' [str] = "email" 'InteractionType' [str] = "Inbound" 'InteractionSubtype' [str] = "InboundNew"
2019-11-24T18:08:04.990 Trc 24102 Sending to Universal Routing Server: urs_ad_ucl_ctmm_p: 'EventRouteRequest' (71) message: AttributeCustomerID [str] = "Resources" AttributeConnID [long] = 093902ed259a99fc AttributeMediaType [int] = -1 AttributeCallID [int] = 543269 AttributeCallType [int] = 0 'InteractionId' [str] = "00052aEWU1VFB525" 'TenantId' [int] = 101 'MediaType' [str] = "email" 'InteractionType' [str] = "Inbound" 'InteractionSubtype' [str] = "InboundNew"
2019-11-25T18:08:04.990 Trc 24102 Sending to Universal Routing Server: urs_ad_ucl_ctmm_p: 'EventRouteRequest' (71) message: AttributeCustomerID [str] = "Resources" AttributeConnID [long] = 093902ed259a99fc AttributeMediaType [int] = -1 AttributeCallID [int] = 543269 AttributeCallType [int] = 0 'InteractionId' [str] = "00052aEWU1VFB34B" 'TenantId' [int] = 101 'MediaType' [str] = "email" 'InteractionType' [str] = "Inbound" 'InteractionSubtype' [str] = "InboundNew"
```
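A sketch of the extraction plus de-duplication, assuming the events are already broken one per line as shown (index and sourcetype are placeholders):

```
index=your_index sourcetype=your_sourcetype
| rex "'InteractionId' \[str\] = \"(?<InteractionId>[^\"]+)\""
| dedup InteractionId
```

To list only the IDs that do repeat, swap the dedup for `| stats count by InteractionId | where count > 1`.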

Search syntax to exclude a dhost or URL

New to Splunk here. I'm trying to run a search for user BLAHBLAH that does NOT contain a dhost of api.drift.com. Would someone help me with the query? My search below doesn't seem to be working:

```
index=* "BLAHBLAH" sourcetype=* dhost!="api.drift"
```

Raw syslog below:

```
Nov 26 16:40:26 QHLSTLS11 mwg: status="426/0" srcip="10.99.99.50" user="BLAHLBAH" dhost="presence.api.drift.com" urlp="443" proto="HTTPS/https" mtd="GET" urlc="Business" rep="0" mt="application/x-empty" mlwr="-" app="-" bytes="782/780/201/196" ua="Chrome77-10.0" lat="0/0/71/97" rule="Last Rule" url="https://presence.api.drift.com/ws/websocket?session_token=SFMyNTY.43QAAAACZAAEZGF0YXQAAAAFZAACaWRtAAAAEzEwMzg5Ny00MTE0MTAzMjM0LTRkAAZvcmdfaWRiAAGV2WQACXNjb3BlX3NldGwAAAABbQAAAARsZWFkamQbB3VzZXJfaWRuBADCOzj1ZAAJdXNlcl90eXBlZAAEbGVhZGQABnNpZ25lZG4GAE8ol55uAQ.7-xbZbLOyHODYgRuuNSrIkIupxR3MnYkslNfjSaDMZU&vsn=1.0.0"
```
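For context on why the posted search likely returns everything: `dhost!="api.drift"` only excludes events whose dhost is exactly the string api.drift, and the sample event's dhost is presence.api.drift.com. A wildcard exclusion sketch:

```
index=* "BLAHBLAH" NOT dhost="*api.drift.com*"
```

`dhost!="*api.drift.com*"` behaves similarly, with one subtle difference: `NOT dhost=...` also keeps events that have no dhost field at all, while `dhost!=...` drops them.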

Getting an error when trying to set up an input after installing an app

Anyone have an install issue with this app? I can't set up any inputs, and I'm new to Splunk. I checked the splunkd.log file and it's getting an exception in a date-formatting validation function of all things (every ERROR line below carries the same `ScriptRunner - stderr from 'C:\Program Files\Splunk\bin\Python3.exe C:\Program Files\Splunk\bin\runScript.py setup':` prefix, trimmed here for readability; the traceback is logged twice):

```
11-26-2019 20:21:11.299 -0500 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf
11-26-2019 20:21:11.578 -0500 ERROR ScriptRunner - ... The script at path=C:\Program Files\Splunk\etc\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\TA_microsoft_graph_security_add_on_for_splunk_rh_microsoft_graph_security.py has thrown an exception=Traceback (most recent call last):
  File "C:\Program Files\Splunk\bin\runScript.py", line 82, in <module>
    exec(open(REAL_SCRIPT_NAME).read())
  File "<string>", line 4, in <module>
  File "C:\Program Files\Splunk\etc\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\ta_microsoft_graph_security_add_on_for_splunk\splunktaucclib\rest_handler\endpoint\validator.py", line 388
    except ValueError, exc:
                     ^
SyntaxError: invalid syntax
11-26-2019 20:21:11.707 -0500 ERROR AdminManagerExternal - External handler failed with code '1' and output: ''. See splunkd.log for stderr output.
```
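For anyone hitting the same wall: the failing line in the traceback is Python 2-only exception syntax, so this looks like a Python 2 add-on running under the Python 3 interpreter that newer Splunk versions bundle (an observation from the log, not a confirmed fix). A minimal illustration of the difference:

```python
# The add-on's validator.py line 388 uses Python 2-only "except" syntax, which
# Python 3 rejects at compile time before anything runs:
#
#     except ValueError, exc:        # Python 2 only -> SyntaxError under Python 3
#
# The Python 3 spelling of the same handler:
try:
    value = int("not a number")
except ValueError as exc:  # "as" replaces the comma
    print(f"validation failed: {exc}")
```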

Splunk Add-on for Websense DLP shows 404 not found

Hello everyone, the Splunk Add-on for Websense DLP was installed, but it shows a 404 Not Found page after clicking it. Any ideas? Thanks!

How to get case numbers from Salesforce

I configured the Splunk App for Salesforce: set up the Salesforce account (verified it is able to access data) and set up an input with Object as Case and Fields as CaseNumber. Now, how do I get the case numbers in Splunk search? A search for `Index = _internal Sourcetype:sfdc:Case:CaseNumber` is not returning any results.
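Two things that might explain the empty result, offered as assumptions to check rather than a confirmed answer: Salesforce object data is written to the index configured on the input (not to _internal, which holds Splunk's own logs), and requested fields are not part of the sourcetype name. A sketch, assuming the input's sourcetype is sfdc:case and it writes to an index named salesforce (both placeholders to verify against your input settings):

```
index=salesforce sourcetype=sfdc:case
| table CaseNumber
```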

I need urgent help with a Splunk scheduled report

I am running Splunk v7.0 and my scheduled reports were working fine. A week ago I restarted the machines (forwarder, indexer, and search head), and since then no scheduled reports are sent. I checked their status and all of them are enabled, but when I try to change the schedule it won't save: no matter what time I set, it reverts to (none). The same happens with my alerts; when I make a new one, it won't save either. I need these scheduled reports to go out. What can I do so the schedule saves instead of reverting to (none)? Please, I need some help.
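If it helps anyone diagnose, a sketch for checking whether the scheduler is attempting the report at all (the savedsearch name is a placeholder):

```
index=_internal sourcetype=scheduler savedsearch_name="My Scheduled Report"
| table _time savedsearch_name status reason
```

No scheduler events at all for the report would point at the schedule never being persisted, which matches the save problem described above.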

HTTP Error 429: Too Many Requests

I'm trying to implement the Slack add-on. Every time, "429 Too Many Requests" errors like the one below are logged in the _internal index. I don't know whether this is the cause, but I cannot get all channel messages (only some messages). Does anyone know a solution to this issue?

```
11-27-2019 16:25:58.865 +0900 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-slack/bin/slack_messages.py" HTTP Error 429: Too Many Requests
host = splunk01   source = /opt/splunk/var/log/splunk/splunkd.log   sourcetype = splunkd
```
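HTTP 429 is the standard rate-limit response from the Slack API, so the missing messages are plausibly requests dropped after throttling. A sketch to gauge how often it happens (the search terms are copied from the error above):

```
index=_internal sourcetype=splunkd "HTTP Error 429" "slack_messages.py"
| timechart span=1h count
```

A steady hourly count would suggest the input's polling interval needs to be backed off, though that interpretation is an assumption to verify against the add-on's settings.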

SNMP Modular Input errors when saving settings

Hi Splunkers,

I have a problem where the SNMP modular input raises errors when settings are saved. The error messages look like this:

1) IndexError: pop from empty list (most frequent)
2) 'NoneType' object has no attribute 'clone'
3) 'errorIndication' snmp
4) Unknown SNMP security name encountered
5) MIB file "PYSNMP-USM-MIB.py[co]" not found in search path
6) Transport (1, 3, 6, 1, 6, 1, 1) already registered snmp
7) No symbol PYSNMP-USM-MIB::pysnmpUsmSecretEntry

My setup steps were: net-snmp setup -> pysnmp-master setup -> pycrypto setup -> copy the Crypto dir into the snmp_ta app -> Splunk restart. Additionally, I built the Python packages as a normal account and installed them as root.

Environment:
- Server: production server
- Accounts: splunk and root

Let me know if you know the cause or a solution. Thank you.

Splunk platform release notes

I was going through the release note that was recently updated in Splunk Docs: https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/FixDatetimexml2020

It mentions:

> Beginning on January 1, 2020, un-patched Splunk platform instances will be unable to recognize timestamps from events where the date contains a two-digit year. This means data that meets this criteria will be indexed with incorrect timestamps.

My Splunk Enterprise version is `7.0.2`. I have checked the timestamp format of some sample events being indexed into my Splunk instances. The timestamps of the events forwarded to Splunk are correct, with four-digit years, but Splunk itself displays the timestamps of logged events with a two-digit year. Please refer to the highlighted part of the screenshot `Splunk_events_timestamp`.

Is the existing timestamp format of the events right, or do I need to modify anything to accommodate the changes effective January 1st, 2020?

[1]: /storage/temp/276647-splunk-events-timestamp.png
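A quick sanity-check sketch for whether events are being assigned plausible years at index time (the index name is a placeholder); if two-digit-year parsing were broken, misparsed events would tend to land in obviously wrong years:

```
index=your_index earliest=-24h
| eval indexed_year=strftime(_time, "%Y")
| stats count by indexed_year
```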