Channel: Questions in topic: "splunk-enterprise"

Transforms: why is nullQueue not working?

Hello, I want to discard events that contain the string "**Content**", but the following doesn't work; I still see events containing **Content** after restarting and re-indexing:

***transforms.conf***

    [allNullQueue]
    REGEX = Content
    DEST_KEY = queue
    FORMAT = nullQueue

***props.conf***

    [mysrctype]
    TRANSFORMS-setnull = allNullQueue

I tried this in a standalone environment on versions `7.0.3` and `7.1.2`. I can't figure out where the problem is coming from. Any clue? Thanks
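
For anyone comparing notes: the stanzas above look syntactically valid, so a sketch of a first check, assuming a default install path, is to confirm the config is actually loaded on the instance that parses the data (nullQueue routing only happens at index time, on the indexer or heavy forwarder, and only if `mysrctype` matches the events' actual sourcetype):

    # verify the effective config on the parsing instance (paths/names from the question)
    $SPLUNK_HOME/bin/splunk btool props list mysrctype --debug
    $SPLUNK_HOME/bin/splunk btool transforms list allNullQueue --debug

Also note that a nullQueue transform never removes already-indexed events; it only filters data indexed after the change.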

Can you help me accelerate a dataset that has streaming commands?

I am trying to accelerate a dataset I created, and it tells me I can't because it has streaming commands. I'm not sure if there is a better way to accelerate this dataset so it's faster for general searches. Here is the query that builds the dataset:

    index=netcool_noi_1 sourcetype=netcool:policylogger netcool_serial=*
    | eval unassigned="FALSE"
    | eval enriched="FALSE"
    | eval correlated="FALSE"
    | search reporting_results=*
    | rex field=reporting_results "NODE:\s+(?\S+)\s+"
    | rex field=_raw "SERVER_SERIAL\:\s+(?\d+)"
    | rex field=_raw "REPORTING RESULTS: ENRICHED WITH PARENT CIRCUIT ID FROM PLUCK:\s+(?<parentCircuitId>\S+\s+\S+\s+\S+)\s+"
    | rex field=_raw "REPORTING RESULTS: ENRICHED WITH CIRCUIT ID FROM RESOLVE MSS DATA FOR NODE:.*CIRCUIT ID:\s+(?.*)\s+RATE\s+"
    | rex field=_raw "REPORTING RESULTS: (?<testfield>\S+)\s+"
    | eval enriched=if(in("ENRICHED", testfield), "TRUE", enriched)
    | eval unassigned=if(like(reporting_results,"%UNASSIGNED%"), "TRUE", "FALSE")
    | eval correlated=if(in("CORRELATED", testfield), "TRUE", correlated)
    | transaction netcool_serial maxevents=7 keeporphans=1 keepevicted=1 mvlist=(enriched, correlated, unassigned)
    | eval unassigned=if(in("TRUE", unassigned), "TRUE", "FALSE")
    | eval enriched=if(((in("TRUE", enriched) OR (len(parentCircuitId)>=0)) AND (unassigned="FALSE")), "TRUE", "FALSE")
    | eval correlated=if(in("TRUE", correlated), "TRUE", "FALSE")
    | eval parentfound=if(len(parentCircuitId)>=0, "TRUE", "FALSE")

Any suggestions?
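
One pattern that may help, sketched under the assumption that `transaction` (a non-streaming command) is what blocks the acceleration: schedule the search and write its results to a summary index with `collect`, then point the general searches at the summary. `summary_netcool` is a hypothetical index name that would need to exist:

    index=netcool_noi_1 sourcetype=netcool:policylogger netcool_serial=*
    | ... (the rest of the pipeline above, unchanged)
    | collect index=summary_netcool

Searches against `index=summary_netcool` then skip the rex/transaction work entirely; the trade-off is that the summary only covers time ranges the scheduled search has already processed.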

Could you help me with arrays, expands and joins?

Hello. I have data in TSV format that I am indexing. Some of the fields are arrays in the format ['23458567','234523456978090','234568957078654']; if the array is empty, it's simply filled with []. When we search, we have to `join` tables, so some searches contain several joins to follow IDs through the data flow. These IDs are in the array format above, and we sed out the single quotes and the brackets to get just the values, then mvexpand and join. The problem is that the sed step drops the records that contain the empty array value [], but those are valid values as well. I tried a conditional eval with a macro, but that won't work (or is not valid). Something like:

    | eval RS=if(related_vendors == "[]", "[]", `fp_mvexpand(related_vendors)`)

This is what the macro does:

    rex mode=sed field="$arg1$" "s/[][]//g"
    | rex mode=sed field="$arg1$" "s/'//g"
    | makemv delim="," $arg1$

We do this so we can join on the array values, like:

    | `init("assessments")`
    | fields id,info_subType,related_vendors,info_severity
    | dedup id
    | `fp_mvexpand(related_vendors)`
    | eval RV = mvindex(related_vendors,0)
    | join type=left RV
        [ `init("vendors")`
          | fields id infor_name
          | rename id as RV info_name as Vendor
          | fillnull value="none" Vendor
          | dedup Vendor ]
    | stats count(Vendor) by info_subType

In this example, related_vendors in the assessments table is the same as id in the vendors table. So we strip out the brackets and single quotes, mvexpand, then mvindex and `join` to vendors. But I don't get records where related_vendors = [], and I assume it's because we stripped out the []. Any thoughts on how I could accomplish this? Thanks for all the help everyone!
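
A sketch of one possible fix, assuming the empty records drop out because [] becomes an empty (null) field after the sed step: substitute a sentinel value inside the macro before the makemv, so those rows survive the mvexpand and the left join.

    rex mode=sed field="$arg1$" "s/[][]//g"
    | rex mode=sed field="$arg1$" "s/'//g"
    | eval $arg1$=if(isnull($arg1$) OR $arg1$=="", "none", $arg1$)
    | makemv delim="," $arg1$

The "none" rows then pass through mvexpand and the left join instead of disappearing, and they line up with the existing `fillnull value="none" Vendor` on the join side.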

How do I calculate elapsed time between hours on two specific dates?

I have an Incident "Open Date" in the format DD/MM/YYYY HH:MM and an Incident "Close Date" in the same format. I want to calculate the amount of time between the two dates; however, I only want to count the hours between 09:00 and 17:00 on those dates. Can anyone advise? Thanks
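
A rough sketch of the usual eval approach, assuming hypothetical field names open_date and close_date and ignoring weekends and holidays: parse both with strptime, clamp each endpoint into its day's 09:00–17:00 window, and credit 8 hours for each full day in between.

    | eval open_epoch=strptime(open_date, "%d/%m/%Y %H:%M")
    | eval close_epoch=strptime(close_date, "%d/%m/%Y %H:%M")
    | eval open_day=relative_time(open_epoch, "@d"), close_day=relative_time(close_epoch, "@d")
    | eval open_c=min(max(open_epoch, open_day+9*3600), open_day+17*3600)
    | eval close_c=min(max(close_epoch, close_day+9*3600), close_day+17*3600)
    | eval full_days=max(round((close_day-open_day)/86400)-1, 0)
    | eval business_secs=if(open_day==close_day,
          close_c-open_c,
          (open_day+17*3600-open_c) + full_days*8*3600 + (close_c-(close_day+9*3600)))
    | eval business_hours=round(business_secs/3600, 2)

As a worked check: open 10/09 16:00 and close 11/09 10:00 gives one hour on the first day plus one hour on the second, so business_hours = 2.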

Can I use REST API without curl?

Is there a way I can make REST API calls to Splunk to run a search and return data in JSON via a web service rather than using curl? Basically, I need the HTTP URL equivalent of the below that would work when invoked via JavaScript or when put into a browser:

    curl -u usr:psd -k https://xx.xx.xx.xx:xxxxx/services/search/jobs/export \
        -d search="search index=xxx earliest=-15m latest=now "xyz123"| table c1, c2" \
        -d output_mode=json
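
A sketch of the browser form, under the assumption that GET is acceptable here: curl's -d pairs become URL-encoded query parameters, and Basic auth can come from the browser's login prompt. The xx.xx.xx.xx:xxxxx placeholders are kept from the question:

    https://xx.xx.xx.xx:xxxxx/services/search/jobs/export?output_mode=json&search=search%20index%3Dxxx%20earliest%3D-15m%20latest%3Dnow%20%22xyz123%22%20%7C%20table%20c1%2C%20c2

From JavaScript, the same URL works with fetch/XMLHttpRequest plus an Authorization header, though cross-origin calls may require enabling crossOriginSharingPolicy in server.conf on splunkd first.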

How do I add yesterday's date to an emailed report subject?

I have a scheduled report for the previous day's data that gets emailed. I'm trying to include the previous day's date in the subject line. I've tried evaluating a field ReportDate whose value is yesterday's date and then hiding the field, since I don't want it in the report. I then put $result.ReportDate$ in the subject, but this of course did not work, since that field isn't included in the results. Advice?
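
For what it's worth, $result.X$ tokens only resolve from fields present in the first row of the final results, so a sketch of one workaround is to compute the date in the search and leave the column in rather than hiding it (other_fields is a placeholder):

    ... base search ...
    | eval ReportDate=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
    | table ReportDate, other_fields

and set the email subject to something like: Daily report for $result.ReportDate$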

How do I search events in a Tree Based Structure?

I have a set of events, as follows, for a chain of SQL Server blocked processes. It's a tree-based structure. I am trying to join the data set on itself to determine which resources are blocking the most. Either result is okay, but I prefer the first one. I'm able to figure out a solution for result #2 using a `join` and searching the same data twice; however, the number of events is more than 10K, so it truncates the results. I have seen the selfjoin command in the docs but am not certain how to join between two different fields in the same data set. Does anyone know how to produce either of the results below?

**Sample Events**

    Process ID, Blocked By Process ID, Resource Name, Wait Time
    1, 0, Resource 1, 0
    2, 1, Resource 2, 15
    3, 1, Resource 3, 10
    4, 2, Resource 4, 5

**Result set 1**

    Blocker, Total Blocked Victim Time
    Resource 1, 30   <-- recursively sum the wait time
    Resource 2, 5

**Result set 2**

    Blocker, Total Blocked Victim Time
    Resource 1, 25   <-- only sum the wait time of the children (not grandchildren, etc.)
    Resource 2, 5
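
A sketch for result set 2 that avoids join's subsearch limits, assuming hypothetical field names process_id, blocked_by, resource, and wait_time: duplicate each event into a "self" row and a "victim" row, then aggregate both roles in a single stats pass.

    ... base search ...
    | fields process_id blocked_by resource wait_time
    | eval row=mvrange(0,2)
    | mvexpand row
    | eval key=if(row==0, process_id, blocked_by)
    | eval blocker_name=if(row==0, resource, null())
    | eval victim_wait=if(row==1, wait_time, null())
    | stats values(blocker_name) AS Blocker sum(victim_wait) AS "Total Blocked Victim Time" by key
    | where isnotnull(Blocker) AND 'Total Blocked Victim Time' > 0

On the sample data this yields Resource 1 = 25 and Resource 2 = 5. Result set 1 (the recursive roll-up) is harder in pure SPL; repeating a join or lookup step once per expected tree depth is the usual workaround.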

Why is my index-time extraction not working?

On my intermediate or heavy forwarders and search heads I have:

***props.conf***

    [role_extract]
    TRANSFORMS-roleextract = extract_role

***transforms.conf***

    [extract_role]
    REGEX = \D{3}\D\d{1,4}(...)\d{1,5}
    FORMAT = role::$1
    SOURCE_KEY = host
    WRITE_META = true

***fields.conf***

    [role]
    INDEXED = true

I don't get the extracted values, though, when I search for this field. I'm probably doing something incorrectly. Thanks for the help
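
One thing that stands out (a guess, assuming role_extract isn't the real sourcetype): a bare props.conf stanza name matches a sourcetype, so [role_extract] only fires for events whose sourcetype is literally "role_extract". Keying the stanza off the actual sourcetype (or a host::/source:: pattern) might look like:

    # props.conf -- "your_sourcetype" is a placeholder for the events' real sourcetype
    [your_sourcetype]
    TRANSFORMS-roleextract = extract_role

The props/transforms pair also has to live on the first full instance that parses the data (the HF tier here), with fields.conf on the search heads, and only events indexed after a restart will carry the field.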

After editing the monitoring console in a distributed system, why does the search head (SH) show as both an indexer and an SH?

I have one indexer + one SH in the monitoring console. After configuring the monitoring console for distributed mode and applying changes, the SH server shows as both an indexer and an SH. Is this expected?

TIME_FORMAT AUTO works, but a defined strptime does not in props.conf

Why is TIME_FORMAT failing when importing data? I get the error:

    Could not use strptime to parse timestamp from "INFO: Manager: list: Lis"

The raw log repeats sections like:

    Sep 06, 2018 12:00:56 AM org.apache.catalina.core.ApplicationContext log
    INFO: Manager: list: Listing contexts for virtual host 'localhost'
    Sep 06, 2018 12:01:56 AM org.apache.catalina.core.ApplicationContext log
    INFO: Manager: list: Listing contexts for virtual host 'localhost'
    Sep 06, 2018 12:02:56 AM org.apache.catalina.core.ApplicationContext log
    INFO: Manager: list: Listing contexts for virtual host 'localhost'

I am using the following values in props.conf:

    SHOULD_LINEMERGE = true
    NO_BINARY_CHECK = true
    CHARSET = UTF-8
    MAX_TIMESTAMP_LOOKAHEAD = 24
    disabled = false
    TIME_PREFIX = ^
    LINE_BREAKER = ^\w{3} \d{2}, \d{4} \d{2}:\d{2}:\d{2}
    TIME_FORMAT = %b %d, %Y %H:%M:%S %p

Splunk will not accept the **TIME_FORMAT** I have defined above, and I am not sure why.
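
A guess at the cause, for comparison: %H is the 24-hour-clock directive and doesn't combine with %p (AM/PM); a 12-hour timestamp like "Sep 06, 2018 12:00:56 AM" wants %I. Separately, LINE_BREAKER needs a capturing group marking the text to discard between events. A sketch of both fixes:

    LINE_BREAKER = ([\r\n]+)\w{3} \d{2}, \d{4} \d{2}:\d{2}:\d{2}
    SHOULD_LINEMERGE = false
    TIME_PREFIX = ^
    MAX_TIMESTAMP_LOOKAHEAD = 25
    TIME_FORMAT = %b %d, %Y %I:%M:%S %p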

Data loss in Splunk

We have indexers running in a clustered environment, with a 35-day retention policy for all app logs. We have started missing data: we now see only 10 days of old data, and the loss is continuing. Could you please suggest how to investigate this issue?
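
A sketch of a first diagnostic, assuming access from a search head (your_index is a placeholder): check the actual age span and size of the buckets, since hitting maxTotalDataSizeMB freezes the oldest buckets well before frozenTimePeriodInSecs (35 days) is reached.

    | dbinspect index=your_index
    | stats min(startEpoch) AS oldest max(endEpoch) AS newest sum(sizeOnDiskMB) AS total_mb by index
    | eval oldest=strftime(oldest, "%F %T"), newest=strftime(newest, "%F %T")

Comparing total_mb with maxTotalDataSizeMB in indexes.conf (default 500000 MB) usually shows whether size, not time, is evicting the data.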

How to implement logic when the page is loaded

When the page is loaded, the drop-down box should display the previous month by default. If it's August now, the drop-down box should default to 201807; if it's September, it should default to 201808, and so on. ![alt text][1] code xml ![alt text][2] [1]: /storage/temp/255947-loadjs.png [2]: /storage/temp/255948-codexml.png
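
A sketch in Simple XML, assuming a version of Splunk where the `<init>` block is available (7.x): compute the default token once at load time, then use it as the input's default. The token and input names are made up:

    <form>
      <init>
        <!-- previous month as YYYYMM, e.g. 201808 when it is September 2018 -->
        <eval token="default_month">strftime(relative_time(now(), "-1mon@mon"), "%Y%m")</eval>
      </init>
      <fieldset>
        <input type="dropdown" token="month">
          <label>Month</label>
          <default>$default_month$</default>
        </input>
      </fieldset>
    </form>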

em_send_email error

Splunk 7.1.3, Debian 9, Linux x86_64. I set up the mail configuration within the app; I tried TLS and got the same error, and tried None, but that option doesn't seem to allow an empty login/password.

    09-07-2018 21:47:01.042 -0700 INFO  sendmodalert - action=em_send_email STDERR -  custom alert action em_write_alerts triggered, search_name = xxx-cpu.system-avg
    09-07-2018 21:47:01.187 -0700 ERROR sendmodalert - action=em_send_email STDERR -  u'None'
    09-07-2018 21:47:01.195 -0700 INFO  sendmodalert - action=em_send_email - Alert action script completed in duration=333 ms with exit code=3
    09-07-2018 21:47:01.195 -0700 WARN  sendmodalert - action=em_send_email - Alert action script returned error code=3
    09-07-2018 21:47:01.195 -0700 ERROR sendmodalert - Error in 'sendalert' command: Alert script returned error code 3.
    09-07-2018 21:47:01.195 -0700 ERROR SearchScheduler - Error in 'sendalert' command: Alert script returned error code 3., search='sendalert em_send_email results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__admin_c3BsdW5rX2FwcF9pbmZyYXN0cnVjdHVyZQ__RMD5006eed26b1e40ef1_at_1536382020_20/results.csv.gz" results_link="https://sea-cpu-036:8443/app/splunk_app_infrastructure/@go?sid=scheduler__admin_c3BsdW5rX2FwcF9pbmZyYXN0cnVjdHVyZQ__RMD5006eed26b1e40ef1_at_1536382020_20"'

App Packager - Exception: , Value: list index out of range

Hi Team, I'm attempting to package an app for an internal deployment. I run the below command:

    ./splunk package app /Applications/Splunk/etc/apps//

I am then met with the below error:

    An unforeseen error occurred:
    Exception: , Value: list index out of range
    Traceback (most recent call last):
      File "/Applications/Splunk/lib/python2.7/site-packages/splunk/clilib/cli.py", line 1145, in main
        parseAndRun(argsList)
      File "/Applications/Splunk/lib/python2.7/site-packages/splunk/clilib/cli.py", line 943, in parseAndRun
        retVal = makeRestCall(cmd=command, obj=subCmd, restArgList=objUnicode(argList), sessionKey=authInfo, timeout=timeout)
      File "/Applications/Splunk/lib/python2.7/site-packages/splunk/rcUtils.py", line 666, in makeRestCall
        DISPLAY_CHARS[endpoint](cmd=cmd, obj=obj, type=etype, serverResponse=serverResponse, serverContent=serverContent, sessionKey=sessionKey, eaiArgsList=eaiArgsList)
      File "/Applications/Splunk/lib/python2.7/site-packages/splunk/rcDisplay.py", line 187, in wrapper
        return f(**kwargs)
      File "/Applications/Splunk/lib/python2.7/site-packages/splunk/rcDisplay.py", line 165, in wrapper
        return f(**kwargs)
      File "/Applications/Splunk/lib/python2.7/site-packages/splunk/rcDisplay.py", line 706, in displayApp
        d = nodeToPrimitive(atomFeed[0].rawcontents)
      File "/Applications/Splunk/lib/python2.7/site-packages/splunk/rest/format.py", line 382, in __getitem__
        return self.entries.__getitem__(idx)
    IndexError: list index out of range

    Please file a case online at http://www.splunk.com/page/submit_issue

This is a particularly enraging issue, as this worked perfectly fine yesterday, stopped working today, and still does not work after a fresh install of Splunk Enterprise in my dev environment. Any suggestions?

How to configure Splunk logging for Java to work with the JBoss log manager?

We are trying to configure the Splunk HTTP Event Collector as a logging handler for Red Hat AMQ, which uses JBoss logging. The JBoss log manager extends the java.util.logging LogManager, but the HTTP Event Collector handler does not see the logging configuration properties.
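
For reference, a sketch of the java.util.logging wiring that Splunk's Java logging library (splunk-library-javalogging) expects; whether the JBoss log manager actually reads this from its own logging.properties is exactly the open question, and the URL/token values are placeholders:

    handlers = com.splunk.logging.HttpEventCollectorLoggingHandler
    com.splunk.logging.HttpEventCollectorLoggingHandler.url = https://splunk-host:8088
    com.splunk.logging.HttpEventCollectorLoggingHandler.token = YOUR-HEC-TOKEN
    com.splunk.logging.HttpEventCollectorLoggingHandler.batch_size_count = 10

The splunk-library-javalogging jar would also need to be visible to the JBoss boot classloader for the handler class to resolve.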

How to calculate the duration of overlapping events from multiple services

I have been working on this for quite some time and it appears I am just going in circles; maybe some Splunk savant will be able to work the kinks out. I have a set of normalized data which contains starttime, endtime, AppName, InstanceName, Type, EventName, and duration. My data looks like this:

    1535432400, 1535432700, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 300,1535436019_0
    1535443200, 1535443500, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 300,1535446818_0
    1535446800, 1535447100, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 300,1535450417_0
    1535447730, 1535448030, App4, alvelca01, 1, App4 Doc_Admin_Prod High PurePath Response Time, 300,1535641220_4
    1535468400, 1535469000, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 600,1535472019_0
    1535471219, 1535474819, App2, ualbuacwas6, 1, App2 Online - High Active Thread Count, 3600,1535472017_0
    1535471219, 1535474819, App2, ualbuacwas5, 1, App2 Online - High Active Thread Count, 3600,1535472017_0
    1535471269, 1535474869, App2, ualbuacwas7, 1, App2 Online - High Active Thread Count, 3600,1535472017_0
    1535471319, 1535474919, App2, ualbuacwas6, 1, High App2 WCAX JDBC Pool Percent Usage, 3600,1535472017_1
    1535471319, 1535471449, App2, ualbuacwas7, 1, High App2 WCAX JDBC Pool Percent Usage, 130,1535472017_1
    1535479849, 1535483449, App2, ualbuacwas5, 1, High App2 JDBC Pool Percent Usage, 3600,1535482816_1
    1535481100, 1535481103, App3, ip-10-14-6-210.ec2.internal, 1, Application Process Unavailable (unexpected), 3,1535482817_0
    1535481100, 1535481107, App3, ip-10-14-6-44.ec2.internal, 1, Application Process Unavailable (unexpected), 7,1535482817_1
    1535481164, 1535481165, App4, alvelcw01, 1, Application Process Unavailable (unexpected), 1,1535641220_3
    1535481348, 1535484948, App2, ualbuacwas8, 1, App2 Online - Hung Threads, 3600,1535482816_2
    1535481348, 1535484948, App2, ualbuacwas7, 1, App2 Online - Hung Threads, 3600,1535482816_2
    1535481348, 1535484948, App2, ualbuacwas6, 1, App2 Online - Hung Threads, 3600,1535482816_2
    1535512218, 1535512288, App2, ualbuacwas5, 1, Application Process Unavailable (unexpected), 70,1535515215_0

I have tried to use concurrency with transaction:

    base search ....
    | concurrency start=stime duration=duration output=overlay
    | table _time Service EventName duration overlay

The concurrency command is not splitting the Services out. But now that I've looked at it, it shouldn't: it calculates the concurrency across all overlaps, not per-Service overlaps. **What I am looking for is the durations of the overlaps by Service**, a lot like what the Timeline visualization does. Example: the first event for App1 starts at 10:30am and its duration is 300 seconds; the next event for App1 starts at 10:32 for 300 seconds, etc. I want the Service's total duration of events from the first overlapping event to the last. To throw a wrench into the mix, some events for a service do not overlap, and those have to be measured individually. Any help at this point would be a bonus. Thanks in advance.
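
A sketch of the per-service merged-interval approach, assuming hypothetical field names stime/etime and that Service corresponds to the AppName column: sort by service and start time, start a new chain whenever an event begins after every earlier event in that service has ended, then measure each chain from first start to last end. Isolated events simply become chains of one.

    base search ...
    | sort 0 Service stime
    | streamstats current=f max(etime) AS prev_end by Service
    | eval new_chain=if(isnull(prev_end) OR stime > prev_end, 1, 0)
    | streamstats sum(new_chain) AS chain_id by Service
    | stats min(stime) AS chain_start max(etime) AS chain_end count AS events by Service chain_id
    | eval chain_duration=chain_end-chain_start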

How to change the title name when selecting data in the drop-down box

When selecting data in the drop-down box, the panel title should change to match. When 201808 is selected, the title should be "8 month of project pfee"; when 201807 is chosen, the title should be "7 month of project pfee", and so on. ![alt text][1] [1]: /storage/temp/255955-dongtai-titilename.png
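
A sketch in Simple XML, assuming a dropdown whose token month holds values like 201808: derive the month number in a `<change>` handler and reference it in the panel title. All names are made up:

    <input type="dropdown" token="month">
      <label>Month</label>
      <change>
        <!-- "201808" -> 8: take characters 5-6 and strip the leading zero -->
        <eval token="title_month">tonumber(substr($value$, 5, 2))</eval>
      </change>
    </input>
    ...
    <panel>
      <title>$title_month$ month of project pfee</title>
    </panel>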

Help me with an error in the Windows Infrastructure app's Group dashboards?

All, I am setting up the Splunk App for Windows Infrastructure. The dashboards I expect to work are working. However, the Group Audit >> Full Group Membership dashboard is throwing this error:

    External search command 'ldapgroup' returned error code 1. Script output = "error_message=Missing required value for alternatedomain in ldap/default. "

So far no other dashboards are having problems. I reviewed my SA-ldapsearch app; here is my ldap.conf config:

    # ldap.conf
    [somedomain.com]
    alternatedomain = SOMEDOMANI
    basedn = DC=somedomain,DC=com
    binddn = somedomain\SvcSplunkLDAP
    port = 389
    server = awesomeserver01
    ssl = 0

Any ideas here?
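
A guess worth testing, since the error mentions ldap/default rather than the [somedomain.com] stanza: SA-ldapsearch falls back to the default stanza when the search doesn't resolve a domain, so either pass domain=somedomain.com to the ldapgroup call or give the default stanza the required values (a sketch mirroring the stanza above; note the alternatedomain value as pasted looks misspelled):

    [default]
    alternatedomain = SOMEDOMAIN
    basedn = DC=somedomain,DC=com
    server = awesomeserver01
    port = 389
    ssl = 0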

Unable to see data for Asuswrt-Merlin WRT Firmware

Hey guys, I have installed this app in Splunk running in Docker. I have TCP port 1514 listening for data, and from my ASUSWRT router I have logs sent to SplunkIP:1514 UDP; they show up in search for index="tomato". On the router I also ... But the app is still not showing any data. I edited the first panel (System Monitoring) source and changed the IP to 192.168.1.50 (instead of 1.1), as that is the router IP. What else am I missing? New to Splunk.

ERROR SearchStatusEnforcer - sid:1536453655.14705 Search auto-canceled

Hello. Splunk 7.1.3, Linux x86_64. One of my custom (SCPv1) commands errors out when the number of events returned exceeds 20,000–30,000 (the threshold changes slightly between runs; there is no problem when count(events) < 10,000). This is the associated suspicious snippet from search.log:

    09-08-2018 17:40:55.446 ERROR ScriptRunner - stderr from 'xxx':  INFO Running /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/bin/python xxx
    09-08-2018 17:40:56.247 INFO  ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
    09-08-2018 17:40:56.247 INFO  DispatchExecutor - User applied action=CANCEL while status=0
    09-08-2018 17:40:56.247 ERROR SearchStatusEnforcer - sid:1536453655.14705 Search auto-canceled
    09-08-2018 17:40:56.247 INFO  SearchStatusEnforcer - State changed to FAILED due to: Search auto-canceled
    09-08-2018 17:40:56.255 INFO  ReducePhaseExecutor - Ending phase_1
    09-08-2018 17:40:56.255 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.255 ERROR SearchOrchestrator - Phase_1 failed due to : DAG Execution Exception: Search has been cancelled
    09-08-2018 17:40:56.255 INFO  ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
    09-08-2018 17:40:56.255 INFO  DispatchExecutor - User applied action=CANCEL while status=3
    09-08-2018 17:40:56.255 INFO  DispatchManager - DispatchManager::dispatchHasFinished(id='1536453655.14705', username='admin')
    09-08-2018 17:40:56.256 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.261 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.261 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.261 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.261 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.261 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.261 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.261 WARN  SearchResultWorkUnit - timed out, sending keepalive nConsecutiveKeepalive=0 currentSetStart=0.000000
    09-08-2018 17:40:56.261 WARN  LocalCollector - Local Collector Orchestrator terminating, writing to the collection manager failed.
    09-08-2018 17:40:56.263 INFO  UserManager - Unwound user context: NULL -> NULL
    09-08-2018 17:40:56.263 WARN  ScriptRunner - Killing script, probably timed out, grace=0sec, script="xxx"
    09-08-2018 17:40:56.265 INFO  UserManager - Unwound user context: NULL -> NULL

Note: I've obfuscated the script name in the log above. My questions: what conditions must arise for a search to be auto-canceled, what is a DAG Execution Exception, and is there a known workaround? Thank you.
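
In case it helps others reproduce: with legacy (v1) protocol commands, splunkd invokes the script in chunks capped by maxinputs in commands.conf (default 50000), and a script that stalls past the keepalive window gets killed, much like the "Killing script, probably timed out" line above. A sketch of settings sometimes involved (stanza name xxx matches the obfuscated command; whether these resolve this particular case is untested):

    [xxx]
    filename = xxx.py
    # cap on events passed per invocation for v1 commands; 0 means no limit
    maxinputs = 0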