Channel: Questions in topic: "splunk-enterprise"

Extraction of Stacktrace

Hi, out of the sample log below I would like to extract the following information: 1. the number of "Caused by" errors (count: 3); 2. for each "Caused by", the error class, e.g. org.apache.camel.TypeConversionException; 3. the "Caused by" error description, e.g. Error during type conversion from type: java.lang.String to the required type: ...some more text. Stacktrace: org.springframework.jms.InvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'DUMMY'.; nested exception is com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'DUMMY'. JMS attempted to perform an MQOPEN, but WebSphere MQ reported an error. Use the linked exception to determine the cause of this error. Check that the specified queue and queue manager are defined correctly.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2085' ('MQRC_UNKNOWN_OBJECT_NAME'). at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:285) at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:168) at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:469) at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.send(JmsConfiguration.java:245) at org.apache.camel.component.jms.JmsProducer.doSend(JmsProducer.java:413) at org.apache.camel.component.jms.JmsProducer.processInOnly(JmsProducer.java:367) at org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:153) at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141) at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77) at appname.connectivity.core.cdi.monitoring.NodeEventProcessor.process(NodeEventProcessor.java:71) at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:91) at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109) at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:337) at org.apache.camel.processor.DefaultErrorHandler.process(DefaultErrorHandler.java:59) at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165) at org.apache.camel.processor.Pipeline.process(Pipeline.java:121) at org.apache.camel.processor.Pipeline.process(Pipeline.java:83) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109) at org.apache.camel.processor.Pipeline.process(Pipeline.java:63) at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165) at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:62) at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141) at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77) at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:91) at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109) at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:337) at org.apache.camel.processor.DefaultErrorHandler.process(DefaultErrorHandler.java:59) at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165) at 
org.apache.camel.processor.Pipeline.process(Pipeline.java:121) at org.apache.camel.processor.Pipeline.process(Pipeline.java:83) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109) at org.apache.camel.processor.Pipeline.process(Pipeline.java:63) at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165) at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109) at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:87) at com.appname.bpc.connectivity.camel.broker.MxBrokerConsumer.processMessage(MxBrokerConsumer.java:200) at com.appname.bpc.connectivity.camel.broker.MxBrokerConsumer.onMessage(MxBrokerConsumer.java:128) at appname.bos.client.jms.BOSMessageListener.onMessage(BOSMessageListener.java:37) at org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:1361) at org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:131) at org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:202) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'DUMMY'. JMS attempted to perform an MQOPEN, but WebSphere MQ reported an error. Use the linked exception to determine the cause of this error. Check that the specified queue and queue manager are defined correctly. 
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:503) at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:221) at com.ibm.msg.client.wmq.internal.WMQMessageProducer.checkJmqiCallSuccess(WMQMessageProducer.java:1061) at com.ibm.msg.client.wmq.internal.WMQMessageProducer.checkJmqiCallSuccess(WMQMessageProducer.java:1019) at com.ibm.msg.client.wmq.internal.WMQMessageProducer.access$800(WMQMessageProducer.java:68) at com.ibm.msg.client.wmq.internal.WMQMessageProducer$SpiIdentifiedProducerShadow.initialise(WMQMessageProducer.java:765) at com.ibm.msg.client.wmq.internal.WMQMessageProducer.(WMQMessageProducer.java:995) at com.ibm.msg.client.wmq.internal.WMQSession.createProducer(WMQSession.java:886) at com.ibm.msg.client.jms.internal.JmsSessionImpl.createProducer(JmsSessionImpl.java:1232) at com.ibm.msg.client.jms.internal.JmsQueueSessionImpl.createSender(JmsQueueSessionImpl.java:136) at com.ibm.mq.jms.MQQueueSession.createSender(MQQueueSession.java:153) at com.ibm.mq.jms.MQQueueSession.createProducer(MQQueueSession.java:254) at org.springframework.jms.connection.CachingConnectionFactory$CachedSessionInvocationHandler.getCachedProducer(CachingConnectionFactory.java:371) at org.springframework.jms.connection.CachingConnectionFactory$CachedSessionInvocationHandler.invoke(CachingConnectionFactory.java:329) at com.sun.proxy.$Proxy130.createProducer(Unknown Source) at org.springframework.jms.core.JmsTemplate.doCreateProducer(JmsTemplate.java:971) at org.springframework.jms.core.JmsTemplate.createProducer(JmsTemplate.java:952) at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.doSendToDestination(JmsConfiguration.java:288) at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.access$100(JmsConfiguration.java:234) at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate$1.doInJms(JmsConfiguration.java:248) at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:466) ... 45 more Caused by: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2085' ('MQRC_UNKNOWN_OBJECT_NAME'). at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:209)
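A sketch of one way to pull these out with rex, assuming each full stack trace is indexed as a single event (the index/sourcetype names and the field names caused_by_class and caused_by_desc are placeholders I made up):

index=your_index sourcetype=your_sourcetype "Caused by:"
| rex max_match=0 "Caused by:\s+(?<caused_by_class>[\w\.\$]+):\s+(?<caused_by_desc>.+?)(?=\s+at\s+[\w\.\$]+\()"
| eval caused_by_count=mvcount(caused_by_class)
| table caused_by_count, caused_by_class, caused_by_desc

mvcount gives the per-event count of "Caused by" entries, and the two multivalue fields line up positionally (first class with first description, and so on).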

Forescout TA & App configuration using a 3rd party syslog server.

Current setup: ForeScout is currently sending syslog data to a Kiwi syslog server. Splunk is monitoring the file and pulls it in successfully. Can I modify the Forescout-TA and Forescout App to read the data and perform the field extractions? At this time, we are not looking to use adaptive response or to configure policies in Forescout to send to Splunk. I simply want to see the data and have the fields extracted correctly. I modified the inputs.conf to align with what I think the props.conf is looking for, and I included the sourcetype and the index 'fsctcenter' I created:

# ForeScout CounterACT feed
[monitor://E:\syslog\counteract\*\*.txt]
ignoreOlderThan = 7d
sourcetype = fsctcenter_avp
index = fsctcenter
host_segment = 3
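If the TA's extractions are keyed off the sourcetype, assigning fsctcenter_avp in the monitor stanza as above should be enough for them to apply at search time. One way to confirm what the TA actually defines for that sourcetype is btool on the search head where the TA is installed, for example:

./splunk btool props list fsctcenter_avp --debug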

How to use a field other than _time to group events based on a desired time interval (e.g. 1 week)

I'm working with ServiceNow incident logs and I'm trying to group events weekly, based on their final state in the week. I pulled them from the beginning of the year, and I did this starting about a month ago, so _time is pretty skewed. I believe the field I want is "sys_updated_on". I want a line graph for each incident's state, grouped by when it was last updated ("sys_updated_on"). This is how the search looks right now:

index=servicenow sourcetype=snow:incident incident_state=* | dedup sys_id | timechart span=7d count(sys_id)

I was looking at this one article (had to modify the URL since I don't have enough karma yet to post a URL), but I don't understand the syntax of how to use chart to do it. splunkforums/answers/9730/using-a-different-time-base-on-timechart Thanks, Brandon
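A common approach is to overwrite _time with the field you actually want to bucket on, then let timechart do the weekly grouping. A sketch (the strptime format string is an assumption about how sys_updated_on is stored, and dedup keeping the latest update per incident is only an approximation of "final state in the week"):

index=servicenow sourcetype=snow:incident incident_state=*
| eval _time=strptime(sys_updated_on, "%Y-%m-%d %H:%M:%S")
| dedup sys_id sortby -_time
| timechart span=7d count by incident_state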

How can we find out how much data we lost during a Splunk indexer cluster rebuild?

Hi, is there any way to find out how much data we lost while one of the Splunk indexer cluster hosts was being rebuilt?
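There is no single field that records "lost data", but one rough sketch is to compare daily event counts per index across the rebuild window against the surrounding days and look for dips (adjust the time range and span to bracket the rebuild):

| tstats count where index=* by index, _time span=1d

If the cluster met its replication and search factors throughout, the peer rebuild itself should not normally lose data; gaps are more likely to show up on the forwarding/ingest side.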

How can I capture the output of custom alert action scripts?

If I create a custom alert action script, normally the output sent to stderr is logged by Splunk. But if I use the `alert.execute.cmd` option, this output is not logged. Is there a way to capture the output of these custom scripts?
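One workaround sketch (not an official setting): point alert.execute.cmd at a small wrapper that runs the real script and appends its stdout/stderr to a log file of your choosing, so the output survives even when splunkd does not capture it. For example, in alert_actions.conf (the stanza name and wrapper filename are placeholders):

# alert_actions.conf
[my_custom_action]
alert.execute.cmd = run_and_log.sh
# run_and_log.sh simply runs the real script with the same arguments and redirects, e.g.
#   exec /path/to/real_script.sh "$@" >> $SPLUNK_HOME/var/log/splunk/my_custom_action.log 2>&1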

How can I compare the time on our server against the actual current time?

Hi, is there a way to find the current time on the Windows hosts (with the UF installed) and compare it with the actual current time? I need to find the time variances in the Windows environment.
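One rough sketch, assuming the forwarders send their own _internal logs: compare each event's timestamp (written with the forwarder's clock) against _indextime (written with the indexer's clock). The difference mixes clock skew with forwarding delay, so treat it only as an indicator of hosts that are badly out of sync:

index=_internal host=* earliest=-15m
| eval lag_sec = _indextime - _time
| stats avg(lag_sec) as avg_lag_sec, max(lag_sec) as max_lag_sec by host
| sort - avg_lag_sec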

Parse field from JSON logs and build a stats table with data

Hi all, very close with the offerings in other JSON/spath posts but just not getting it done. We have a JSON-formatted log coming into Splunk that gives a ton of data on our servers, one field being a 'metal' field that we classify our systems by. We'd like to parse that values.metal field and build a stats table that shows how many systems are in each metal. The current search (which isn't working well) is:

index=unix source="/var/log/facts/*" metal | stats distinct_count(host) by values.metal

Here's some of the JSON file:

{ "name": "toritsgitvlp01.xx.com", "values": { "aio_agent_build": "1.7.2", "aio_agent_version": "1.7.2", "architecture": "x86_64", "augeas": { "version": "1.4.0" }, ...... }, "memoryfree": "6.76 GiB", "memoryfree_mb": 6918.28125, "memorysize": "7.63 GiB", "memorysize_mb": 7815.03125, "metal": [ "dirt" ], .......

Any help MUCH appreciated.
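A sketch using spath, assuming the whole JSON document is one event and the field is extracted as values.metal (if metal actually sits at the top level of the JSON rather than under values, drop the values. prefix from the path):

index=unix source="/var/log/facts/*"
| spath output=metal path=values.metal{}
| mvexpand metal
| stats dc(host) as system_count by metal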

How can I find out how much volume hosts are sending to my "main" index?

I need to find how much volume hosts are sending to my "main" index. The search below queries the internal index, and I'm not seeing the hosts that I need. If I search a specific host under the main index, the host is there and actively sending data to the indexer. I've tried modifying the search from index="_internal" to index="main", and it doesn't report anything back.

From:
index="_internal" source="*metrics.log" group="per_host_thruput" | chart sum(kb) by series | sort - sum(kb)

To:
index="main" source="WMI:WinEventLog:Security" | chart sum(kb) by series | sort - sum(kb)

But with only:
index="main" source="WMI:WinEventLog:Security"
it brings back 2710 results from today. I have hosts that are sending to this index, and I need to be able to tell how much data they're sending, but the internal index isn't showing them for some reason....
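One sketch that measures the volume directly from the events rather than from metrics.log (len(_raw) approximates indexed bytes, not exactly the licensed volume):

index="main" source="WMI:WinEventLog:Security"
| eval bytes=len(_raw)
| stats sum(bytes) as total_bytes by host
| eval total_mb=round(total_bytes/1024/1024, 2)
| sort - total_mb

Also note that per_host_thruput in metrics.log only reports the top series per sampling interval, so lower-volume hosts can legitimately be missing from the _internal search.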

Is there a way to zoom in on a scatter plot visualization?

Hello all, I have a scatter plot visualization and I am trying to zoom in on it using the mouse cursor, but it's not happening. If I build the same visualization as a bar chart, I can zoom. Is it the case that a scatter plot in Splunk cannot be zoomed? ![alt text][1] Can someone please let me know how I can zoom in on the scatter plot? Regards, Shailendra Patil [1]: /storage/temp/217674-screen-shot-2017-09-27-at-120952-pm.png

How do you increase retention time of Splunk monitoring console reports?

How do I increase the retention time of the Splunk monitoring console reports in a distributed environment?
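If the reports you mean are the monitoring console dashboards, most of them are driven by the _internal, _introspection, and _audit indexes, so their lookback is bounded by those indexes' retention on the indexers. A sketch of raising it in indexes.conf (the 30-day value is only an example, and longer retention costs disk on every indexer):

# indexes.conf on the indexers
[_introspection]
frozenTimePeriodInSecs = 2592000

[_internal]
frozenTimePeriodInSecs = 2592000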

TA-prtg: How do I add the API on PRTG to the prtg.conf file?

Hello, Using https://splunkbase.splunk.com/app/3282/ TA-prtg, I'm specifically trying to get the API to work in Splunk. I have all of our index servers loaded with the app. I have a user built on both sides, and I have the prtg.conf file configured with port 443 as well. I think it needs to specifically gather from the "live data" API on PRTG? Maybe not. But what do I add from the api on prtg to the prtg.conf file (or maybe the searchbnf.conf file?) to make that connection? Any help would be appreciated. Thanks!

Error while sending email using AWS SES

I have an AWS SES email service configured in Splunk with TLS enabled. When I try to test whether the email configuration is working, I get the error below:

* | top 5 host | sendemail to="user@test.com"

`ERROR`: command="sendemail", 'NoneType' object has no attribute 'find' while sending mail to: user@test.com

Your help is appreciated.
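One thing worth trying while narrowing this down (the from address and SES SMTP endpoint below are placeholders): pass the mail settings explicitly on the search, since unset sendemail options in the email configuration are a common source of this kind of traceback:

* | top 5 host
| sendemail to="user@test.com" from="splunk@yourdomain.com" server="email-smtp.us-east-1.amazonaws.com" use_tls=true subject="SES test"

If the explicit version works, compare those values against Settings > Server settings > Email settings.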

Can I create a field with a predefined value to append to results in a Splunk search?

I am trying to include something in my query like this:

index=* domain=acbd_1 earliest=-16m@m latest=-1m@m | bin _time span=15m | stats avg(responstime) by domain | stats values(avg(responsetime)) as avg_res_time by _time, domain | eval ts_time=_time * 1000 | where avg_res_time > 2 | top limit=1 avg_res_time by domain, ts_time | table ts_time, domain, avg_res_time, channel, lob

I want the display to be like this:

ts_time         domain   avg_res_time   channel   lob
1506542400000   abcd_1   120.83         dot       Clear
1506542600000   abcd_1   82.11          dot       Clear
1506563400000   acbd_1   9              dot       Clear

I want the result as shown in the table above: ts_time, domain, and avg_res_time are extracted from the data we have, and I am trying to add the "channel" and "lob" fields with the values "dot" and "Clear" to my Splunk result table from within the query. In other words, I want to predefine the channel and lob values in the query and display them in the table. How can I achieve this? I'd appreciate help as soon as possible, please.
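A sketch of the usual approach: define the constant-valued fields with eval just before the table command (the literal values here are the ones from the question):

... your existing search ...
| eval channel="dot", lob="Clear"
| table ts_time, domain, avg_res_time, channel, lob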

Help extracting a field from raw data and generating a count

For a simple query:

index=app_au ms.ab=true

I have a raw output of:

{"dtm":"2017-09-27 10:44:42.389 PDT", "logger":"audit.com.foo.store.RequestAuditLog", "app":{"p":8523,"a":"WebNav","e":"prod.live.txn","h":"rn2-rosp-pr02-lweb04.fno.foo.com","dc":"fno"}, "msg":{"ab":true,"forwApp":"entry","resTime":12,"dx":1,"mc":{"s":"consumer","gp":"ww.emea.de","gc":"DEU"},"reqHost":"secure.foo.com","resStatus":"503","forwUrl":"urls-entry.loginJSON","d":"0ef7e2b2-f0f2-4a3e-9098-6812d9546b1b","ip":"92.211.19.113","reqPat":"///login/sign_in","reqApp":"entry","r":"c461b663-7102-4431-a0fc-fff7c472b748","t":1506534282377,"sampleWeight":1.0,"reqUrl":"urls-entry.loginJSON"}}

I need to extract the ip field and get a list of IPs with counts. Please help. Thanks, Vik
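A sketch using spath (the path msg.ip matches the sample event above; if your field extractions name the object ms rather than msg, as the ms.ab filter suggests, use ms.ip instead; client_ip is just a name I chose):

index=app_au ms.ab=true
| spath output=client_ip path=msg.ip
| stats count by client_ip
| sort - count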

How do I resolve this message: "maximum number of concurrent auto-summarization searches on this instance has been reached"

The below searches appear on my Skip Ratio report with the following messages: "The maximum number of concurrent historical scheduled searches on this instance has been reached" and "The maximum number of concurrent auto-summarization searches on this instance has been reached." I cannot locate these searches under the app they seem to belong to, nor am I finding them in Data Models. Any suggestions on how to terminate these searches? Thank you.

_ACCELERATE_A3F1133B-692A-49B4-98B0-C6FC50DFB20D_splunk_app_stream_nobody_615f5f04b93533e7_ACCELERATE_
_ACCELERATE_DM_DA-ESS-ThreatIntelligence_Threat_Intelligence_ACCELERATE_
_ACCELERATE_DM_DellNetworking_Dell_Events_ACCELERATE_
_ACCELERATE_DM_SA-NetworkProtection_Domain_Analysis_ACCELERATE_
_ACCELERATE_DM_SA-ThreatIntelligence_Incident_Management_ACCELERATE_
_ACCELERATE_DM_SA-ThreatIntelligence_Risk_ACCELERATE_
_ACCELERATE_DM_SA-UEBA_UEBA_ACCELERATE_
_ACCELERATE_DM_cisco_ios_Cisco_IOS_Event_ACCELERATE_
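These _ACCELERATE_* searches are the scheduled jobs that keep data model and report accelerations up to date, so they live under the acceleration settings rather than as ordinary saved searches. Two common levers, as a sketch (the values are examples, not recommendations): disable acceleration for data models you do not need, or give the scheduler more room for auto-summarization in limits.conf:

# limits.conf
[scheduler]
max_searches_perc = 60
auto_summary_perc = 60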

Only include certain rows in appendcols - need help building search

So I am trying to convert some of my searches from joins to appendcols to improve performance, but I am running into some problems. (I can't figure out how to create a table in this question, so just read the first row like Field=A, Baseline=100, Week1=103, Week2=105.)

Field   Baseline   Week1   Week2
A       100        103     105
B       50         54      56
C       20

Originally I was using join to pull in Baseline, because Baseline is based on historical trends while Week1 and Week2 are the most recent weeks. To improve performance I want to change to something else, but when I use appendcols, field value C is pulled in based on the historical baseline even though it has no values in Week1 and Week2. I only want to display the baseline for A and B because they have data in Week1/Week2. Is there any way to control this with appendcols? This is a very simplified example, so I need a somewhat generic solution or idea. Thanks for the help!
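A sketch using the field names from the example above: since appendcols just pastes columns onto the existing rows, you can filter the unwanted rows afterwards. Note that appendcols aligns rows purely by position, so both sides must return rows in the same order for Field to line up (otherwise a single stats by Field over both datasets is the safer pattern):

... weekly search producing Field, Week1, Week2 ...
| appendcols [ search ... baseline search producing Field, Baseline ... ]
| where isnotnull(Week1) OR isnotnull(Week2)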

Cisco CPS and Splunk integration

Dear all, may I know if anyone has been able to successfully integrate Cisco CPS with Splunk? To my knowledge, the logs are written to a MongoDB database.

Help with formatting my XML checkbox

I want to be able to click on a piece of text that acts as a checkbox, and once clicked it will pass a token to the panel below and therefore display that panel. I have managed to do it using a checkbox (below), but I don't like the way it appears versus the H1 heading option; the H1 heading below just looks better. Can this be done? I am just using Simple XML, so maybe it can be done in HTML.

SMS

The code above creates an H1 text in my XML dashboard, shown below in the pic. The code above is the checkbox, also shown below in the pic. ![alt text][1] [1]: /storage/temp/217678-input-tick-box.png
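A sketch of one Simple XML pattern for the checkbox-driven panel (the token names, the label, and the panel contents are all made up): a single-choice checkbox whose change handler sets or unsets a token, and a panel whose depends attribute only shows it while the token is set.

<form>
  <fieldset submitButton="false">
    <input type="checkbox" token="sms_choice">
      <label>SMS</label>
      <choice value="show">Show SMS panel</choice>
      <change>
        <condition value="show">
          <set token="show_sms">true</set>
        </condition>
        <condition>
          <unset token="show_sms"></unset>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$show_sms$">
      <title>SMS</title>
      <html>
        <p>Panel content goes here.</p>
      </html>
    </panel>
  </row>
</form>

Styling the checkbox to look like an H1 would need a bit of CSS or an HTML panel on top of this, which is beyond plain Simple XML.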

How to Join entries for a summary index

I have two indexes that I want to create a summary from every hour.

index1: request_type, request_guid, request_timestamp, meta_field1, meta_field2, ...
index1 contains log entries from each processing step in each service request. Each service request is assigned a unique request_guid, and all ~10 logs for the processing of a request carry that id. The time the request was made is stored in request_timestamp and also remains the same through all logs for a request.

index2: request_guid, meta_fieldA, meta_fieldB, ...
index2 contains more data for the logs, but is in a separate index so that it can be secured differently from index1. The request_guid is the same value as in index1.

I want to summarize by collecting stats for each request type by hour. The approach I have taken is to select all the logs from index1 where the request_timestamp is in the hour. I cannot use the log time directly, as a request's logs might span into the next hour (as in, started at 9:59:59 and ended at 10:00:01).

index=index1 earliest=0 | addinfo | eval timemillis=strftime(strptime(request_timestamp,"%Y-%m-%dT%H:%M:%S.%3N%z"),"%s") | where timemillis>=info_min_time AND timemillis<=info_max_time
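A sketch of a join-free way to combine the two indexes per request before summarizing (the 2-hour search window, the meta field names, and the summary index name are assumptions; the idea is that stats by request_guid over both indexes merges the rows that join would have matched):

index=index1 OR index=index2 earliest=-2h@h
| stats values(request_type) as request_type, min(request_timestamp) as request_timestamp, values(meta_field1) as meta_field1, values(meta_fieldA) as meta_fieldA by request_guid
| eval request_epoch=strptime(request_timestamp, "%Y-%m-%dT%H:%M:%S.%3N%z")
| where request_epoch>=relative_time(now(), "-1h@h") AND request_epoch<relative_time(now(), "@h")
| stats count by request_type
| collect index=hourly_request_summary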

Splunk Universal Forwarder TCPOUT Cutting Events in Transit

I have a UF that is monitoring 5 rather large (200 MB to 12 GB) files and then sending the uncooked data via TCPOUT to an rsyslog server. However, it appears that some of the events are getting split randomly. I suspect it's due to the AutoLB function, but I want to ask here before I resort to sending additional tens of GBs of data per hour to a single server.

Inputs on UF:
[batch:///output/file.txt]
move_policy = sinkhole
crcSalt =
_TCP_ROUTING = senddata

Outputs on UF:
[tcpout]
[tcpout:senddata]
server=1.1.1.1:515, 1.1.1.2:515, 1.1.1.3:515, 1.1.1.4:515
sendCookedData = false
disabled=false

To add to this, I have verified that the data is NOT cut in the raw txt file before the UF picks it up. It is about 5-19 cuts per file (so about 5-19 events lost per hour per file), which makes me suspect that AutoLB for TCPOUT is load balancing in the middle of an event. All of these events are single-line JSON records; the data is cut randomly throughout the events, sometimes 5 characters in, in the middle of a JSON object, or sometimes even at the very last "}" in the JSON with no other characters.

