error: splunk-winevtlog - WinEventLogChannel::deleteCheckpointFile: Failed to delete checkpoint file for Windows Event Log channel='security'
I have installed the UF on my primary domain controller (PDC). Below is my inputs.conf. I am getting the above error in splunkd.log, and I am not receiving logs from this UF on my indexer.
inputs.conf
[WinEventLog://Security]
disabled = 0
start_from = newest
current_only = 1
evt_resolve_ad_obj = 0
checkpointInterval = 5
# only index events with these event IDs.
whitelist = 4723,4724,4740,4782,4624
# exclude these event IDs from being indexed.
blacklist = 1100-8191
index = wineventlog
renderXml = false
I installed the UF with a local account, but in Task Manager I can see it running under the SYSTEM account. I ran `wevtutil gl security` at a command prompt from the Splunk bin directory, and this is the result:
name: security
enabled: true
type: Admin
owningPublisher:
isolation: Custom
channelAccess: O:BAG:SYD:(A;;CCLCSDRCWDWO;;;SY)(A;;CCLC;;;BA)(A;;CC;;;ER)(A;;CC;;;NS)
logging:
  logFileName: %SystemRoot%\System32\Winevt\Logs\security.evtx
  retention: false
  autoBackup: false
  maxSize: 4095737856
publishing:
  fileMax: 1
So this does not appear to be a permission issue; I also verified in the Security log's properties that the SYSTEM account has Full Control. The same config and UF work on my other DCs.
↧
How do I edit my rex field=URI mode=sed syntax to mask my sample URIs?
As of now I am using:
rex field=URI mode=sed "s/=[^?]+/=xxx/g"
But it's not working. Sample URIs:
/v1/mb/members/15d628b4-0d113-09b8ec770efd/option
/v1/mb/members/216570ce-c199-4ab9--c0cf3ddd404e/option
/v1/mb/members/36fbe9a8-882d-4a94-882561f81074/option
/v1/mb/members/4d573446-1d4f-483a-c5d64c33/option
/v1/mb/members/5cc2fa84-4b91-45bf-9c1/option
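Would anchoring the sed expression on the path segment be closer? A sketch, assuming the ID always sits between /members/ and /option:
rex field=URI mode=sed "s/\/members\/[^\/]+\//\/members\/xxxxx\//g"
The original expression looks for an "=" in the field value, which these URIs don't contain, so it never matches anything.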
↧
Are function names case-sensitive?
The following query did not return any results:
... | stats count(EVAL(error_code=2000)) ...
I had to use **lower-case** `eval` to make it work.
Is it a general rule or a specific case?
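For reference, the lower-case form that did return results:
... | stats count(eval(error_code=2000)) ...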
↧
Why are we getting "failed to parse timestamp defaulting to file mtime error" for events with no timestamp logs?
Hi Folks,
We have logs in the format below. There is no timestamp on the first several lines, and we get the error "failed to parse timestamp, defaulting to file mtime" while indexing the data. We have set a timezone and a time prefix in props.conf, but that doesn't fix the issue. Could anyone please help us fix this?
logs example:
---------------------------------------------------
trcd file: "dedv_w10", trcd levels: 1, rgeleaese: "742"
---------------------------------------------------
*
* ACTdIVE TRACE wLEVEL 1
* ACsTIVE TRAsCE CsOMPONENTS all, MJ
*
M sysno s00
M sid P015
M systemid 3290 (AMD/Inddtel x86_64 with Lgeiewnux)
M relno 742e0
M patchlevel 01
M patchno 439d
M Sun Sep 17 10:42:57 2017
M kernel runs with dp version 3000(ext=117000) (@(#) DPLIB-INT-VERSION-0+3000-UC)
props.conf:
[ ]
SHOULD_LINEMERGE=false
CHARSET=UTF-8
LINE_BREAKER=([\r\n]+)\w{1}\s\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4}
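Would anchoring the timestamp on the "M Sun Sep 17 10:42:57 2017" line help? A sketch of a fuller stanza (the stanza name is a placeholder and the TIME_* values are guesses, not tested):
[sap_trace]
SHOULD_LINEMERGE = false
CHARSET = UTF-8
LINE_BREAKER = ([\r\n]+)\w{1}\s\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4}
# the only timestamp appears on lines like "M Sun Sep 17 10:42:57 2017"
TIME_PREFIX = ^M\s
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC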
↧
How do I write an IdP in Java to connect to Splunk's SAML SSO implementation?
I want to write an identity provider (IdP) in Java that connects to Splunk's SAML implementation for SSO, but I don't know what Splunk requires from the IdP side. I'm looking for documentation about this.
Could you help me?
If possible, I would appreciate a reply to my e-mail: zk0398@126.com
(Apologies, my English is not very good.)
Thanks,
Zhao Kun
↧
How can I use my billing info to create a prediction for the future?
I've asked about this before and now I've re-loaded the **raw** data without any modifications. It looks like this (without an actual timestamp):
Month,Billing,MsgType,BillSize,Direction
2013-04,BI70276,ORDHDR,5,SENT
2013-04,BI70276,INVFIL,8,RECV
2013-04,BI70276,ORDHDR,5,SENT
2013-04,BI70276,INVFIL,34,RECV
2013-04,BI70276,ORDHDR,20,SENT
2013-04,BI70276,INVFIL,13,RECV
2013-04,BI70276,ORDHDR,7,SENT
2013-04,BI70276,INVFIL,1,RECV
2013-04,BI70276,ORDHDR,1,SENT
2013-04,BI70276,ORDHDR,5,SENT
2013-04,BI70276,INVFIL,4,RECV
2013-04,BI70276,ORDHDR,6,SENT
2013-04,BI70276,INVFIL,9,RECV
2013-04,BI70276,ORDHDR,12,SENT
2013-04,BI70276,INVFIL,178,RECV
...etc.
I have this data for every CCYY-MM for the last 53 months, about 200k events. So, no **actual** timestamp for each event.
If I use this:
index=IX Billing=BI70400 MsgType=ORDHDR Direction=SENT | stats sum(BillSize) as MonthSize by Month
...I get the column chart that I expect/want.
How can I use this to create a prediction for the future? We've tried a few variations, based on this, but without success.
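Would something along these lines be the right direction? A sketch (the strptime format and prediction horizon are guesses):
index=IX Billing=BI70400 MsgType=ORDHDR Direction=SENT
| stats sum(BillSize) as MonthSize by Month
| eval _time=strptime(Month, "%Y-%m")
| timechart span=1mon sum(MonthSize) as MonthSize
| predict MonthSize future_timespan=6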
Thank you.
↧
How can I see the difference in count for two different types of events by day?
Hi,
I would like to see the difference in count between two different types of events per day. Currently I have the totals, but I'm not sure how to split them per day:
index="index1" ("first string" OR "second string") | eval First=if(searchmatch("first string"),1,0) | eval Second=if(searchmatch("second string"),1,0) | stats sum(First) as FirstChecks sum(Second) as SecondChecks | eval missing=FirstChecks - SecondChecks
Thanks
↧
How can I filter events before they are indexed?
I tried this solution, but with no success.
I am trying to filter data from being indexed; I need only the Error events.
In props.conf:
[source::C:\\Windows\\System32\\winevt\\Logs]
# Transforms must be applied in this order
# to make sure events are dropped on the
# floor prior to making their way to the
# index processor
TRANSFORMS-set = setnull, setparsing
In transforms.conf:
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[setparsing]
REGEX = Error
DEST_KEY = queue
FORMAT = indexQueue
↧
When I delete an old version of Splunk, does it delete the old indexes and hosts?
Hi All,
I've recently had to reinstall Splunk on my server.
It was using an index called "index2". I've since removed that version of Splunk (which I thought would have deleted the index) and installed Splunk v7.
It's worth noting that I had about 10-12 forwarders sending syslog data to my Splunk instance before uninstalling.
Since installing the new Splunk v7, I can only see one available forwarder when selecting "add data".
The odd thing is that when I go to search and reporting and select index="newindex" it generates a whole lot of data and tells me I have 6 hosts contributing data.
This is quite puzzling. The IP address and port of the Splunk instance are exactly the same, so I'm not sure why they don't appear in the forwarder list under 'add data'.
I'd appreciate any help I can get.
S.
↧
Move license from cluster to standalone
Hi,
**Splunk version: Splunk Enterprise 6.4.1**
**OS: Linux CentOS 7**
We have a standalone Splunk Enterprise instance whose license will soon expire. We also have a distributed Splunk deployment with plenty of spare license capacity, so we want to move one of our 5GB licenses from the distributed deployment to the standalone server. Here is what we have done:
- Copied out the license XML from the distributed deployment.
- Deleted the license on the distributed deployment
- Installed the license on the standalone server.
Here is where the issue arises: when we install the license, we get a success reply, but when we go back to the license overview, the license hasn't been added. It is, however, saved in the
/opt/splunk/etc/licenses/enterprise
directory, and we can see it on the debug page "All license details". We have, of course, restarted Splunk after installing the new license.
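In case it matters, installing via the CLI looks roughly like this (the license file name is illustrative):
$SPLUNK_HOME/bin/splunk add licenses /tmp/splunk_5GB.lic
$SPLUNK_HOME/bin/splunk restart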
From what I understand, there should not be any problems moving a license from one server to another. Is there anything that I am missing? Or is this a bug?
↧
Proofpoint TAP Modular Input App: admin_all_objects error?
I'm seeing the following message after installing the Proofpoint TAP Modular Input, and the input is not working.
Error from _internal splunkd proofpoint_tap_siem.py
stream_events/Error encrypting and saving password - (An error occurred updating credentials. Please ensure your user account has admin_all_objects and/or list_storage_passwords capabilities.
I am an admin on Splunk Enterprise; what am I missing?
Thanks in advance!!
↧
How to search for matching fields across 2 different hosts with the same sourcetype
I'm looking to find a matching field (let's call this field "action") across 2 different hosts with the same sourcetype.
Example: sourcetype=pan host=1 and host=2
I'm looking to create a table that shows the matching values of the action field (I only want the matching values to generate results).
So if host 1 has action=allowed and host 2 has action=allowed, I want to create a table that includes the time, action, src, and dest.
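The shape of search I have in mind is something like this (a sketch; the host values are illustrative and untested):
sourcetype=pan (host=1 OR host=2)
| stats dc(host) as host_count values(src) as src values(dest) as dest latest(_time) as time by action
| where host_count=2
| eval time=strftime(time, "%Y-%m-%d %H:%M:%S")
| table time action src dest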
↧
Unix Add-on Not Extracting Fields
I've got the Splunk Add-on for Unix and Linux installed on my index master and across my 3 indexers via a cluster bundle.
In the App for Unix & Linux running on my search head, I can see results from all 4 hosts, text like the output from `cpu.sh` and `ps.sh`.
But none of the add-on-specific fields, e.g., **pctCPU** from `top.sh`, are being extracted, which of course breaks many of the associated dashboards.
Any help on getting the app & add-ons working, and in particular, fixing field extraction, across the cluster would be very much appreciated.
↧
Dynamic Dashboard Title with hideTitle=true - Show Filters not displayed after clicking Hide Filters
I have a multipurpose dashboard/form that I need to label based on URL params I am setting in the nav. Form labels do not pick up token values.
Based on another post, I used hideTitle=true to hide the dashboard label and created an HTML panel with the dynamic title that sits below the fieldset. The problem is that now, when the Hide Filters link is used, Show Filters is no longer displayed.
Is it possible to move the Show Filters link into my HTML panel while leaving Hide Filters next to the fieldset?
↧
How to join 2 indexes on a common field with respect to time, when index 2 has multiple events with the same field
Hello there,
I have two sets of data under two different indexes. The fields for each index are respectively **[customer_id, datetime]** and **[customer_id, date_of_creation, motive]**.
I would like to perform a join on the field **"customer_id"** in order to have the motives for each line. The problem is that in the second index there can be multiple lines with the same **"customer_id"**, so to perform the join on this field I need to check that the date fields are consistent (a difference of 5 minutes max).
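The closest shape I can think of is a join plus a time-delta filter, something like this (a sketch; the index names and timestamp formats are guesses):
index=index1
| join type=inner max=0 customer_id
    [ search index=index2 | fields customer_id date_of_creation motive ]
| eval delta=abs(strptime(datetime, "%Y-%m-%d %H:%M:%S") - strptime(date_of_creation, "%Y-%m-%d %H:%M:%S"))
| where delta <= 300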
Any idea how I could do that? Thanks in advance :D
↧
After Splunk upgrade (6.4.2 to 6.6.2) we can't create dashboards
Hello Team,
We upgraded Splunk from 6.4.2 to 6.6.2 on Linux. Now we can neither open the dashboards previously created by users nor create a new dashboard.
The page just displays a "loading" message, and we end up waiting indefinitely for the dashboard to finish loading.
We'd appreciate your expertise in resolving this.
↧
How do we find the first non-zero packet loss event?
Example data, newest to oldest:
{ "ip_address": "255.255.255.255","loss_pct": 0, "device_id": "ABC"}
{ "ip_address": "255.255.255.255","loss_pct": 10, "device_id": "ABC"}
{ "ip_address": "255.255.255.255","loss_pct": 20, "device_id": "ABC"}
{ "ip_address": "255.255.255.255","loss_pct": 0, "device_id": "ABC"}
{ "ip_address": "255.255.255.255","loss_pct": 20, "device_id": "XYZ"}
{ "ip_address": "255.255.255.255","loss_pct": 20, "device_id": "XYZ"}
{ "ip_address": "255.255.255.255","loss_pct": 0, "device_id": "XYZ"}
{ "ip_address": "255.255.255.255","loss_pct": 20, "device_id": "XYZ"}
{ "ip_address": "255.255.255.255","loss_pct": 10, "device_id": "PQR"}
{ "ip_address": "255.255.255.255","loss_pct": 10, "device_id": "PQR"}
{ "ip_address": "255.255.255.255","loss_pct": 50, "device_id": "AAA"}
{ "ip_address": "255.255.255.255","loss_pct": 0, "device_id": "AAA"}
{ "ip_address": "255.255.255.255","loss_pct": 20, "device_id": "AAA"}
{ "ip_address": "255.255.255.255","loss_pct": 0, "device_id": "AAA"}
{ "ip_address": "255.255.255.255","loss_pct": 100, "device_id": "AAA"}
{ "ip_address": "255.255.255.255","loss_pct": 0, "device_id": "AAA"}
Expected output:
{ "ip_address": "255.255.255.255","loss_pct": 20, "device_id": "XYZ"}
{ "ip_address": "255.255.255.255","loss_pct": 50, "device_id": "AAA"}
Note the rules: if the newest event for a device has zero packet loss, exclude the device; if no zero-loss event for the device appears within the search results, exclude it; and if the loss flips between zero and non-zero several times, take the newest non-zero event.
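Does something like this capture those rules? A sketch, relying on stats first() seeing the newest event first (the index and sourcetype names are placeholders):
index=net_metrics sourcetype=packet_loss
| stats first(loss_pct) as loss_pct min(loss_pct) as min_loss first(ip_address) as ip_address by device_id
| where loss_pct > 0 AND min_loss = 0
| table ip_address loss_pct device_id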
↧
Is JIRA Core compatible with the Real-Time JIRA Service Desk Connector for Splunk?
I am interested in the ability to create issues in JIRA via Splunk Alerts, but we don't utilize Service Desk. We only use JIRA Core. Will this connector work?
↧
ERROR extraction from log file
All,
I would like to extract the following information from the logs:
Caused by: org.apache.camel.TypeConversionException: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS.LOGSBUS.JMS_Input_Consumers %]"
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.createTypeConversionException(BaseTypeConverterRegistry.java:610)
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.mandatoryConvertTo(BaseTypeConverterRegistry.java:177)
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.mandatoryConvertTo(BaseTypeConverterRegistry.java:156)
at org.apache.camel.util.IntrospectionSupport.convert(IntrospectionSupport.java:622)
at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:537)
at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:602)
at org.apache.camel.util.IntrospectionSupport.setProperties(IntrospectionSupport.java:459)
at org.apache.camel.util.IntrospectionSupport.setProperties(IntrospectionSupport.java:469)
at org.apache.camel.util.EndpointHelper.setProperties(EndpointHelper.java:256)
at org.apache.camel.impl.DefaultComponent.setProperties(DefaultComponent.java:257)
at org.apache.camel.component.jms.JmsComponent.createEndpoint(JmsComponent.java:886)
at org.apache.camel.impl.DefaultComponent.createEndpoint(DefaultComponent.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
From the sample above, I would like to extract the following:
Caused by: org.apache.camel.TypeConversionException
Description: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS.LOGSBUS.JMS_Input_Consumers %]"
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.createTypeConversionException(BaseTypeConverterRegistry.java:610)
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.mandatoryConvertTo(BaseTypeConverterRegistry.java:177)
at org.apache.camel.impl.converter.BaseTypeConverterRegistry.mandatoryConvertTo(BaseTypeConverterRegistry.java:156)
at org.apache.camel.util.IntrospectionSupport.convert(IntrospectionSupport.java:622)
at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:537)
at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:602)
at org.apache.camel.util.IntrospectionSupport.setProperties(IntrospectionSupport.java:459)
at org.apache.camel.util.IntrospectionSupport.setProperties(IntrospectionSupport.java:469)
at org.apache.camel.util.EndpointHelper.setProperties(EndpointHelper.java:256)
at org.apache.camel.impl.DefaultComponent.setProperties(DefaultComponent.java:257)
at org.apache.camel.component.jms.JmsComponent.createEndpoint(JmsComponent.java:886)
at org.apache.camel.impl.DefaultComponent.createEndpoint(DefaultComponent.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
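Would a search-time rex along these lines do it? A sketch (the index name and extracted field names are my own):
index=app_logs "Caused by:"
| rex "Caused by: (?<caused_by>[\w.$]+(?:Exception|Error)): (?<description>[^\r\n]+)"
| stats count by caused_by, description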
Any help would be appreciated.
↧
How can I extract these errors from a stack trace?
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.springframework.jms.InvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'DUMMY'.; nested exception is com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'DUMMY'.
JMS attempted to perform an MQOPEN, but WebSphere MQ reported an error.
Use the linked exception to determine the cause of this error. Check that the specified queue and queue manager are defined correctly.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2085' ('MQRC_UNKNOWN_OBJECT_NAME').
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:285)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:168)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:469)
at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.send(JmsConfiguration.java:245)
at org.apache.camel.component.jms.JmsProducer.doSend(JmsProducer.java:413)
at org.apache.camel.component.jms.JmsProducer.processInOnly(JmsProducer.java:367)
at org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:153)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at appname.connectivity.core.cdi.monitoring.NodeEventProcessor.process(NodeEventProcessor.java:71)
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:91)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:337)
at org.apache.camel.processor.DefaultErrorHandler.process(DefaultErrorHandler.java:59)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:63)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165)
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:62)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:91)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:337)
at org.apache.camel.processor.DefaultErrorHandler.process(DefaultErrorHandler.java:59)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:63)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:165)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:87)
at com.appname.bpc.connectivity.camel.broker.MxBrokerConsumer.processMessage(MxBrokerConsumer.java:200)
at com.appname.bpc.connectivity.camel.broker.MxBrokerConsumer.onMessage(MxBrokerConsumer.java:128)
at appname.bos.client.jms.BOSMessageListener.onMessage(BOSMessageListener.java:37)
at org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:1361)
at org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:131)
at org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:202)
at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'DUMMY'.
JMS attempted to perform an MQOPEN, but WebSphere MQ reported an error.
Use the linked exception to determine the cause of this error. Check that the specified queue and queue manager are defined correctly.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:503)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:221)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer.checkJmqiCallSuccess(WMQMessageProducer.java:1061)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer.checkJmqiCallSuccess(WMQMessageProducer.java:1019)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer.access$800(WMQMessageProducer.java:68)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer$SpiIdentifiedProducerShadow.initialise(WMQMessageProducer.java:765)
at com.ibm.msg.client.wmq.internal.WMQMessageProducer.(WMQMessageProducer.java:995)
at com.ibm.msg.client.wmq.internal.WMQSession.createProducer(WMQSession.java:886)
at com.ibm.msg.client.jms.internal.JmsSessionImpl.createProducer(JmsSessionImpl.java:1232)
at com.ibm.msg.client.jms.internal.JmsQueueSessionImpl.createSender(JmsQueueSessionImpl.java:136)
at com.ibm.mq.jms.MQQueueSession.createSender(MQQueueSession.java:153)
at com.ibm.mq.jms.MQQueueSession.createProducer(MQQueueSession.java:254)
at org.springframework.jms.connection.CachingConnectionFactory$CachedSessionInvocationHandler.getCachedProducer(CachingConnectionFactory.java:371)
at org.springframework.jms.connection.CachingConnectionFactory$CachedSessionInvocationHandler.invoke(CachingConnectionFactory.java:329)
at com.sun.proxy.$Proxy130.createProducer(Unknown Source)
at org.springframework.jms.core.JmsTemplate.doCreateProducer(JmsTemplate.java:971)
at org.springframework.jms.core.JmsTemplate.createProducer(JmsTemplate.java:952)
at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.doSendToDestination(JmsConfiguration.java:288)
at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.access$100(JmsConfiguration.java:234)
at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate$1.doInJms(JmsConfiguration.java:248)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:466)
... 45 more
Caused by: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2085' ('MQRC_UNKNOWN_OBJECT_NAME').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:209)
From this sample log, I would like to extract the following (see the sketch after this list for point 1):
1. All possible "Caused by" exceptions and their counts
2. The stack trace related to each error
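For the counts in point 1, maybe something like this (a sketch; the index and sourcetype names are placeholders):
index=app_logs sourcetype=java_stacktrace
| rex max_match=0 "Caused by: (?<caused_by>[^\r\n:]+)"
| mvexpand caused_by
| top caused_by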
↧