My application has multiple plugins, and the Splunk event contains the number of plugins that failed to load. Sometimes all the plugins are active, and sometimes more than 10 plugins fail to load. Here is a sample event. Basically, the quoted name just before the string "**IS UNACCOUNTED FOR**" is my plugin name, and I need all the plugin names that appear before that string. As I said, there could be any number of failed plugins in the event. The following example event contains two failed plugins, i.e. 'Announcer for CONF' and 'HipChat for CONF'.
___ FAILED PLUGIN REPORT _____________________
1 plugin failed to load during CONF startup.
'com.bsaassian.plugins.authentication.bsaassian-authentication-plugin' - 'SAML for bsaassian Data Center' failed to load.
Unexpected exception parsing XML document from URL [bundle://127.0:0/META-INF/spring/plugin-context.xml]; nested exception is javax.xml.parsers.FactoryConfigurationError: Provider for class javax.xml.parsers.DocumentBuilderFactory cannot be created
Provider for class javax.xml.parsers.DocumentBuilderFactory cannot be created
Provider for class javax.xml.parsers.DocumentBuilderFactory cannot be created
javax.xml.parsers.DocumentBuilderFactory: Provider com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl not found
It was loaded from /apps/bsaassian/CONF.7.6.7/bsaassian-CONF/WEB-INF/bsaassian-bundled-plugins/bsaassian-authentication-plugin-2.0.8.jar
4 plugins are unaccounted for.
Unaccounted for plugins load as artifacts but fail to resolve into full plugins.
'com.wittified.atl-announcer-CONF' - 'Announcer for CONF' IS UNACCOUNTED FOR.
It was loaded from /atlshare/bsaassian/application-data/CONF/plugins/installed-plugins/plugin.2625541172025988687.atl-announcer-CONF-2.3.10-7x.jar
'com.bsaassian.labs.hipchat.hipchat-for-CONF-plugin' - 'HipChat for CONF' IS UNACCOUNTED FOR.
********************************************************************************************************************************************************************************************************
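To illustrate the extraction, here is a minimal Python sketch of one regex that pulls out the quoted plugin names from the unaccounted-for lines (the pattern is an assumption based on the sample event above, not a definitive answer):

```python
import re

# Two of the unaccounted-for lines from the sample event above
event = ("'com.wittified.atl-announcer-CONF' - 'Announcer for CONF' IS UNACCOUNTED FOR.\n"
         "'com.bsaassian.labs.hipchat.hipchat-for-CONF-plugin' - "
         "'HipChat for CONF' IS UNACCOUNTED FOR.")

# Capture the quoted name immediately before "IS UNACCOUNTED FOR"
plugins = re.findall(r"'([^']+)' IS UNACCOUNTED FOR", event)
```

In SPL, the same pattern could be used with `| rex max_match=0 "'(?<plugin_name>[^']+)' IS UNACCOUNTED FOR"`, where `max_match=0` extracts every occurrence into a multivalue field.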
↧
How do I write a regex that extracts a field behind a specific string?
↧
Can you help me create a lookup table for fields coming from Azure Monitoring Data Add-On?
We're using the Azure Monitoring Data Add-on to integrate Splunk and Azure. The Azure events have the subscription ID value (field name is am_subscriptionId) in each event. I would like to be able to attach a name and email address to the subscription. I have a lookup table configured with the fields subscriptionID, subscriptionName, and subscriptionContact. I have attempted to use lookups to no avail. Below is my search. I would like a table result with am_subscriptionId, subscriptionName, and subscriptionContact displayed.
index=* sourcetype=amal:security
| lookup azure_subscription_id_to_support_group subscriptionID AS am_subscriptionId OUTPUT subscriptionName
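For comparison, a sketch of the same search with both lookup fields in the OUTPUT clause and an explicit table at the end (lookup and field names are taken from the question; this assumes the lookup definition is visible to the app where the search runs):

```
index=* sourcetype=amal:security
| lookup azure_subscription_id_to_support_group subscriptionID AS am_subscriptionId OUTPUT subscriptionName subscriptionContact
| table am_subscriptionId subscriptionName subscriptionContact
```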
↧
↧
Is there a place to download the Machine Learning Toolkit (MLTK) Technical Deep Dive and Demo 1+2 ?
Hi, is there a place to download the MLTK Technical Deep Dive and Demo 1+2, or at least the slides? It is only available as an on-demand webinar, and I need to go back and reference it periodically. Thanks
↧
How do I write a regex to capture text ending at a comma?
I have this log:
2139,A-1112,74,01:11:71:E1:A1:C1,store,store@store.net,Nitro,Enroll,nitrofire Enroll,,Windows ,Redblue - B111.B4321,,C,1.1.3213,5/4/2018 7:23,Compliant,Enrolled,,MDM,9/20/2018 4:43,,No ,N/A,United States,Yes,00000000A6C344A354543534535345CEBD4A928D,000-88,,No,3/9/2018 17:38,9/20/2018 4:30
I am trying to capture "9/20/2018 4:43". The characters "MDM," will always be there before the date/time. It will also always end with a comma.
Any ideas?
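A minimal Python sketch of one workable pattern, assuming "MDM," occurs only once in the event (the same regex would work in Splunk's `rex` with a named capture group):

```python
import re

# The sample log line from above (truncated after the field of interest)
log = ("2139,A-1112,74,01:11:71:E1:A1:C1,store,store@store.net,Nitro,Enroll,"
       "nitrofire Enroll,,Windows ,Redblue - B111.B4321,,C,1.1.3213,"
       "5/4/2018 7:23,Compliant,Enrolled,,MDM,9/20/2018 4:43,,No ,N/A")

# Capture everything after "MDM," up to the next comma
match = re.search(r"MDM,([^,]+),", log)
timestamp = match.group(1) if match else None
```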
↧
Can you help me come up with a regex expression which would extract a number from a string?
Hi,
I have a field which produces a value like this example: DB=HR_10_7_3043_TGTHRLIVE
I am trying to extract the number and write it in the following way: DB_Version=10.7.3043
How do I get Splunk to cut off the text before and after the number, and then replace the _ with . ?
Note: The strings before and after the numbers can vary in length, and the number can vary too.
Many thanks,
Sam
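A minimal Python sketch of the extract-and-rewrite, assuming the version is the run of underscore-separated digit groups in the middle of the value:

```python
import re

value = "DB=HR_10_7_3043_TGTHRLIVE"

# Capture one or more underscore-separated digit groups between underscores
match = re.search(r"_(\d+(?:_\d+)*)_", value)
db_version = match.group(1).replace("_", ".")
```

In SPL this might look like `| rex field=DB "_(?<DB_Version>\d+(?:_\d+)*)_" | eval DB_Version=replace(DB_Version, "_", ".")` (the field name `DB` here is an assumption based on the example).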
↧
Why am I getting Invalid key in stanza errors when running ./splunk btool check --debug ?
Checking: /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 3: port (value: 8088)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 4: enableSSL (value: 1)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 6: dedicatedIoThreads (value: 2)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 7: maxThreads (value: 0)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 8: maxSockets (value: 0)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 9: useDeploymentServer (value: 0)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 11: sslVersions (value: *,-ssl2)
Did you mean 'source'?
Did you mean 'sourcetype'?
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 12: allowSslCompression (value: true)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 13: allowSslRenegotiation (value: true)
Checking: /fs/untd-1/splunk/etc/apps/splunk_instrumentation/default/app.conf
Invalid key in stanza [ui] in /opt/splunk/etc/apps/splunk_instrumentation/default/app.conf, line 12: show_in_nav (value: 0)
Checking: /fs/untd-1/splunk/etc/apps/splunk_instrumentation/default/collections.conf
Invalid key in stanza [instrumentation] in /opt/splunk/etc/apps/splunk_instrumentation/default/collections.conf, line 10: type (value: internal_cache)
What I have identified is that after the Splunk server moved from CentOS 5 to CentOS 6, the following new folders were created:
drwxr-xr-x 3 31855 31855 4096 Feb 28 2018 splunk_httpinput
drwxr-xr-x 5 31855 31855 4096 Feb 28 2018 splunk_archiver
drwxr-xr-x 4 31855 31855 4096 Feb 28 2018 appsbrowser
drwxr-xr-x 7 31855 31855 4096 Feb 28 2018 alert_webhook
drwxr-xr-x 7 31855 31855 4096 Feb 28 2018 alert_logevent
drwxr-xr-x 7 31855 31855 4096 Feb 28 2018 splunk_instrumentation
drwxr-xr-x 11 31855 31855 4096 Feb 28 2018 splunk_monitoring_console
I'm getting alerts from all the files in the above directories. How can I fix them? I'm using Splunk version 6.2.2.
Thanks
Rajesh
↧
How to use regex to extract and index only custom fields of Windows event logs?
Hi Splunkers,
I have a question about how to use regex to extract and index only custom fields of Windows event logs. For example, for event ID 4624, I need to extract fields like LogName, Source, EventID, and Level together with their values, and index just these fields on my indexer to reduce the volume of Windows event log data. Thanks for your help, Splunkers.
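For orientation, a hedged sketch of what index-time field extraction generally looks like in the .conf files (the stanza names, sourcetype, and regex here are hypothetical, not taken from the question):

```
# transforms.conf -- extract an indexed field at index time
[extract_eventid]
REGEX = EventCode=(\d+)
FORMAT = eventid::$1
WRITE_META = true

# props.conf -- attach the transform to the sourcetype
[WinEventLog:Security]
TRANSFORMS-eventid = extract_eventid

# fields.conf -- mark the field as indexed so searches treat it correctly
[eventid]
INDEXED = true
```

Note that index-time extraction alone does not shrink the raw events; reducing volume would also require trimming _raw (e.g. with a SEDCMD in props.conf) or routing unwanted events to the nullQueue.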
↧
Can someone help me with a search that parses through two lookup tables?
Hi,
I have two lookup tables
lookup1:
RealName, username
Smith, J ( LDN), smithj
Andy, H (LDN),andyh
Tan, Y ( JPN), tany
Jiang, T ( JPN), jiangt
lookup2:
Group, Members
admin, CN=Smith, J ( LDN),OU=Users,OU=LDN CN=Andy, H ( LDN),OU=Users,OU=LDN
access, CN=Tan,Y ( JPN),OU=Users,OU=JPN CN=Jiang, T ( JPN),OU=Users,OU=JPN
My original search will output a username (e.g. "smithj"). I need to pass this username into lookup1 to get the RealName, then search that RealName under the "Members" field of lookup2 to get the Group value.
E.g., if my original search returns "smithj" and I pass it through lookup1, I need to get "admin" from lookup2.
Could someone help with this search?
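To illustrate the chaining logic outside SPL, a minimal Python sketch using the sample rows above (in Splunk this would be two chained `lookup` calls, with the second likely needing a wildcard or substring match configured on Members):

```python
# Sample rows from lookup1 and lookup2 above
lookup1 = {
    "smithj": "Smith, J ( LDN)",
    "andyh": "Andy, H (LDN)",
    "tany": "Tan, Y ( JPN)",
    "jiangt": "Jiang, T ( JPN)",
}
lookup2 = {
    "admin": "CN=Smith, J ( LDN),OU=Users,OU=LDN CN=Andy, H ( LDN),OU=Users,OU=LDN",
    "access": "CN=Tan,Y ( JPN),OU=Users,OU=JPN CN=Jiang, T ( JPN),OU=Users,OU=JPN",
}

def group_for(username):
    """username -> RealName via lookup1, then substring match in lookup2 Members."""
    real_name = lookup1.get(username)
    if real_name is None:
        return None
    for group, members in lookup2.items():
        if real_name in members:
            return group
    return None
```

One caveat visible in the sample data: lookup1 has "Tan, Y ( JPN)" while lookup2 has "Tan,Y ( JPN)" (no space), so an exact substring match would miss that row; the data may need normalizing first.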
↧
↧
Why is our Splunk universal forwarder not able to read modifications to a file under the path "C:\Program Files (x86)"?
My Splunk universal forwarder is not able to read modifications to a file under the path "C:\Program Files (x86)".
My inputs.conf is:
[monitor://C:\Program Files (x86)\TeamViewer\TeamViewer13_Logfile.log]
sourcetype = TeamViewer:Connection:Client
index = teamviewer
disabled = 0
queue = indexQueue
What am I doing wrong? I cannot see anything about this file in splunkd.log.
↧
Using registry monitoring (WinRegMon) with a universal forwarder for Windows server BIOS versions, why are the _time values for baseline events 3 days late?
Hi,
I am trying to monitor Windows servers' BIOS versions using registry monitoring with a UF. For testing, I installed a full Splunk Enterprise and used the web GUI to add some registry inputs with a baseline.
I received several events, but the _time field of the baseline events is weird: they are about 3 days late. The create/etc. events look good.
_raw _time
09/21/2018 21:59:09.175 event_status="(0)The operation completed successfully." pid=16872 process_image="c:\Windows\regedit.exe" registry_type="DeleteKey" key_path="HKLM\hardware\description\system\bios\új azonosító (#1)" data_type="REG_NONE" data="" 2018-09-21 21:59:09
09/21/2018 21:59:09.175 event_status="(0)The operation completed successfully." pid=16872 process_image="c:\Windows\regedit.exe" registry_type="SetValue" key_path="HKLM\hardware\description\system\bios\test_key" data_type="REG_SZ" data="" 2018-09-21 21:59:09
09/21/2018 21:59:04.570 event_status="(0)The operation completed successfully." pid=16872 process_image="c:\Windows\regedit.exe" registry_type="SetValue" key_path="HKLM\hardware\description\system\bios\új azonosító (#1)" data_type="REG_SZ" data="" 2018-09-21 21:59:04
09/18/2018 10:47:04.786 registry_type="baseline" key_path="\registry\machine\hardware\description\system\bios\SystemVersion" data_type="REG_SZ" data="" 2018-09-18 10:47:04
09/18/2018 10:47:04.786 registry_type="baseline" key_path="\registry\machine\hardware\description\system\bios\SystemVersion" data_type="REG_SZ" data="" 2018-09-18 10:47:04
09/18/2018 10:47:04.786 registry_type="baseline" key_path="\registry\machine\hardware\description\system\bios\SystemVersion" data_type="REG_SZ" data="" 2018-09-18 10:47:04
The upper events are the create/update/etc. events, created within a few minutes of the baseline events, but the baseline events show Sept. 18 while the update events show Sept. 21 (today).
How is this possible? What am I doing wrong? The base system is Windows 10, and the system time is OK.
Inputs:
[WinRegMon://kulcsi01]
baseline = 1
disabled = 0
hive = HKEY_LOCAL_MACHINE\\HARDWARE\\DESCRIPTION\\SYSTEM\\BIOS\\?.*
proc = C:\\.*
type = set|rename|create|delete
[WinRegMon://kulcsi02]
baseline = 1
disabled = 0
hive = HKEY_LOCAL_MACHINE\\SYSTEM\\HardwareConfig\\Current\\?.*
proc = C:\\.*
type = create
Thx,
István
↧
Splunk Add-on for Java Management Extensions: Why does JMX data stop for all servers when one goes down?
We are using Splunk Add-on for Java Management Extensions (JMX)
Application: Splunk_TA_jmx
We are monitoring JMX data from two environments -- ENV_A and ENV_B.
Both environments have several WebLogic servers, and connections were created for each of them.
We have 3 templates to subset the JMX data to be collected – T1, T2, and T3.
Each environment has two tasks:
ENV_A – T1, T2 (interval 60)
ENV_A – T3 (interval 86400)
ENV_B – T1, T2 (interval 300)
ENV_B – T3 (interval 86400)
ENV_A is an environment that goes up and down. When ENV_A goes down, no JMX data from ENV_B is sent to the indexer. There are no errors in the log file other than those indicating that a port on ENV_A is not available.
I have to restart the Splunk forwarder for the JMX data to resume.
Anyone know why the availability of ENV_A impacts the collection of data from ENV_B?
↧
I need help setting up distributed search. Why am I getting the error below when I add search or index peers?
Encountered the following error while trying to save: "The time difference / clock skew between this system and the intended peer at uri=https://:8089 was too big. Please bring system clocks into agreement. search_head_time=1537513347.801773 peer_time=1537517825.000000 skew_seconds=-4477.198227 addpeer_skew_limit=600 Skew limit from limits.conf, [search] stanza."
Can someone help, please?
↧
↧
Splunk DB Connect 3: Calling a stored procedure from a specific database
Hello all,
I just created a stored procedure (SP) in one of our databases on our server. We have hundreds of databases, so I am not sure how to tell Splunk to look specifically in this one (or how else would Splunk know which database to look in?).
If I run the following in Splunk, it says it cannot find the SP:
| dbxquery connection="SERVER_NAME" procedure="{call dbo.GetData}"
I have also tried:
| dbxquery connection="SERVER_NAME" procedure="{call DATABASE.dbo.GetData}"
and
| dbxquery connection="SERVER_NAME" procedure="{call DATABASE..dbo.GetData}"
↧
In my own dashboard, is there a way to make a progress bar like the one in the Splunk Monitoring Console?
I want to format a stats-result column of a table as a progress bar, similar to the Monitoring Console.
I am trying to create my own dashboard where I get results as a table, and one of the columns is in the format
"number / number", for example: 2021.78 / 3991.24.
Is it possible to have it formatted in the table results with a progress bar, to see visually how much disk space is left?
I saw these progress bars in the Splunk Monitoring Console but was unable to find out how to do it in my own dashboards.
See the attached picture, column "Volume Usage (GB)":
![alt text][1]
[1]: /storage/temp/255013-screenshot-2018-09-21-15-35-052.png
↧
Help with a sourcetype line-breaking regex
Hello,
In the attached file, I need the line break to happen not after a date like "06/09/2018 - 14:21:24" (as it currently does) but just after the run of dashes (------).
So I want _raw to be equal to all the text between one ----- delimiter and the next.
Which regex do I have to use, please?
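A minimal props.conf sketch of that idea, assuming events are delimited by runs of five or more dashes (the sourcetype name is hypothetical; in LINE_BREAKER, the text matched by the first capture group is consumed as the event boundary):

```
[my_dashed_sourcetype]
SHOULD_LINEMERGE = false
# Break events where a run of dashes ends a line;
# the dashes and trailing newline are discarded as the delimiter.
LINE_BREAKER = (-{5,}[\r\n]+)
```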
↧
How to get average and percentage change in the same query?
Below is the data in my index named index
ETS=20180921 CNT=161756 BRAND=A INDICATOR=Y
ETS=20180921 CNT=156203 BRAND=B INDICATOR=Y
ETS=20180921 CNT=12354 BRAND=C INDICATOR=Y
ETS=20180921 CNT=26267 BRAND=D INDICATOR=Y
ETS=20180921 CNT=1014571 BRAND=E INDICATOR=Y
ETS=20180921 CNT=2323 BRAND=F INDICATOR=Y
ETS=20180920 CNT=158563 BRAND=A INDICATOR=Y
ETS=20180920 CNT=156174 BRAND=B INDICATOR=Y
ETS=20180920 CNT=12332 BRAND=C INDICATOR=Y
ETS=20180920 CNT=26248 BRAND=D INDICATOR=Y
ETS=20180920 CNT=1013469 BRAND=E INDICATOR=Y
ETS=20180920 CNT=2321 BRAND=F INDICATOR=Y
where ETS is the date, CNT is the count, the brands are A, B, C, etc., and INDICATOR is a flag.
I want to know how to get the percentage change in today's count, by brand, with respect to the average of 90 days. (Currently it has data for 2 days only.)
I am able to get the sum by brand, but not to divide it by the number of days to get the average, and I am not able to use that further to subtract from today's value.
Please help.
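To illustrate the arithmetic with the two sample days above (with 90 days the same logic applies; in SPL this would typically be a `stats avg(CNT) by BRAND` over the window, combined with an `eval` for the percentage):

```python
# Daily counts per brand, taken from the sample events above
data = {
    "A": {"20180920": 158563, "20180921": 161756},
    "B": {"20180920": 156174, "20180921": 156203},
}

def pct_change_vs_avg(counts, today):
    """Percentage change of today's count versus the average over all days."""
    avg = sum(counts.values()) / len(counts)
    return (counts[today] - avg) / avg * 100

change_a = pct_change_vs_avg(data["A"], "20180921")
```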
↧