Hi Splunkers. I have a question about how to use regex to extract and index only custom fields of Windows event logs. For example, for EventID=4624 I need to extract fields like LogName, Source, EventID, and Level with their values, and index just these fields to reduce the volume of Windows event log data. Thanks for your help, Splunkers.
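Not a complete answer, but a sketch of one common approach: index-time regex cannot keep only selected fields, but a SEDCMD in props.conf can strip the verbose boilerplate from Security events before they are indexed. The sourcetype name and the regex below are assumptions to adapt to your data:

```
# props.conf on the indexer or heavy forwarder -- sketch, not a drop-in config
[WinEventLog:Security]
# Assumed pattern: drop the boilerplate description block that follows
# the key/value portion of 4624-style events
SEDCMD-trimdesc = s/This event is generated[\S\s]+//g
```

This trims event text rather than selecting fields, which is usually the practical way to reduce Windows event log volume at index time.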
↧
How to use regex to extract and index only custom fields of Windows event logs
↧
Uncooked Data Containing hostname
I am sending uncooked data, but the hostname the other end sees is that of the Heavy Forwarder.
Is there any way of sending the correct hostname?
↧
↧
Two lookup tables
Hi
I have two lookup tables
lookup1:
RealName, username
Smith, J ( LDN), smithj
Andy, H (LDN),andyh
Tan, Y ( JPN), tany
Jiang, T ( JPN), jiangt
lookup2:
Group, Members
admin, CN=Smith, J ( LDN),OU=Users,OU=LDN CN=Andy, H ( LDN),OU=Users,OU=LDN
access, CN=Tan,Y ( JPN),OU=Users,OU=JPN CN=Jiang, T ( JPN),OU=Users,OU=JPN
My original search will output a username (e.g. "smithj"). I need to pass this username into lookup1 to get the RealName, then pass that RealName into lookup2, searching for it inside the "Members" field, to get the Group value.
E.g.: if my original search returns "smithj" and I pass it through lookup1, I need to end up with "admin" from lookup2.
Could someone help with this search?
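A sketch of one approach, assuming lookup1 and lookup2 are defined lookups and the RealName spelling (including spacing) matches exactly between the two files: first flatten lookup2 into a one-member-per-row lookup, then chain two lookup calls. Run once to build the flattened table:

```
| inputlookup lookup2
| rex field=Members max_match=0 "CN=(?<RealName>[^=]+?),OU="
| mvexpand RealName
| table Group RealName
| outputlookup lookup2_members
```

Then in the main search (lookup2_members is the hypothetical flattened lookup created above):

```
your_base_search
| lookup lookup1 username OUTPUT RealName
| lookup lookup2_members RealName OUTPUT Group
```

Flattening first avoids trying to substring-match inside the DN strings, which is awkward because the CN values themselves contain commas.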
↧
Splunk Universal Forwarder is not able to read the modification on a file under the path "C:\Program Files (x86)"
My Splunk Universal Forwarder is not able to read the modification on a file under the path "C:\Program Files (x86)"
My inputs.conf is:
[monitor://C:\Program Files (x86)\TeamViewer\TeamViewer13_Logfile.log]
sourcetype = TeamViewer:Connection:Client
index = teamviewer
disabled = 0
queue = indexQueue
What am I doing wrong? I cannot see anything about this file in splunkd.log.
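Two checks that may help narrow this down, run from the forwarder's $SPLUNK_HOME/bin (a troubleshooting sketch, not a fix):

```
# Verify the monitor stanza is actually loaded as written,
# including the parentheses in the path
./splunk btool inputs list monitor --debug

# Ask the tailing processor what it thinks of each monitored file
./splunk list inputstatus
```

If the stanza does not appear in the btool output, the problem is the configuration location or syntax rather than the path itself.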
↧
What are the two ways to list the indexes available on a Splunk search head?
Hi All,
I have two questions on Splunk.
1) How do I list the details of the indexes available on Splunk search heads?
2) What are streaming and non-streaming commands, and how are they executed (in which scenarios is each used)?
Thanks in advance.
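For question 1, two common ways, sketched below (the REST approach assumes your role is allowed to query the REST API):

```
| eventcount summarize=false index=* index=_*
| dedup index
| table index
```

```
| rest /services/data/indexes
| table title currentDBSizeMB maxTotalDataSizeMB
```

The eventcount version shows only indexes that actually contain events; the rest version shows all defined indexes with their settings.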
↧
↧
string search / regex /get behind a string
My application has multiple plugins, and the Splunk event contains the number of plugins that failed to load. Sometimes all the plugins are active, and sometimes more than 10 plugins fail to load. Here is a sample event. Basically, the word before the string "**IS UNACCOUNTED FOR**" is my plugin name, and I need all the plugin names that appear before that string. As I said, there could be any number of failed plugins in the event. The following example event contains two failed plugins, i.e. 'Announcer for CONF' and 'HipChat for CONF':
___ FAILED PLUGIN REPORT _____________________
1 plugin failed to load during CONF startup.
'com.bsaassian.plugins.authentication.bsaassian-authentication-plugin' - 'SAML for bsaassian Data Center' failed to load.
Unexpected exception parsing XML document from URL [bundle://127.0:0/META-INF/spring/plugin-context.xml]; nested exception is javax.xml.parsers.FactoryConfigurationError: Provider for class javax.xml.parsers.DocumentBuilderFactory cannot be created
Provider for class javax.xml.parsers.DocumentBuilderFactory cannot be created
Provider for class javax.xml.parsers.DocumentBuilderFactory cannot be created
javax.xml.parsers.DocumentBuilderFactory: Provider com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl not found
It was loaded from /apps/bsaassian/CONF.7.6.7/bsaassian-CONF/WEB-INF/bsaassian-bundled-plugins/bsaassian-authentication-plugin-2.0.8.jar
4 plugins are unaccounted for.
Unaccounted for plugins load as artifacts but fail to resolve into full plugins.
'com.wittified.atl-announcer-CONF' - 'Announcer for CONF' IS UNACCOUNTED FOR.
It was loaded from /atlshare/bsaassian/application-data/CONF/plugins/installed-plugins/plugin.2625541172025988687.atl-announcer-CONF-2.3.10-7x.jar
'com.bsaassian.labs.hipchat.hipchat-for-CONF-plugin' - 'HipChat for CONF' IS UNACCOUNTED FOR.
********************************************************************************************************************************************************************************************************
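Based on the sample event above, a sketch using rex with max_match to capture every quoted name immediately before the marker string (field name is hypothetical):

```
your_base_search
| rex max_match=0 "- '(?<failed_plugin>[^']+)' IS UNACCOUNTED FOR"
| mvexpand failed_plugin
| table failed_plugin
```

max_match=0 makes rex return all matches in the event as a multivalue field, and mvexpand splits them into one row per failed plugin.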
↧
Amount of data outlier detection can process
Is there a limitation on the amount of data outlier detection can process? I have several examples processing around 3K rows in a CSV file, but can it process 50K? Does anyone have working examples? Thanks.
↧
Basic information from Microsoft Exchange
Since I'm not an Exchange Administrator, nor do I have access to a test environment, I would like an answer to a specific question from someone more experienced with Exchange:
- Some areas/departments use a common email address (call centers, service desks, work groups such as support). Is it possible to track/identify who the sender was (workstation login ID, IP address, etc.) of a specific email sent through these "group accounts"? What add-ons and apps would be needed to do so?
Thanks!
↧
How to configure Distributed Search Groups for clustered Indexer environment?
**Question:** How do I configure Distributed Search Groups - distsearch.conf - on a search head that runs searches across both clustered and non-clustered indexers?
**Context:**
The documentation on "Configure distributed search groups" [1] explains how to define distributed search groups using distsearch.conf on the search head, but only for the use case of non-clustered peers/indexers.
However, the documentation mentions the following:
> These are some examples of indexer cluster deployments where distributed search groups might be of value:
> Search heads that run searches across both an indexer cluster and standalone indexers. You might want to put the standalone indexers into their own group.
**Problem:**
We already use the distributed search group feature for non-clustered indexers. However, we haven't been successful in getting it to work for non-clustered and clustered indexers together (without using the DMC).
[distributedSearch:groupIDX1]
default = false
servers = myserver1:8089, myserver2:8089
[distributedSearch:groupIDX2]
default = false
servers = myserver3:8089, myserver4:8089
[distributedSearch:groupIDXClustered]
default = false
servers = myserverCluster1:8089, myserverCluster2, myserverCluster3:8089
With a configuration similar to the above we get the warning on the search:
> warn : Search filters specified using splunk_server/splunk_server_group do not match any search peer.
Has anyone been successful in configuring Distributed Search Groups for clustered Indexers?
[1]: http://docs.splunk.com/Documentation/Splunk/7.1.3/DistSearch/Distributedsearchgroups
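One thing worth checking: per the distsearch.conf specification, every entry in `servers` is a host:port pair, and the stanza quoted above omits `:8089` on myserverCluster2. A sketch with explicit ports on every entry (server names are the placeholders from the question):

```
[distributedSearch:groupIDXClustered]
default = false
servers = myserverCluster1:8089, myserverCluster2:8089, myserverCluster3:8089
```

The warning about splunk_server_group not matching any search peer is consistent with a group whose server list does not exactly match the peer names the search head knows about.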
↧
↧
Universal Forwarder not Forwarding?
Hello all!
I have banged my head for about 2 hours trying to figure out why my universal forwarder won't transfer data to my Heavy Forwarder.
Steps I have done:
- Opened receiving **port 80** on Heavy Forwarder
- The heavy forwarder's forwarding port has been configured correctly (HTTP data inputs forward correctly)
- netstat -lpnt -> shows that 0.0.0.0:80 is in **LISTEN** mode
- Using **tcping.exe** from my Windows client, I was able to successfully **/tcping.exe server-ip port**
- **Ping** server-ip is successful
- I added a forwarder server **./splunk add forward-server server-ip:port** with correct port
- I added the server to **NO_PROXY** env var
outputs.conf:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = ip:port
[tcpout-server://ip:port]
inputs.conf
[default]
host = hostname
Universal Forwarder log:
09-20-2018 09:15:52.949 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 249499 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
09-20-2018 09:15:57.504 -0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
09-20-2018 09:15:57.504 -0400 INFO DC:PhonehomeThread - Attempted handshake 2430 times. Will try to re-subscribe to handshake reply
09-20-2018 09:16:00.296 -0400 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
09-20-2018 09:16:00.296 -0400 INFO HttpPubSubConnection - Could not obtain connection, will retry after=72.778 seconds.
09-20-2018 09:16:09.515 -0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
I have also restarted both my heavy forwarder and universal forwarder.
When running **./splunk list forward-server**, my server and IP are listed under "Configured but inactive".
Any thoughts?
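For reference, receiving forwarder (Splunk-to-Splunk) traffic on the Heavy Forwarder requires a splunktcp input; an HTTP data input on port 80 will not accept universal forwarder traffic. A minimal sketch of the receiving side, assuming port 80 really is the intended receiving port (9997 is the conventional choice):

```
# inputs.conf on the Heavy Forwarder -- sketch, adjust the port
[splunktcp://80]
disabled = 0
```

"Configured but inactive" on the forwarder together with the blocked-output warning matches a receiver that is listening for something other than the S2S protocol.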
↧
Splunk add on for AppDynamics
We match our application names to our internal application names in our environment. Some of our applications have a "/" in the name. When we access the metrics URL, it uses the app ID number instead of the application name, so this never causes us a problem. But when we use the summary option, we get errors for the apps that have the "/" because it uses the application name. Here is an example of the error:
09-21-2018 09:55:55.025 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/appdynamics_summary.py" HTTPError: 400 Client Error: Invalid application id outstanding customer balance is specified for url: https://progressive-prod.saas.appdynamics.com/controller/rest/applications/**Oneof%20our%20Apps/withthechar%20notworking**/metric-data?output=JSON&time-range-type=BEFORE_NOW&duration-in-mins=5&metric-path=Overall%20Application%20Performance%7C*
Besides changing our application name, is there any way around this issue?
↧
↧
How do I get min/max of a column chart PER field?
I created values for the average CPU, memory and swap memory usage and managed to get it in a column chart. I'd like to get the chart to display the min/max of each field (cpu, memory, swap) — not the min/max of all the fields by date.
Here is my query and what my chart currently looks like:
index=os (sourcetype=cpu cpu=all) OR (sourcetype=vmstat)
| search host=$server_name$
| eval Percent_CPU_Load = 100 - pctIdle
| eval date=strftime(_time,"%A")
| stats avg(Percent_CPU_Load) avg(memUsedPct) avg(swapUsedPct) by date
| rename avg(Percent_CPU_Load) AS "Avg CPU" avg(memUsedPct) as "Avg Memory" avg(swapUsedPct) AS "Avg Swap Memory"
| stats values by myvalues
| eval sort_field = case(date=="Monday",1, date=="Tuesday",2, date=="Wednesday",3, date=="Thursday",4, date=="Friday",5, date=="Saturday",6, date=="Sunday",7)
| sort sort_field
| fields - sort_field
![alt text][1]
[1]: /storage/temp/256052-splunk.png
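A sketch of one way to get per-field min/max alongside the averages, reusing the fields from the query above: compute the averages by date, then let eventstats add each column's overall min and max as new fields that can be drawn as chart overlays (the short field names are placeholders):

```
index=os (sourcetype=cpu cpu=all) OR (sourcetype=vmstat)
| search host=$server_name$
| eval Percent_CPU_Load = 100 - pctIdle
| eval date=strftime(_time,"%A")
| stats avg(Percent_CPU_Load) AS avg_cpu avg(memUsedPct) AS avg_mem avg(swapUsedPct) AS avg_swap by date
| eventstats min(avg_cpu) AS min_cpu max(avg_cpu) AS max_cpu
            min(avg_mem) AS min_mem max(avg_mem) AS max_mem
            min(avg_swap) AS min_swap max(avg_swap) AS max_swap
```

Unlike a second stats call, eventstats keeps the per-date rows and just annotates every row with the per-field extremes.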
↧
Azure Monitoring Data Add-On and Lookups
We are using the Azure monitoring data add-on to integrate Splunk and Azure. The Azure events have the subscription ID value (the field name is am_subscriptionId) in each event. I would like to be able to put a name/email address to the subscription. I have a lookup table configured which has the fields subscriptionID, subscriptionName, and subscriptionContact. I have attempted to use lookups, to no avail. Below is my search. I would like a table result with am_subscriptionId, subscriptionName, and subscriptionContact displayed.
index=* sourcetype=amal:security
| lookup azure_subscription_id_to_support_group subscriptionID AS am_subscriptionId OUTPUT subscriptionName
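For what it's worth, the lookup syntax above looks plausible; a sketch that also pulls subscriptionContact and tables the result, assuming the lookup definition name and field names from the question:

```
index=* sourcetype=amal:security
| lookup azure_subscription_id_to_support_group subscriptionID AS am_subscriptionId OUTPUT subscriptionName subscriptionContact
| table am_subscriptionId subscriptionName subscriptionContact
```

If this returns empty name/contact columns, the usual culprit is a mismatch between the lookup's subscriptionID values and am_subscriptionId (case, braces, or whitespace).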
↧
↧
How do you set up a time range from 10 PM to 4 AM for a scheduled hourly report?
We set up a report which triggers on an hourly basis from 10 PM to 4 AM, but the 10 PM and 11 PM runs contain the last 24 hours of data. We only need the data from 10 PM to 4 AM. Please let us know what we need to feed into EARLIEST and LATEST.
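A sketch of one way to do this, assuming each hourly run should cover only the hour just ended: schedule the report for the 10 PM-4 AM hours only, and snap the dispatch time range to the last full hour (the report name below is hypothetical):

```
# savedsearches.conf -- sketch
[overnight_hourly_report]
cron_schedule = 0 22-23,0-4 * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
```

With this, the 11 PM run covers 10 PM-11 PM, the midnight run covers 11 PM-midnight, and so on, rather than each run looking back 24 hours.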
↧
How to extract number from a string?
Hi,
I have a field which produces a value like this example: DB=HR_10_7_3043_TGTHRLIVE
I am trying to extract the number and write it in the following way: DB_Version=10.7.3043
How do I get Splunk to cut off the text before and after the number, and then replace each _ with .?
Note: The strings before and after the numbers can vary in length, and the number can vary too.
Many thanks,
Sam
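A sketch, assuming the version is always the underscore-separated run of digits between the two text parts of the DB field:

```
your_base_search
| rex field=DB "_(?<DB_Version>\d+(?:_\d+)+)_"
| eval DB_Version=replace(DB_Version, "_", ".")
```

For DB=HR_10_7_3043_TGTHRLIVE the rex captures 10_7_3043 and the eval turns it into 10.7.3043; because the regex anchors on the underscores around the digit run, the surrounding strings can vary in length.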
↧