How do I index different occurrences of the same field "A" value for a unique ID? The set of field "A" values is finite, and each ID can have multiple identical values.
After a few searches I have a table; I'll try to explain with an image:
![alt text][1]
My main difficulty is that I can't calculate the time difference between any two points of field "A", because there are identical field "A" values. I think indexing them this way would help me.
[1]: /storage/temp/214579-index-field.jpg
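For example, something like this sketch is what I mean by indexing (SPL; the unique ID field name `ID` is a placeholder):

```
... base search ...
| sort 0 ID _time
| streamstats count as occurrence by ID A
| eval A_indexed = A . "_" . occurrence
```

With each occurrence numbered, the time difference between two points of the same value could then be taken per group, e.g. with `| streamstats range(_time) as gap by ID A`.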
↧
How do I index a field where identical values occur under different IDs?
↧
What is meant by "Spent X ms reaping search artifacts"?
We saw a spike in memory usage on one of the cluster search heads, and the spike persisted for around 12 hours. When comparing splunkd.log across all the search heads, the impacted search head showed something different. The warning in splunkd.log looks like this:
Spent 10777ms reaping search artifacts in /opt/splunk/var/run/splunk/dispatch
Can anyone help me find out whether the above would cause excessive memory use?
↧
↧
Adding an API key for Splunk Add-on Builder
Hi
I added an API key for Splunk Add-on Builder, but I am getting the error "no connection could be made because the target machine actively refused it" when sending to the URL https://api.splunk.com. Can someone please help me find my mistake? I also tried adding the API key in the header section, but the issue remains.
![alt text][1]
[1]: /storage/temp/214583-splunk-addon-builder.png
↧
Can Splunk automatically get CDR data?
The way my team originally set up Polycom Resource Manager (RPRM) was to have it generate a CDR (call detail report) every 7 days and place the file into a folder we set up on the network. Then, when Splunk detects a new file in this folder, it automatically inputs the data into a specific index.
But since we want Splunk to provide us real-time metrics, is there a way I could hook up Splunk to this polycom resource manager so that it could input the data from rprm directly?
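For context, the current folder pickup is just a standard monitor stanza, roughly like this (path, index, and sourcetype are placeholders):

```
# inputs.conf on the forwarder that can reach the share
[monitor://\\fileserver\rprm\cdr]
index = polycom_cdr
sourcetype = rprm:cdr
disabled = 0
```

Since monitored files are indexed as soon as they change, shortening the RPRM export interval may already get close to real time, short of a direct API hookup.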
↧
Trying to change the background color of a panel that has a single value (using XML)
This is my XML: ["host","source","sourcetype"]
I am trying to change the background color of the single value panel, but no matter what color code I put in there, the background color remains black.
The second thing I am trying to do is change the color of the actual value displayed on the single value panel from black to something else.
The value shown in the single value panel is textual, not numeric.
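What I've been trying is along these lines (a sketch; the range values are guesses on my part, and as I understand it `colorMode` set to `block` colors the background while `none` colors the value text, though range-based coloring may only apply to numeric results):

```
<single>
  <search><query>...</query></search>
  <option name="colorBy">value</option>
  <option name="colorMode">block</option>
  <option name="rangeValues">[0,50]</option>
  <option name="rangeColors">["0x65a637","0xf7bc38","0xd93f3c"]</option>
</single>
```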
↧
↧
Tableau Splunk Extract (no index field)
Hi,
I am using Tableau Desktop (10.x) to connect to Splunk using the ODBC driver. It extracts from a saved Splunk report that searches across multiple indexes.
I unfortunately cannot see the index field in the data that is extracted when using this method. However, if I manually extract the data from Splunk (csv extract), then the index field is part of the extracted data set.
Is this a feature or a bug? Or am I doing something wrong?
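One workaround I'm considering (untested): since `index` is an internal field, it may not be exposed through the ODBC schema, so copying it into a regular field inside the saved report might make it visible to Tableau (`foo`/`bar` are placeholders for the real indexes):

```
index=foo OR index=bar ...
| eval index_name = index
```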
-Harish
↧
How can you specify additional characters to the indexing tokenizer?
We have messages that have tabs replaced with #011, along with other control characters (see the rsyslog EscapeControlCharactersOnReceive setting), but we do not want to turn this setting off. Ideally, we want Splunk to split on #011 in addition to the existing splitting tokens (real tab, spaces, etc.). When we have log lines like:
\#011Testing 123
We are unable to search for "Testing" without specifying it as a wildcard or using some other substring technique. We would like to be able to search for Testing as if the log line did not have the #011 replacement.
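What I have in mind is something like this (stanza and sourcetype names are hypothetical, and I'm not sure whether multi-character sequences like #011 are honored as breakers):

```
# segmenters.conf
[custom_breakers]
# the default major breakers (see segmenters.conf.spec), plus #011
MAJOR = [ ] < > ( ) { } | ! ; , ' " * \s \t #011

# props.conf
[my_sourcetype]
SEGMENTATION = custom_breakers
```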
↧
↧
DB Connect 3 Hive Connection
Hey,
We are trying to connect DB Connect 3 to our Hive DB. Unfortunately, I'm running into some issues.
I have followed the instructions from this link, but I am still facing issues: https://www.splunk.com/blog/2015/02/25/splunk-db-connect-cloudera-hive-jdbc-connector.html
Our Hive instance is on version 1.2.1. I have attempted to use both the Hive 1.2.1 and 2.5.6 JDBC drivers; I get the same error from both sets of drivers.
**The JDBC URL I'm using**
jdbc:hive2://{IP}:{Port}/default
**The error - I have looked through the jar file, and this class is in there**
io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: d5f34034d981f17d
java.lang.NoClassDefFoundError: Could not initialize class com.cloudera.hive.hive.core.Hive2JDBCDriver
**My config entry in db_connection_types.conf (attempt using a different jar: hive-jdbc-1.2.1.jar\org\apache\hive\jdbc\HiveDriver)**
[hive3]
displayName = Hive Server 3
jdbcDriverClass = org.apache.hive.jdbc.HiveDriver
defaultPort = 10001
connectionUrlFormat = jdbc:hive2://{0}:{1}/{2}
defaultSchema = default
defaultCatalogName = default
**Attempt using HiveJDBC4.jar**
[hive4]
displayName = Hive Server 4
dbcDriverClass = com.cloudera.hive.jdbc4.HS1Driver
defaultPort = 10001
connectionUrlFormat = jdbc:hive2://{0}:{1}/{2}
defaultSchema = default
defaultCatalogName = default
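For what it's worth, a corrected sketch of that second stanza that I plan to try (assuming the key has to be spelled `jdbcDriverClass`, and that the Cloudera JDBC4 class for HiveServer2 `jdbc:hive2://` URLs is `HS2Driver` rather than `HS1Driver`):

```
[hive4]
displayName = Hive Server 4
jdbcDriverClass = com.cloudera.hive.jdbc4.HS2Driver
defaultPort = 10001
connectionUrlFormat = jdbc:hive2://{0}:{1}/{2}
defaultSchema = default
defaultCatalogName = default
```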
Thanks
James
↧
Unable to accelerate some of the Splunk_SA_CIM data models (like email, web, malware etc)
Some of the Splunk_SA_CIM data models are not getting accelerated. I have tried to accelerate over different time ranges, like 1 day and 1 month. The status just shows "building" at 0% and never progresses. When I enable acceleration and check splunkd.log, nothing shows up. The dashboard for the data model also doesn't show anything other than some garbage earliest and max times. Any suggestions on how this can be checked and resolved are greatly appreciated.
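For troubleshooting I've started with the following (as far as I understand the REST endpoint and the internal scheduler logs; please correct me if these are off):

```
| rest /services/admin/summarization

index=_internal sourcetype=scheduler savedsearch_name="_ACCELERATE_*"
```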
↧
Splunk - Add-on for JIRA - Setup
Hi,
I am new to the Add-on for JIRA. I am unaware of what needs to be installed and set up to work with, or fetch details from, a JIRA environment through Splunk.
Could anyone please help me with the basic procedures needed to start working with JIRA from Splunk? A step-by-step procedure would be helpful to any beginner who searches answers.splunk.com.
↧
UF stopped sending data after a reinstall
We were facing an issue with Splunk log forwarding to the indexer cluster.
I found that our Splunk Enterprise instances are on 6.5.3 while the UFs were on 6.6.2, so I uninstalled the 6.6.2 UF and reinstalled version 6.5.2 on the same machine.
Then I applied the same configuration on the new UF. In the logs I can see the UF is connected to the indexer, but no data is being forwarded to the Enterprise instance.
I feel there is something I missed during the reinstallation.
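For reference, the configuration I re-applied looks roughly like this (server names and paths are placeholders):

```
# outputs.conf on the UF
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# inputs.conf on the UF
[monitor:///var/log/app]
index = main
```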
Thanks.
Vikram.
↧
↧
How to append a field value to events based on their category
I have all events logged under one index. The events aren't categorized. Below is the query:
index=main host="prod*" AND host= "*web*" AND _raw!="*sql*" AND exception!="*db2*" error earliest=1504915200 latest=1510358400 | eval layer="Application"| append [search index=main host="prod*" MQ _raw="*ERROR*" earliest=1504915200 latest=1510358400 | eval layer="Queue"]|stats count by layer
Is it possible to combine both into a single query, something like below, so that the same index need not be queried twice?
index=app host="prod*" _raw!="*INFO*" error earliest=1504915200 latest=1510358400 |eval layer=case(host=="*web*" OR host=="*wap*" AND _raw!="" AND _raw!="*sql*" AND _raw!="*MQ*" AND exception!="*db2*" AND exception !="*solr*", "Application", raw=="*MQ*", "Queue") |stats count by layer
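Or, since `eval` comparisons don't honor wildcards, perhaps something using `match()`/`like()` along these lines (a sketch; the conditions are approximated from the two searches above):

```
index=main host="prod*" earliest=1504915200 latest=1510358400
| eval layer=case(
    match(_raw, "MQ") AND match(_raw, "ERROR"), "Queue",
    (like(host, "%web%") OR like(host, "%wap%")) AND NOT match(_raw, "sql")
        AND NOT like(exception, "%db2%") AND NOT like(exception, "%solr%"), "Application"
  )
| where isnotnull(layer)
| stats count by layer
```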
↧
Looking for Thoughts on using lookup tables when data is indexed
I know I can create lookup tables and use them during a search. We would like to apply that same process to fields as they are indexed.
So, rather than mapping the field user from Xxxad to Paul during a search, we want to do this when the event is indexed.
Is this possible?
Does this impact indexing and what are the impacts on searching?
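If lookups can't be applied at index time, would a static rewrite with `SEDCMD` in props.conf be the closest alternative? A sketch using the example values above (the sourcetype is a placeholder):

```
# props.conf on the indexer or heavy forwarder
[my_sourcetype]
SEDCMD-user_rewrite = s/Xxxad/Paul/g
```

(I realize this permanently changes the indexed raw data and adds parsing cost, which may be part of the impact question.)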
Thanks!
↧
Getting Error - Unknown "Sid", The search job "admin__admin_Y29ydmlsX3NlY3VyaXR5X2FuYWx5dGljcw__search1_1505452955.8" was canceled remotely or expired.
I get this error only when I run the search from a dashboard. If I run the search independently, I don't get this error.
↧
SPLUNK & vCenter - Changing Management Port 8089 on Only the vCenter Universal Forwarders
Hi - I've seen various discussions on this topic, namely port 8089 being used by vCenter as well as by Splunk's deployment server, but they aren't always resolved.
In our server environment the vCenter ports can't be changed, so we're considering changing this port only on the Universal Forwarder, not on the Deployment Server; otherwise the whole environment would need to be adjusted. What would be the best-practice process for changing these on the vCenter (forwarder) side?
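What I have in mind is roughly this (hostnames and ports are placeholders; I'd verify `mgmtHostPort` against the web.conf spec):

```
# web.conf on the vCenter UF: move only the UF's management port
[settings]
mgmtHostPort = 127.0.0.1:8090

# deploymentclient.conf still points at the DS's unchanged port
[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
```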
Note: as an alternative, we're looking at syslogs and ingesting those.
Any practical experiences with users having had to perform these already?
Thanks
↧
↧
Splunk - join two search queries having common field
Hi,
I need to join two splunk search queries based on a common field (JoinId).
All I would like the output to return is how many values of that common field are matched across both queries and how many from query2 are unmatched.
Can someone help me with this?
**query1:**
index="source1" "layer-monitor" ServiceEvent "debug"| search NOT ErrorMessage=* | stats count(JoinId) as "Total success"
**query2:**
index="source2" sourcetype="sourceData" "Message=Log Message invoked" JoinId
I would like to know how many JoinId values from query2 are unmatched with respect to query1's JoinId values.
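I was thinking along these lines, but I'm not sure it's right (a sketch: tag each search, then count sources per JoinId):

```
index="source1" "layer-monitor" ServiceEvent "debug" NOT ErrorMessage=* JoinId=*
| eval src="q1"
| append [
    search index="source2" sourcetype="sourceData" "Message=Log Message invoked" JoinId=*
    | eval src="q2"
  ]
| stats values(src) as src by JoinId
| eval status=case(mvcount(src)=2, "mapped", src=="q2", "unmapped from query2")
| stats count by status
```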
↧
CPE Certificate
Dear Splunk Community,
I'm reaching out because I completed the Splunk Power User course some time ago and want to collect CPE credits for various other certifications that require proof of continuing education.
So far I've received one of the certificates from do_not_reply_educ@splunk.com with a long link starting with http://www.splunk.training/getpdf.asp?data=something.
Now I would like to get certificates for the other completed courses, so I reached out to elearn@splunk.com, without success so far.
Would anyone know how to get these certificates?
Thanks in advance!
See below for a "dummy" certificate:
![alt text][1]
[1]: /storage/temp/214589-splunk-certificate-dummy.png
↧
List of users in AD that consistently log in after working hours
I'm trying to find users in AD who always log in after working hours, but I have had no success.
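For concreteness, the direction I'm attempting (assuming Windows Security logon events, EventCode 4624, and an 08:00-18:00 working day, all of which are assumptions):

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624
| eval hour=tonumber(strftime(_time, "%H"))
| eval after_hours=if(hour>=18 OR hour<8, 1, 0)
| stats count as total sum(after_hours) as after_hours_logins by user
| where after_hours_logins = total AND total > 0
```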
↧