Questions in topic: "splunk-enterprise"

Why is my eval search returning empty field results?

Hello, I have searched some of the previous questions, but none seem to pertain to my problem. I am running the search below:

    | jirarest jqlsearch "type = *(typename)* AND \"Environment Type\" = *(environmenttype)* AND (\"Environment Name\" in (*(environmentname1)*, *(environmentname2)*, *(environmentname3)*) OR \"Environment Name\" is EMPTY) AND createdDate >= startOfMonth()"
    | eval Created=strptime(Created, "%d:%m")
    | table Created

The search returns table rows as if it is finding results, but all of the rows are blank. The field I am evaluating is a date/time field, but it contains more data than I need, and I am also trying to present it in a more readable format. Any insight anyone may have will be greatly appreciated. Thank you.
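
A likely cause, offered as a guess from the symptom: `strptime` returns null when the format string does not match the field's contents, and null values render as blank table cells; the `"%d:%m"` format will not match a full Jira created timestamp. A minimal sketch, assuming Jira's usual ISO-8601 format (adjust the format string to whatever `Created` actually contains):

    | eval created_epoch=strptime(Created, "%Y-%m-%dT%H:%M:%S.%3N%z")
    | eval Created=strftime(created_epoch, "%d %b %Y %H:%M")
    | table Created

It is also worth confirming with `table *` that the field really is named `Created` and not, say, `created`, since field names are case-sensitive.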

How do I generate a search to find hosts whose splunkd has restarted?

Can someone please tell me the search to find the hosts whose splunkd has restarted, or that have the "splunkd started Conf mutator lockfile has disappeared" error in splunkd_stderr.log on the forwarder?
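
A minimal sketch, assuming the forwarders send their _internal logs on to the indexers (the "Splunkd starting" message and the log paths are assumptions; adjust to what your splunkd.log actually contains):

    index=_internal source=*splunkd.log* "Splunkd starting"
    | stats count AS restarts latest(_time) AS last_restart BY host
    | convert ctime(last_restart)

For the stderr error, a similar search over `source=*splunkd_stderr.log*` with the quoted error string should surface the affected forwarders, provided that file is being indexed.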

Can't get instantaneous_kbps and average_kbps from metrics.log in universal forwarder

From the documentation: To verify how often the forwarder is hitting this limit, check the forwarder's metrics.log. (Look for this on the forwarder, because metrics.log is not forwarded by default on universal and light forwarders.)

    cd $SPLUNK_HOME/var/log/splunk
    grep "name=thruput" metrics.log

Example: the instantaneous_kbps and average_kbps are always under 256KBps.

    11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122

But when I run grep -i "name=thruput" metrics.log, I don't get any results. So, is there any way to check the instantaneous_kbps and average_kbps?
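
Two things worth checking, stated as assumptions: metrics.log rotates, so older thruput lines may only be in the rotated copies, and the grep must run on the forwarder itself. A minimal sketch:

    cd $SPLUNK_HOME/var/log/splunk
    # include rotated copies such as metrics.log.1, metrics.log.2, ...
    grep "name=thruput" metrics.log* | tail -20

If this forwarder does forward its _internal index (many deployments enable this deliberately), the same numbers can be searched centrally; the host name below is a placeholder:

    index=_internal host=my_forwarder source=*metrics.log* group=thruput name=thruput
    | timechart avg(instantaneous_kbps) avg(average_kbps)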

Why has Splunk stopped indexing my files?

Customer reported that a standalone Splunk indexer had stopped indexing any monitored files. They also noticed that:
- Splunk's _internal index was no longer being written to.
- The Splunk GUI was available and "splunk status" showed Splunk was running.
- Splunk's log files in $SPLUNK_HOME/var/log/splunk were being written to correctly.
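
A few first diagnostics, offered as a sketch rather than a known fix (paths assume a default install): blocked ingestion queues and a full disk are common causes, and both can be checked without the _internal index:

    # look for blocked queues in the on-disk metrics log
    grep "blocked=true" $SPLUNK_HOME/var/log/splunk/metrics.log | tail
    # indexing pauses when free space drops below minFreeSpace (default 5000 MB)
    df -h $SPLUNK_HOME/var/lib/splunk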

Why am I unable to delete indexes from the Splunk GUI? Why do I have to restart Splunk when I create a new Index from the GUI?

Customer reported several issues with index management using the Splunk GUI:
- Unable to create new indexes from Settings > Indexes > New Index (GUI reports that a restart is required)
- Unable to create new indexes from Data Inputs > Files & Directories > New (GUI reports that a restart is required)
- Unable to delete indexes from Settings > Indexes > Delete (GUI will not accept any input apart from "Cancel")
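
As a workaround sketch, the same operations are available from the CLI, which tends to report errors more verbosely; run these on the indexer as the user that owns Splunk (the index name is a placeholder):

    $SPLUNK_HOME/bin/splunk add index myindex
    $SPLUNK_HOME/bin/splunk remove index myindex

Note that, as far as I recall, `remove index` deletes the index's configuration stanza but leaves the data directories on disk.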

How can I hide columns from a table but still use them in a drilldown?

Hey there, I am trying to use my columns named unixtime_start and unixtime_end for a drilldown, but I don't want them to appear in my table. What can I do so I can still use them? Thanks, SJanasek
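
A minimal Simple XML sketch, assuming a dashboard table (the query, field names, and token names below are placeholders): keep the columns in the search results, restrict what the table displays with `<fields>`, and read the hidden columns in the drilldown via `$row.<fieldname>$` tokens:

    <table>
      <search>
        <query>... | table host count unixtime_start unixtime_end</query>
      </search>
      <!-- only these columns are rendered; the others stay in the result set -->
      <fields>["host","count"]</fields>
      <drilldown>
        <set token="tok_earliest">$row.unixtime_start$</set>
        <set token="tok_latest">$row.unixtime_end$</set>
      </drilldown>
    </table>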

Why does my dropdown say "Duplicate values causing conflict"?

I am trying to output the CUSTOMER_NAME via a CSV lookup. My lookup file (lookup_test.csv) looks like this:

    CUSTOMER_ID,CUSTOMER_NAME
    39076,Customer1
    56706,Customer2
    20294,Customer3

My dropdown includes the following search string:

    index="index1" sourcetype="source1" | lookup kunde_test CUSTOMER_ID OUTPUT CUSTOMER_NAME | top 10 CUSTOMER_NAME

It now shows the following message: "Duplicate values causing conflict". But there aren't any duplicate values, so what could be the problem here?
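
One thing worth trying, as a sketch (the cause here is a guess): make the search return exactly one trimmed row per customer, so the input cannot see two entries that differ only in whitespace, in null results from failed lookups, or in the extra columns `top` adds:

    index="index1" sourcetype="source1"
    | lookup kunde_test CUSTOMER_ID OUTPUT CUSTOMER_NAME
    | eval CUSTOMER_NAME=trim(CUSTOMER_NAME)
    | where isnotnull(CUSTOMER_NAME)
    | stats count BY CUSTOMER_NAME
    | sort 10 - count
    | fields CUSTOMER_NAME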

Summary indexing: is it possible to hide psrsvd fields?

Hello, is it possible to hide psrsvd fields, at least those like **psrsvd_vm** that carry sensitive data such as session IDs? Thanks.
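
The psrsvd_* fields are the prestats bookkeeping that the si commands (`sistats`, `sitimechart`, ...) write into the summary index, so they cannot simply be dropped at write time without breaking those commands. They can be hidden at search time, though; a minimal sketch (index and source names are placeholders):

    index=summary source="my_summary_search" | fields - psrsvd_*

If the concern is what gets stored at all, one alternative is to replace the si command with an explicit `stats ... | collect index=summary`, which writes only the fields you name, at the cost of losing the prestats re-aggregation the si commands provide.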

Splunkd.log error rolling hot bucket to warm

I just noticed yesterday that there are errors in splunkd.log. Splunk is still running and so far we are not encountering any problems, but I am concerned it might become an issue in the near future. Does anybody here know what the problem could be? The error log:

    01-25-2017 12:15:57.882 +0900 ERROR databasePartitionPolicy - Failed again to move bucket, reason='failed to get necessary info for hot bucket='E:\Splunk\var\lib\splunk\defaultdb\db\hot_v1_180' from hot mgr'. Will retry later.
    01-25-2017 12:15:57.882 +0900 ERROR databasePartitionPolicy - failed to get necessary info for hot bucket='E:\Splunk\var\lib\splunk\defaultdb\db\hot_v1_180' from hot mgr
    01-25-2017 12:15:57.882 +0900 ERROR databasePartitionPolicy - failed to get necessary info for hot bucket='E:\Splunk\var\lib\splunk\defaultdb\db\hot_v1_180' from hot mgr
    01-25-2017 12:15:54.897 +0900 ERROR databasePartitionPolicy - failed to move hot with id=180, due to exception='Unable to rename from='E:\Splunk\var\lib\splunk\defaultdb\db\hot_v1_180' to='E:\Splunk\var\lib\splunk\defaultdb\db\db_1484904368_1483489423_180' because Access is denied.'
    01-25-2017 12:15:54.835 +0900 ERROR databasePartitionPolicy - Unable to rename from='E:\Splunk\var\lib\splunk\defaultdb\db\hot_v1_180' to='E:\Splunk\var\lib\splunk\defaultdb\db\db_1484904368_1483489423_180' because Access is denied.

This seems to be the first time this error has occurred (I ran the search over All time). There were no configuration changes, restarts, or updates. It seems like it just happened. I hope someone can help.
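
The root error is the Windows rename failing with "Access is denied", which usually points at permissions on the db directory or at another process (antivirus, backup agent, file indexing service) holding the bucket open. Two quick checks, as a sketch (handle.exe is a Sysinternals tool, if available):

    :: confirm the account running splunkd has Modify rights on the index path
    icacls "E:\Splunk\var\lib\splunk\defaultdb\db"
    :: see which process has files in the stuck hot bucket open
    handle.exe E:\Splunk\var\lib\splunk\defaultdb\db\hot_v1_180

Excluding $SPLUNK_HOME\var\lib\splunk from real-time antivirus scanning is a common preventative step.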

Unable to deploy to Splunk cluster

Hi, while trying to deploy the Funnel app to a Splunk cluster, we got stuck with the following error message: "Error while creating deployable apps...No such file or directory". Any ideas what is causing this? The problem seems to be tied to the file package.json in the "for-in" directory.
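
A guess at the mechanism: the cluster master re-reads every file when it builds the deployable bundle, so a broken symlink or unreadable file inside the app's node_modules tree ("for-in" is a common npm package name) can surface as "No such file or directory". A sketch for locating the offender on the master (the app directory name is a placeholder):

    # broken symlinks anywhere in the staged app
    find $SPLUNK_HOME/etc/master-apps/funnel -xtype l
    # the specific file the error names
    find $SPLUNK_HOME/etc/master-apps -path "*for-in*" -name package.json -ls

Stripping development-only node_modules content from the app before staging it is a common workaround for node-based apps.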

How do I reindex data by changing crcSalt when crcSalt = <SOURCE> is used?

I have a data input which uses crcSalt = <SOURCE>. The task is to reindex these data, preferably using the crcSalt option (see [How to reindex data from a forwarder][1]). I already tested crcSalt = REINDEXME, but of course it doesn't work. Any ideas, apart from "modify the first line of the files to reindex" or "clean the fishbucket"? [1]: https://answers.splunk.com/answers/72562/how-to-reindex-data-from-a-forwarder.html
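
If changing crcSalt does not trigger a reindex, resetting the individual fishbucket entries is a middle ground between editing the files and cleaning the whole fishbucket; a sketch using the btprobe utility (run on the stopped forwarder; the file path is a placeholder):

    $SPLUNK_HOME/bin/splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /path/to/logfile --reset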

Backup Strategies for Indexer

What strategies do people use for backups of their buckets? Is there a clean way to identify "new" buckets for a given day based on their file name?
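
On the naming question: a warm bucket directory encodes its event-time range as db_<newest_epoch>_<oldest_epoch>_<id>, so buckets containing recent events can be picked out by name alone. A sketch, assuming GNU date and the default index path:

    cutoff=$(date -d '1 day ago' +%s)
    for d in $SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_*; do
      newest=$(basename "$d" | cut -d_ -f2)   # first number is the newest event time
      [ "$newest" -ge "$cutoff" ] && echo "$d"
    done

Note this keys on event time, not on when the bucket rolled to warm; and hot buckets (hot_v1_<id>) are still being written, so they are generally excluded from file-level backups.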

Dynamic earliest time in a Splunk query

We have to implement the following scenario in Splunk. We are indexing a log "extractA" with _time set to the settlement day, which can be up to 20 days ahead. We are running a query to check the events whose settlement date is today, using earliest=@d. There are two conditions we have to handle:

1. If the settlement date is a Monday, say 30 Jan 2017, it is posted on Saturday 28 Jan 2017; but on Saturday our query (index=abc sourcetype=extractA earliest=@d) will not return the "extractA" events, because earliest is @d while the event time is 30 Jan 2017. Posting happens only from Tuesday to Saturday morning.
2. We maintain a holiday calendar in a holiday.csv file. On holidays, for example if 30 Jan 2017 is listed in holiday.csv, the earliest time should shift to 31 Jan 2017 and pick up all events with a 31 Jan 2017 timestamp.

A sketch of one approach follows. Thanks, Pradeep
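
One way to sketch a dynamic earliest, assuming holiday.csv has a `date` column formatted %Y-%m-%d (column name and format are assumptions): have a subsearch decide the effective day and hand it to the outer search as its earliest:

    index=abc sourcetype=extractA
        [ | inputlookup holiday.csv
          | eval today=strftime(now(), "%Y-%m-%d")
          | where date == today
          | eval earliest=relative_time(now(), "+1d@d")
          | appendpipe [ stats count | where count == 0 | eval earliest=relative_time(now(), "@d") ]
          | return earliest ]

The appendpipe clause supplies the default @d when today is not a holiday. The weekend rule would need the same pattern keyed on strftime(now(), "%w"), shifting Saturday's run forward to the following Monday.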

What is a good approach to extract fields from a single event that captures a structured data table?

We have events coming in from stdout, such as the output of the top command, where a single event captures a multi-line structured data table. For example, this is a single Splunk event:

    PID   USER  PR  NI  VIRT   RES   SHR  S %CPU %MEM    TIME+   COMMAND
    11981 root  20   0  2121m  860m  6996 S  0.3  1.8  36:17.82  python
    12149 root  20   0  19.1g  1.0g  6556 S  0.3  2.2  45:00.03  java
    13744 root  20   0  4959m  207m  5676 S  0.3  0.4  22:26.91  java
        1 root  20   0  19364  1232  1064 S  0.0  0.0   3:43.65  init

What is a good approach to do field extractions on this type of data, where a single event is a structured data table? Thanks
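
The multikv command is built for exactly this shape: it splits a tabular event into one result per row, using the header line for field names (header characters like % get sanitized, e.g. %CPU typically becomes pctCPU). A minimal sketch, with the sourcetype as a placeholder:

    sourcetype=top_output
    | multikv
    | table PID USER pctCPU pctMEM COMMAND

If the header line is not detected automatically, `multikv forceheader=1` tells it to treat the first line of the event as the header.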

Splunk rex help: counting 200 response codes by node and JVM

I am trying to count the number of 200 response codes from an access log. Can you please help me get the output? The log looks like this:

    "POST /webservice/services/serviceABC HTTP/1.1" A_Cell/A_node/A_Cluster_jvm 117 118 200
    "POST /webservice/services/serviceABC HTTP/1.1" B_Cell/B_node/B_Cluster_jvm_2 164 819 200
    "POST /webservice/services/serviceABC HTTP/1.1" A_Cell/C_node/C_Cluster_jvm_1197 917 200

Is it possible to get the output of stats count by each node and each JVM in that node?
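
A sketch against the three sample lines above (the index name is a placeholder, and the regex assumes the cell/node/jvm triple always follows the quoted request while the status code ends the line; adjust if real events differ):

    index=access_log
    | rex "HTTP/1\.1\"\s+(?<cell>[^/\s]+)/(?<node>[^/\s]+)/(?<jvm>\S+)"
    | rex "(?<status>\d{3})\s*$"
    | search status=200
    | stats count BY node, jvm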

Is there a way to quantify how much data is sent between search head and indexers?

Hi, I'm at the planning stage of designing a Splunk deployment for our global setup. I've been tasked with making this as lightweight on the network as possible, as our WAN links are expensive (in time and cost) and I can't get in the way of existing traffic. So I think I need to ignore the best-practice examples of having indexers replicate their data between them, as that appears to be all about search performance; we're happy to accept slower searches over the replication cost. Please point me at docs if this idea is covered, but I haven't found anything myself. I'm planning the following:

* One indexer in each data center around the globe, with hosts sending their logs to their local indexer and nowhere else.
* One search head in each data center, with users using the search head nearest to them.

Am I right in thinking that a search head will send a query to each indexer (or should I be saying search peer?), and each will prepare a result set and send it back to the requesting search head to collate and present to the user? If that's all true and would work, is there a way to quantify how much data is sent between the indexers and the search head? Is it as simple as just the _raw values that meet the search criteria, with the search head doing any further processing? Thanks in advance!
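
On quantifying the traffic: what the peers return depends on the search. A plain event search streams the matching _raw events to the search head, whereas a search ending in a reporting command (stats, timechart, ...) returns pre-aggregated partial results, which is usually far smaller. One rough, hedged way to gauge it after the fact is the audit trail, assuming the _audit index is searchable (event counts are a proxy, not bytes):

    index=_audit action=search info=completed
    | stats sum(event_count) AS events_returned sum(scan_count) AS events_scanned BY user

The per-search Job Inspector on the search head also breaks down time spent streaming results from each peer, which helps spot the searches moving the most data.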