Channel: Questions in topic: "splunk-enterprise"

For my particular configuration, where and how do I delete all traces of all events ingested during prior runs?

I have a 4-server Splunk scenario:

- deployment server
- index server
- search head server
- a deployment client server (with a Splunk Universal Forwarder)

I used the deployment server web interface to create a *.csv file monitor on the deployment client server, using the csv sourcetype. The data is ingested into a single index. I'm developing a script on the deployment client server to create the multiple, uniquely named csv files. This is the cycle I'll use until the script produces what I need:

1. Run the script to create multiple, uniquely named csv files (Splunk ingests the csv files).
2. Verify what's been ingested by searching the index from the search head server web interface. Observe that the number of events created equals the sum of rows from all csv files, and that the "source" field for each event is one of the uniquely named csv files.
3. Clean the index using the CLI on the index server.
4. Delete the fishbucket on the deployment client server.
5. Make needed changes to the script.
6. Repeat from step 1.

The problem: after I verify what's been ingested by searching the index from the search head server web interface, I create a pivot table where each row is a source. The first time I did this, it looked OK. But on the second and subsequent runs, the number of rows in the pivot table far outnumbers the number of events ingested. I assume that I've not deleted all events, from all relevant Splunk repositories (fishbucket, etc.?), that were ingested during the prior run.

For my particular configuration, where and how do I delete all traces of all events ingested during prior runs?
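For reference, steps 3 and 4 of the cycle above can be sketched as shell commands. This is a minimal sketch, not a definitive procedure: `my_index` is a placeholder for whatever index the csv monitor actually writes to, `$SPLUNK_HOME` is assumed to point at the install directory on each host, and `splunk clean eventdata` requires the instance to be stopped first.

```shell
# On the index server: stop Splunk, then wipe all events from the target
# index. "my_index" is a placeholder; -f skips the confirmation prompt.
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean eventdata -index my_index -f
$SPLUNK_HOME/bin/splunk start

# On the deployment client (Universal Forwarder): stop the forwarder and
# delete the fishbucket directory so previously seen csv files are treated
# as new and re-ingested on the next run of the script.
$SPLUNK_HOME/bin/splunk stop
rm -rf "$SPLUNK_HOME/var/lib/splunk/fishbucket"
$SPLUNK_HOME/bin/splunk start
```

Note that this only clears raw event data and the forwarder's seen-file checkpoints; anything derived from the events on the search head (for example, accelerated data model summaries behind a pivot) is a separate store and is not touched by these commands.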

