Is it possible to enable indexerDiscovery on a heavy forwarder? I followed the instructions here (http://docs.splunk.com/Documentation/Splunk/6.6.2/Indexer/indexerdiscovery), but I haven't been able to get it to work. If I check https://localhost:8000/services/server/info on a heavy forwarder, it lists the server role only as kv_store, which tells me that indexer discovery is not working. If I manually add the indexer peers as forwarding destinations, the server role is correctly updated to heavyweight_forwarder.
I don't see anything in the logs that points to an issue. Is there a way to check the status of a heavy forwarder, or the status of indexer discovery (i.e., list the indexers that were discovered)?
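For comparison, the forwarder-side pieces described on that docs page look roughly like this in outputs.conf (the group name, master URI, and key below are placeholders, not values from the original post):

```
# outputs.conf on the heavy forwarder -- minimal indexer discovery sketch
[indexer_discovery:group1]
pass4SymmKey = <key shared with the cluster master>
master_uri = https://master.example.com:8089

[tcpout:group1_tcpout]
indexerDiscovery = group1

[tcpout]
defaultGroup = group1_tcpout
```

If the manually listed `server =` peers work but this stanza does not, comparing the two tcpout groups is a reasonable first check.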
↧
Indexer Discovery on Heavy Forwarder
↧
I run a search against an index and it returns events. When I look at Indexes in the web interface, the index is not listed, but under Data inputs it is listed as an index. Why does the web interface not show it under Indexes?
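As a side note, one way to list every index the instance knows about, independent of that web UI page, is the REST search command:

```
| rest /services/data/indexes | table title
```

If the index appears here but not on the Indexes page, the difference is in the UI view rather than the index itself.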
↧
How to push updates from SH deployer and Index Cluster master?
I had to update a props.conf and I am trying to push it via my index cluster master and my SH cluster deployer.
What is the command to push from my index cluster master?
When I try pushing from the SH cluster deployer, I get this error:
Error while deploying apps to target=https://txxxxxxx2:8089 with members=2: No captain found amongst members
I used the command ./splunk apply shcluster-bundle -target https://txxxxxx2:8089.
This worked last week when I pushed a new app.
Thanks!
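For the index-cluster side, the equivalent commands are run on the cluster master:

```
# Validate the bundle first, then push it to the peers
splunk validate cluster-bundle
splunk apply cluster-bundle

# Check the rollout status afterwards
splunk show cluster-bundle-status
```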
↧
How to add an add-on icon using Splunk Add-on Builder App?
Hi,
I am trying to add an icon or logo to the add-on I am creating with the Splunk Add-on Builder app, so that it appears on Splunkbase, before packaging it.
I could not find any documentation on this.
Can someone please guide me?
Thanks a lot!
↧
Is there a better way to represent varying data sets in chart visualization?
Hi all,
I am having an issue with a dashboard I am working on. The values of the buckets I am using vary from 1 to ~800, which makes it impossible to effectively convey the data with this visualization, as seen in the attached picture. Has anyone found a way to better represent widely varying data sets, or have any suggestions?
Thanks in advance
![alt text][1]
[1]: /storage/temp/254706-screen-shot-2018-08-15-at-90959-am.png
↧
How do I take data from a search and output it to REST API?
I need to pass data from Splunk to an external system based upon a triggered Alert.
Could I use the REST API to pass the JSON data or would a python script be a better approach?
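For reference, a minimal sketch of the script approach: a custom alert-action script that wraps a result row as JSON and POSTs it to an external system. The endpoint URL and field names here are placeholders, not anything Splunk provides by default.

```python
#!/usr/bin/env python3
"""Sketch of a custom alert-action script (assumes Python 3 and a
hypothetical external endpoint; URL and field names are placeholders)."""
import json
from urllib.request import Request, urlopen


def build_payload(result_row):
    # Wrap a single result row from the triggered alert as a JSON string.
    return json.dumps({"source": "splunk_alert", "result": result_row})


def post_alert(url, result_row):
    # POST the JSON payload to the external system; returns the HTTP status.
    req = Request(
        url,
        data=build_payload(result_row).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:  # requires a reachable endpoint
        return resp.status


if __name__ == "__main__":
    # Example row, such as an alert might pass via its results file.
    print(build_payload({"host": "web01", "count": "42"}))
```

Whether this beats a direct REST call mostly depends on whether the external system needs authentication or payload shaping that a simple webhook cannot do.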
↧
Report on the latest events
Hello,
I am trying to create a report that only looks at the latest events for a sourcetype.
The sourcetype is an indexed text file, and it pulls in the events every time the file changes.
This is the working search:
index=ops sourcetype="csv-marketData" earliest=-12h@h | where Price!="NA"
| eval cal_mkt_cap=round(Share_Outstanding * Price,3)
| eval rnd_MKT_CAP = round(MARKET_CAP,3)
| eval perc_range = (cal_mkt_cap / rnd_MKT_CAP)*100
| where perc_range < 99
| eval rnd_perc = round(perc_range,2)
| rename cal_mkt_cap as "Calculated MKT CAP"
| rename rnd_MKT_CAP as "Provided MKT CAP"
| rename rnd_perc as "%"
| table ID "Calculated MKT CAP" "Provided MKT CAP" "%"
I would like this table to only show results from the latest set of events. Each event set has the same _time value. New events can come in minutes apart or once daily, so I would like to always be reviewing the latest contents of the indexed file.
Thanks for your help.
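One way to restrict this to the most recent event set (a sketch, assuming every event in a set shares the same _time, as described above) is to have a subsearch set earliest to the latest timestamp:

```
index=ops sourcetype="csv-marketData"
    [ search index=ops sourcetype="csv-marketData"
      | stats max(_time) as earliest
      | return earliest ]
| where Price!="NA"
| ...
```

The rest of the pipeline (the evals, renames, and table) stays unchanged after the `where` clause.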
↧
Group similar URLs into a single event?
I am doing a search to get the total count of different URIs and their response times. My result has multiple events of similar URLs -
search/abc/1/mno/count/ctr/div/1/link/4
search/abc/1/mno/count/ctr/div/1/link/4,5
find/xyzi/1/fig/count/exact/abc/24
find/xyzi/1/fig/count/exact/abc/24/25
My search query:
| rex "\s+\/(?<url_path>\S+)"
| search url_path!="*error*"
| eval Date=strftime(_time, "%Y-%m-%d")
| chart count over url_path by Date
| addtotals
| sort - Total
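One way to collapse near-identical URLs into one group (a sketch; the assumption that only the trailing numeric segments differ is based on the sample URLs above) is to normalize the path before charting:

```
| eval url_group = replace(url_path, "(/\d+(,\d+)?)+$", "")
| chart count over url_group by Date
```

With the sample data, both `search/abc/1/mno/count/ctr/div/1/link/4` and `.../link/4,5` collapse to `.../link`, and both `find/xyzi/.../abc/24` and `.../abc/24/25` collapse to `.../abc`.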
↧
Group similar URLs into a single field?
I am doing a search to get the total count of different URIs and their response times. My result has multiple events of similar URLs -
search/abc/1/mno/count/ctr/div/1/link/4
search/abc/1/mno/count/ctr/div/1/link/4,5
find/xyzi/1/fig/count/exact/abc/24
find/xyzi/1/fig/count/exact/abc/24/25
My search query:
| rex "\s+\/(?<url_path>\S+)"
| search url_path!="*error*"
| eval Date=strftime(_time, "%Y-%m-%d")
| chart count over url_path by Date
| addtotals
| sort - Total
↧
Dashboard set input variables with a token from another input
I'm in the process of building out a new dashboard that will have 3 input selects.
1. Datetime
2. Input1
3. Input2
Input1 is dependent on Datetime, and Input2 is dependent on Input1. I'm using searches to populate all 3 inputs, but I need the populating searches for Input1 and Input2 to follow those dependencies, so that everything in the dashboard only returns results for the values in the dropdowns. I know I can hard-code earliest/latest in my search, but I'd rather have Input1 derive its data from the Datetime selector. I don't see an option in the time-selector values to tie it back to a token, the way you can when you add a search to a panel.
We're currently on Splunk 7.0.3
Thank you for your assistance.
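For what it's worth, the usual cascading pattern in Simple XML is to reference the time input's tokens in the populating searches. A sketch (index, field names, and token names below are placeholders):

```xml
<fieldset>
  <input type="time" token="timetok">
    <default><earliest>-24h@h</earliest><latest>now</latest></default>
  </input>
  <input type="dropdown" token="input1tok">
    <search>
      <query>index=main | stats count by field1</query>
      <earliest>$timetok.earliest$</earliest>
      <latest>$timetok.latest$</latest>
    </search>
    <fieldForLabel>field1</fieldForLabel>
    <fieldForValue>field1</fieldForValue>
  </input>
  <input type="dropdown" token="input2tok">
    <search>
      <query>index=main field1="$input1tok$" | stats count by field2</query>
      <earliest>$timetok.earliest$</earliest>
      <latest>$timetok.latest$</latest>
    </search>
    <fieldForLabel>field2</fieldForLabel>
    <fieldForValue>field2</fieldForValue>
  </input>
</fieldset>
```

Changing the time picker re-runs Input1's populating search, and choosing a new Input1 value re-runs Input2's.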
↧
Events lost when exporting to csv
Hello
When I search from the Splunk search head, I get thousands of events.
When I export the results to a CSV, I only get 64 lines.
What could be the problem?
Thanks!
↧
Can triggered alerts be sent to a separate Search Head?
Imagine, several stovepipes exist... all separately configured...
Due to constraints, your customer doesn't want to turn the stovepipes into Heavy Forwarders and build an indexing tier and Search Head. So, the thought occurred to me:
Can you identify alerts that you want to be triggered, then, in turn, send those alerts to a separate SH?
I've read solutions that write the triggered alerts to syslog, but, I was curious if there were any other creative ways to send triggered alerts to a separate Search Head?
↧
Data Visualization Collision
Hi all,
I am having trouble with data visualizations. Two of my data points are layered on top of each other. I have tried adjusting the scale and size of the visualization and can't figure it out. Does anyone know how to fix this collision?
![alt text][1]
Thanks in advance
[1]: /storage/temp/254710-screen-shot-2018-08-15-at-31043-pm.png
↧
How to sort within groups in Splunk
How do I sort within groups?
Also, does Splunk have functions similar to Oracle's analytic functions?![alt text][1]
[1]: /storage/temp/255716-组内排序.png
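In SPL, something like Oracle's ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...) can be sketched with sort plus streamstats (the field names below are placeholders):

```
... | sort group_field, - value_field
| streamstats count as rank by group_field
```

The sort puts each group's events together in descending value order, and streamstats then numbers them 1, 2, 3, ... within each group.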
↧
working with IP addresses - creating a table of old IP addresses
**Background:**
I have a directory/folder of CSV files containing the following fields:
mac;IP;devicename;interface;vlan, which is being indexed into switchlogs.
[collected from all my LAN switches]
Currently, to check whether an IP address is older than 90 days, I use the following search:
index="switchlogs" IP=xxx.xxx.xxx.xxx daysago=90 | timechart count | sort by _time desc
Any results returned tell me that the IP has been active in the last 90 days.
eg.
2018-08-16 0
2018-08-15 92
2018-08-14 108
2018-08-13 112
2018-08-12 106
**Question:**
How do I get a table of IP addresses which have expired [not seen in 90 days] in a single search?
[maybe I can use a lookup table to check against?]
For example,
xxx.xxx.xxx.xxx last seen on the network
Thank you.
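One single-search sketch, assuming the IP field is extracted and roughly a year of data is retained: take the latest sighting per IP over the long window, then keep only the IPs not seen in the last 90 days.

```
index="switchlogs" earliest=-1y
| stats latest(_time) as last_seen by IP
| where last_seen < relative_time(now(), "-90d")
| eval last_seen = strftime(last_seen, "%Y-%m-%d")
| table IP last_seen
```

This produces the "xxx.xxx.xxx.xxx last seen on the network" table directly, without needing a lookup, as long as the index covers the full lookback window.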
↧
Indexing Data from NFS without mounting
Hello, I'm relatively new to Splunk, so please bear with me. Is there any way to point at my shared storage data without actually doing an NFS mount? Can I point the universal forwarder at the NFS server name and directory path so that it fetches the data and sends it to the indexer, all from my host machine?
Thank you for your time.
↧
Indexing data with multiple forwarders on the same host
Hello,
I googled around for similar questions but could not find anything, so I'm sorry if this has already been asked. If I want to index large amounts of data using multiple forwarders on the same host, is there a way to configure the forwarders to act in a distributed fashion? I know about pipeline sets for index parallelization (https://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Pipelinesets), but that does not quite solve the issue.
What do people in general do to solve such a problem? Thank you!
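For reference, the pipeline-set setting from the linked docs goes in server.conf on the forwarder; the value 2 below is just an example, and each pipeline consumes its own share of CPU and throughput:

```
# server.conf on the (heavy) forwarder
[general]
parallelIngestionPipelines = 2
```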
↧
How to get the result of sorting within a group
I want to get the result of sorting within a group.
![alt text][1]
[1]: /storage/temp/255717-range.png
↧
How to configure Splunk to get a field value from a Splunk DB Connect data pull
Hi - we have a requirement to get data via DB Connect.
When pulling the data, we also need to take the value of a field (the data field) and append that value to the Splunk source field (source=filename_).
We know that formatting can be done in props and transforms, but getting the field value from the DB Connect pull is the part we are not familiar with.
Any help would be greatly appreciated.
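As a sketch of the props/transforms side (the sourcetype name, regex, and field layout are assumptions about your data; this only works if the field value is present in _raw at parse time):

```
# props.conf
[my_dbconnect_sourcetype]
TRANSFORMS-set_source = append_field_to_source

# transforms.conf
[append_field_to_source]
SOURCE_KEY = _raw
REGEX = data_field=(\w+)
DEST_KEY = MetaData:Source
FORMAT = source::filename_$1
```

The DEST_KEY = MetaData:Source line is what rewrites the source field, with $1 carrying the captured value.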
↧
Palo Alto App Eventgen
The old Palo Alto app included an Eventgen, but I cannot find it now.
I want to generate logs automatically; is there a good way to do this?
Thanks guys.
↧