Hi. I have a dashboard that allows users to click on a row and pass a parameter to a deep link in another application; however, the parameter is only part of the string in "$click.value$". For example, "United States of America (USA)" is the "$click.value$", and the parameter that needs to be passed is "USA". How can I strip out "USA" in the eval tag below? Thanks.
How to strip out "USA" from "United States of America (USA)"
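One way to do this (a sketch; the token name `country_code` and the target URL are placeholders, not from the original dashboard) is to extract the text inside the trailing parentheses with `replace()` in a drilldown `<eval>` tag:

```xml
<drilldown>
  <!-- capture whatever is inside the trailing parentheses, e.g. "USA" -->
  <eval token="country_code">replace($click.value|s$, ".*\((\w+)\)\s*$", "\1")</eval>
  <!-- hypothetical deep link; substitute your real URL -->
  <link target="_blank">https://other-app.example.com/view?country=$country_code$</link>
</drilldown>
```

The `|s` filter wraps the clicked value in quotes so it reaches `replace()` as a string literal; the regex keeps only the last parenthesized word.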
↧
Customize token for a drilldown from a dashboard
↧
How to get difference in results from two searches over different timelines
I want to get the top 20 errors of the day and the top 20 errors of the week, then the difference between the two results, i.e. new errors seen in the last 24 hours that were not seen earlier.
I tried this, but it throws an error:
| multisearch [search ERROR earliest=-1d | top limit=20 error_field | eval type="search1" ] [search ERROR earliest=-8d latest=-2d | top limit=20 error_field | eval type="search2"] | eval difference = search1-search2
Error thrown:
Multisearch subsearches may only contain purely streaming operations (subsearch 1 contains a non-streaming command.)
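That error is expected: `multisearch` requires purely streaming subsearches, and `top` is non-streaming. One alternative sketch (assuming the field really is called `error_field`): search the last 24 hours while excluding anything that already appeared in the prior week's top 20, via a NOT subsearch:

```spl
search ERROR earliest=-1d error_field=*
    NOT [ search ERROR earliest=-8d latest=-1d
          | top limit=20 error_field
          | fields error_field ]
| top limit=20 error_field
```

The subsearch returns last week's top 20 values of `error_field`, and `NOT` filters those out of the last-24-hour results before taking the new top 20.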
↧
Searching an inputlookup for the results of your query
I'm a little stumped by what I am trying to achieve: looking up values from a CSV based on the results I get when performing a search.
The csv is defined as a lookup and contains field1,field2.
When I search, I get a value in the format of field1 in the csv and would like to display the corresponding field2 in my search results. For example, username,displayname.
I've looked at the inputlookup and lookup documentation but am unsure how to pass results, or filter a subsearch's results, for the value.
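Assuming the CSV is defined as a lookup named `users.csv` with columns `username,displayname`, and the search results contain a `username` field (both names are placeholders), a minimal sketch would be:

```spl
index=main sourcetype=auth
| lookup users.csv username OUTPUT displayname
| table _time username displayname
```

`lookup` matches each event's `username` against the CSV's `username` column and adds the corresponding `displayname`. If the field in the events has a different name, use `lookup users.csv username AS <your_field> OUTPUT displayname`.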
↧
Indexer down, expected forwarder behavior?
We have a small Splunk infrastructure: one indexer, one search head, and 300 machines with forwarders installed. Our indexer has gone down with hardware issues. Our log traffic is less than 10GB a day over the weekends and up to 60GB during the weekdays. Our indexer went down on Saturday night. From what I've read, "if the indexer refuses data (full or down) then the forwarders fill their memory queue up to 2MB (default), then pause the monitoring".
My question is about "pause the monitoring": will the forwarders pick up where they left off once the indexer is back online?
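For monitor inputs the forwarder records its read position per file (the fishbucket), so tailing normally resumes from where it left off once the indexer is reachable again. To also protect events already in flight, you can enable indexer acknowledgment and enlarge the output queue; a sketch (the stanza name, server, and queue size are illustrative, not your actual config):

```ini
# outputs.conf on the universal forwarder
[tcpout:primary_indexers]
server = indexer.example.com:9997
# keep events queued until the indexer acknowledges they were written
useACK = true
# allow a larger output queue while the indexer is unreachable
maxQueueSize = 7MB
```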
↧
Search Head Cluster Latency
I am trying to figure out how I can measure the latency that my search head cluster nodes are experiencing between each other.
The configuration of the search head cluster is Splunk 6.6.3, all servers are Windows Server 2012 R2, 2 of the members are in 1 data center (along with the deployer) and the other node is in another data center.
The search head cluster has been up for a while and was running without any real issue, but after this month's Windows security patching and reboots, the captain fails over to a different member pretty regularly. Before, it only failed over to another member when we were performing work on the cluster.
I am figuring that the issue has to do with latency between the cluster members and want to query that metric.
Does anyone have any other ideas why it might suddenly start having this issue? (I have other standalone search heads which got the same security patches and are having no issues.)
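Splunk doesn't expose a direct inter-member network latency metric, but cluster health and heartbeats are queryable over REST from any member; a sketch (available field names vary by version, so check the raw output first):

```spl
| rest /services/shcluster/member/members
| table label host_port_pair status last_heartbeat
```

For raw network latency between the data centers, OS-level tools (ping, or a scheduled scripted input recording round-trip times) are more direct; a stale heartbeat here would corroborate a connectivity problem.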
↧
Count of users logged in.
Here are two sample events
Event 1 -
2018-09-10 11:17:57,982 INFO [http-nio-127.0.0.1-8085-exec-130] [BreakssFogFilter] BF27462 GET https://rambo.ixngames.com/start.action 7485905kb
Event 2 -
2018-09-10 11:10:55,644 INFO [http-nio-127.0.0.1-8085-exec-51] [BreakssFogFilter] ZD07220 POST https://rambo.ixngames.com/userLogout.action 1615031kb
Event 1 indicates that a user just logged in; Event 2 indicates that a user logged out. Around 30 similar events, in slightly different formats, get created when a user logs in or out, each specifying the user name.
We are trying to figure out how many distinct users are logged in to the server at any specific hour by analyzing events in the formats above.
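One simplified sketch (the index, sourcetype, and rex are assumptions based on the two sample events; it counts users with login activity in each hour rather than tracking sessions that span hours):

```spl
index=app sourcetype=app_log ("start.action" OR "userLogout.action")
| rex "\]\s+(?<user>\w+)\s+(?:GET|POST)\s+"
| eval action=if(searchmatch("userLogout.action"), "logout", "login")
| where action="login"
| timechart span=1h dc(user) AS distinct_users_logged_in
```

For true concurrency across hours you would pair each login with its logout (e.g. with `transaction` or `streamstats`) and use the `concurrency` command; that is considerably more involved.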
↧
Dynamic Names and Table Pivot
Hi,
I have a single CSV source where the column names are not fixed, nor is the number of columns. A simple search returns the following:
![alt text][1]
The number and the letter after the string PhysicalDisk are variable. I'm calculating avg() and perc95() for each value.
How can I get the following output from this source, where the Instance is part of the original field name?
![alt text][2]
[1]: /storage/temp/254896-capture1.jpg
[2]: /storage/temp/254897-capture.jpg
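Without seeing the screenshots this is a guess at the shape, but the usual pattern for pivoting variable column names is untable + rex + xyseries. A self-contained sketch with made-up column names like `PhysicalDisk0C_avg`:

```spl
| makeresults
| eval PhysicalDisk0C_avg=12.3, PhysicalDisk0C_p95=30.1, PhysicalDisk1D_avg=8.7, PhysicalDisk1D_p95=21.4
| untable _time column value
| rex field=column "PhysicalDisk(?<Instance>\w+)_(?<measure>\w+)"
| xyseries Instance measure value
```

`untable` turns the wide row into (column, value) pairs, `rex` splits the instance out of each column name, and `xyseries` pivots back to one row per Instance with one column per measure.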
↧
Predicting CPU load in Splunk
Hi, I need to predict the CPU % when the load is increased. So basically, suppose 10,000 requests are hitting a platform per day, averaging a CPU utilization of 70%. Now, if the number of requests doubles to, say, 20,000 per day, what will the CPU utilization look like?
How should I approach this type of work?
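One crude sketch (the index, sourcetype, and field names are assumptions, and it assumes CPU scales linearly with request volume, which is rarely true near saturation):

```spl
index=perf sourcetype=platform_metrics
| timechart span=1d count AS requests avg(cpu_pct) AS avg_cpu
| eval cpu_per_1k_requests = avg_cpu / (requests / 1000)
| eval projected_cpu_at_20k = cpu_per_1k_requests * 20
```

For anything beyond this back-of-the-envelope ratio, the Machine Learning Toolkit's `fit LinearRegression` over historical (requests, cpu) pairs would give a fitted model instead of a single proportion.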
↧
How to change a password in the CLI without typing it in cleartext
Hello,
I was wondering how to change a password using the CLI without typing it into the command in cleartext.
This is primarily because PowerShell commands are logged in the environment and so any password changes will be written to the log. It is not feasible to disable PowerShell logging to change the password either, so this will have to be done in the CLI.
Does anyone know of a method to accomplish this?
Thank you in advance!
↧
Single-server Splunk (S1) on a Windows Server 2012 2-node failover cluster - good idea?
We need a High Availability (HA) Splunk environment. The ideal architecture would be a Distributed Clustered Deployment + SHC - Single Site (C3 / C13), which includes a 2-node indexer cluster, a 3-node search head cluster, 1 deployment server, etc.
But we only have 2 available servers - virtual machines with Windows Server 2012. So, I'm thinking of building a 2-node Windows Server Failover Cluster, then installing a single Splunk server (S1) (one instance including search head and indexer) on this cluster. Is this possible?
I don't have much experience with Splunk architecture. I did some research online, and it seems no one has mentioned this solution. Is this a good idea? Any pros and cons? Any suggestions are welcome. Thank you!
↧
Dashboard Visualization having four status (Stopping, Stopped, Starting, Started)
Hi Guys,
I'm working on one PoC and am weak in search commands.
After extracting fields, I get the four search strings given below for an individual service status:
- index=* sourcetype=* service_status="*Core Stopping"
- index=* sourcetype=* service_status="*Core Stopped"
- index=* sourcetype=* service_status="*Core Starting"
- index=* sourcetype=* service_status="*Core Started"
I want to create a dashboard visualizing the service status (single pane, 4 statuses). Please help.
Thanks, much appreciated.
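Rather than four separate searches, you can run one search and split the status out into a field; a sketch (replace `index=* sourcetype=*` with your real index and sourcetype):

```spl
index=* sourcetype=* service_status="*Core *"
| rex field=service_status "Core (?<status>Stopping|Stopped|Starting|Started)$"
| stats count BY status
```

Feed this to a single-value visualization with trellis layout split by `status` (or a simple stats table) to get all four statuses in one pane.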
↧
Can you help me create a dropdown with a multiple checkbox select?
Can you help me create a dropdown with a multiselect combination?
Detailed explanation:
1. The dropdown would populate when clicked.
2. After clicking, the values in the dropdown would be in the form of multiselect values.
Thanks
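In Simple XML, the built-in multiselect input behaves like a dropdown whose entries are checkbox-style multi-select values; a sketch (the token name and choices are placeholders):

```xml
<input type="multiselect" token="env_tok" searchWhenChanged="true">
  <label>Environment</label>
  <choice value="*">All</choice>
  <choice value="prod">Production</choice>
  <choice value="dev">Development</choice>
  <default>*</default>
  <!-- turn the selected values into: env IN ("prod","dev") -->
  <prefix>env IN (</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
</input>
```

The choices can also be populated dynamically from a search using `<fieldForLabel>` and `<fieldForValue>`.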
↧
Can Splunk be served from a different endpoint?
I am currently trying to run Splunk behind a reverse proxy so that there can be multiple web services on the same domain.
The goal is to run splunk from e.g.:
https://example.com:9000/abc/
so that this maps to e.g.
http://some-local-machine:8000/
where splunk is running at port 8000.
I was able to configure nginx to handle normal requests, and even the redirects coming from splunkweb, in the right way, but it seems some of the assets contained in the page are not referenced relative to the current page; rather, they contain an absolute path determined from the Host field of the request issued by the proxy.
Is there some way to let Splunk know that it is supposed to run from an endpoint other than /, so that it can inject this endpoint (as a prefix) into all links needed for the dynamic parts of the page?
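Splunk Web has a setting for this: `root_endpoint` in web.conf makes splunkweb generate its links under the given prefix. A sketch, using the path from the example above:

```ini
# $SPLUNK_HOME/etc/system/local/web.conf on the Splunk instance
[settings]
# serve Splunk Web under /abc instead of /
root_endpoint = /abc
```

After a restart, Splunk Web serves from http://some-local-machine:8000/abc/ and prefixes its asset links accordingly, so nginx can proxy https://example.com:9000/abc/ straight through.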
Thanks in advance!
↧
How to change time token when comparing two time range
I'm trying to compare two time ranges in one chart, the way it was taught in this article: https://www.splunk.com/blog/2012/02/19/compare-two-time-ranges-in-one-report.html
My question is how I should change the query so that I can display it in a dashboard and be able to change the time range (e.g. display two time ranges: the last 3 hours, and the same 3 hours one week earlier)?
E.g. the time token is called "bandwidth_time_range", and my query would be:
index=xxx earliest=$bandwidth_time_range.earliest$ latest=$bandwidth_time_range.latest$ |eval period="today"| append [search index=xxx earliest=$bandwidth_time_range.earliest$-7d@m latest=$bandwidth_time_range.latest$-7d@m | eval period="last_week" | eval _time=_time+(60*60*24*7)] | timechart span=1m sum(bytes) by period
The panel didn't return a timechart. Instead it says "invalid value "now-7d@m" for time term "latest"".
Is there anything I can do to link the query and the time picker together?
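Token arithmetic like `$...$-7d@m` isn't evaluated inside earliest/latest, hence the error. One workaround pattern (a sketch; it relies on the subsearch inheriting the panel's time range, which holds when the time picker is applied via the panel's `<earliest>`/`<latest>` rather than hard-coded inline) computes the shifted window with `addinfo` and `return`:

```spl
index=xxx
| eval period="today"
| append
    [ search index=xxx
        [ | makeresults
          | addinfo
          | eval earliest=relative_time(info_min_time, "-7d@m"),
                 latest=relative_time(info_max_time, "-7d@m")
          | return earliest latest ]
      | eval period="last_week"
      | eval _time=_time+(60*60*24*7) ]
| timechart span=1m sum(bytes) by period
```

`addinfo` exposes the inherited window as `info_min_time`/`info_max_time`, and `return earliest latest` injects the shifted bounds into the appended search as time terms.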
↧
Is it possible to pass an event id to HEC to avoid indexing the same event twice?
My app notifies Splunk via a call to HEC on data changes. As the data is actually stored as a series of events, it is quite straightforward to use Splunk for analysis. But due to some internal reasons, the same event may be delivered to HEC twice, and it is crucial that only one copy is stored in Splunk. The most obvious way to achieve this would be to post a unique id with each event and have Splunk ignore any event whose id matches a previously indexed one, but I failed to find anything like this in the documentation.
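Splunk does not deduplicate at index time, so HEC will index both copies. The usual workaround is indeed to post a unique id field with each event, but to suppress duplicates at search time instead; a sketch (the index, sourcetype, and field names are assumptions):

```spl
index=app_changes sourcetype=app:change
| dedup event_id
| stats count BY change_type
```

If storing duplicates at all is the concern, the alternative is to make the sender idempotent (track delivered ids on the app side), since HEC offers no built-in duplicate suppression.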
↧