Hi,
I'm trying to show all the source types within the last 24 hours (I set that by using presets), and if a source type has no data, I still want to show the name of the sourcetype but with a count of 0 (representing no data).
This is what I'm doing now, but it only shows the source types that have data for the last 24 hours.
index=* | chart count over sourcetype
| eval name=if(count=="0", "0", "1")
Please help. I searched everywhere and tried so many things, but still no luck. Also, I'm trying to use the trellis visualization to represent those source types.
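One approach that might work (a minimal sketch, assuming you maintain a lookup file, here hypothetically named expected_sourcetypes.csv with a single sourcetype column listing every sourcetype you want displayed): get the fast counts with tstats, append the full list with a count of 0, and keep the maximum per sourcetype:
| tstats count where index=* by sourcetype
| append [| inputlookup expected_sourcetypes.csv | eval count=0]
| stats max(count) as count by sourcetype
tstats respects the time picker, and the result is one count per sourcetype, which should feed the trellis visualization directly.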
↧
How do you show all source types even if no data is available?
↧
How do I create a histogram to show distribution?
I have a search like this:
My Search | chart count(data.url) as SongsPlayed over userEmail
It gives me a list of users and the number of songs they listened to over a time range.
I would like a chart that breaks the users down into groups: those who listened to 0-10 songs, then up to 20, 30, etc.
How do I do that in Splunk?
Eva
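A minimal sketch of one way to do this, using bin to bucket the per-user counts (span=10 is an assumption; adjust to taste):
My Search
| chart count(data.url) as SongsPlayed over userEmail
| bin SongsPlayed span=10
| stats count as NumUsers by SongsPlayed
bin rewrites SongsPlayed into ranges such as 0-10, 10-20, and so on, and the final stats counts how many users fall into each range.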
↧
Why am I receiving "500 Internal Server Error" after login? (Windows Server 2012 R2 - Splunk 7.2.3)
We recently updated our Splunk server to version 7.2.3, and I am now receiving a "500 Internal Server Error" immediately after logging in with the admin account.
I'm not sure where to look in the logs for more information. Any help on where to look to get more details is appreciated.
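For what it's worth, the web tier usually logs the stack trace behind a 500. Two places that may be worth checking (a sketch, not a definitive diagnosis): $SPLUNK_HOME/var/log/splunk/web_service.log and splunkd.log on the server, or the same data via the _internal index:
index=_internal sourcetype=splunk_web_service log_level=ERROR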
↧
How do I remove suffix and prefix from a search for a dropdown selection in a dashboard?
Hi,
I have a dashboard with a dropdown selection. For some of the panels further on, I needed to add a prefix and suffix to the token to simplify the search. But then I realized that, for another panel, I need the clean value from the dropdown selection, i.e., without the prefix and suffix.
How can I remove the prefix and suffix in the panel below? I will appreciate an answer.
Thank you!
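One pattern that may help (a sketch in Simple XML; the token names are hypothetical): keep the prefix/suffix on the main token, and capture the raw selection into a second token via the input's <change> handler, where $value$ is the selection before the prefix and suffix are applied:
<input type="dropdown" token="mytok">
  <prefix>sourcetype="</prefix>
  <suffix>"</suffix>
  <change>
    <set token="mytok_clean">$value$</set>
  </change>
</input>
Panels that need the decorated value keep using $mytok$; the panel that needs the clean value uses $mytok_clean$.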
↧
Using the Splunk Add-on for ServiceNow, how do you connect to a ServiceNow instance in the cloud that uses ADFS SSO for authentication?
Our ServiceNow instance is running in the cloud, and we use ADFS SSO authentication to connect to it. We got the add-on working with a local account, but we are unable to authorize with the ServiceNow web service using an AD account.
Does this add-on support SSO authentication?
↧
How do I replace text within a field with text from another field?
I have events that contain multiple fields. For example:
PARAM1: Thing1
PARAM2: Thing2
PARAM3: Thing3
MESSAGE: Refer to P1 and P2 in conjunction with P3 and escalate as need be.
What I'd like to create is a message in which those references are populated with the actual values, everything in one sentence / field.
For example:
MESSAGE: Refer to Thing1 and Thing2 in conjunction with Thing3 and escalate as need be.
Any suggestions on how to make this happen would be greatly appreciated.
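A minimal sketch with eval replace(), which accepts a field as the replacement value (the \b word boundaries are just to avoid partial matches; field names are taken from the example above):
| eval MESSAGE=replace(MESSAGE, "\bP1\b", PARAM1)
| eval MESSAGE=replace(MESSAGE, "\bP2\b", PARAM2)
| eval MESSAGE=replace(MESSAGE, "\bP3\b", PARAM3)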
↧
Fast searches for a count of events in various ways
I've been looking for ways to get fast results for inquiries about the number of events for:
1. All indexes
2. One index
3. One sourcetype
And for #2 by sourcetype and for #3 by index.
Here are the ideas I've come up with, and I thought I'd share them, plus start a Splunk Answer that others can add to. If you have something clever in this general area (that's fast), please share it here.
Count of events for an index or across all of them with eventcount:
| eventcount summarize=false index=winevent_index
There is no way to restrict it to a particular sourcetype or source, and the Time Picker has no effect on it -- it counts all events in an index, for all time.
Here is how to look at all the non-internal indexes:
| eventcount summarize=false index=* report_size=true
Similar search with tstats:
| tstats count where index=* groupby index,_time span=1d
This does respect the Time Picker, so if you do last 7 days you get a count for each index, for each day.
This gives the count of events for one index, with Time Picker set to Week to date:
| tstats count where index=winevent_dc_index groupby index,_time span=1d
index              _time       count
winevent_dc_index  2019-02-03  7765708
winevent_dc_index  2019-02-04  9837331
winevent_dc_index  2019-02-05  10624149
winevent_dc_index  2019-02-06  10198089
winevent_dc_index  2019-02-07  5475228
But I hadn't been able to figure this out for a sourcetype-based search until today. This works great on the main index, which has lots of sourcetypes:
| tstats count where index=main groupby index,sourcetype,_time span=1d
Whereas this search provides the count for a particular sourcetype, by index, by day:
| tstats count where sourcetype=syslog groupby index,sourcetype,_time span=1d
↧
Performance implications of a two node index cluster with ReplicationFactor=2 and SearchFactor=2
We are preparing to move from a single indexer to an indexer cluster. I'm trying to determine: what are the performance implications of a two-node index cluster with the replication factor set to two and the search factor also set to two?
In the "Managing Indexers and Clusters of Indexers" manual, under the section "How indexer clusters work", subsection "Buckets and indexer clusters"
https://docs.splunk.com/Documentation/Splunk/7.2.3/Indexer/Bucketsandclusters
under the heading "Data files", it states:
"If the cluster has a search factor greater than 1, some or all of the target peers also create index files for the data. For example, say you have a replication factor of 3 and a search factor of 2. In that case, the source peer streams its raw data to two target peers. One of those peers then uses the raw data to create index files, which it stores in its copy of the bucket. That way, there will be two searchable copies of the data (the original copy and the replicated copy with the index files)."
I'm reading that as: only the raw data is replicated, not the index files; the index files are recreated on the peer. So in a two-node cluster with a replication factor of 2 and a search factor of 2, both nodes would always be indexing the data.
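That reading matches the quoted passage: with RF=2 and SF=2 on two peers, every peer both indexes its own incoming data and builds index files for its replicated copies, so plan indexing capacity for roughly double the work per peer. For reference, a sketch of the corresponding manager-side server.conf for this scenario:
[clustering]
mode = master
replication_factor = 2
search_factor = 2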
↧
Create a crosstab with sparse data
My vulnerability data looks like this:
Machine  MachineType  VulnCode  Impact
-------  -----------  --------  ------
A        X            100       5
A        X            101       4
A        X            102       3
A        X            103       5
B        X            200       5
B        X            201       3
C        Y            101       4
D        Y            200       5
D        Y            201       3
E        Z            103       5
F        Z            201       3
I want a result like this:
MachineType  Impact=5  Impact=4  Impact=3
-----------  --------  --------  --------
X            3         1         2
Y            1         1         1
Z            1         0         1
I tried appendcols with a savedsearch but got `Found circular dependency when expanding savedsearch`.
Thank you.
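A minimal sketch that should produce that crosstab directly, with chart zero-filling the sparse cells (the rename is optional cosmetics for the column headers):
| chart count over MachineType by Impact
| fillnull value=0
| rename "5" as "Impact=5", "4" as "Impact=4", "3" as "Impact=3"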
↧
DB Connect 3.1 login failed
I am connecting to a SQL Server database using an Active Directory password. I have no issues connecting with query manager; however, neither the generic MS SQL driver nor the jTDS driver works. There are no further logs.
Has anyone been able to use DB Connect with this connection type?
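One thing that might be worth trying (a sketch; host, port, database, and domain below are placeholders): the jTDS driver can do Windows/AD authentication over NTLM if you pass the domain in the JDBC URL, which you can edit on the connection if your DB Connect version exposes the JDBC URL:
jdbc:jtds:sqlserver://dbhost:1433/mydb;domain=MYDOMAIN;useNTLMv2=true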
↧
Can you add data models to the Splunk Common Information Model (CIM) app?
I haven't been able to find an answer to this in the documentation. Can you add data models to the Splunk Common Information Model (CIM) app? Or do you always have to use one of the default data models?
↧
Report that compares search performance after reconfiguration
Earlier today we changed the searching (peer config) so that our Search Heads will only perform searches across 4 physical indexers.
We did this in an attempt to reduce what we think are issues with some indexer instances on virtual machines.
What report can I run that will best highlight performance for searches, comparatively? My head's awash in Indexing Performance vs Search Activity vs Search Usage, and I keep sliding down rabbit holes. And no, we didn't do the reconfig based on another report, nor did we baseline anything to work off of, report-wise. :(
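A minimal sketch of a before/after comparison from the audit trail (the -1d@d cutoff is an assumption; set it to when you made the change):
index=_audit action=search info=completed total_run_time=*
| eval period=if(_time < relative_time(now(), "-1d@d"), "before", "after")
| stats count avg(total_run_time) perc90(total_run_time) by period
If the averages and 90th percentiles drop for the "after" period, the reconfiguration helped.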
↧
Problem showing status duration: timechart with stacked bars in 30-min spans
This could get a little tedious but here goes:
I have call centre data that gives me the user's status, whether in a call or in another status like coaching or on a break.
I have the start time of the status change and the event timestamp, from which I can calculate the duration of the status to determine how long the user was on a call, in a meeting, etc.
Here is a typical timeline for a user's status over, say, 2 hours:
in a call - 40min
after call work - 10min
in a call - 20min
after call work - 10min
coaching - 20min
break - 20min
Each status that runs longer than a minute will have multiple events, each with a timestamp further from the StatusStarttime, so the duration increases until the events finish for that status; this latest event is the one I grab and plot on a timeline.
It might look a bit like this:
![alt text][1]
[1]: /storage/temp/267596-splunk-timeline.png
The client would like to see a 100% stacked bar in 30-min increments. As you can see from the chart, there are many events with durations that cross the 30-min boundaries, so the stacked bars rarely add up.
Is there a way to split the events (like the one with the red arrow) with overlapping durations and divide them correctly into the different 30-min time slots?
I told you it was tedious.
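A sketch of one way to split them (field names are assumptions based on your description: StatusStarttime as epoch seconds, a computed duration in seconds, and a Status field): expand each event into one row per 30-minute slot it overlaps using mvrange and mvexpand, then clip the duration to each slot:
| eval start=StatusStarttime, end=start+duration
| eval slot=mvrange(floor(start/1800)*1800, end, 1800)
| mvexpand slot
| eval seg_start=max(start, slot), seg_end=min(end, slot+1800)
| eval seg_duration=seg_end-seg_start, _time=slot
| timechart span=30m sum(seg_duration) by Status
Each expanded row carries only the portion of the duration that falls inside its slot, so the 30-min stacked bars should add up.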
↧
Salesforce Add-on configuration page hangs after upgrade to v2.0.0
I upgraded the Salesforce add-on to v2.0.0. Upon updating the password, the configuration page hangs.
What did I do wrong?
↧
Archiving Splunk data from an indexer cluster
Hello,
I have a Splunk Indexer cluster. The cluster consists of 3 peer nodes, with a replication factor of 3.
My issues are surrounding freezing off old log data.
1. I need to be able to archive off old logs. The documentation does not give a definitive way to do this in a clustered environment. I would think that since I have a replication factor of 3, each indexer has a complete copy of all the data, and therefore I would only need to freeze data from one peer node.
2. If the observation in point 1 is correct, then since all configuration should be the same between indexers in a cluster, I don't think I can use the native Splunk config for archiving log data (or can I? See the sketch below.)
3. How have others handled this?
Does anyone have any advice on how to best proceed?
Cheers!
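On point 2: the native coldToFrozenDir setting can still be used in a cluster; the caveat is that every peer freezes its own copies, so with a replication factor of 3 you would archive up to three copies of each bucket and would need to de-duplicate afterwards. A sketch of the indexes.conf you would push from the master via master-apps (index name and path are placeholders):
[my_index]
coldToFrozenDir = /opt/splunk/archive/my_index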
↧
Subscription ID not showing in app installed on search head | rest queries localhost
I have the template installed on a SH, and the Azure collection apps (MSCS, Azure Monitor, etc.) are all installed on a HF.
If I run the search to list the subscriptions on the HF, all is good, as that's where the config is. But running it on the SH displays no answer, and an error:
Failed to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-mscs_azure_resource_inputs?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.
Failed to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-mscs_azure_audit_inputs?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.
The search is:
| rest /servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-mscs_azure_resource_inputs | append [ | rest /servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-mscs_azure_audit_inputs] | stats count by subscription_id
So I tried adding the HF as a remote host, but I still do not get any subscriptions showing:
| rest splunk_server=remote_HF /servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-mscs_azure_resource_inputs | append [ | rest splunk_server=remote_HF /servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-mscs_azure_audit_inputs] | stats count by subscription_id
↧
Splunk username missing and unable to perform any search
Hi,
Why is it that a particular user on my team is unable to see his name at the top of the Splunk UI like everyone else on my team?
He is also unable to perform searches, vary the time range, or do much of anything.
How can I spot the difference between his account and the others on the team?
Kindly help!!
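A quick way to compare his account with a working one (a sketch using the REST endpoint; run it as an admin):
| rest /services/authentication/users splunk_server=local
| table title realname roles
If his roles differ from his teammates', or his role lacks the search capability, that would explain both symptoms.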
↧
What is the sequence of transform execution across different stanzas and locations?
Hi,
We want to change sourcetype and then send data to two different Splunk Indexers.
What is happening is that the sourcetype is getting changed (which means the first props.conf stanza is working), BUT the second props.conf stanza, present in the apps folder, is not working (it is only sending the logs to the default output group).
The configuration files that change the **sourcetype** are located in the **/system/local** folder, and the **data routing** configuration files are in the **/apps/application/local/** folder.
Has anyone had a similar issue? Thanks!
**SPLUNK_HOME/etc/system/local/**
props.conf
[source::/abc/xyz.log]
TRANSFORMS-changesourcetype = st
transforms.conf
[st]
REGEX = \.*\[12345]\.*
FORMAT = sourcetype::sourcetype1
DEST_KEY = MetaData:Sourcetype
**SPLUNK_HOME/etc/apps/application/local**
props.conf
[sourcetype1]
TRANSFORMS-routing = route_data
transforms.conf
[route_data]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1, indexer2
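If the documented behavior is what's biting here, index-time TRANSFORMS are looked up against the metadata the event had when parsing began, so a props stanza keyed to a sourcetype that was itself rewritten by a transform is not consulted in the same pass. A sketch of one workaround: chain both transforms from the stanza that matches the original source (transforms in the list run left to right):
[source::/abc/xyz.log]
TRANSFORMS-changesourcetype = st, route_data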
↧
What is the execution sequence of transforms from different stanzas located in different configuration files?
We want to change sourcetype and then send data to two different Splunk Indexers.
What is happening is that the sourcetype is getting changed (which means the first transform is working), BUT the second props.conf stanza, present in the apps folder, is not working (it is only sending the logs to the default output group).
**Transform 1:** SPLUNK_HOME/etc/system/local/
props.conf
[source::/abc/xyz.log]
TRANSFORMS-changesourcetype = st
transforms.conf
[st]
REGEX = \.*\[12345]\.*
FORMAT = sourcetype::my_sourcetype
DEST_KEY = MetaData:Sourcetype
**Transform 2:** SPLUNK_HOME/etc/apps/application/local/
props.conf
[my_sourcetype]
TRANSFORMS-routing = route_data
transforms.conf
[route_data]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1, indexer2
↧
How to parse RESULTS field from Qualys TA
The Qualys documentation for the TA states that there may be a problem with parsing the RESULTS field because some values are multi-line values. Since this is the only field that may contain a multi-line value, what would be the best way to parse the data?
From props.conf:
[qualys:hostDetection]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
disabled = false
KV_MODE = auto
KV_MODE = auto parses all the fields correctly, **except** when the RESULTS field has a multi-line value.
Thanks in advance!
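One possible workaround, as a sketch only: add a search-time extraction whose (?s) modifier lets the dot span newlines. The regex below assumes the raw events carry the value as RESULTS="..." with the next field starting after the closing quote; verify that against your actual events and adjust. It would live in the same [qualys:hostDetection] stanza on the search head:
EXTRACT-multiline_results = (?s)RESULTS="(?<RESULTS>.*?)"(?=,|\s|$)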
↧