Hi All
I am using custom logic in my dashboard XML so that Splunk chooses the filter (AND or OR) based on the inputs given.
Here is my dashboard XML, which was working fine and returning the right events in these scenarios:
Only the Transaction Id value given, with Anonymous Account Id left blank (Splunk uses the OR filter).
Only the Anonymous Account Id value given, with Transaction Id left blank (Splunk uses the OR filter).
Both Transaction Id and Anonymous Account Id given, with both values existing in the events (Splunk uses the AND filter).
It is not working in this scenario:
Both Transaction Id and Anonymous Account Id given, but the two values do not both exist in the events (Splunk uses the OR filter).
So, in the scenario above, Splunk is using the OR filter when it should be using the AND filter. (The kind of input-driven logic I mean is sketched below.)
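For reference, a minimal Simple XML sketch of input-driven operator selection, with hypothetical tokens txn and acct and hypothetical field names; the operator token depends only on whether both inputs are filled, never on the events:

<input type="text" token="txn" searchWhenChanged="true">
  <label>Transaction Id</label>
  <default></default>
  <change>
    <!-- pick AND only when both inputs are non-empty; $...|s$ substitutes the value in quotes -->
    <eval token="op">if(len($txn|s$)>0 AND len($acct|s$)>0, "AND", "OR")</eval>
  </change>
</input>
<!-- repeat the same <change> block under the Anonymous Account Id input (token acct) -->
...
<query>index=my_index (TransactionId=$txn|s$ $op$ AnonymousAccountId=$acct|s$)</query>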
Thank you in advance.
↧
Custom Logic of (AND) or (OR) Based on the input.
↧
How to use eval to fulfill the below requirement
Hi,
I am preparing a dashboard where I can show whether the devices are sending logs or not.
In some regions a device will send logs to 2 servers, and in some it will send to only 1 server.
Below is a sample file:
1.1.1.1 UKPRI LogSending
2.2.2.2 UKPRI LogNotSending
3.3.3.3 UKPRI LogNotSending
1.1.1.1 UKSEC LogSending
2.2.2.2 UKSEC LogSending
3.3.3.3 UKSEC LogNotSending
4.4.4.4 USPRI LogSending
7.7.7.7 USPRI LogNotSending
Now I want to show which devices are sending logs and which are not (device 2.2.2.2 is sending logs to the primary and not to the secondary, so it should be shown as sending logs), in stats form. One approach is sketched below.
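A minimal sketch of one approach, assuming the three columns are extracted as fields named ip, server, and status (the field names are assumptions):

your_base_search
| eval isSending=if(status="LogSending", 1, 0)
| stats max(isSending) as anySending by ip
| eval LogStatus=if(anySending=1, "LogSending", "LogNotSending")
| table ip, LogStatus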
↧
↧
Splunk server doesn't send emails
Hi,
I have a problem: my Splunk server isn't sending any alert emails.
Here are some details:
I have 2 Splunk servers. Both run Splunk 6.2 on Windows Server 2012.
They are not clustered, but both are supposed to be configured identically.
Now here's the fun part: one of the servers is sending emails, and the other one is not.
I have searched the Python log using this search:
index=_internal source=*python.log*
and I found this error message:
"Sendmail:348 - (421, '4.3.2 service not available, closing transmission channel') while sending mail to ...."
Google suggested that the SMTP server is blocking the server's request, but I can't understand why. Both servers are requesting the same SMTP server on the same default port, both are sending email to the same address, and both servers are in the same domain.
The only thing I can think of: maybe the domain user that runs Splunk is different? Is there any way to check this? (One way is sketched below.)
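A quick way to compare, assuming the default Windows service name Splunkd; run on each server and compare the service account shown:

sc qc Splunkd

or, in PowerShell:

Get-WmiObject Win32_Service -Filter "Name='Splunkd'" | Select-Object Name, StartName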
Do you have any ideas how to solve this problem? There are some important alerts that I'm missing every day because of this.
Thanks!
↧
Considerations on using SSD for hot/cold indexes
This is for a small-scale distributed (30 GB/day) Splunk instance with indexes currently on one disk.
We are planning to introduce SSDs for the hot/warm tier.
I have read various posts, and if we were to configure the indexes for, say, 30-60 days of hot/warm data before it is rolled to the slower disks, would there be anything to consider, such as:
A premium app such as ES also coming into play, where the data model summary ranges are larger than the hot/warm retention.
E.g.: hot/warm index on SSD kept for 30 days, then moved to the slower disk, while the Authentication data model is configured for 1 year? Would that be a factor to consider or not?
Anything else to consider? (A sample config is sketched below.)
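For reference, a minimal indexes.conf sketch of the split, with a hypothetical index name and mount points. Note there is no direct "days in hot/warm" setting; the SSD footprint is bounded by size and bucket-count settings:

[my_index]
# hot/warm buckets on the SSD
homePath = /ssd/splunk/my_index/db
# cold buckets on the slower disk
coldPath = /slow/splunk/my_index/colddb
thawedPath = /slow/splunk/my_index/thaweddb
# cap the hot/warm (SSD) footprint in MB; buckets roll to cold beyond this
homePath.maxDataSizeMB = 100000
# total retention before buckets are frozen (1 year, in seconds)
frozenTimePeriodInSecs = 31536000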
Thanks.
↧
Saved search table value as array for multi-valued search to a new table
Hello, I am doing a search that results in a table with these values: "| table _time, recState, context, message.connID, message.timeStamp.timeinSecs, message.agentID, message.aNI, host"
The issue is that I need to know durations, which are not captured in the events. But I can do "index=abc sourcetype=xyz message.connID | stats range(_time) as difference", which provides an individual view of the events with this one unique ID and gives the overall event duration.
I am trying to find a way to use the initial table's values as an array, run multiple searches like the one above, and produce a new table where each row has all the original table columns plus a new "duration" column based on each unique message.connID.
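A minimal sketch of one way to avoid per-ID searches entirely, assuming the same base search: eventstats computes the time range per connID and attaches it to every matching event, so it lands in the table as a column:

index=abc sourcetype=xyz
| eventstats range(_time) as duration by message.connID
| table _time, recState, context, message.connID, message.timeStamp.timeinSecs, message.agentID, message.aNI, host, duration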
↧
↧
Search output of a stats command
I have a search like the one below:
| stats values(EndPointMatchedProfile) by EndPointMACAddress
Each EndPointMACAddress may have one or more EndPointMatchedProfile values.
How do I find the EndPointMACAddress values that have only one EndPointMatchedProfile value, where that value is "Unknown"? I do not want to return an EndPointMACAddress that has two or more EndPointMatchedProfile values where one of them is "Unknown".
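A minimal sketch: add a distinct count alongside the values so single-profile "Unknown" addresses can be filtered out:

| stats values(EndPointMatchedProfile) as profiles dc(EndPointMatchedProfile) as profileCount by EndPointMACAddress
| where profileCount=1 AND profiles="Unknown"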
↧
AWS Quickstart for Splunk Enterprise - Direct Connect
I have been able to deploy Splunk using the Quick Start; however, we use AWS Direct Connect, and the Quick Start seems to rely on public IP addresses. I'm unable to access the web interface on private addresses.
What's the best method to get Splunk up and running? Do I have to perform a manual deployment?
↧
How to break a single event (JSON response) into multiple events?
Hello guys,
I have a problem with breaking up a JSON event. I made a REST API GET request to pull data from another monitoring tool.
I made the request in such a way that it grabs the results from the last 60 seconds.
The response is fetched as a single event covering the 60-second time frame. I want to break the event into separate events. I am pasting the response below.
If someone could help with this, it would be great. Thanks in advance.
MY RESPONSE AS A SINGLE EVENT:
https://app.thousandeyes.com/view/tests?__a=11035&testId=140152&roundId=1503878700&agentId=28091"},{"avgLatency":5.0,"loss":0.0,"maxLatency":6.0,"jitter":0.1472,"minLatency":5.0,"serverIp":"206.123.121.1","agentName":"Houston, TX","countryId":"US","date":"2017-08-28 00:05:07","agentId":7056,"roundId":1503878700,"permalink":"https://app.thousandeyes.com/view/tests?__a=11035&testId=140152&roundId=1503878700&agentId=7056"},{"avgLatency":30.0,"loss":0.0,"maxLatency":31.0,"jitter":0.0392,"minLatency":30.0,"serverIp":"206.123.121.1","agentName":"Minneapolis, MN","countryId":"US","date":"2017-08-28 00:05:03","agentId":16931,"roundId":1503878700,"permalink":"https://app.thousandeyes.com/view/tests?__a=11035&testId=140152&roundId=1503878700&agentId=16931"},{"avgLatency":33.0,"loss":0.0,"maxLatency":33.0,"jitter":0.0,"minLatency":33.0,"serverIp":"206.123.121.1","agentName":"Chicago, IL","countryId":"US","date":"2017-08-28 00:05:15","agentId":31,"roundId":1503878700,"permalink":"https://app.thousandeyes.com/view/tests?__a=11035&testId=140152&roundId=1503878700&agentId=31"},
I WANT TO BREAK IT INTO MULTIPLE EVENTS AS BELOW:
{"avgLatency":57.0,"loss":0.0,"maxLatency":57.0,"jitter":0.48,"minLatency":56.0,"serverIp":"206.123.121.1","agentName":"Edmonton, Canada","countryId":"CA","date":"2017-08-28 00:05:13","agentId":28091,"roundId":1503878700,"permalink":"https://app.thousandeyes.com/view/tests?__a=11035&testId=140152&roundId=1503878700&agentId=28091"},
{"avgLatency":5.0,"loss":0.0,"maxLatency":6.0,"jitter":0.1472,"minLatency":5.0,"serverIp":"206.123.121.1","agentName":"Houston, TX","countryId":"US","date":"2017-08-28 00:05:07","agentId":7056,"roundId":1503878700,"permalink":"https://app.thousandeyes.com/view/tests?__a=11035&testId=140152&roundId=1503878700&agentId=7056"},
{"avgLatency":33.0,"loss":0.0,"maxLatency":33.0,"jitter":0.0,"minLatency":33.0,"serverIp":"206.123.121.1","agentName":"Chicago, IL","countryId":"US","date":"2017-08-28 00:05:15","agentId":31,"roundId":1503878700,"permalink":"https://app.thousandeyes.com/view/tests?__a=11035&testId=140152&roundId=1503878700&agentId=31"},
↧
Common key in built-in Metadata Streams in Splunk Stream
Is there a common key across all built-in Metadata Streams? Let's say I have an HTTP connection; it would generate items in at least the Splunk_HTTPClient, Splunk_TCP, and Splunk_IP streams. Can I use flow_id or other fields to correlate them, so that I don't store duplicate IP addresses and ports? (Something like the sketch below.)
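A sketch of the correlation in question, assuming flow_id is populated consistently across these streams and that the default stream:* sourcetypes are in use (both assumptions, not confirmed):

sourcetype=stream:http OR sourcetype=stream:tcp OR sourcetype=stream:ip
| stats values(sourcetype) as streams values(src_ip) values(dest_ip) values(src_port) values(dest_port) by flow_id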
↧
↧
Splunk indexer got shutdown with "AdminHandler:ServerControl" error message in splunkd.log
Hi,
The Splunk indexer shut down on its own, and I found the error messages below in splunkd.log:
"08-25-2017 16:13:20.777 +0200 INFO DatabaseDirectoryManager - Getting size on disk: Unable to get size on disk for bucket id=cal_a350~80~BFD443F7-AE01-4018-BE92-A82EA7410435 path="/opt/splunk/var/lib/splunk/index_name1/db/hot_v1_80" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=getCumulativeSizeForPaths
08-25-2017 16:13:22.627 +0200 WARN timeinvertedIndex - bucket=/opt/splunk/var/lib/splunk/index_name2/db/hot_v1_277 Already running 6 splunk-optimize's, max=6
08-25-2017 16:13:22.627 +0200 WARN timeinvertedIndex - bucket=/opt/splunk/var/lib/splunk/index_name2/db/hot_v1_278 Already running 6 splunk-optimize's, max=6
08-25-2017 16:13:22.627 +0200 WARN timeinvertedIndex - bucket=/opt/splunk/var/lib/splunk/index_name2/db/hot_quar_v1_276 Already running 6 splunk-optimize's, max=6
08-25-2017 16:13:26.626 +0200 WARN timeinvertedIndex - bucket=/opt/splunk/var/lib/splunk/index_name3/db/hot_v1_153 Already running 6 splunk-optimize's, max=6
08-25-2017 16:13:26.849 +0200 ERROR AdminHandler:ServerControl - forcing shutdown since it did not complete in 360 seconds"
I have searched online for the cause of this shutdown but didn't find anything useful.
Please let me know if you have any information on this, or if any of you have faced this issue.
Thanks,
↧
Timechart overlay special calendar
Hi,
Is there a way to overlay a timechart of logon counts, for instance, with a different calendar, for example a weekend change window or a public holiday? This would be good for identifying patterns outside of these dates, when the count is likely to increase (change weekend) or decrease (public holiday). Maybe the overlay could be a full bar or a different background colour for the days/hours in question... (Something like the sketch below.)
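A minimal sketch of one approach, assuming a hypothetical lookup special_days.csv with columns date and day_type; the flagged days become a second series that can be drawn as a chart overlay:

... base search
| timechart span=1d count as logons
| eval date=strftime(_time, "%Y-%m-%d")
| lookup special_days.csv date OUTPUT day_type
| eval special_day=if(isnotnull(day_type), logons, null())
| fields - date day_type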
Many thanks,
Rob
↧
Search blocks of time
Hi there,
I'm trying to get a chart of total firewall connections dropped (action=dropped) from the Checkpoint firewall over the last 14 days, but in two blocks of time: one between 07:00-19:00 (7am-7pm) and one between 19:00-07:00 (7pm-7am), and then a median of both over the two weeks.
How do I search for those two blocks of time in my query over 2 weeks? (One approach is sketched below.)
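A minimal sketch using the date_hour field, assuming the events carry it; this counts drops per day per block, then takes the median per block:

... base search action=dropped earliest=-14d
| eval block=if(date_hour>=7 AND date_hour<19, "07:00-19:00", "19:00-07:00")
| bin _time span=1d
| stats count as dropped by _time block
| stats median(dropped) as median_dropped by block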
↧
Column Chart Legend always as count?
Hi,
I'm trying to plot a bar chart to show the number of protocol scans in a network, but my chart always shows the legend as "count" instead of the protocols. How should I write my search so that the legend lists the protocols and the column chart shows a different colour for each protocol? (See the sketch after the screenshot.)
![alt text][1]
[1]: /storage/temp/210684-capture.jpg
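A minimal sketch, assuming a protocol field; transposing turns each protocol into its own series, which then gets its own legend entry and colour:

... base search
| stats count by protocol
| transpose header_field=protocol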
↧
↧
Search blocks of time (certain hours)
Hi there,
I'm trying to get a chart of total firewall connections dropped (action=dropped) from the Checkpoint firewall over the last 14 days, but in two blocks of time: one between 07:00-19:00 (7am-7pm) and one between 19:00-07:00 (7pm-7am), and then a median of both over the two weeks. This is against a data model (Network Traffic), so the usual date_hour>=7 doesn't seem to work.
How do I search for those two blocks of time in my query over 2 weeks? (A tstats-based sketch follows.)
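A minimal tstats sketch, deriving the hour from _time since data model searches don't carry date_hour; the Network_Traffic data model and All_Traffic.action field follow CIM naming, but treat them as assumptions:

| tstats count from datamodel=Network_Traffic where All_Traffic.action=dropped by _time span=1h
| eval hour=tonumber(strftime(_time, "%H"))
| eval block=if(hour>=7 AND hour<19, "07:00-19:00", "19:00-07:00")
| bin _time span=1d
| stats sum(count) as dropped by _time block
| stats median(dropped) as median_dropped by block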
↧
Custom search command is not distributed despite being "streaming"
I am trying to implement a custom streaming search command. I would like to use the SCP v2 protocol with the splunklib Python interface. The command itself runs fine, but the "remoteSearch" field and the overall performance indicate that it is not distributed to the indexer cluster but rather executed on the search head.
I tried different options for the Configuration decorator (e.g. distributed=true), but they did not have any effect.
I am inheriting from StreamingCommand, so I am not able to mutate the "type" field in the configuration. The type is said to be "streaming" in the documentation, but it turns out to be "stateful" when read from "self.configuration" (inside the command's class).
To me this seems to be the cause of my command being "stateful streaming" rather than "distributable streaming". How can I tackle this so that the command is distributed to the indexer cluster for optimal performance? (My setup looks roughly like the sketch below.)
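A minimal sketch of the setup described, assuming splunk-sdk-python with SCP v2 enabled (chunked = true in commands.conf); the command and field names are hypothetical:

#!/usr/bin/env python
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration(distributed=True)
class MarkCommand(StreamingCommand):
    # adds a field to each event; with distributed=True the streaming
    # phase should be eligible to run on the indexers
    def stream(self, records):
        for record in records:
            record['marked'] = '1'  # hypothetical per-event field
            yield record

if __name__ == '__main__':
    dispatch(MarkCommand, sys.argv, sys.stdin, sys.stdout, __name__)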
↧
How to limit metrics.log being forwarded to the indexer?
Hello All,
I am facing an issue with the volume of traffic sent by metrics.log from a server to the indexer; it is more than 85% of the total logs forwarded to the indexer from that server. The actual logs from that server are no more than 300 MB (all Windows System, Application, and Security logs) in 24 hours, but the traffic between the source server and the indexer reaches 8-10 GB.
How can I limit this traffic? I will also need the usage logs from the source to calculate the license usage. (One option is sketched below.)
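A minimal sketch of one option: null-queueing metrics.log events at parse time, assuming props/transforms placed on the indexer (or a heavy forwarder). Note this reduces what gets indexed rather than the universal forwarder's network traffic, and _internal data does not count against the license:

props.conf:
[source::...metrics.log*]
TRANSFORMS-null_metrics = null_metrics

transforms.conf:
[null_metrics]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue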
Thanks,
-Sunil
↧
Why do some users not inherit the right default app?
In our setup, we've configured the following:
role: User
default app: search
role: someapp_user
inherit from: User
no default app set
I was expecting the users with "someapp_user" role to get "search" as the default app. Yet they are listed with "launcher", inherited from "system".
What did I miss?
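One way to inspect where the effective default app is coming from, assuming role default apps are stored in user-prefs.conf; run on the search head and look for the default_namespace lines and the files they come from:

$SPLUNK_HOME/bin/splunk btool user-prefs list --debug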
↧
↧
Pie chart - 2 sets of others
Hi,
I am creating a pie chart which shows the top logon counts, but unfortunately the system is showing two different kinds of "others": one is "OTHER" and the other is "other (n)".
This is my query:
... base search | top 10 User useother=true
![alt text][1]
[1]: /storage/temp/209691-capture.png
Does anyone know why this is happening? It doesn't happen all the time, though. Sometimes only the "OTHER" value is present. (A possibly relevant chart option is noted below.)
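If the lowercase "other (n)" slice is the pie chart itself collapsing small slices (separate from top's useother option), this Simple XML option disables that collapsing; treat this as a hedged guess rather than a confirmed fix:

<option name="charting.chart.sliceCollapsingThreshold">0</option>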
Thank you.
↧
Help finding all users on my system
I have Splunk Enterprise running on a Linux box, and I also have a Splunk universal forwarder running on a second Linux box. How can I write a search that will display all currently existing users on my universal forwarder? I'm not talking about showing logs associated with all users; I simply want a list of all users on my forwarder that exist at the time the search is run.
How can I accomplish this? What data do I need to get from my forwarder? (One approach is sketched below.)
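A minimal sketch of one approach: a scripted input on the forwarder that periodically emits the local account list. The script name, interval, index, and sourcetype are all hypothetical:

inputs.conf on the forwarder:

[script://./bin/list_users.sh]
interval = 3600
index = main
sourcetype = local_users

bin/list_users.sh:

#!/bin/sh
# emit one username per line from the local account database
cut -d: -f1 /etc/passwd

Then something like index=main sourcetype=local_users earliest=-2h | dedup _raw | table _raw would give the current list.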
↧
What will happen when the replication factor is changed?
Say the replication factor was initially 2 and later I change it to 3. Will the old data also be replicated to satisfy the replication factor of 3, or will only the new data have replication copies per the new factor?
↧