Issue with count -- How can I search a large data set without Splunk truncating the data?
Hi,
I would like to query all data over the past year and then use "stats count by some fields" to calculate the counts.
However, the data is too large (at least a few million events) and Splunk truncates data when querying, so the counts are inaccurate.
Does anyone know a good way to fix it?
PS. I tried 'sistats' and set up a report to run every hour, querying data from the previous year.
Ideally, I'd like the report to collect data accurately over a smaller time interval and then aggregate the results.
However, each hour the report queries the whole previous year of data inaccurately and then adds up all the counts as the result.
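For reference, here's a minimal sketch of the pattern I was hoping for (index and field names are placeholders). An hourly scheduled search summarizes only the previous hour:
earliest=-1h@h latest=@h index=myindex | sistats count by fieldA fieldB
with summary indexing enabled on the saved search, and the yearly report then aggregates from the summary index:
index=summary source="my_hourly_summary" earliest=-1y | stats count by fieldA fieldB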
↧
How to specify the time range chosen on a chart's y-axis?
For a timechart such as " .. | timechart count", an arbitrary bucket size is selected depending on several factors, including the time range chosen. (To stay as flexible as possible, the span= option will NOT be used.)
However, this can lead to a misleading value of "y" depending on the bucket size, e.g.:
Does "y" represent the count per HOUR? Per Minute? Per Day?
How can the "y" axis be corrected to "per HOUR" for ANY bucket size automatically selected by the timechart command? Currently I've used some manual hard-coded math evals in some charts, but this feels unnecessarily complex and tedious, and relies on a fixed SPAN size.
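One approach I'm considering (relying on the hidden _span field that timechart/bin emit, which I believe holds the bucket size in seconds) is to normalize the count after the fact, for any automatically selected span:
... | timechart count | eval per_hour=round(count*3600/_span, 2) | fields - count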
↧
↧
Mimecast for Splunk v2: powershell error when setting up the app
I am trying to set up Splunk for Mimecast using the add-on. Following the directions here https://community.mimecast.com/docs/DOC-2142#jive_content_id_Installation_and_Configuration I get stuck at the login process. When I run the login command, it comes back with blank keys and a PowerShell error:
PS C:\Program Files\splunk\etc\apps\Splunk_TA_mimecast_for_splunk_v2\bin> .\login.ps1
***************************************************
Welcome to the Mimecast for Splunk setup assistant.
***************************************************
Please enter the email address and Mimecast cloud password of the Mimecast administrator you would like to use for Splunk to connect to Mimecast.
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential
Requesting access key and secret key
Invoke-RestMethod : Invalid URI: The hostname could not be parsed.
At C:\Program Files\splunk\etc\apps\Splunk_TA_mimecast_for_splunk_v2\bin\login.ps1:24 char:21
+ ... ponseData = Invoke-RestMethod -Method Post -Headers $requestHeaders - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Invoke-RestMethod], UriFormatException
+ FullyQualifiedErrorId : System.UriFormatException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand
Request completed successfully. Extracting values from response.
Access key:
Secret key:
Use these values in your Mimecast for Splunk v2 Data Input. Please ensure you remove any line breaks from the access key and secret key values when using copy / paste from the PowerShell window.
↧
Why aren't these two time fields matching?
Hello All
I have a query as below :
search | fields *
| rename eventTime.$date as eventTime
| eval eventTime1=(eventTime/1000)-25200
| rename eventData.nearTalk as nearTalk
| eval eventTime=strftime((eventTime/1000)-25200, "%Y-%m-%d %H:%M:%S")
| rename eventData.farTalk as farTalk
| rename eventData.overTalk as overTalk
| eval silence=1000-nearTalk-farTalk-overTalk
| bucket eventTime1 span=1s
| makecontinuous eventTime1 span=1s
| eval eventTime1=strftime(eventTime1, "%Y-%m-%d %H:%M:%S")
| table _time eventTime1 eventTime reduction postLimiterSplEstimate preLimiterSplEstimate
| sort eventTime
As I understand it, eventTime1 and eventTime should be the same. For eventTime1, I first convert the epoch time to my timezone, bucket it to 1-second spans, and make it continuous; for eventTime, I convert the same epoch value directly to a human-readable format.
The data should be the same for both in the end result.
But I am getting something as below:
![alt text][1]
As seen, eventTime1 and eventTime are different. Can someone tell me where I am going wrong here?
They should be the same.
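For reference, here's a minimal isolated check (using a made-up epoch-milliseconds value) showing that flooring to 1-second buckets and a direct strftime should render identically, since %H:%M:%S drops sub-second precision either way:
| makeresults | eval t=(1503919980456/1000)-25200 | eval direct=strftime(t, "%Y-%m-%d %H:%M:%S") | bucket t span=1s | eval bucketed=strftime(t, "%Y-%m-%d %H:%M:%S") | table direct bucketed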
Regards
Shailendra Patil
[1]: /storage/temp/210701-screen-shot-2017-08-28-at-61300-pm.png
↧
Passing event time from subsearch to main search
I'm looking to take events from a subsearch and find correlating events in a main search. The scenario is something like this:
The subsearch finds an event of interest and passes certain fields to the outer search, notably the time of the event and the hostname.
Then, using the event time, go back 2 minutes and search for the username logging into the hostname from the subsearch.
I'm stuck on how to pass the time from the inner search to the outer search and use it as the time range for the user search.
example:
index="syf_pcf_*_sys_*" sourcetype="pcf_*:syslog"
type=USER
[search index="syf_pcf_*_sys_*" sourcetype="pcf_*:syslog" filesnitch opname=CREATE NOT ("/etc/mtab*" OR CHECKIN) | rename fname AS path | rename hostname AS node | search path!=/etc* | table _time,index,sourcetype,host,node,file_name,path,opname | fields _time index node]
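The direction I'm exploring is below (a sketch only; my understanding is that a subsearch returning fields named earliest and latest sets the outer search's time bounds for each result):
index="syf_pcf_*_sys_*" sourcetype="pcf_*:syslog" type=USER
    [ search index="syf_pcf_*_sys_*" sourcetype="pcf_*:syslog" filesnitch opname=CREATE NOT ("/etc/mtab*" OR CHECKIN)
    | rename fname AS path, hostname AS node
    | search path!=/etc*
    | eval earliest=_time-120, latest=_time
    | fields node earliest latest ]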
Thanks in advance for the help.
↧
↧
Is there a meta command to find old lookup files?
Hi,
I created a lookup file a long time ago, but I don't remember where it is.
Is there a meta command that can find the file if I give it the name of the lookup?
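Something along these lines is what I'm imagining (a hedged sketch against the lookup-table-files REST endpoint; mylookup.csv is a placeholder):
| rest /services/data/lookup-table-files | search title="mylookup.csv" | table title eai:acl.app eai:data updated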
↧
Is there a way to convert a scheduled report to an alert? (6.6.3)
If a saved search is initially created as an alert, I get the option to "Edit alert". But if it's saved as a report, that option is not there and Edit Schedule does not offer the same options. I can't see any way to modify a report to have a conditional alert. I can schedule a report. And I can assign an email action to a report. But the GUI offers no way to assign a conditional action to a report. In order to get the conditional verbiage, I have to recreate the saved search explicitly as an alert. Or edit config files directly.
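For reference, editing the config files directly looks roughly like this, as far as I can tell (a sketch only; the stanza name, schedule, recipient, and threshold are placeholders for a "trigger when more than 10 events" style alert in savedsearches.conf):
[My Scheduled Report]
enableSched = 1
cron_schedule = */15 * * * *
counttype = number of events
relation = greater than
quantity = 10
actions = email
action.email.to = someone@example.com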
The new paradigm of reports vs alerts is not ... handy. Maybe I'm just not used to it.
v6.6.3, Linux
↧
Website Input URL Filter Issue
In Website Input, I'm having an issue using the URL filter to monitor multiple applications under the same base url.
I have two applications:
http://mysite.com/app1
http://mysite.com/app2
There is a healthcheck page underneath each:
http://mysite.com/app1/tools/healthcheck
http://mysite.com/app2/tools/healthcheck
When defining the input, I can get the fields I need if I set up a separate input for each app going all the way to the healthcheck page in the URL. What I want to do, though, is set up one input with http://mysite.com as the URL and use the URL filter to get results from both healthcheck pages. I've tried http://mysite.com/* and http://mysite.com/*/tools/sitecheck as filters, with a depth limit of 5 and a page limit of 20 to give myself some wiggle room, and neither gets matches.
What am I doing wrong?
↧
How to remove custom correlation searches in Splunk Enterprise Security?
Suppose an error is made when creating a correlation search, like using the wrong app context, and you'd like to remove that search and re-create it correctly.
What are the steps, please?
I found a couple of answer posts referring to this, but they are quite old and don't list anything step by step.
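From what I've gathered (hedged: in ES a correlation search is ultimately a saved search with the correlation-search action enabled), something like this might at least locate the offending search and its app context before removing it:
| rest /servicesNS/-/-/saved/searches | search action.correlationsearch.enabled=1 title="My Bad Correlation Search" | table title eai:acl.app eai:acl.owner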
Thanks!
↧
↧
Where should I point my REST API requests in a distributed deployment?
Hi Splunk Experts,
I am writing a script that does a periodic reachability and config check on my Splunk deployment from a remote Linux machine. I'm mostly doing it by issuing REST API calls to retrieve the status of my indexes, data inputs, and searches. I issue the REST API requests to the single Splunk Enterprise server and can get all the data by sending them to a more or less static, user-configured host/port.
This works fine in a standalone, non-distributed Splunk Enterprise environment, but I'm wondering what changes would be needed to make it work in a distributed Splunk environment. Would I need to ask the user to provide details (IP/port) of all components of their Splunk distributed environment? Or is there a component in a Splunk distributed deployment that can consume all REST API requests and route them to the correct machine?
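For what it's worth, from a search head the rest search command can fan out to the search peers, which suggests the search head is one natural routing point. A sketch (the endpoint is just an example):
| rest /services/server/info splunk_server=* | table splunk_server version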
Thanks.
↧
No Results from External Lookup with Leading or Trailing Spaces
I created an external lookup to decode MIME-encoded strings, but the lookup fails from the web interface when the string has leading or trailing whitespace and contains no character that would trigger quoting of the lookup field.
This succeeds:
| stats count
| eval subject="CC: =?ISO-8859-1?Q?Andr=E9?= Pirard "
| lookup mime_decode encoded AS subject OUTPUT decoded AS subject_decoded
But this fails (with leading and/or trailing space):
| stats count
| eval subject=" CC: =?ISO-8859-1?Q?Andr=E9?= Pirard "
| lookup mime_decode encoded AS subject OUTPUT decoded AS subject_decoded
However, this succeeds (comma in string):
| stats count
| eval subject=" CC: =?ISO-8859-1?Q?Andr=E9?= Pirard , "
| lookup mime_decode encoded AS subject OUTPUT decoded AS subject_decoded
I tested from the command line, and the following CSV is extracted correctly.
Input:
encoded,decoded
CC: =?ISO-8859-1?Q?Andr=E9?= Pirard ,
CC: =?ISO-8859-1?Q?Andr=E9?= Pirard ,
" CC: =?ISO-8859-1?Q?Andr=E9?= Pirard , ",
Output:
encoded,decoded
CC: =?ISO-8859-1?Q?Andr=E9?= Pirard ,CC: André Pirard
CC: =?ISO-8859-1?Q?Andr=E9?= Pirard ,CC: André Pirard
" CC: =?ISO-8859-1?Q?Andr=E9?= Pirard , ","CC: André Pirard , "
Others have also had this problem when they had leading spaces, but the command-line test above decodes the strings correctly even with the whitespace, which also suggests that it is not a Python issue.
/answers/139772/splunk-python-script-to-decode-mime-headers-in-email-subject.html
There are other issues with the Python 2.7.13 email module related to RFC 2047, but handling leading and trailing whitespace is not one of them.
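As a workaround sketch (assuming trimming the encoded value before the lookup is acceptable for my use case):
| stats count
| eval subject=" CC: =?ISO-8859-1?Q?Andr=E9?= Pirard "
| eval subject=trim(subject)
| lookup mime_decode encoded AS subject OUTPUT decoded AS subject_decoded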
↧
Changing timechart x-axis from _time to another field
I am looking to chart my data based on a time field other than the default _time that Splunk uses.
Is this possible? How do I go about doing this?
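A sketch of what I mean (other_time is a placeholder field holding epoch seconds): since timechart always buckets on _time, I imagine overwriting it first, e.g.
... | eval _time=other_time | timechart count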
↧
Do SplunkBase subscriptions still email when a new version of an application is released?
I'm unsure whether this is down to some form of email filtering or whether the subscription to SplunkBase application updates is no longer working, but I have not seen any application update emails for the past few months, yet many applications I'm subscribed to **have** been updated.
Is the subscription system still working as expected for anyone else?
↧
↧
Finding the 1st logon and logoff event times for a single user from March 2017 to present.
Hello,
I've been asked to find the first login time of a user and the time they logged out, over a specific date range (March 1st 2017 to present).
The environment is a Windows terminal services environment (Windows Server 2008 R2) and is being indexed into Splunk. The index is **index="wineventlog"**.
I'd like to see something like this; *(if possible)*
- username: user
- date: 01/MAR/2017
- Logon: 07:30:00
- Logoff: 15:30:00
This would be for each day from March 1st to present. I can find the events, but I can't order or filter them to show just the first login and last logoff of each day.
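A sketch of the direction I'm thinking (hedged: assumes Windows Security EventCodes 4624/4634 for logon/logoff and a user field, both of which may differ in a terminal services setup):
index="wineventlog" user="someuser" (EventCode=4624 OR EventCode=4634) earliest="03/01/2017:00:00:00"
| bin _time span=1d
| stats min(eval(if(EventCode=4624, _time, null()))) AS first_logon max(eval(if(EventCode=4634, _time, null()))) AS last_logoff by user _time
| eval date=strftime(_time, "%d/%b/%Y"), Logon=strftime(first_logon, "%H:%M:%S"), Logoff=strftime(last_logoff, "%H:%M:%S")
| table user date Logon Logoff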
Appreciate the help,
Jake
↧
How to create a graph or table from the following results according to the status code? Please refer to the results below.
{"StatusCode":200,"ReasonPhrase":"OK","Method":"POST","PathAndQuery":"}
{"StatusCode":404,"ReasonPhrase":"Not Found","Method":"GET","PathAndQuery":"}
{"Message":"Completed request to Create Position Events.","}.
For the above I have three categories: status code 200, 404, and NONE. So I want to create a graph or a count on the basis of status code. How do I do it?
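A sketch of one way (assuming the raw events are the JSON shown, so spath can extract the field; coalesce maps events with no StatusCode to NONE):
... | spath StatusCode | eval StatusCode=coalesce(StatusCode, "NONE") | stats count by StatusCode
The result can then be rendered as a bar or pie chart with one segment per status code.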
↧
Top 10 as a single value
Hi,
I am trying to get a pie chart which shows the top 10 users' combined logon count as a single slice, then the next 10 users (one per slice), and then the others as one slice.
Is there any way to do this?
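A sketch of what I'm after (assumes logons are already counted per user; the rank cutoffs are the only real logic):
... | stats count AS logons by user
| sort - logons
| streamstats count AS rank
| eval slice=case(rank<=10, "Top 10", rank<=20, user, true(), "Other")
| stats sum(logons) AS logons by slice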
Thank you!
↧
geo mapping down to the lowest level of detail
I have a dataset of events around a particular city which I wish to represent on a heat map. I have a lookup that provides a latitude and longitude for each asset, but when I try to produce a map it seems to combine all the events into one lat/long location.
How can I drill down further?
my search code looks like
index=edisyslogdata exEventType="Area Change" streetName!=NULL | lookup EdiStreetAssets StreetAsset as apId | table apId, streetName, lat, long | geostats latfield=lat longfield=long count BY apId
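A sketch of the tweak I'm considering (hedged: binspanlat/binspanlong control the geostats bin size at the lowest zoom, and maxzoomlevel lets the cells keep subdividing as you zoom in; the values here are guesses for a single city):
... | geostats latfield=lat longfield=long binspanlat=0.005 binspanlong=0.005 maxzoomlevel=18 count BY apId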
↧
↧
_internal index data not archiving/deleting after 30 days.
Hi guys,
I was wondering if anyone knows why my _internal index data is not being archived/deleted (frozen) after 30 days.
It won't let me attach a screenshot, but in the DMC it shows that the "Data Age vs Frozen Data (days)" is 103/30, which isn't right!
I can see that the value of frozenTimePeriodInSecs in system/default/indexes.conf is 2592000 (30 days), and btool shows that the value is being applied, but I don't know why it isn't working. Any ideas?
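For cross-checking what the indexer is actually running with, a sketch via the REST indexes endpoint (field names from memory, so treat as approximate):
| rest /services/data/indexes/_internal | table title frozenTimePeriodInSecs maxTotalDataSizeMB homePath coldPath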
I was thinking of making a new app for the config and changing it to 31 days to see if it changes anything. Does anyone think this would work? I'm in a clustered environment, so I'm a bit worried about making changes in case it makes things worse!
Any help will be appreciated.
Cheers!
↧
General question regarding a possible dashboard creation
Our company has multiple locations globally and has scheduled maintenance on the weekends at specific times. Currently we get an email regarding the scheduled maintenance.
Is it possible to create a dashboard that has 2-3 panels that say "In-Progress", "Upcoming" and "Completed"?
The panels would show what kind of maintenance it is, the ticket #, and the contact person.
All scheduled maintenance shows up under Upcoming; as soon as the start time hits for a specific one, that event moves automatically to the In-Progress panel, and once it's done it moves to Completed.
I wanted to see if this is possible, and if yes, how and where to start?
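A sketch of one way to start (assuming a hypothetical lookup maintenance.csv with epoch fields start_time and end_time; each panel would then filter on its own status value):
| inputlookup maintenance.csv
| eval status=case(now()<start_time, "Upcoming", now()>=start_time AND now()<=end_time, "In-Progress", true(), "Completed")
| table ticket contact maintenance_type status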
↧
Splunk Heavy Forwarders Issue
Hi All,
In our environment we have 1 cluster master server, 1 deployment server, 8 indexers, and 6 search head servers. Recently we installed heavy forwarders on two of our servers. Usually all configuration is done on the deployment server and pushed out from there (including to the cluster master); this is how our environment is set up.
For both heavy forwarders we have opened the required ports to communicate with our deployment server. When we push some general apps from the deployment server to the two heavy forwarder servers, we can see that one of them receives the apps and their configuration files, while the other does not receive any of them.
Even though the ports have been opened and the server details (with the app details) are updated in serverclass.conf, one heavy forwarder still cannot communicate with the deployment server. We have restarted the Splunk forwarder too, but the issue persists, so kindly help.
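For comparing the two forwarders, a sketch of an internal-log check from a search head (hf02 is a placeholder hostname; the component names are from memory and may vary by version):
index=_internal sourcetype=splunkd host="hf02" (component="DeploymentClient" OR component="HttpPubSubConnection") | stats count by component log_level
If the broken forwarder shows nothing here at all, I'd suspect its deploymentclient.conf targetUri or the network path rather than serverclass.conf.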
↧