Hi,
I was wondering how I can reference the time picker on dashboard load and make sure it's in the right format. I am currently using two separate time pickers to reference two time periods for a table; the idea is to compare two different time periods and see the differences. What I would like is to have the human-readable date range as part of the column name, so for two columns x and y it would look like x (10-05-2018 to 10-20-2018) | y (11-05-2018 to 11-20-2018). The issue I'm running into is that the time can be in a couple of different formats: either epoch time or relative notation (-1d@d, now()).
I'm probably just missing something, but I'd love to set a human-readable token right on dashboard load, and then update that token for use in the queries every time the user changes the time picker.
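For illustration, this is the kind of pattern I imagine (a sketch only, not tested; the token and field names are made up). A hidden search with addinfo resolves whatever the picker holds, whether epoch or relative notation, into epoch seconds, and strftime then formats a human-readable token:

```xml
<form>
  <fieldset>
    <input type="time" token="period_x" searchWhenChanged="true">
      <label>Period X</label>
      <default>
        <earliest>-30d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <!-- Hidden search: addinfo exposes the resolved time range as
       info_min_time / info_max_time in epoch seconds, regardless of
       whether the picker value was epoch or relative (-1d@d, now). -->
  <search>
    <query>| makeresults | addinfo
| eval range_x = strftime(info_min_time, "%m-%d-%Y") . " to " . strftime(info_max_time, "%m-%d-%Y")</query>
    <earliest>$period_x.earliest$</earliest>
    <latest>$period_x.latest$</latest>
    <done>
      <set token="range_x_label">$result.range_x$</set>
    </done>
  </search>
</form>
```

The `$range_x_label$` token could then be used in a rename or in panel titles; one caveat I'm unsure about is how info_max_time behaves for an "All time" selection.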
Any help would be much appreciated.
Thanks!
↧
Reference Time on Dashboard Load (and adjust to time change)
↧
Issue with monitoring files which have log rotation after a certain size
We noticed that, right after a log rotation, the data is not indexed until the next log rotation. That is, let's say one file was rotated at 8 AM (up to which point the data was already indexed). The next file is written from 8 AM to 7 PM, but it is not indexed until around 7 PM.
We are on Universal forwarder 7.0.3
Below is the monitoring stanza
[monitor:///opt/mapr/hadoop/hadoop/logs/*nodemanager*]
sourcetype = my_st
index = my_index
disabled = 0
ignoreOlderThan = 2h
We added `ignoreOlderThan = 2h` recently to see if it helps. But the issue still persists.
The live file is named `yarn-mapr-nodemanager-host_name.log` and the most recently archived file is `yarn-mapr-nodemanager-host_name.log.1`.
What is interesting is that, intermittently on certain servers, the current file gets indexed only at the time of its roll/archival, i.e., let's say after 10-11 hours, but under its actual file name, not the archive file name. And the issue of the live/current file not being indexed on time does **not** happen all the time; the next live file might get indexed on time. There should be ideal settings to avoid this, and any insights would be helpful.
Splunk's documented handling of rotated log files seems to have some bug here. Are we missing anything? Please suggest.
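For context, this is the fuller stanza I am considering trying next. The initCrcLength value is a guess on my part, based on reading about CRC collisions between rotated files that share identical headers; we have not validated it:

```ini
[monitor:///opt/mapr/hadoop/hadoop/logs/*nodemanager*]
sourcetype = my_st
index = my_index
disabled = 0
ignoreOlderThan = 2h
# Fingerprint more than the default 256 bytes of each file, in case
# rotated files start with identical content and the forwarder
# mistakes a new live file for an already-indexed one (untested guess)
initCrcLength = 1024
```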
↧
↧
Why Are My Search Results Truncated?
Hello,
I'm running into behavior I don't quite understand and was hoping someone might be able to shed some light on it.
1.) I'm running a search as an admin on a default install of Splunk 7.2.0 (no changes to limits.conf). I perform the search on an index that would return over 40k events if it returned every matching result of the query.
2.) If I run that search as is in the Splunk search bar, it shows the right number of events (as it does in the Job Manager as well). But if I try to navigate through all those results, on page 25 (listing 50 events per page) I get the following warning message in the pager: "Currently displaying the most recent 1250 events in the selected range. Select a narrower range or zoom in to see more events.". I have no ability to navigate beyond page 25 at that point.
3.) If I run that search with "| head 12626", all 12626 events are returned and can be navigated (allowing me to go well beyond page 25).
4.) If I run that search with "| head 12627", I get the "most recent 1250 events" warning message.
5.) If I compare the search job log file for the "| head 12626" and "| head 12627" searches, they are essentially identical. There are no indications that anything was truncated in either case. No mention of any limits being exceeded. The "| head 12626" search actually ends up showing more memory used in the job manager.
6.) If I run that search using a SearchManager and put the results into a TableView on a custom Splunk dashboard, the results are also truncated but differently. For instance, with the "| head 12627", I can navigate to page 229 in my TableView (which is still short of the 12627 events but considerably more than 1250).
7.) If I check the SearchManager when results are truncated for the "| head 12627" search, I see: "eventCount: 12627", "eventIsTruncated: true", and "eventAvailableCount: 1227" (considerably less than the 11444 events that appear in my table).
I'm curious whether anyone knows why I'm running into this behavior and if there is anything I can do to get around it. I'm specifically hoping for a solution that allows me to display all the results of the search in the table on my custom dashboard.
Thank you very much for any help you can provide.
↧
How to install Splunk on a Cisco UCS box?
I have to set up a Splunk indexer on a Cisco UCS box.
Please advise on how this can be achieved. Thanks!
↧
Can you help me with my email alerts issue?
Hi,
I'm trying to configure some alerts by email, but I got the following error:
Sending the test email failed: command="sendemail", (550, '5.7.1 Client does not have permissions to send as this sender') while sending mail to: myemail
The following search command works fine:
head 100 | top 2 host | sendemail to="myemail" server=myserver:25 from=emailalerts
Any suggestions? Thanks!
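For reference, this is the direction I've been looking at in alert_actions.conf. The credential values are placeholders, and whether SMTP authentication is actually the cause of the 550 "send as this sender" rejection is a guess on my part:

```ini
[email]
mailserver = myserver:25
from = emailalerts
# Authenticate as a user the mail server permits to send as "from";
# placeholder credentials, and an unverified guess at the 550 fix
auth_username = alert_user
auth_password = changeme
use_tls = 1
```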
↧
↧
How do I index only critical events?
I'm trying to use advanced whitelist filtering, but I'm coming up short. Basically, I want to index all Windows event logs that have a Type of Critical. I see EventType and Type, but neither is what I'm looking for.
Perhaps I can do transforms?
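For illustration, this is roughly what I've been experimenting with in inputs.conf. The key=value regex filter syntax is my best reading of the advanced event-log filtering docs, and matching Critical via the Type key is an unverified assumption; the channel name is just an example:

```ini
[WinEventLog://System]
disabled = 0
index = wineventlog
# Advanced whitelist: key=regex pairs; my (unverified) assumption is
# that Critical-severity events can be matched on the Type key
whitelist = Type=%^Critical$%
```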
↧
Pulldown doesn't work the first time (With a trivial Example!)
Hello,
I have a really simple dashboard with a single pulldown, and I notice that it never seems to take effect the first time I select a value, only the second time.
Here is the code, with a base search and a simple table panel over the past hour. If I load this in Splunk (6.5.2) and run it, I will see the dropdown populated. If I choose one sourcetype, it immediately updates the pulldown options but does NOT update the table results. If I reset and choose another sourcetype, it then takes effect. My hypothesis is that this is because of the base search and the filtering search for the dropdown. I'd like it to work the first time without requiring a Submit button. Note: I tried setting the option searchWhenChanged (searchWhenChanged="true"), but it didn't help.
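This is not my original code, just a minimal reconstruction of the shape described (a base search, a dropdown filtered from it, and a table panel driven by the selection), in case it helps reproduce the behavior; the index and queries are placeholders:

```xml
<form>
  <!-- Base search shared by the dropdown and the table panel -->
  <search id="base">
    <query>index=_internal | stats count by sourcetype</query>
    <earliest>-60m</earliest>
    <latest>now</latest>
  </search>
  <fieldset submitButton="false">
    <input type="dropdown" token="st" searchWhenChanged="true">
      <label>Sourcetype</label>
      <!-- Dropdown options come from a post-process of the base search -->
      <search base="base">
        <query>stats count by sourcetype</query>
      </search>
      <fieldForLabel>sourcetype</fieldForLabel>
      <fieldForValue>sourcetype</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <!-- Table filters the base search by the selected token -->
        <search base="base">
          <query>search sourcetype=$st$</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```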
↧
add dynamic overlays to chart
Just wondering if there's a way to get a handle to the Highcharts javascript object that might have been created when generating the splunk chart? I was hoping to be able to dynamically show and hide additional overlays on a specific chart via javascript.
↧
How to restore eventdata in Splunk after running clean?
Hi,
By mistake, I ran the splunk clean command, and the event data was deleted from the database.
Command I ran: ./splunk clean eventdata -index main -f
Output: Cleaning database main.
How can I get the data back? Can someone please help me?
Thanks,
↧
↧
Splunk not picking up the first few lines (3-5 lines) of log files
Hi,
I have an issue where Splunk is not picking up the first few lines (3-5 lines) of log files when doing a search. There is no customization done via props or transforms.
I have also checked and didn't find any messages in $SPLUNK_HOME/var/log/splunk/splunkd.log on the forwarder that pointed to any issue of these lines being skipped.
Any suggestions?
Regards,
AKN.
↧
How to set a retention period for DB Connect app data
Hi Team,
I have 3 queries in the DB Connect app:
1) The first runs once and pulls 13 months of data. 2) The second also runs once and pulls 13 months of data. 3) The third runs from the 1st to the 7th of every month and is frozen for the remaining days. Per the infra team, data can be retained for only 1 month in the Dev environment and 3 months in the Prod environment.
But my requirement is to maintain 13 months of data for all queries, and the 1st month's data should be purged when the 14th month starts.
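For what it's worth, the setting I believe controls this on the Splunk side is frozenTimePeriodInSecs in indexes.conf. The index name is a placeholder, and 34128000 is my own arithmetic for roughly 13 months (395 days × 86400 s):

```ini
[my_dbconnect_index]
homePath   = $SPLUNK_DB/my_dbconnect_index/db
coldPath   = $SPLUNK_DB/my_dbconnect_index/colddb
thawedPath = $SPLUNK_DB/my_dbconnect_index/thaweddb
# Roll buckets to frozen (deleted by default) once all events in a
# bucket are older than ~13 months: 395 days * 86400 s = 34128000 s
frozenTimePeriodInSecs = 34128000
```

One caveat I'm aware of: freezing happens per bucket, so a bucket is only removed once its newest event passes the threshold, which can make the effective retention slightly longer than 13 months.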
↧
Props.conf Source stanza on Universal Forwarders
I'm currently looking at deploying some changes to ease management of input files in our environment. I've confirmed that the only way to bring in multiple whitelisted files and link them with a sourcetype is to use a source stanza in props.conf. From what I've read and tested, the source-based props.conf would have to run on the forwarder instead of the indexers.
Has anyone tested the effects of source stanzas on resource utilization on a forwarder?
Here is an example of our configs.
INPUTS.CONF
[default]
index = my_index
[monitor:///export2/MyApp/*/logs/]
whitelist = MyApp[^/]*\.log|perflog\.txt[^/]*
followSymlink = false
disabled = 0
----------
PROPS.CONF
[source::.../MyApp*]
sourcetype = my_index:agent
[source::.../Auto*]
sourcetype = my_index:auto
[source::.../MyAppManager*]
sourcetype = my_index:manager
[source::.../MyAppWeb*]
sourcetype = my_index:web
[source::.../perflog.txt*]
sourcetype = my_index:perflog
↧
Can someone explain the web service architecture of Splunk?
I just started using Splunk and bought an annual license.
But I'm stuck getting approval for regular use, for security reasons: the security team is suspicious of the fact that all HTTP methods work.
So I have to make them understand why all HTTP methods have to be used.
However, I lack knowledge about the Splunk architecture and framework.
Could you explain the web framework and architecture of Splunk Enterprise, and why all the HTTP methods (GET, POST, PUT, DELETE, TRACE, etc.) are needed?
PS.
As I asked in a previous question ( https://answers.splunk.com/answers/719361/how-to-disable-methods-from-the-httpslocalhost8000.html ), it was not possible to limit HTTP methods using the internal web service.
The other option is a reverse proxy using another HTTP service such as nginx or Apache.
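To make the reverse-proxy idea concrete, this is the kind of nginx configuration I have in mind (a sketch only; the server name, upstream address, and the choice of allowed methods are assumptions for illustration, not a vetted policy):

```nginx
server {
    listen 443 ssl;
    server_name splunk.example.com;

    location / {
        # Allow only the methods assumed necessary here (GET/POST/HEAD,
        # an illustrative guess); all other methods get 403
        limit_except GET POST HEAD {
            deny all;
        }
        proxy_pass https://localhost:8000;
    }
}
```

Whether Splunk Web keeps working with methods restricted this way is exactly what I'm unsure about, since its REST-style endpoints may rely on PUT and DELETE.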
↧
↧
About the time shown when searching
Hello.
The time shown when I search is off, and I'd like to fix it. Where should I fix this?
I've seen articles that say to change the timezone in the user settings,
but due to licensing we can't create users, so I'm searching with the initial (default) user.
↧
Why is "[indexer] Eventtype 'wineventlog-ds' does not exist or is disabled" still showing on my SH even though I already installed the Splunk Add-on for Microsoft Active Directory on the indexer?
The Splunk Add-on for Microsoft Active Directory installed on the SH and the indexer is an up-to-date version. We do see results on the dashboard, but we are bothered by that yellow warning icon. Is there anything we can do to get rid of the warning? Are we missing something? Thanks in advance.
↧
False alert: delay in log writing?
We are getting a random false alert from a Splunk (6.5.2) search that checks whether a certain string is not found in a log file within the last 15 minutes.
When we investigated and re-ran the search, the string was there for the alert period, so it shouldn't have triggered any alert.
We couldn't find any relevant error in the splunkd log on the forwarder, but I did notice these two consecutive entries in metrics.log:
1/25/19 4:55:01.800 PM 01-25-2019 16:55:01.800 +1100 INFO Metrics - group=per_source_thruput, series="/XXX/systemerr.log", kbps=10.196221, eps=0.193555, kb=316.072266, ev=6, avg_age=1389.166667, max_age=1667
1/25/19 4:22:59.801 PM 01-25-2019 16:22:59.801 +1100 INFO Metrics - group=per_source_thruput, series="/XXX/systemerr.log", kbps=6.268667, eps=0.161285, kb=194.334961, ev=5, avg_age=211.600000, max_age=265
We got the false alert around 4:54, so if I understand correctly, looking at the time gap and the "avg_age" value, it's possible the alert was triggered because the data was only read after 4:55; there was no update (no new lines) to the file from 4:22 until 4:55.
So the question is: is my understanding correct? Is the problem caused by a delay in writing the data to the source log file, or by a processing delay in Splunk itself?
I'd appreciate any advice.
↧
Line Chart over _time by fieldname
Hi, I am trying the query below to plot a line chart:
index=abc | eval Time=round(endtime-starttime) | chart values(Time) as Time over _time by Type
Here there can be multiple Type values.
My problem is that some _time slots have a multivalue Time field, due to which they are not plotted on the graph, and I am not able to use mvexpand on Type because the Type values are not fixed.
My output looks like this:
_time            Build
29/01/2019 12:01 2
                 3
                 4
29/01/2019 12:12 5
From the above, only the value 5 gets plotted; the others are not shown because they are multivalue, and I cannot apply `mvexpand Build` here because the values can change.
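For illustration, one variation I have considered is replacing values() with an aggregate so each time slot yields a single number per Type; whether avg is the right aggregation for my data, and what span fits, are open questions:

```spl
index=abc
| eval Time=round(endtime-starttime)
| bin _time span=1m
| chart avg(Time) as Time over _time by Type
```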
↧
↧
save panels after reloading the page
Hi all, I have some checkboxes that show or hide single-value panels when clicked/unclicked. It works only until the page is reloaded. Does anybody know how to save the state so that my checked panels are still displayed after reloading the page?
↧
charting.fieldDashStyle error
I have a chart in which only one field (a percentage field) needs a dotted-line style.
I need to give the percentage field the shortDash style using fieldDashStyles.
Here is my chart:
index="singers"
| top 20 singers
| eventstats avg(count) as average
| eventstats sum(count) as total
| eval percentage = round((count / total) * 100, 0)
| fields - total
.......
.......
In the source view of my dashboard, an error message shows up that says, 'Unknown option name="charting.fieldDashStyle" for node="chart"'.
I'm unable to save the dashboard.
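For reference, this is the option syntax I believe is expected (the plural fieldDashStyles with a JSON map from field name to style; the singular name quoted in my error makes me suspect a typo on my end):

```xml
<option name="charting.fieldDashStyles">{"percentage": "shortDash"}</option>
```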
Any help would be appreciated. Thank you.
↧
Get the total number of events
Hello!
I'm trying to calculate, using a search, the percentage of the total number of events that a field covers.
This is my search :
[some search]
| fieldsummary
| rename distinct_count as unique_values
| eval percentage = (count / [total]) * 100
| table field count unique_values percentage
| fieldformat percentage = printf("%.2f",percentage)."%"
I'm trying to get the [total] number of events, regardless of the number of results found.
stats count can't help me because it is not relevant after fieldsummary.
If you know any way to just get the field coverage percentage without calculating it, that's even better.
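For illustration, one workaround I have been trying: appendcols attaches a separately computed total to the first fieldsummary row, and filldown carries it onto the rest. Whether subsearch limits matter for my data size is an assumption I haven't verified:

```spl
[some search]
| fieldsummary
| rename distinct_count as unique_values
| appendcols [ search [some search] | stats count as total ]
| filldown total
| eval percentage = (count / total) * 100
| table field count unique_values percentage
| fieldformat percentage = printf("%.2f", percentage)."%"
```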
↧