Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles
Browse latest View live

Convert time to epoch time & time zone

In my index, I have an extracted field for a "last checkin time". The time shown is GMT, and I need to use this field in a dashboard to show data accurately. My problem is that my strptime is not working. An example of the extracted field: 2020-02-13 05:00:29.0 (the time is GMT, and it needs to be GMT+8). I have done the following:

index=someindex source="mysource" | eval epoch_time=strptime("last_checkin_time", "%Y-%m-%d %H:%M:%S.%3N")

I have tried adjusting the eval to use the %Q options, but that has not generated a new field that I can use. I have also tried adding %Z at the end of the strptime string to try to force the timezone, but to no avail. I would like to use this time instead of the ingest time (_time) to drive my dashboard. Thanks in advance.
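Two details are worth checking here. In SPL's eval, double quotes denote a string literal, so strptime("last_checkin_time", …) parses the literal text "last_checkin_time" rather than the field's value; the field reference should be unquoted. Splunk's strptime shares most directives with Python's, so the parse-and-shift logic can be sanity-checked outside Splunk using the sample value from the question (a sketch; Python's %f stands in for SPL's %3N):

```python
from datetime import datetime, timedelta, timezone

raw = "2020-02-13 05:00:29.0"  # sample extracted last_checkin_time value (GMT)

# Parse as UTC; Python uses %f for fractional seconds where SPL uses %3N.
dt_utc = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
epoch_time = dt_utc.timestamp()                             # epoch seconds
dt_gmt8 = dt_utc.astimezone(timezone(timedelta(hours=8)))   # same instant, GMT+8

print(epoch_time)           # 1581570029.0
print(dt_gmt8.isoformat())  # 2020-02-13T13:00:29+08:00
```

Note that epoch seconds are timezone-independent; the +8 offset only matters when formatting the value for display, which in SPL is typically done with strftime on the epoch field.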

Splunk Data and apps all got deleted

All data and apps from our distributed architecture were suddenly deleted, including indexes and other configuration. Has anyone faced this issue before? Is there any way to check how this happened?

How to fix "Invalid key in stanza" issue?

Hi all, I have a little issue with an input made via Add-on Builder (Python 3). I have made several inputs, and all the others work correctly, but one does not run, and in _internal I have the error: "Invalid key in stanza [input_http] in /opt/splunk/etc/apps/application-for-splunk/default/inputs.conf, line 68: python.version (value: python3)." I use Add-on Builder 3.0.1 and Splunk Enterprise 7.3.1.1. How can I fix it? Thanks.

Analysis Of SplunkBase Apps for Splunk: the script appears to run well, but I get a lot of errors on scheduled runs

Any ideas how to fix this? The code appears to run OK when launched manually:

/opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py

[I see lots of JSON meta for apps start flashing in.] But if I let the schedules run, I find entries like the following every 4 hours:

# cat /opt/splunk/var/log/splunk/splunkd.log | grep -i getSplunkAppsV1
02-13-2020 03:34:17.455 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" Traceback (most recent call last):
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 90, in
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" if __name__ == "__main__": main()
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 61, in main
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" for app_json in iterate_apps(app_func):
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 51, in iterate_apps
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" data = get_apps(limit, offset, app_filter) ### Download initial list of the apps
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py", line 18, in get_apps
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" data = json.load(urllib2.urlopen(url))
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 154, in urlopen
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" return opener.open(url, data, timeout)
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 435, in open
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" response = meth(req, response)
02-13-2020 03:34:17.456 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 548, in http_response
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" 'http', request, response, code, msg, hdrs)
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 473, in error
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" return self._call_chain(*args)
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 407, in _call_chain
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" result = func(*args)
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 556, in http_error_default
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
02-13-2020 03:34:17.457 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/analysis_of_splunkbase_apps/bin/getSplunkAppsV1.py" urllib2.HTTPError: HTTP Error 503: Service Unavailable
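The failing call is json.load(urllib2.urlopen(url)), and an intermittent HTTP 503 from the remote API on scheduled runs usually indicates transient unavailability or rate limiting rather than a bug in the script. One common mitigation is to wrap the fetch in a retry with exponential backoff; this is a sketch, not the app's actual code, and with_retries plus the Python 3 urllib names are illustrative:

```python
import time
import urllib.error

def with_retries(fetch, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fetch(); on HTTP 503, back off exponentially and retry.

    Other HTTP errors, and a 503 on the final attempt, are re-raised.
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except urllib.error.HTTPError as err:
            if err.code != 503 or attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In the script this would wrap the download, e.g. data = with_retries(lambda: json.load(urlopen(url))), so a single unlucky 503 no longer kills the whole scheduled run.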

Is it possible to include dashboards as part of the email body in the scheduler instead of a PDF attachment?

Hi all, could you please help me: is there any possibility of including the dashboard content in the email body when we schedule it? I know that we can include report content as part of the email body; however, I am unable to do the same for dashboard content. Thanks.

How to capture the CPU load average shown in Windows Task Manager in Splunk

Hi, I am trying to create a report that captures the overall CPU load average. I have created a search query in Splunk using perfmon counters, but it does not represent the overall CPU load, as individual counters give separate values. I want to capture the overall CPU load as displayed in Windows Task Manager. Please help me with a search query for overall CPU usage. I am using the search below:

host="*" source="Perfmon:Processor" counter="% Processor Time" instance="_Total" object="Processor" | bucket _time span=1d | chart limit=0 avg(Value) over _time by host | eval Time=_time | convert timeformat="%d-%b %H:%M:%S" ctime(Time) | fields - _time | table Time, *

90-day average per field using summary indexing and outputlookup

I have a requirement to get the average of the count of IPs over the last 90 days. I have thought of two approaches to distribute the query overhead across the 90-day span:

1. Schedule a query to run every day at the end of the day and collect the result in a CSV file using outputlookup, appending the results to the existing file each day. Then use this file to get the 90-day average.
2. Use summary indexing: schedule a query to run every day at the end of the day and collect the result in a summary index, giving the search a name and appending the results to the index each day. Then use this index to get the 90-day average.

Question: Is there a way to restrict the lookup CSV file or the summary index to just the latest 90 days of records? Meaning, I want to purge the rows in the index or the file that are older than 90 days. How can I do it?
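For the purge question, the usual pattern for a lookup is to periodically rewrite it, keeping only rows newer than the cutoff; the filtering itself is just an epoch comparison. Here is a sketch of that logic in Python (prune_old_rows and the _time field name are illustrative, not Splunk internals):

```python
import time

def prune_old_rows(rows, max_age_days=90, now=None, time_field="_time"):
    """Keep only rows whose epoch timestamp falls within the last max_age_days."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # 86400 seconds per day
    return [r for r in rows if float(r[time_field]) >= cutoff]

rows = [{"_time": 1_000_000, "count": 5}, {"_time": 9_000_000, "count": 7}]
recent = prune_old_rows(rows, now=9_100_000)  # only the second row survives
```

In SPL the same idea is typically a scheduled search along the lines of `| inputlookup file.csv | where _time >= relative_time(now(), "-90d@d") | outputlookup file.csv`; for the summary-index variant, setting the index's retention (frozenTimePeriodInSecs in indexes.conf) to 90 days achieves the purge without any search at all.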

Monitoring Windows Service State History

Hi fellow Splunkers, sorry, I don't have enough karma points to post a link. I followed a Splunk blog post about monitoring Windows services by Jason Conger: "TIPS & TRICKS: Monitoring Windows Service State History". I used wmi.conf to monitor the services on my servers. With the snippet below for server1, the results turn out great; I have a full service state history of server1 for the past day:

index=windows sourcetype="WMI:Services" host=server1 earliest=-1d@d latest=now | streamstats current=false last(State) AS new_state last(_time) AS time_of_change BY DisplayName | where State != new_state | convert ctime(time_of_change) AS time_of_change | rename State AS old_state | table time_of_change host DisplayName old_state new_state

With the snippet below, I would like a service state history of all the servers in my environment for the past day. However, the results did not turn out the way I expected:

index=windows sourcetype="WMI:Services" host=* earliest=-1d@d latest=now | streamstats current=false last(State) AS new_state last(_time) AS time_of_change BY DisplayName | where State != new_state | convert ctime(time_of_change) AS time_of_change | rename State AS old_state | table time_of_change host DisplayName old_state new_state

Did I miss anything? I would be grateful if somebody pointed me in the right direction. Thanks!
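One thing worth checking with host=*: the streamstats groups only BY DisplayName, so the same service name on different hosts shares a single state stream, and interleaved events from many servers register as spurious "changes"; grouping by host as well (BY host DisplayName) keeps each server's history separate. The grouping logic can be illustrated outside SPL (a Python sketch of the state-change detection; state_changes is a hypothetical helper, not Splunk code):

```python
def state_changes(events):
    """events: (time, host, service, state) tuples in time order.

    Yield (time, host, service, old_state, new_state) whenever a service's
    state differs from its previous state, tracked per (host, service) --
    mirroring `streamstats ... BY host DisplayName` rather than BY DisplayName
    alone, so two hosts running the same service never pollute each other.
    """
    last = {}
    for t, host, svc, state in events:
        key = (host, svc)
        if key in last and last[key] != state:
            yield (t, host, svc, last[key], state)
        last[key] = state
```

With grouping on DisplayName alone, the server2 event in the usage below would have looked like a change on the shared "Spooler" stream; keyed per host, only the genuine server1 transition is reported.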

How to highlight only the maximum value of a cell in table representation

In my dashboard, a table panel displays the percentage of a metric for each month. Here is the query:

| stats avg(metric_perc) as Metric over Period by Host

And here are the results of my search:

Sl.No  Period  host1  host2  host3
1      Jan     36     52     64
2      Feb     43     69     66
:      :
12     Dec     26     45     58

I want to highlight only the maximum value for each host for each period. [Note: host1, host2, and host3 are not column names; they are values of the column called Host. As I used the keyword `over`, I got the above table representation from the combination of the three fields Metric, Period, and Host.] Could someone help me find out how I can highlight or color-code only the maximum value of Metric for each host and period?

How to drill-down and see the events in the dashboard panel?

My dashboard has multiple panels. One particular panel contains a line chart showing average response time in 5-minute spans. I want to be able to click any point on the line chart and have it show, in that same panel, the associated events responsible for generating that point. Basically, I want to drill down and see the resulting events in the same place/panel itself. Is this doable in Splunk?

Lookup Definition - Default Matches not working as expected

Hi, I have built a lookup table, definition, and automatic lookup. I've set the definition to:

Min Matches - 1
Max Matches - 1
Default Matches - None

The additional lookup fields appear in the data as expected, with one result having the value "None". However, when I click the "None" value, no results are found. If I then add a wildcard before the value ("*None"), the one result in question appears. Has anyone else come across this issue? Thanks

Exporting dashboard as a PDF on a custom template

Hi, we have a custom template for PDF documents, and we want to print/download dashboards in that same internal template. Is there any way to export a dashboard from Splunk as a PDF in our own PDF template/format? Thanks

RHEL version on all the UFs

Hi all, could you please help me with a query to get the Red Hat Linux version on all the UFs? I have checked many Splunk Answers posts; the queries there use the metrics logs, and I got only the Splunk version and the OS as "Linux", but not the actual Linux version on the host.

Smart PDF Export issue

Hello, I am using the Smart PDF Export app (https://splunkbase.splunk.com/app/4030/). When I try to export my dashboard, it gets stuck and won't generate a PDF. My dashboard has 3-4 tables and a couple of charts. I have also increased the row count in limits.conf:

[pdf]
max_rows_per_table = 50000

I tried different browsers as well, but no luck. Is there anything we can do here? (screenshot attached)

WARN: path=/masterlm/usage: Signature mismatch between license slave=x.x.x.x and this LM. Plz make sure that the pass4SymmKey setting in server.conf, under general is the same for the License Master and all its slaves from ip=x.x.x.x

Hi team, I am getting this error notification on the deployer. I have checked pass4SymmKey; the encrypted value is the same for the deployer and the license master, and there has been no change in the password. Kindly assist in resolving the error. We are running 6.5.0, and before upgrading I need to clear all error notifications.

How to declare the timerange in a Splunk report that will be generated once a week?

Hello there. There is a report which shows some useful information about an application. Now I want to declare the timerange in the report (the last week, for example 03.02.2020 00:00 until 10.02.2020 00:00). Or maybe there is a possibility to declare the timerange in the description of the report, like a variable or something similar. Here is my search string; maybe I can build something in:

index=smsc tag=MPRO_PRODUCTION DATA="*8000000400000000*" OR "*8000000400000058*" | dedup DATA | chart count by SHORT_ID, command_status_code | search NOT ESME_RTHROTTLED=0 | eval "THROTTLING %"=(ESME_RTHROTTLED/(ESME_RTHROTTLED + ESME_ROK)*100) | sort - ESME_RTHROTTLED | head 15

Thanks for your help!
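The 03.02.2020-10.02.2020 example is the previous full Monday-to-Monday week; in a scheduled report that range would normally come from the search's time settings (in SPL, snap-to-week modifiers such as earliest=-1w@w1 latest=@w1). For building the label text itself, the boundary computation can be sketched as follows (Python; last_week_range is a hypothetical helper, and the sample date is taken from the question):

```python
from datetime import datetime, timedelta

def last_week_range(today):
    """Return (start, end) for the previous full Monday-to-Monday week."""
    midnight = datetime(today.year, today.month, today.day)
    this_monday = midnight - timedelta(days=today.weekday())  # weekday(): Mon=0
    return this_monday - timedelta(days=7), this_monday

start, end = last_week_range(datetime(2020, 2, 13))  # run on a Thursday
label = f"{start:%d.%m.%Y %H:%M} until {end:%d.%m.%Y %H:%M}"
print(label)  # 03.02.2020 00:00 until 10.02.2020 00:00
```

A label built this way matches the example range in the question exactly, whichever day of the current week the report runs on.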

Passing multiselect in Where clause

I have a multiselect input (token grade_name) whose selected values look like 9,6,7. I want to pass these selected values into a where clause:

| inputlookup prod_students.csv | table school_name, school_id, grade_name | eventstats max(grade_name) as mx min(grade_name) as mn by school_name, school_id | where grade_name IN ("$grade_name$")

The above query is not working. I can't move the where clause next to the lookup, as I'm filtering on the eventstats field.

SAI compatibility with the AWS App and Add-on

Hi, is it viable to install the AWS Add-on and SAI 2.0.2 on a standalone Enterprise instance? According to the documentation, SAI 2.0.2 cannot work with the AWS Add-on to monitor AWS events. But can they coexist on the same instance if the AWS App and AWS Add-on are used for monitoring AWS, and SAI 2.0.2 is used to monitor Windows and Linux servers?

Filtering data from lookup csv file based on time difference

I have a lookup CSV file with the following data:

Day         Messages
12/02/2020  1571
12/02/2020  302
12/02/2020  1

What I want to do is read the Day column, subtract the day from today's date, and check whether the difference is greater than 30 days; if diff > 30, filter the row out. I tried the following query and it doesn't work:

| inputlookup messages_per_day.csv | eval today=strftime(now(), "%d/%m/%Y") | eval diff=today - Day
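The subtraction fails because today and Day are both %d/%m/%Y strings, and "-" does not perform date arithmetic on strings; both sides need converting to epoch seconds first, after which the difference in days is a plain division. The intended check, sketched in Python (older_than is a hypothetical helper; the format string matches the one in the question):

```python
from datetime import datetime

def older_than(day_str, days=30, now=None):
    """True if day_str (dd/mm/YYYY) lies more than `days` days before now."""
    day = datetime.strptime(day_str, "%d/%m/%Y")
    now = now or datetime.utcnow()
    return (now - day).days > days

print(older_than("12/02/2020", now=datetime(2020, 4, 1)))   # True  (49 days old)
print(older_than("12/02/2020", now=datetime(2020, 2, 20)))  # False (8 days old)
```

In SPL the equivalent is along the lines of `| eval diff_days = floor((now() - strptime(Day, "%d/%m/%Y")) / 86400) | where diff_days <= 30`, keeping only rows from the last 30 days.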

Setting up "Windows Host Information" gathering with universal forwarder?

Good morning. I wanted to ask for some assistance/clarification on setting up the Windows host information gathering function in Splunk, not just for local hosts but for remote hosts as well, via the universal forwarder. I am trying to follow the document below, but I am not clear on how to set things up with a remote server and the universal forwarder: Splunk Enterprise - Getting Data In - Monitor Windows host information, located here: https://docs.splunk.com/Documentation/Splunk/7.2.6/Data/MonitorWindowshostinformation

In the section "Use Splunk Web to configure host monitoring", subsection "Select the input source", it describes choosing the "Local Windows host monitoring" option. I have performed the steps outlined, and I am indeed getting information from my Splunk server, but the documentation is not entirely clear on how to do this for remote servers. When going into Settings > Data inputs > Forwarded inputs (as opposed to local inputs) > Files and directories > New remote file and trying to set up a new data input, there is no option to set up Windows host information; it appears to be available under local inputs only. I am sure I am missing something, but I am not sure what that step is. Any guidance/information on how to set this up would be helpful. Thank you, Dan

