Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Results for each minute in an hour (even if there's no data)

Hello All, suppose I want search results for the past 60 minutes. The way Splunk works now, a minute shows up in the results only if it contains at least one event. What I want is this: suppose the time is 5pm and I search over the past 60 minutes — Splunk should list every minute from 4:00, 4:01, and so on through 5:00, whether or not data is present. If a minute has no data, the result should still show that time with the corresponding columns blank. Can someone please help me with this?
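A sketch of one way to get a row for every minute (the index name here is a placeholder): timechart emits a bucket for every minute in the range even when it is empty, and fillnull turns the blanks into zeros:

```
index=myindex earliest=-60m@m latest=@m
| timechart span=1m count
| fillnull value=0
```

If the report uses stats with `bin _time` instead of timechart, `| makecontinuous _time span=1m` can fill in the missing minute buckets after the stats.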

Extraction regular expression

I am using the extraction (regular expression) option to extract a particular field from the events. The issue I am having is that the extraction works only for older events, not for the new ones coming in. Need some help.
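A hedged workaround while troubleshooting (the sourcetype, field name, and pattern below are made up): an inline rex at search time is applied to every event the search returns, old and new alike, so it can confirm whether the regex itself still matches the incoming data:

```
sourcetype=mysourcetype
| rex field=_raw "user=(?<username>\w+)"
| stats count by username
```

If the saved extraction matches only older events, a common cause is that the format of the incoming events changed slightly, so comparing a new raw event character-by-character against the extraction's regex is worth a look.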

Lookup Fields not updating in the Datamodel

I have built an accelerated datamodel with lookup fields. There is a report scheduled to run every day to populate the lookup. The datamodel does not get updated when the lookup file is updated, but if I disable acceleration, I can see the fields updated. Do I have to rebuild the datamodel every time the lookup file is updated, or is it rebuilt automatically?

Count events per month until a certain day

Hi community, I need your help! Is it possible to make a report that counts the number of events grouped by month, but only up to a certain day? That is, if the current day is the 9th, then events are counted only up to the 9th day of each month. Example:

_time - count
09/09/2017 - 4567
08/09/2017 - 2346
07/09/2017 - 5678
06/09/2017 - 4789
05/09/2017 - 8965
04/09/2017 - 4567
03/09/2017 - 6789
02/09/2017 - 3456
01/09/2017 - 9087

Thanks
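A possible sketch (the index name is a placeholder): compute the day-of-month for each event, keep only events on or before today's day-of-month, then bucket by month:

```
index=myindex
| eval day=tonumber(strftime(_time, "%d"))
| where day <= tonumber(strftime(now(), "%d"))
| timechart span=1mon count
```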

Output for scheduled saved report

Hi, I am new to Splunk and am trying to understand scheduled saved reports. What will the output of a scheduled saved report be? Will it return fresh results, or the results from the last time the report ran? Thanks, Chandana

How to get earliest datetime

I have a field called first_found_date, and for some reason it keeps changing for some of the assets. Example: if an asset "A1" has had 3 first_found_date values over a period of time:

2017-06-20 22:30:30
2016-05-25 22:30:30
2017-01-25 22:30:30

I want to use the earliest first_found (i.e. 2016-05-25 22:30:30) in all my reports. If I use the following query to find the earliest first_found, it takes more than 1 hour to get the value, because it has to go through all the records over that whole period:

sourcetype=a | eval combo = Asset+"_"+ID | stats min(FIRST_FOUND) as earliest_ff by combo

Is there any way to correlate the asset with its earliest first_found_date without editing props.conf?
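One common pattern for this, sketched under the assumption that maintaining a lookup file is acceptable (`earliest_ff.csv` is a made-up name): a scheduled search over a short recent window merges new minimums into the lookup, so no single search ever has to scan the full history:

```
sourcetype=a earliest=-24h
| eval combo=Asset."_".ID
| stats min(FIRST_FOUND) as earliest_ff by combo
| inputlookup append=true earliest_ff.csv
| stats min(earliest_ff) as earliest_ff by combo
| outputlookup earliest_ff.csv
```

Reports can then do `| lookup earliest_ff.csv combo OUTPUT earliest_ff` and get the all-time earliest value instantly.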

How can I get Splunk to run "ps aux" and check for a specific process?

Hello all, I have a simple flask webhook running on my splunk server that is managed by supervisord. Since I'd like to know whether the supervisord process is running, I'm looking for a way to get splunk to call the `ps aux | grep supervisord | grep -v grep` command and send an alert when there are no results. Is there a way to get splunk to do that, or are we looking at an alert that calls a python script that writes to a log file that is in turn indexed by splunk? Is there a way to get this process information into the `_introspection` index by updating some config files? Before setting off on this journey I'd like to get some input from the experts! Best regards, Andrew
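One way this is commonly done, sketched with made-up app and file paths: a scripted input that runs the check on an interval and indexes a status line, which an alert can then match on.

```
# $SPLUNK_HOME/etc/apps/myapp/local/inputs.conf
[script://./bin/check_supervisord.sh]
interval = 60
sourcetype = process_check
```

```
#!/bin/sh
# $SPLUNK_HOME/etc/apps/myapp/bin/check_supervisord.sh
# pgrep -x matches the exact process name, which avoids the classic
# "grep matches the grep (or this script) itself" problem.
if pgrep -x supervisord > /dev/null; then
    echo "supervisord status=running"
else
    echo "supervisord status=down"
fi
```

An alert on `sourcetype=process_check status=down` then fires only when the process is missing, which sidesteps the awkwardness of alerting on zero results.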

How can I use jQuery in Splunk dashboards?

I want to use jQuery in Splunk dashboards. In which directory do I need to keep the jQuery files, and what changes need to be made in the dashboard XML? Please explain with an example. Thanks
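For context, the commonly used pattern (the app and file names below are placeholders): Simple XML dashboards already load jQuery via SplunkJS, and custom JavaScript files go in the app's appserver/static directory and are referenced with the script attribute on the root element:

```
<!-- JS file lives at: $SPLUNK_HOME/etc/apps/myapp/appserver/static/my_script.js -->
<!-- dashboard XML in the same app references it like this: -->
<dashboard script="my_script.js">
  <label>My dashboard</label>
  ...
</dashboard>
```

After adding or changing the file, a Splunk restart (or a visit to the /_bump endpoint) is typically needed so the static asset cache picks it up.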

Count in message string

Hi there, this is a part of my logs:

message="Databases are old: the latest database file is 272 days old."

I want the top hosts whose databases are more than 7 days old. How can I do this? Thanks
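A sketch, assuming the number of days always appears in the same phrase (the sourcetype name is a guess):

```
sourcetype=mysourcetype "Databases are old"
| rex field=message "is (?<days_old>\d+) days old"
| where tonumber(days_old) > 7
| stats max(days_old) as days_old by host
| sort - days_old
```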

Why is some of my log file data indexed multiple times in Splunk?

I have a file, service.log, that is configured to be monitored and indexed in Splunk. When checking in Splunk, some of the events in the log file are indexed multiple times. The Splunk version of my forwarder is 6.5.3. I have already checked that the events in my log file are unique, and inputs.conf has only a single entry for this file. Can someone help advise? Thank you. Responses are appreciated.

Bucket repair while Splunk is running

We have a clustered environment and users experience JournalSliceDirectory errors. Reference documentation states that this is due to corrupt buckets and that an fsck repair is the solution. According to dbinspect, a lot of buckets across various indexes and indexers are corrupt, so we decided to fix all the buckets per indexer. In order to do this we have to put the cluster in maintenance mode, stop Splunk on a specific indexer, and start the repair. We tried this for one indexer; however, the time for a single (large) index was already over 30 hours, so we stopped the process, since it is unwanted to have the cluster in maintenance mode for such a long time. My question: is there a different way to fix these buckets without maintenance mode and with all the indexers running?

Is the Splunk predict command useful?

So, I have a graph that shows the total user logins per day for an application, and I thought it would be cool to predict what the total number of logins for the next month would be. The current graph just shows the previous month of total user logins each day, and when I use predict:

| predict Users period=30 future_timespan=30

it basically just mirrors the previous month into the future month, since it is only looking at the past 30 days. Is there a way to feed predict more "before" data than what I am displaying, so that it doesn't just mirror the previous 30 days?
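One approach, as a sketch (the index and field names are placeholders): search a longer window so predict has more history, then trim the display back to the last 30 days afterwards. Points predicted into the future have timestamps later than now, so they survive the filter:

```
index=myindex earliest=-90d@d
| timechart span=1d dc(user) as Users
| predict Users future_timespan=30
| where _time >= relative_time(now(), "-30d@d")
```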

Visualize json array of array

Hi guys, I would like to convert the following event into a table:

{
  "Id": 1505207351,
  "Start": 1505207651,
  "Resource": "res",
  "Nodes": [ [ "res1", 1 ], [ "res2", 3 ] ]
}

The output should be a table like this:

Id         | Start      | Nodes
1505207351 | 1505207651 | [res1,1] , [res2,3]

Or even better, display a subtable in the Nodes column:

Id         | Start      | Res  | Rank
1505207351 | 1505207651 | res1 | 1
           |            | res2 | 3
2305207351 | 2305207651 | res3 | 4
           |            | res4 | 3

The event sourcetype is _json. My actual query to search the events is this:

index="myindex" | spath | table Id, Start, Nodes

The result is a table, but the Nodes column is empty. Thanks
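A hedged sketch for the flat-table version, under the assumption that each inner array is a [name, rank] pair (the rex is a guess at how spath renders the inner arrays as JSON text):

```
index="myindex" sourcetype=_json
| spath Id | spath Start
| spath path=Nodes{} output=node
| mvexpand node
| rex field=node "\"?(?<Res>[^\",\[\]]+)\"?\s*,\s*(?<Rank>\d+)"
| table Id Start Res Rank
```

The original `| spath | table Id, Start, Nodes` shows an empty Nodes column because spath flattens the arrays into fields named like Nodes{}{}, not into a single field called Nodes.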

[SPLUNK4JMX] Add custom MBeans

Hi, I have a customer with some custom Java MBeans in a three-level hierarchy. It looks like: root level, first sublevel, second sublevel, and then the MBean with its attributes and values. The configuration for the java.lang MBean domain was no problem and works fine, but the test with the wildcard failed. How is it possible to connect to the custom MBeans?

Splunk Hadoop Connect: Field names missing during export

I am fairly new to the Splunk Hadoop Connect App. I have installed it on Splunk Enterprise on Ubuntu (16.04), and I'm using Apache Hadoop 2.8.1 in my environment to save data. I am able to connect, export, import, and explore data from the Splunk Hadoop Connect App successfully. During scheduled exports to Hadoop, I observed that field names/column names are not included in the file saved on Hadoop. For example: I exported a search result (output format - CSV) to Hadoop. When I open the output file (saved on Hadoop), the file has the required data but the field names/column names are missing. I was expecting the first line of the output file to have field names (e.g. SourceIP, SourcePort, DestinationIP, DestinationPort, etc.). Is this expectation wrong? If yes, is there a way the field names can be included during export as well (from the Splunk Hadoop Connect App or any other way)? Note: I tried exporting in XML and RAW format as well, but in each case the field names are missing from the output file.

dashboard drilldown to execute a query with selected value

Hi, I need to create a drilldown for my dashboard. I need to give the user the ability to click on a value, then run a new query and use the value returned from that query to open a new web page. Example: I have the following table: ![alt text][1] I want to exclude the "Drilldown ID" field from the table, and give the user the option to click on a record so that another web page opens. I need the "Drilldown ID" field to be in the URL of the newly opened web page, but I don't want it in my table results. Is that possible? Thanks [1]: /storage/temp/211677-capture.png
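For reference, a pattern that is often suggested for this (the field names and URL are placeholders, and this is an untested sketch): keep the ID in the search results, hide it from display with the table's fields element, and reference it in the drilldown link via a $row.*$ token. Renaming the field to remove the space makes the token simpler:

```
<table>
  <search>
    <query>... | rename "Drilldown ID" as DrilldownID | table Name Status DrilldownID</query>
  </search>
  <fields>Name, Status</fields>
  <drilldown>
    <link target="_blank">https://example.com/view?id=$row.DrilldownID$</link>
  </drilldown>
</table>
```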

Plugin for Internet Explorer to get performance metrics on user behavior.

Hi, we would like to monitor end users' experience in Internet Explorer, primarily to find response times, including page load, on SaaS-like solutions where we are unable to get data from the underlying infrastructure (think things like Gmail or Office 365). I have found uberAgent and Layer8Insight, but that has been more luck than anything else. What I imagine is an agent/plugin/add-on for Internet Explorer that can pull performance metrics from the browser and send them to Splunk, so they can be used in dashboards or ITSI. My inspiration is something like Dynatrace, where there are metrics for server time, network time, full page load time, and time until the page is visible and actionable (the screen has been filled with data and you can press a button). Any help will be much appreciated. Kind regards, las

Search pattern from one file in another file in same time frame

Hello, I have a pattern in one file that I need to check for in another file. The two files look like:

file1:
aaa bbb ccc STRING I NEED 1 ddd some random text
aaa bbb ccc STRING I NEED 2 ddd some random text
aaa bbb ccc STRING I NEED 3 ddd some random text

file2:
www xxx PATTERN FROM FILE 1 yyy zzz
www xxx PATTERN FROM FILE 1 yyy zzz

I tried something like this, but it doesn't return anything:

source="file2" [search source="file1" "aaa bbb ccc" | rex "aaa bbb ccc (?.*) ddd"]

though I admit I don't fully understand the above query. Help would be appreciated, thanks.
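As a sketch of how this subsearch trick usually works (the capture-group name `needle` is arbitrary): the subsearch extracts the strings from file1, and renaming the extracted field to `search` makes the subsearch return its values as bare search terms rather than field=value pairs:

```
source="file2"
    [ search source="file1" "aaa bbb ccc"
      | rex "aaa bbb ccc (?<needle>.+?) ddd"
      | fields needle
      | rename needle as search
      | format ]
```

One likely reason the original returns nothing is that the capture group in the rex has no name (`(?.*)`), so no field is extracted for the subsearch to hand back to the outer search.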

Forescout compatibility

Hi, I want to install the Forescout app in my Splunk Enterprise 6.6. Splunkbase lists it as compatible, but the documentation at https://www.forescout.com/wp-content/uploads/2016/11/ForeScout-App-Splunk-2.5-Guide.pdf says it is compatible only with versions 6.4 and 6.5. Does anyone know if it supports version 6.6?

Calculating percentage

I have the below query:

index=idx1 | search apiname=AccountSec | eval TotalTime=Start-End | stats count as "TotalRequests", count(eval(StatusCode like "2%")) as "SuccessCount", count(eval(StatusCode = "500")) as "Error 500", count(eval(StatusCode > "200")) as "Total errors", count(eval(TotalTime>500)) as "Count1" by apiname

Kindly let me know how I can get the percentage of requests taking less than 500ms, i.e. ((TotalRequests-Count1)/TotalRequests)*100.
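A sketch of one way, keeping the field names from the query above (the other stats clauses are trimmed here for brevity): an eval after the stats computes the percentage:

```
index=idx1 apiname=AccountSec
| eval TotalTime=Start-End
| stats count as TotalRequests, count(eval(TotalTime>500)) as Count1 by apiname
| eval pct_under_500ms=round((TotalRequests-Count1)/TotalRequests*100, 2)
```

Note that if an alias contains a space (like "Total errors"), it must be wrapped in single quotes when referenced in a later eval.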