Channel: Questions in topic: "splunk-enterprise"

How to get the number of days between two events in a Splunk search?

My query:

```
index=main source=secure.log sourcetype=* | stats earliest(_time) as start, latest(_time) as stop | eval start=strftime(start, "%m/%d/%y") | eval stop=strftime(stop, "%m/%d/%y") | eval days = round((start-stop)/86400)
```

Please refer to my result below:

```
start       stop
11/16/18    11/23/18
```

Here I can see the start and stop dates, but I want the difference between start and stop so I can find the number of days between them. In the result above I want a days column showing a difference of 7 days, but the days column is not appearing. Please suggest.
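A likely cause is that `strftime` converts `start` and `stop` to strings before the subtraction, so the arithmetic in `eval days` fails (and the order `start-stop` would give a negative number anyway). One possible rewrite, a sketch not tested against this data, is to compute the difference while the fields are still epoch numbers and only format them afterwards:

```
index=main source=secure.log sourcetype=*
| stats earliest(_time) as start, latest(_time) as stop
| eval days=round((stop-start)/86400)
| eval start=strftime(start, "%m/%d/%y"), stop=strftime(stop, "%m/%d/%y")
| table start stop days
```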

Splunk Fundamentals 1: Uploading Module 4 Lab files

I'm unable to view the Module 4 Lab files after uploading them via Splunk Enterprise.

What permissions are required to use the Lookup File Editor App?

Hello, I installed the Lookup File Editor app, https://splunkbase.splunk.com/app/1724/. It seems to work fine for users with the admin role; there is a slight delay in populating results when I use `... | inputlookup`, but it works. However, one of the power users is unable to edit lookups that he owns and has shared globally. Are there specific permission requirements for the Lookup File Editor app, or for the lookup itself, that need to be set? Thank you.

Drilldown doesn't work with a base search for the calendar custom visualization

We are using the calendar visualization to show events in a dashboard. I have tried to add drilldown behavior using `click.value`. This works perfectly if I don't use a base search; once I switch to a base search, `click.value` works only for the first event. Is there a workaround for this issue?

```
Calendar View - click_value: $selected_value$ date: $date$|eval _time=time | search dataType=ptoData | timechart span=1d count by resourceName$click.value$strftime($click.value$,"%d-%b-%Y")
```

How can I highlight table cells that can either be multi-value or single value?

Hi, I am trying to highlight values in my table, but I am having trouble implementing it because the table cells can be either single-value or multi-value.

If I only needed to highlight single-value cells, I could use the Splunk example "Table Cell Highlighting" from the "simple_xml_examples" Splunk app. This works fine for me when highlighting table cells that only have one value. If I only needed to highlight each value in multi-value cells, I could use the example from the link below, which also works perfectly: https://answers.splunk.com/answers/694420/is-it-possible-to-highlight-a-value-within-a-multi-1.html

My problem is that my cells can be either single-value or multi-value, so I have to write a script that can highlight the cell/value in both cases. For example, suppose I had the following fields/values: Field_A = Apple, Field_B = Banana, Field_C = Orange, Apple (let's say this is a multi-value field). If I wanted to highlight all "Apple" values in my table, I would expect to see the following: ![alt text][1] In a single-value cell, you can see that the whole cell is highlighted (Field_B). In a multi-value cell, you can see that just the value is highlighted (Field_D).

I've tried combining code from both JS scripts but have had no luck so far. I've also tried using the two separate JS files on the dashboard, which worked at the beginning, but later I noticed that, in some cases, it was displaying multi-value cells as comma-separated single-value cells. Has anyone implemented this before? Thanks!

[1]: /storage/temp/291967-splunktable.jpg

Sunburst Viz: How to increase number of levels displayed initially when choosing "Zoom in" as an action on a chart?

Hello, I am using Sunburst Viz for one of my charts. When I choose "Zoom in" as an action, I can only see 2 layers initially; when I click on anything, it zooms into more layers. Can I increase the number of layers I see initially? For example, I want to see 3 layers initially. Thanks in advance!

How to add a value from a lookup table to results, by using a field value from the search?

I want to include a value from a lookup table in search results, by using a field value from the main search.
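Assuming the lookup file shares a key field with the events, the `lookup` command can append a column from the file to each result. A minimal sketch, where the file name `assets.csv` and the fields `host`, `owner`, and `asset_owner` are all hypothetical placeholders:

```
index=main sourcetype=syslog
| lookup assets.csv host OUTPUT owner AS asset_owner
| table _time host asset_owner
```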

How to set the default host name in the url link to a report?

The certificate has `hostname.domain.local`, but the scheduled reports come out with `hostname:port/PathToReport`, minus the `domain.local`. I have checked `etc/system/local/server.conf` and it has the fully qualified domain name in there, but it is not being used in the report links.

How to ensure no data is lost (add back the databases) if a server is rebuilt using an Ansible script?

We have an Ansible script that rebuilds and reindexes a Splunk indexer if for some reason it fails. We also have incremental backups of the Splunk databases (for this question, let's say "Data1"). While the script can rebuild the server, what is the best way to add those databases back after a rebuild so we do not lose all the data we have saved? Thanks in advance for any assistance.

Get events from the first day of each month

Hi, we have a report generating data on the first day of each month and also on the first day of each week. We need to get the data from the first day of each month. We have the query below:

```
| eval assetCount=tonumber(substr(Message,42))
| eval month = strftime(_time, "%m")
| stats max(assetCount) as "Total Count" by month
| sort month desc
```

But this gives the last data point of each month. Can you please help in getting the first data point of each month, that is, the report generated on the first day of each month?
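Since `stats earliest()` returns the value carried by the chronologically earliest event in each group, one possible rewrite (a sketch, untested against this data) is:

```
| eval assetCount=tonumber(substr(Message,42))
| eval month=strftime(_time, "%m")
| stats earliest(assetCount) as "Total Count" by month
| sort month desc
```

Alternatively, if only events from day 1 should count at all, filtering with `| where strftime(_time, "%d")="01"` before the `stats` would restrict the search to the monthly report's events.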

Writing a Splunk Query - Unique Count of Initial Access Key Usage from Cloudtrail

I have a use case to write a Splunk query that displays, in a line or area chart, the unique and initial AWS access key usage by IAM users in our org, trending over the past year. Management wants to be able to visually show increased cloud adoption over time. Any ideas on how to display this? I feel like I'm almost there with `stats`, but not quite:

```
index=blah sourcetype=blah user_type=SAMLuser | stats earliest(eventTime) by userIdentity.userName
```

This almost gets me there, but it won't depict the stats in a pretty line chart. Thanks!
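One way to turn first-seen times into a trend is to bin each user's earliest event by month and count (and optionally accumulate) new users per bin. A sketch under the same index/sourcetype assumptions as the question's search, untested:

```
index=blah sourcetype=blah user_type=SAMLuser
| stats earliest(_time) as first_seen by userIdentity.userName
| bin first_seen span=1mon
| stats count as new_users by first_seen
| sort first_seen
| streamstats sum(new_users) as cumulative_users
| eval _time=first_seen
| table _time new_users cumulative_users
```

Rendered as a line chart, `new_users` shows adoption per month and `cumulative_users` shows the running total.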

TailReader - Insufficient permissions - Reindexing

I'm seeing `TailReader - Insufficient permissions` errors in my logs. Will Splunk attempt to re-read those files at some interval? So far I only see it trying once, a few hours back, and not since. :( I also see several `DatabaseDirectoryManager` events in the splunkd log relating to the index these logs should have gone to, so I'm not sure what's going on; perhaps it's just a delay?

```
00 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 19:43:49.481 +0000 INFO HotBucketRoller - finished moving hot to warm bid=kinesis~20~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF idx=kinesis from=hot_v1_20 to=db_1590613020_1589312100_20 size=956243968 caller=size_exceeded _maxHotBucketSize=786432000 (750MB), bucketSize=1015918592 (968MB)
06-04-2020 19:43:49.483 +0000 INFO IndexWriter - Creating hot bucket=hot_v1_21, idx=kinesis, event timestamp=1590429480, reason="suitable bucket not found, number of hot buckets=1, max=3; closest bucket localid=0, earliest=1577836800, latest=1577836800"
06-04-2020 19:43:49.484 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Adding bucket, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 19:43:49.485 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 19:44:15.461 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
06-04-2020 19:44:15.463 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 19:44:16.399 +0000 INFO IndexerIf - Asked to add or update bucket manifest values, bid=kinesis~20~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF
06-04-2020 19:44:16.454 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=1 . Reason='Updating manifest: bucketUpdates=1'
06-04-2020 19:44:16.458 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:02.413 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Updating bucket, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 20:22:02.415 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:02.417 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Updating bucket, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 20:22:02.418 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:02.419 +0000 INFO HotBucketRoller - finished moving hot to warm bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF idx=kinesis from=hot_v1_21 to=db_1590613020_1589312100_21 size=789688320 caller=size_exceeded _maxHotBucketSize=786432000 (750MB), bucketSize=789729280 (753MB)
06-04-2020 20:22:14.438 +0000 INFO IndexWriter - Creating hot bucket=hot_v1_22, idx=kinesis, event timestamp=1590605700, reason="suitable bucket not found, number of hot buckets=1, max=3; closest bucket localid=0, earliest=1577836800, latest=1577836800"
06-04-2020 20:22:14.439 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Adding bucket, bid=kinesis~22~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 20:22:14.440 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:18.375 +0000 INFO IndexerIf - Asked to add or update bucket manifest values, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF
06-04-2020 20:22:18.455 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=1 . Reason='Updating manifest: bucketUpdates=1'
06-04-2020 20:22:18.457 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:23:15.459 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
06-04-2020 20:23:15.460 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db

How to add a conditional statement in searchmatch?

Hello, I'm new to Splunk, so please pardon me if this is too easy a question. I'm trying to list attempted operations vs. passed operations and categorize them by app. Below is the search that I have:

```
index="cts-test-app" source=*PERF* | rex "DN: (?.*?)[}\s]" | stats count(eval(searchmatch("GET /Refid"))) AS "Attempted" count(eval(searchmatch("POST /refid"))) AS "Passed"
```

Now, for both operations there could be another string indicator. Essentially I want to insert an OR condition, something like this:

```
index="cts-test-app" source=*PERF* | rex "DN: (?.*?)[}\s]" | stats count(eval(searchmatch("GET /Refid" OR "GET /SomeId"))) AS "Attempted" count(eval(searchmatch("POST /refid" OR "POST /SomeId"))) AS "Passed"
```

Is there a way to do this with `searchmatch`? If not, can this search be rewritten in a way that achieves this objective? Any help will be much appreciated.
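`searchmatch` takes a single search string, and `OR` is valid inside that string, so one possible form is to put the whole disjunction inside one quoted argument with the phrases escaped. A sketch (untested; the `rex` from the question is omitted since its capture-group name was stripped):

```
index="cts-test-app" source=*PERF*
| stats count(eval(searchmatch("\"GET /Refid\" OR \"GET /SomeId\""))) AS "Attempted"
        count(eval(searchmatch("\"POST /refid\" OR \"POST /SomeId\""))) AS "Passed"
```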

How to delete a directory in /bin of my app during an upgrade

Is there a way to delete a directory in the /bin directory of my app during the upgrade process? I have an app that contains /splunklib in the /bin directory; to be compliant with AppInspect, I have moved it to /lib. When I install the new version of my app with the upgrade option selected, the existing /bin/splunklib directory still remains, so after installation there are two copies of splunklib: one in /bin and one in /lib. So far the only way I have been able to resolve the issue is to delete my app using:

```
./splunk remove app [appname] -auth :
```

and then install the new version. I would like the app upgrade process to take care of this rather than requiring command-line access to the Splunk server.

How to automate default values to populate in a panel when the dashboard is opened?

I have a link list with three tabs (A, B, and C). When A is clicked, three panels open (X, Y, and Z) plus one drilldown that doesn't show values unless one of the panels (X, Y, or Z) is clicked. How do I get the drilldown to be filled automatically with values for the X panel? So when A is clicked, I would have X, Y, Z, and the drilldown, populated with X's values, open at the same time. Much appreciated! Thank you.

How to apply a regular expression that pulls multiple values from an application log and assigns them to given field names

Hi all, I've been struggling to extract certain values from application logs and assign them to given field names. As I don't know how to write regular expressions in Splunk, I need help writing a query to get the desired output. Here is my base search query:

```
https://www.myapplication.com/myapi/version5/autofill/ "ERROR"
```

Here is the output log:

```
"ERROR" "store.view.app.api.controller.myClientLoggingController" "viewhost02" "myview2_2" "catalina-exec-7" "requestId=d4s6666-9d6e-2c0g-7c20-6e9f7wfa7f6" "clientIp=234.234.234.22" "store.view.app.api.controller.myClientLoggingController.logError(?:?):My-AngularApp xxxxxxxxxxxxxxxxxxxxxxxxxxxxxcxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

**NOTE:** in the above log I have replaced the brackets with quotes ("").

Now I want to extract "requestId", "clientIp", and "My-AngularApp" and assign them to the field names "Req_ID", "Cust_IP", and "App_Name" respectively. Can someone please help with a query to achieve the desired output, as I always struggle with `rex` syntax and can't write the query on my own? Thank you in advance.
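A possible starting point, based only on the sample line above (untested; the patterns assume the values never contain quotes or whitespace, and the `App_Name` pattern leans on the `):` that precedes it in the sample):

```
https://www.myapplication.com/myapi/version5/autofill/ "ERROR"
| rex "requestId=(?<Req_ID>[^\"\s]+)"
| rex "clientIp=(?<Cust_IP>[^\"\s]+)"
| rex "\):(?<App_Name>\S+)"
| table Req_ID Cust_IP App_Name
```

If the real events use brackets rather than quotes (per the NOTE), the character classes may need adjusting to match the actual delimiters.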

How to resolve TailReader errors and data loss using universal forwarder (bug during applyPendingMetadata, header processor does not own the indexed extractions confs)?

I've been dealing with this TailReader error for a while and have not been able to fix it despite reading all the answers to similar questions. I'm still experiencing data loss every day. As you can see in the `.conf` files below, I already disabled `INDEXED_EXTRACTIONS`, since the universal forwarder doesn't extract fields at index time, but I'm still getting the error. I was told to migrate to a heavy forwarder, but I would prefer to solve it on the UF if possible. I appreciate any help.

`inputs.conf`:

```
[monitor:///home/audit/oracle/*/v1[12]*.log]
disabled = 0
index = ora
sourcetype = oracle:audit:json
blacklist = (ERROR|lost|ORA|#|DONE)
crcSalt =
initCrcLength = 1000
ignoreOlderThan = 4h
alwaysOpenFile = 1
interval = 30
```

`props.conf`:

```
[oracle:audit:json]
DATETIME_CONFIG = CURRENT
#INDEXED_EXTRACTIONS = JSON
KV_MODE = none
MAX_EVENTS = 5
TRUNCATE = 0
TRANSFORMS-TCP_ROUTING_GNCS = TCP_ROUTING_GNCS
TRANSFORMS-hostoverride = hostoverride
TRANSFORMS-HOST_JSON = HOST_JSON
TRANSFORMS-sourcetype_json11 = sourcetype_json11
TRANSFORMS-sourcetype_json12 = sourcetype_json12
TRANSFORMS-sourcetype_sql11 = sourcetype_sql11
TRANSFORMS-sourcetype_sql12 = sourcetype_sql12
```

How to abort a search if lookup file is causing errors and incomplete results?

Hello all, I'm using a search that baselines user activity (looks back in time). But I've noticed that sometimes the results are incomplete, and this messes with the next search in the pipeline. Does anyone know how to "abort" (and not update) the lookup file if any errors occurred during the search? Thanks so much. ![alt text][1] [1]: /storage/temp/291968-index-cluster-errors.jpg
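If the failure mode is an empty result set, one hedged option is `outputlookup` with `override_if_empty=false`, which leaves the existing lookup file untouched when the search returns no results. It does not guard against a partially complete result set, though, so it is only a partial answer. The index, fields, and file name below are placeholders:

```
index=auth sourcetype=linux_secure
| stats latest(_time) AS last_seen BY user
| outputlookup override_if_empty=false user_baseline.csv
```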

Triggered alert on a scheduled search didn't send an email

Greetings! I have a scheduled rule that runs every minute, and it matched an event at 1:30:03 PM that was supposed to send an email, but it didn't. What could be the cause of this? ![example][1] Any suggestions will be appreciated. [1]: /storage/temp/291969-log.png

How to build a lookup table based on a condition?

Hello all, I can't figure out how to build a lookup with a condition. I have the following table, which is my base search:

```
SubnetName     ip_address
Subnet_ABCD    10.177.99.53
Subnet_1234    10.8.183.3
Subnet_1234    10.8.182.233
Subnet_ABCD    10.177.83.244
```

And the following lookup table:

```
Last_SubnetName  SubnetID         NetStart      NetEnd
Subnet_A         10.177.0.0/16    10.177.0.1    10.177.255.254
Subnet_B         10.8.0.0/16      10.8.0.1      10.8.255.254
Subnet_B         192.16.0.0/24    192.168.0.1   192.168.0.254
```

This is the closest I got after reading several articles, but as you can see, I've had no luck; the result is simply blank every time I try it:

```
index=mybasesearch ( [| inputlookup myLookupTable.csv | table Last_SubnetName,SubnetID,NetStart,NetEnd ] AND last_ip_address >=NetStart AND last_ip_address
```
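Since `SubnetID` is already in CIDR notation, an alternative worth trying is Splunk's CIDR lookup matching: declare the match type in a lookup definition, then use a plain `lookup` instead of range comparisons. A sketch, assuming a `transforms.conf` stanza can be created for `myLookupTable.csv` (the stanza name `myLookupTable` is a placeholder):

```
[myLookupTable]
filename = myLookupTable.csv
match_type = CIDR(SubnetID)
```

The search then becomes `index=mybasesearch | lookup myLookupTable SubnetID AS ip_address OUTPUT Last_SubnetName`, matching each `ip_address` against the subnet ranges.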



