Splunk Security Essentials: After installing the App on our Search Head Cluster, why can't we see any of our data?
I installed the App on a stand-alone server and it worked fine. When I installed it on our Search Head Cluster (SHC), we were not able to view any of the data under the Data Source Check tab. Even clicking "Start Searches" yields no action. What's the fix? Thanks.
↧
Splunk Security Essentials
↧
Rex is not matching as expected
I have a field `user=xyz\user11` and I need to match `user11`, ignoring `xyz`, in the user field.
Below is the rex expression we have been trying, but it either errors out with "unmatched parenthesis" (or similar), or, when it does run, the Result field is not present in the results:
rex field=user (?<Result>\w+\\(\w+))
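The capture logic can be tested outside Splunk; a minimal Python sketch (the field value and the group name Result are taken from the question, and the group name is moved onto the inner group so that only the part after the backslash is captured):

```python
import re

# Sample value of the user field from the question.
user = r"xyz\user11"

# Skip the domain part before the backslash, then capture the
# account name after it into a named group called "Result".
pattern = re.compile(r"\w+\\(?P<Result>\w+)")

match = pattern.search(user)
print(match.group("Result"))  # user11
```

In SPL itself the named-group syntax is `(?<Result>...)`; note that inside rex's quoted regex a literal backslash often needs extra escaping (commonly `\\\\`), so the exact escaping is worth testing on your data.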
↧
↧
Checking if a value is between a list of values
Hello!
Is there a way to check if a number is between a list of ranges in a multi value field?
For example, on this table I would want to create a new true/false field based on whether Value is between one of the values in the Ranges column. I know this should be possible with `mvexpand`, but that would get quite verbose, especially if there were multiple sets of Ranges.
![alt text][1]
I tried looking, but I couldn't find a 'for each' equivalent for multivalue fields, though maybe there is something I missed.
Thanks for the help!
[1]: /storage/temp/254966-screen-shot-2018-09-17-at-13217-pm.png
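The membership test itself is simple logic; a minimal Python sketch of the idea (the `low-high` range-string format is an assumption mirroring the screenshot):

```python
def in_any_range(value, ranges):
    """Return True if value falls inside any 'low-high' range string."""
    for r in ranges:
        low, high = (int(x) for x in r.split("-"))
        if low <= value <= high:
            return True
    return False

# The multivalue Ranges column, represented as a list of strings.
print(in_any_range(42, ["1-10", "40-50"]))  # True
print(in_any_range(42, ["1-10", "60-70"]))  # False
```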
↧
Browser Static Content Caching?
There is one setting for browser caching in the web.conf:
use_future_expires = [True | False]
* Determines if the Expires header of /static files is set to a far-future date
* Defaults to True
Oddly, overriding this setting to False does not appear to change the behavior of the web server:
![HTTP Headers][1]
Splunk (7.1.2) was restarted between web requests and an 'Empty Cache and Hard Reload' was performed in the browser (Chrome). I was reviewing the headers of the default.css file.
Putting this behavior aside (which may be due to a simple misunderstanding on my part)...
The current default value for max-age in the Cache-Control header is so large (1 year) that changes to static files do not propagate to users quickly, forcing them to clear their browser caches.
Is there a way to override the default value of max-age?
The idea would be to set it to 86400 or 172800 (1 or 2 days), allowing changes to flow out to users in about a day or so without disabling browser caching entirely.
[1]: /storage/temp/256018-use-future-expires.png
↧
Log retention by Index
Hello,
I went through a few forum posts and the Splunk documentation on retention settings, but it's still not 100% clear which properties are needed and what their values should be. I would greatly appreciate everyone's help with this topic.
For example: let's say, on average, index X stores 1 GB of data/day and we want to keep the data in Hot/Warm for 5 days and in Cold for 365 days. Will the properties below help in achieving that retention goal?
We have 3 nodes with RF and SF set to 3. The properties below were generated by the sizing app.
**[X]**
homePath = volume:primary/X/db
coldPath = volume:secondary/X/colddb
homePath.maxDataSizeMB = 2559
--> ~2.5 GB
--> Specifies the maximum size of the directory <>/X/db. If this size is exceeded, Splunk moves the buckets with the oldest value of latest time into the cold DB until homePath (<>/X/db) is below the maximum size.
coldPath.maxDataSizeMB = 184319
--> ~184 GB
--> Specifies the maximum size of the directory <>/X/colddb. If this size is exceeded, Splunk freezes the buckets with the oldest value of latest time until coldPath is below the maximum size.
maxWarmDBCount = 100
--> The maximum number of warm buckets. The default is 300, but we want to limit it to 100.
frozenTimePeriodInSecs = 31536000
--> 365 days
--> Number of seconds after which indexed data rolls to frozen. If you do not specify a coldToFrozenScript, data is deleted when rolled to frozen
maxDataSize = auto
--> The maximum size in MB for a hot DB to reach before a roll to warm is triggered. The default ("auto") is 750 MB/bucket.
maxHotSpanSecs=432000
--> 5 days
--> Upper bound of timespan of hot/warm buckets in seconds.
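As a sanity check, the sizing app's numbers appear consistent with roughly 50% on-disk compression of raw data and with cold holding the remaining 360 of the 365 retention days; a rough Python sketch of that arithmetic (the 0.5 compression ratio and the 360-day cold window are assumptions used to explain the numbers, not anything stated by the sizing app):

```python
# Rough sizing check: 1 GB/day raw, ~50% assumed on-disk compression.
raw_mb_per_day = 1024
compression = 0.5        # assumption: indexed data ~ half of raw size
hot_warm_days = 5
cold_days = 360          # assumption: 365 retention days minus 5 hot/warm days

home_mb = raw_mb_per_day * compression * hot_warm_days
cold_mb = raw_mb_per_day * compression * cold_days
frozen_secs = 365 * 86400

print(home_mb)      # 2560.0   -> close to homePath.maxDataSizeMB = 2559
print(cold_mb)      # 184320.0 -> close to coldPath.maxDataSizeMB = 184319
print(frozen_secs)  # 31536000 -> matches frozenTimePeriodInSecs
```

If the numbers hold, the size caps and frozenTimePeriodInSecs reinforce each other: whichever limit is hit first triggers the roll.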
Thanks!
↧
↧
Is there an app for Apigee so I can see stats of sourcetypes, indexes on search head?
I have the Apigee add-on installed on the deployment server. Is there an app that can be installed on the deployer to view all the stats collected from that add-on?
↧
↧
Single value: trellis - how can I increase the number of single values per page?
Hi,
I would like to increase the number of single values per page in the trellis view. It seems that the limit is 20.
Any ideas, please?
↧
How do I make it so links only display when clicking on the h1 tag?
I have a dashboard with a panel that has a heading (h1 tag) and two links under it. Now, I want to display those two links only when I click on the h1 tag, meaning the links should not be shown by default. The sample code I use is shown below:
Click Here
link to dashboard 1
link to dashboard 2
↧
Data display in Splunk
I am a new user to Splunk Enterprise and have a basic question about how Splunk parses and displays data.
I am feeding a few .csv files (timestamp, kv pair) as my input. I was hoping that Splunk would automatically detect the "key" and show it as a field on the right-hand side (under Interesting Fields). That's what is happening for the most part, but it is also appending a value with _. For example, one of the fields is ProductType, and it can appear as ProductType=abc, ProductType=cde, or ProductType=xyz. What I have noticed is that if there is only one occurrence of ProductType=abc and multiple occurrences of the other two, Splunk shows "ProductType_abc" under "Interesting Fields". But when I click on it, it does show all three, so I can still sort.
I learned that we can change config files and also pre-define source fields, but my access is pretty locked down and I don't have direct access to config/sys data. Is there anything I can do in my source file that will make Splunk show just the "keys" under Interesting Fields and not club them together with any of the values?
↧
↧
Which capability should I use for editing the permissions of a report?
I have created a report and I want to change its permissions. The permissions should allow users to read the report but not edit it.
How do I achieve this? Which capability do I need? I do not want to use the admin role or the admin_all_objects capability.
Thanks.
↧
Why aren't the Palo Alto App and Palo Alto Add-on transforming global protect user?
Hi
We have noticed that within the Palo Alto app-->Activity-GlobalProtect that "user" is always unknown.
In the transforms:
[extract_globalprotect_user]
SOURCE_KEY = description
REGEX = User name: (?<user>[^,]+)
[extract_globalprotect_ip]
SOURCE_KEY = description
REGEX = Private IP: (?<ip>[^,]+)
The user should be extracted out of the description.
Within props.conf, in the traffic section:
EVAL-user = coalesce(src_user,dest_user,"unknown")
Has anyone encountered this issue and resolved it?
↧
How do you use a range with the where command?
TransactionName=WPP* | stats count(TransactionStatus) as TOTAL count(eval(TransactionStatus == "true")) as SUCCESS count(eval(TransactionStatus == "false")) as FAILURE by TotalNoOfThreadsInGroup | where TotalNoOfThreadsInGroup=25 OR TotalNoOfThreadsInGroup=50 OR TotalNoOfThreadsInGroup=75
The above query gives the data for thread groups 25, 50, and 75, one row each.
Ideally, the data should be grouped by range: thread group 1 to 25 as one row, 26 to 50 as another, 51 to 75, and so on.
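One way to get rows per range rather than per exact value is to bucket the thread count first; the grouping logic, sketched in Python (bucket width 25, as in the question):

```python
import math

def thread_bucket(n, width=25):
    """Map a thread count to the upper edge of its bucket:
    1-25 -> 25, 26-50 -> 50, 51-75 -> 75, ..."""
    return math.ceil(n / width) * width

print(thread_bucket(1), thread_bucket(25), thread_bucket(26), thread_bucket(75))
```

In SPL, an untested sketch of the same idea would be adding `| eval group=ceil(TotalNoOfThreadsInGroup/25)*25` before the stats command, then using `by group` instead of `by TotalNoOfThreadsInGroup` and dropping the where clause.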
Any Insight will be helpful.
Thanks for looking.
![alt text][1]
[1]: /storage/temp/256021-screen-shot-2018-09-17-at-11529-pm.png
↧
Can you help me translate/transcribe ssl_version values in Stream app SSLActivity source?
I can't find an affirmative document / release note, so if you know, please clarify when this ssl_version field was added to the Splunk Stream app.
I am trying to add the ssl_version field to a dashboard, but the values showing in this field do not match up to SSL/TLS versions I recognize.
We're running Splunk Stream 7.1.2 on Splunk Enterprise 6.6.7. I don't find any field reference in the current [Stream App documentation](http://docs.splunk.com/Documentation/StreamApp/7.1.2/User/InformationalDashboards#SSL_Activity "SSL Activity"), or in [Stream Field Details](https://docs.splunk.com/Documentation/StreamApp/7.1.2/User/StreamFieldDetails).
The sample events I'm seeing are all showing a value of "3.3".
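The values look like raw SSL/TLS record-layer version numbers rather than protocol names; the standard wire-format mapping is shown below (this is a fact about the TLS protocol itself, not something confirmed by the Stream documentation):

```python
# SSL/TLS wire-format "major.minor" versions vs. common protocol names.
SSL_VERSIONS = {
    "2.0": "SSLv2",
    "3.0": "SSLv3",
    "3.1": "TLS 1.0",
    "3.2": "TLS 1.1",
    "3.3": "TLS 1.2",
    "3.4": "TLS 1.3",
}

print(SSL_VERSIONS.get("3.3"))  # TLS 1.2
```

Under that reading, the "3.3" in your sample events would indicate TLS 1.2 sessions.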
↧
↧
How do I send an email to a user dynamically using adaptive response?
Hello Team ,
i have written a query and mapped using a lookup table to get the email ID of a user. i am trying to send an email to this user dynamically using adaptive response to send the email.
I would like to use :- $result.Email$ , but it is not sending the email to the user .
However, if in the search i use `sendemail` and send email, it gets delivered to user correctly
is there any thing missing while sending dynamic email notification in notable events ?
↧
Can you help me with my regex extraction of a field?
Hello Friends,
I have the following issue
I have two types of logs: A & B
A & B come from the same index and have the same source type and the same source (at the client's request),
BUT they differ in two aspects:
1) one contains the **value** "aaa" and the other "bbb"
2) log A has the structure FIELDNAME=VALUE
log B has the structure FIELDNAME = VALUE\
Since they belong to the same sourcetype, I have no idea how to delete this \ after the value.
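Stripping the trailing backslash is a plain text-cleanup step; the logic in Python (the sample values mirror the question):

```python
import re

def clean_value(raw):
    """Remove a single trailing backslash left over from log format B."""
    return re.sub(r"\\$", "", raw)

print(clean_value("bbb\\"))  # bbb
print(clean_value("aaa"))    # aaa (log A values are untouched)
```

If the cleanup should happen at index time instead, a SEDCMD in props.conf (something along the lines of `SEDCMD-strip_trailing_backslash = s/\\$//`) is a possible direction, though the exact escaping would need testing against your events.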
Please help
↧
How do I check to see if my Splunk Index is full?
I am wondering how I can check whether an index is full. Along the same lines, is there a way for me to see how much data each index is able to hold? Thanks for your help.
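One approach is to compare an index's current size against its configured maximum; both are exposed per index, for example via the REST endpoint /services/data/indexes (fields such as currentDBSizeMB and maxTotalDataSizeMB). A minimal Python sketch of the comparison, with hard-coded sample numbers standing in for the REST response:

```python
def index_fullness(current_db_size_mb, max_total_data_size_mb):
    """Percentage of an index's configured maximum size currently in use."""
    return 100.0 * current_db_size_mb / max_total_data_size_mb

# Sample values standing in for currentDBSizeMB / maxTotalDataSizeMB.
print(round(index_fullness(250_000, 500_000), 1))  # 50.0
```

An index approaching 100% here would start rolling its oldest buckets to frozen to stay under the cap.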
↧