We would like to upgrade our six-year-old perpetual license to a no-enforcement license, but our support contract expired long ago. Is there any way to do that without buying a new support contract?
↧
Upgrade perpetual license to No-enforcement license
↧
Alerts don't have a delete or schedule option on the search head
A customer has 3 search heads in a clustered environment and has pushed savedsearches.conf from the deployer. They are now unable to delete or schedule those alerts via Splunk Web; the options are grayed out.
↧
↧
Property to prevent data truncation in TABLE command
Hi All,
I have a data truncation problem: I have a long event that is >10,000 characters. I updated the TRUNCATE setting in props.conf to 100,000, and this works great for viewing the full event. However, when I want to dump the data in an organized format with the table command, it still limits me to 10,000 characters.
Specifically, I have a search command like this:
{base search} | table key1, key2, key3, ...
{base search} displays more than 10,000 characters if I run it by itself, because I changed TRUNCATE in props.conf. But when I add "| table", the table command behaves as if I am only giving it 10,000 characters, so some output keys in the table are broken.
Is there a different property I can update in props.conf to fix this problem?
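For reference, the 10,000-character cap on extracted fields can also come from search-time automatic key/value extraction rather than from props.conf; a sketch of the corresponding limits.conf setting, as a guess rather than a confirmed fix (10240 is the shipped default):
# limits.conf on the search head - sketch only
[kv]
maxchars = 100000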
↧
"Regex Processor CPU Profiling per Sourcetype" under "DMC -> Indexing -> Indexing Performance: Instances" is not populating any data.
"Regex Processor CPU Profiling per Sourcetype" under "DMC -> Indexing -> Indexing Performance: Instances" is not populating any data.
↧
How can I use a blacklist to ignore directories that include "-"?
I have a monitor set up in inputs.conf on my UF as follows:
[monitor:///log/test]
blacklist = ppd.*\.log$|prod.*\.log$
sourcetype = service_log
index = nptest1
/log/test also contains some subdirectories I want to ignore, named application- or server-. How can I edit my blacklist to ignore directories whose names end in a hyphen?
Thanks!
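For reference, a candidate pattern, as an untested sketch that assumes the unwanted subdirectory names begin with application- or server-, adds a path-segment alternative to the existing blacklist (which is matched against the full file path):
[monitor:///log/test]
blacklist = ppd.*\.log$|prod.*\.log$|/(application|server)-[^/]*/
sourcetype = service_log
index = nptest1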
↧
↧
XML field extraction
I have one XML file.
I want to extract (at search time) the fields/values in between the opening and closing tags, and throw away any of the lines before the very first tag and after the very last one.
(In the XML, the fields/values are located on each line in the form <fieldname>value</fieldname>.)
4. Use the date in the ActionDate field and the time in the ActionTime field as the timestamp.
-423423445345345.10742916222947 Inteccccn 20 Iwildwood 2013-04-24 00:07:00 0 -80.23429525620114,24.08680387475695 local
below is my props.conf and transforms.conf
props.conf
[dreamcrusher]
BREAK_ONLY_BEFORE =
DATETIME_CONFIG =
NO_BINARY_CHECK = true
TIME_FORMAT =
TIME_PREFIX =
category = Custom
disabled = false
pulldown_type = true
PREAMBLE_REGEX = ^<\S+.*
REPORT-dream = dream
transforms.conf
[dream]
REGEX = ^\<(.*?)\>(\S+)\<
FORMAT = $1::$2
======
When I check the events, there are no search-time extractions.
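For reference, an alternative that sometimes avoids a hand-written transform for element-per-line XML is Splunk's built-in XML extraction; a sketch, assuming the events are well-formed XML:
# props.conf - sketch: let Splunk auto-extract XML elements at search time
[dreamcrusher]
KV_MODE = xml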
↧
Splunk Dashboard
Is there a way to see in the GUI when dashboards in Splunk were first created, and also who edited and viewed the dashboards?
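For reference, a sketch of one way to approximate "who viewed it" from the web access logs (the app and dashboard names are placeholders, and this only reaches back as far as _internal retention):
index=_internal sourcetype=splunk_web_access user=* "/app/search/my_dashboard"
| stats count AS views latest(_time) AS last_viewed by user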
↧
Transforms.conf regex field extraction
Hello everybody
I am new to the regex topic.
I have events with the following information:
SPIEE-WIRELESS-MIB::**bsnStationMacAddress**.0 = STRING: **a9:12:fa:13:19:8F**
CISCO-LWAPP-UMBH-CALLT-MIB::**cldcClientSSID**.0 = STRING: **Campus-WLAN**
As we can see, we can present these two (and further logs) in the following format:
blabla-MIB::**FIELDNAME**.0 = Blabla: **FIELDVALUE**
I **have to** apply this extraction in transforms.conf.
My idea is:
[mytransform]
REGEX= (?:.*\-MIB::)(.+)(?:\.0\s\=\s[a-zA-Z0-9]+:\s)(.+)
FORMAT= $1::$2
Both (.+) groups are the field name and the field value. I have extracted them as capture groups, but how do I define them as a Splunk field name and field value?
Thank you in advance
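For reference, FORMAT = $1::$2 already tells Splunk to use the first capture group as the field name and the second as the value; the transform just has to be referenced from props.conf for the relevant sourcetype. A sketch (the sourcetype name my_snmp_logs is a placeholder):
# props.conf - sketch only
[my_snmp_logs]
REPORT-mib_fields = mytransform
# transforms.conf (as above)
[mytransform]
REGEX = (?:.*\-MIB::)(.+)(?:\.0\s\=\s[a-zA-Z0-9]+:\s)(.+)
FORMAT = $1::$2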
↧
JSON field extraction
Hi,
I have the below event in JSON format, and I want fields to be created as "key1", "key2", etc. I am trying the following search, but it is not working:
index="BBB" sourcetype=AAA | spath output=AA path=message.eumObject.eumInfo.customKeys.key1
Please HELP !!
level: info
message: {"eumObject":{"eumInfo":{"eumId":"123456","eumCoRelationId":"","appId":"xxxxx","timeStamp":"2018-08-1316:21:16","pageUrl":"yyyyyy","pageName":"Operations","mmmmm":"","server":"","responseTime":833,"totalResponseTime":1679.081623,"projectId":""},"timingInfo":{"navigationStart":0,"unloadEventStart":0,"unloadEventEnd":0,"redirectStart":0,"redirectEnd":0,"fetchStart":4,"domainLookupStart":4,"domainLookupEnd":4,"connectStart":4,"connectEnd":4,"secureConnectionStart":0,"requestStart":4,"responseStart":17,"responseEnd":17,"domLoading":23,"domInteractive":803,"domContentLoadedEventStart":844,"domContentLoadedEventEnd":850,"domComplete":1169,"loadEventStart":1169,"loadEventEnd":1169},"userInfo":{"upi":"qqqqq","emailId":"","browserInfo":"Mozilla/5.0 (X11; Linux x86_64; rv:54.0) Gecko/20100101Firefox/54.0","timeZone":"","screenResolution":"1366x637"},"appInfo":{},"errorInfo":{"errorCode":"","errorDescription":"","errorType":""},"resourcesInfo":[],"customKeys":{"key1":833,"key2":1433,"key3":846,"key4":844,"key5":833,"key6":833,"key7":1067,"key8":"","key9":"","key10":""}}}
↧
↧
Getting a timeout error when configuring the tenant on the Office 365 add-on
When I add all the required details in the Splunk Add-on for Office 365 and click Add, I get the following error:
HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: //oauth2/token (Caused by : [Errno 110] Connection timed out)
Has anyone seen this before?
↧
Need help with time-based search with data from two sources
I'm looking to put together some reports on vulnerability data where I can show a trending value of both fixed and active vulns at any given time. Our vulnerability data is split across two sourcetypes: one has the asset (asset_id) and the last time it was scanned (last_scan_finished); the other has the asset (asset_id), the vulnerability (signature_id), and the last time that vuln was detected (most_recently_discovered). When a vulnerability is resolved we don't receive any indication in the data, but it will not be detected in future scans.
I'm looking to timechart each combination of asset_id and signature_id, where if the most_recently_discovered field is greater than or equal to the last_scan_finished date it is considered active, and otherwise it is resolved. I've made several attempts, but haven't been able to come up with a workable solution. Any help would be greatly appreciated.
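For reference, a rough sketch of the comparison logic (the sourcetype names scan_status and vuln_detections are placeholders, the time fields are assumed to be comparable or already converted with strptime, and this gives a point-in-time status rather than a full trend):
sourcetype=scan_status OR sourcetype=vuln_detections
| eventstats max(last_scan_finished) AS last_scan_finished by asset_id
| where isnotnull(signature_id)
| eval status=if(most_recently_discovered >= last_scan_finished, "active", "resolved")
| stats count by asset_id signature_id status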
↧
How to index Outlook data into Splunk Enterprise 7.1.2
I have a business requirement to index Outlook data into Splunk. I used the IMAPmailbox, IMAP, and Microsoft Office 365 apps and provided the required inputs (server name, username, password), but no results were found. Please provide the process or a solution for indexing Outlook data into Splunk. Thank you!
↧
How to find the diff between an inputlookup and a search result?
I have a lookup file which has a list of mounts for the respective servers. I also have a script which logs the available mounts every 15 minutes. I want to create an alert if any mount mentioned in the lookup file is missing from the logs. Example:
Lookup file (host_mount.csv):
Host,Mount_to_monitor
host1,/opt
host1,/var
host1,/usr
host2,/var
host2,/foo
host3,/bar
host3,/usr
Say my search result table from the script's log looks like this:
HostName,Mount
host1,/opt
host1,/usr
host2,/var
host2,/foo
host3,/bar
which means the missing diff would be:
Host,Missing_mount
host1,/var
host3,/usr
How should I do this?
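For reference, a sketch of one common pattern (the index and sourcetype of the script's log are placeholders):
| inputlookup host_mount.csv
| rename Host AS HostName, Mount_to_monitor AS Mount
| search NOT [ search index=os sourcetype=mount_log earliest=-15m | dedup HostName Mount | fields HostName Mount ]
| rename HostName AS Host, Mount AS Missing_mount
Anything left over is in the lookup but not in the last 15 minutes of script output, which matches the Missing_mount table above.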
↧
↧
How to use environment default tokens in HTML Dashboards?
Hi Splunkers, I'm trying to use $env:user_realname$ in an HTML dashboard. I've searched a lot about it and realized that I could only get this information with a SplunkJS function, but this is all I have so far:
var tokens = mvc.Components.getInstance("default");
var current=Splunk.util.getConfigValue("USERNAME");
tokens.set("currentuser", current);
The code above only gives me my username, and I want the full name (or realname, as Splunk refers to it).
Thanks everyone!
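For reference, an untested alternative sketch: instead of Splunk.util.getConfigValue, populate the token from a search against the current-context REST endpoint, which exposes a realname field, and set the token from the first result in the search's done handler:
| rest /services/authentication/current-context splunk_server=local
| table username realname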
↧
HELP! KV store is broken
On my search head, I can't load the KV store.
mongod.log says:
2018-08-14T14:46:34.831Z W CONTROL No SSL certificate validation can be performed since no CA file has been provided; please specify an sslCAFile parameter
2018-08-14T14:46:34.836Z F NETWORK The provided SSL certificate is expired or not yet valid.
How can I fix this?
Please help
↧
KV Store Failing
I see other questions on the Answers site, but at this time I feel mine is distinct from those issues. There is a rolling message (across search heads):
ServerA (or any of the others in the cluster), has the following message: KV Store changed status to failed. Failed to start KV Store process. See mongod.log and splunkd.log for details.
OR
ServerA (or any of the others in the cluster), has the following message: Failed to start KV Store process. See mongod.log and splunkd.log for details.
Other solutions that appear to have worked involve changing the KV store member count to an ODD number and resetting it, due to a limitation in MongoDB. We have the SHC deployer and 3 search heads, but honestly we're not using the KV store anyway. Can we just disable the KV store to prevent the message from kicking up all the time?
If we can't just disable it, do we have to add another search head to remove the message?? I can't recommend we remove one...
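For reference, the KV store can be disabled in server.conf; a sketch, on the assumption that no app or lookup in use actually depends on it, and noting that each member needs a restart afterwards:
# server.conf on each SHC member (e.g. distributed via the deployer) - sketch only
[kvstore]
disabled = true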
↧
Olap4j with Splunk DBConnect
Hello Everyone,
I am trying to use the olap4j driver with DB Connect against an OLAP-based database (SAP BW). When I try to load the driver, I get this in the logs:
"java.lang.NoClassDefFoundError: org/olap4j/OlapDatabaseMetaData".
File structure:
../drivers/olap4j-xmla-1.2.0.jar
../drivers/olap4j-xmla-1.2.0-libs/olap4j-1.2.0.jar
Any ideas on the error? If I look inside olap4j-1.2.0.jar, it does contain the class definition for OlapDatabaseMetaData.
↧
↧
Suppress search results
I need help with a very basic search concept. I need a way to suppress search results if a certain condition is met. I have a CSV file (file.csv):
Maint
YES
I need the exact search that would follow this basic logic...
index=* (whatever the search), then look at file.csv; if Maint="YES", ensure the search returns nothing, otherwise return results as normal
Please provide an **actual working search** (I have tried many ways and I am sure I am missing something small; I am not familiar enough with searches to fix minor issues).
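For reference, a sketch of one way to express that logic with a subsearch. It assumes file.csv has a single Maint row, and the never-matching keyword is an arbitrary placeholder whose only job is to make the outer search return nothing when Maint is YES:
index=* (whatever the search)
    [| inputlookup file.csv
     | eval search=if(Maint="YES", "maintModeSuppressEverythingXyzzy", "*")
     | return $search ]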
↧
Setting up an SQS-based S3 input!
Hi
I am running a Splunk instance within my AWS account, and I'm trying to set up a CloudTrail SQS-based S3 input. The CloudTrail logs are stored in a bucket (auditlogs) in a separate account, which I access via a switch role.
I have done the following, however no data appears in the index I have selected:
- Created an IAM policy with the required permissions
- Created the required SQS Queue, granting permissions to the auditlogs bucket to post events.
- Added an event notification on the S3 bucket to forward 'Object-created' events to my SQS Queue
- Confirmed that the SQS Queue is receiving messages
- Added a new input within the AWS Add-on via Splunk Web, using my auto-discovered IAM role
- Configured the input to send data to my audit index.
- Checked the logs on the Splunk instance and found no errors or other issues.
Questions
- The documentation seems very unclear on the need to have an SNS topic in the middle here. Is it a requirement that SQS is updated via a subscription to an SNS topic, i.e., S3 > SNS > SQS > Splunk? Or would S3 > SQS > Splunk also work?
- My auto-discovered IAM role applied to the Splunk EC2 instance is in a separate account from the S3 bucket I'm trying to import data from. Is this going to cause me issues? I assume this is the issue.
I would appreciate any guidance here!
Thanks
↧
Splunk occasionally skips or delays indexing of a log file during rotation
Hello Splunkers,
I have an issue where Splunk sometimes skips indexing a log file during rotation, or delays the indexing during log rotation.
This issue only affects one specific file, so we can rule out blocked queues, timezone issues, network throughput, or a slow-performing indexer/forwarder.
The sar report showed good iostat, CPU, and memory stats on the forwarder.
I don't see any initCrcLength (crcSalt) or file-descriptor-related issues in the Splunk log.
In fact, there are no errors in the Splunk log during this issue.
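For reference, the two inputs.conf settings mentioned above look like this as a sketch; the monitor path and values are placeholders for illustration, not a recommendation:
[monitor:///var/log/myapp/app.log]
crcSalt = <SOURCE>
initCrcLength = 1024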
Any guidance is highly appreciated.
Best Regards,
Ankith
↧