Hello,
We are trying to index SecAudit-BackendServer.1.log from Dynatrace correctly; however, the non-encrypted log files have special characters just before the timestamp:
*\x00\x00\x00\xEB\x00\x00\x002018-08-14T16:34:51.920+0200 user=toto,source=1.2.3.4,category=AuditLog,object=,event=Access,status=success,message="successfully read audit log /opt/dynatrace/dynatrace-7.0/log/server/SecAudit-FrontendServer.1.log"*
How would you handle this with TIME_PREFIX in props.conf?
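One possible shape for this, as a sketch (the sourcetype stanza name is a placeholder; the TIME_PREFIX regex simply skips any leading non-digit bytes before the ISO 8601 timestamp):

[dynatrace:secaudit]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Skip the leading control bytes (none of them are digits) up to the timestamp
TIME_PREFIX = ^[^\d]*
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30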
Thanks.
↧
Dynatrace audit logs indexing problem
↧
Regex help: regex query works, but fails in Splunk query
I need assistance with a regex to reformat a field.
The field is Message, and its value is:
"*Reason: Details: Attributes: folderPathname folder ManagerDisplayName david foster OwnerEmail user@useremail"*
When developing the regex to select anything after "Attributes:", I was able to create this rex:
"*(?i)Attributes: (?.+)"*
It works on regex101.com and displays the field.
The Splunk query that I wrote is:
"*(base search)||rex field=Message "Attributes: (?.+)*"
But the Message field still shows the entire message value.
Any assistance will help.
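For comparison, a minimal sketch of the same rex with an explicit capture-group name (the name attrs is purely a placeholder; the original angle-bracketed name looks like it was stripped by the forum's HTML). Note also that rex creates a new field rather than replacing Message:

(base search)
| rex field=Message "(?i)Attributes: (?<attrs>.+)"
| table Message attrs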
↧
↧
eventgen events stopped being indexed
Event generation had been working flawlessly for weeks, but it went completely quiet except for a single burst yesterday at 4 PM.
The configuration file was not touched (the generation frequency is still the same), so what could cause the indexing to stop?
I enabled the following, but I can only see the events going into the queue; nothing is being indexed:
debug = true
verbose = true
Taking a look into eventgen_main (there's nothing in eventgen_error):
2018-08-14 15:06:44 eventgen INFO MainProcess Start '2' generatorWorkers for sample 'test_sample.txt'
2018-08-14 15:06:44 eventgen INFO MainProcess Worker# 0: Put 50 events in queue for sample 'test_sample.txt' with et '2018-08-14 15:00:44.141400' and lt '2018-08-14 15:06:44.141451'
2018-08-14 15:06:44 eventgen INFO MainProcess Worker# 1: Put 50 events in queue for sample 'test_sample.txt' with et '2018-08-14 15:00:44.141400' and lt '2018-08-14 15:06:44.141451'
2018-08-14 15:06:44 eventgen INFO MainProcess Worker# 0: Put 50 events in queue for sample 'test_sample.txt' with et '2018-08-14 15:00:44.141400' and lt '2018-08-14 15:06:44.141451'
2018-08-14 15:06:44 eventgen INFO MainProcess Worker# 1: Put 50 events in queue for sample 'test_sample.txt' with et '2018-08-14 15:00:44.141400' and lt '2018-08-14 15:06:44.141451'
2018-08-14 15:12:44 eventgen INFO MainProcess Start '2' generatorWorkers for sample 'test_sample.txt'
Looking at splunkd:
08-14-2018 15:19:41.588 +0000 INFO LicenseUsage - type=Usage s="/opt/splunk/var/log/splunk/test_service.log" st=test_service_log h="ip-172-31-36-143" o="" idx="default" i="EAE584D7-DBF7-4B6F-819B-36BAD9EEE258" pool="auto_generated_pool_enterprise" b=84 poolsz=53687091201
08-14-2018 15:19:41.588 +0000 INFO LicenseUsage - type=Usage s="/opt/splunk/var/log/splunk/test_service.log" st=test_service_log h="ip-172-31-36-143" o="" idx="default" i="EAE584D7-DBF7-4B6F-819B-36BAD9EEE258" pool="auto_generated_pool_enterprise" b=84 poolsz=53687091201
08-14-2018 15:20:42.620 +0000 INFO LicenseUsage - type=Usage s="/opt/splunk/var/log/splunk/test_service.log" st=test_service_log h="ip-172-31-36-143" o="" idx="default" i="EAE584D7-DBF7-4B6F-819B-36BAD9EEE258" pool="auto_generated_pool_enterprise" b=172203 poolsz=53687091201
Can you give me some pointers on what may be happening?
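For reference, the sample's output settings are a common place for queued-but-never-indexed events to get stuck; a hypothetical eventgen.conf stanza to double-check (every value below is an assumption, not taken from the post):

[test_sample.txt]
mode = sample
interval = 360
count = 100
# With outputMode = file or httpevent, confirm the target is still writable/reachable;
# modinput hands events to Splunk through the modular input framework.
outputMode = modinput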
↧
How to extract fields if the event is in JSON format?
Hi,
I have the below event in JSON format, and I want fields to be created as "key1", "key2", etc. I am trying the following, but it is not working:
index="BBB" sourcetype=AAA | spath output=AA path=message.eumObject.eumInfo.customKeys.key1
Please help!
level: info
message: {"eumObject":{"eumInfo":{"eumId":"123456","eumCoRelationId":"","appId":"xxxxx","timeStamp":"2018-08-1316:21:16","pageUrl":"yyyyyy","pageName":"Operations","mmmmm":"","server":"","responseTime":833,"totalResponseTime":1679.081623,"projectId":""},"timingInfo":{"navigationStart":0,"unloadEventStart":0,"unloadEventEnd":0,"redirectStart":0,"redirectEnd":0,"fetchStart":4,"domainLookupStart":4,"domainLookupEnd":4,"connectStart":4,"connectEnd":4,"secureConnectionStart":0,"requestStart":4,"responseStart":17,"responseEnd":17,"domLoading":23,"domInteractive":803,"domContentLoadedEventStart":844,"domContentLoadedEventEnd":850,"domComplete":1169,"loadEventStart":1169,"loadEventEnd":1169},"userInfo":{"upi":"qqqqq","emailId":"","browserInfo":"Mozilla/5.0 (X11; Linux x86_64; rv:54.0) Gecko/20100101Firefox/54.0","timeZone":"","screenResolution":"1366x637"},"appInfo":{},"errorInfo":{"errorCode":"","errorDescription":"","errorType":""},"resourcesInfo":[],"customKeys":{"key1":833,"key2":1433,"key3":846,"key4":844,"key5":833,"key6":833,"key7":1067,"key8":"","key9":"","key10":""}}}
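One detail worth noting from the pasted event: customKeys sits under eumObject, not under eumInfo. A sketch that parses the JSON held in the message field and pulls the custom keys (assuming message is already extracted as a field):

index="BBB" sourcetype=AAA
| spath input=message
| rename eumObject.customKeys.* as *
| table key1 key2 key3 key4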
↧
Why is the KVStore not loading on the search head?
On my search head, I can't load the KV store.
mongod.log says:
2018-08-14T14:46:34.831Z W CONTROL No SSL certificate validation can be performed since no CA file has been provided; please specify an sslCAFile parameter
2018-08-14T14:46:34.836Z F NETWORK The provided SSL certificate is expired or not yet valid.
I know it's still valid:
/opt/splunk/bin/splunk cmd openssl x509 -enddate -noout -in /opt/splunk/etc/auth/server.pem
notAfter=Sep 14 17:59:43 2020 GMT
How can I fix this?
Please help.
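"Expired or not yet valid" can also mean the notBefore date is ahead of the server clock, not just that notAfter has passed. A quick sketch to check both validity bounds against the system time (same certificate path as above):

# Show both notBefore and notAfter, not just the expiry
/opt/splunk/bin/splunk cmd openssl x509 -dates -noout -in /opt/splunk/etc/auth/server.pem
# Compare with the current system clock
date -u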
↧
↧
Filter transactions that do not contain a certain Event?
I am using transaction to calculate the duration of a job. The search for the completed events is: `index="events" | transaction reference endswith="WAITING"`.
Each event contains a `state` value of either "EXECUTING", "WAITING", or "COMPLETED". I want to find transactions where there is no "COMPLETED" event. Is there a way to do this?
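A sketch of one way to do this: after transaction runs, state becomes a multivalue field holding every state in the group, so transactions lacking a "COMPLETED" event can be filtered directly:

index="events"
| transaction reference endswith="WAITING"
| search NOT state="COMPLETED"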
↧
How to extract fields using regex in transforms.conf?
Hello everybody
I am new to the regex topic.
I have events with the following information:
SPIEE-WIRELESS-MIB::**bsnStationMacAddress**.0 = STRING: **a9:12:fa:13:19:8F**
CISCO-LWAPP-UMBH-CALLT-MIB::**cldcClientSSID**.0 = STRING: **Campus-WLAN**
As you can see, these two (and similar logs) can be represented in the following format:
blabla-MIB::**FIELDNAME**.0 = Blabla: **FIELDVALUE**
I **have to** apply this extraction in transforms.conf
My idea is:
[mytransform]
REGEX= (?:.*\-MIB::)(.+)(?:\.0\s\=\s[a-zA-Z0-9]+:\s)(.+)
FORMAT= $1::$2
Both (.+) groups capture the field name and the field value. I have extracted them as groups, but how do I define them as a Splunk field name and field value?
Thank you in advance
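For context, FORMAT = $1::$2 is exactly how transforms.conf maps capture group 1 to the field name and group 2 to the value; what is usually still missing is a REPORT line in props.conf to attach the transform to the sourcetype. A sketch (the sourcetype name is a placeholder):

# transforms.conf
[mytransform]
REGEX = [\w-]+-MIB::(\w+)\.0 = \w+: (.+)
FORMAT = $1::$2

# props.conf
[your_snmp_sourcetype]
REPORT-mib_fields = mytransform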
↧
How to find the difference between an inputlookup and a search result?
I have a lookup file that contains a list of mounts with their respective servers. I also have a script that logs the available mounts every 15 minutes. I want to create an alert if any mount mentioned in the lookup file is missing. Example:
Lookup file (host_mount.csv):
Host,Mount_to_monitor
host1,/opt
host1,/var
host1,/usr
host2,/var
host2,/foo
host3,/bar
host3,/usr
Say my search result table from the script's log looks like this:
HostName,Mount
host1,/opt
host1,/usr
host2,/var
host2,/foo
host3,/bar
This means the diff (the missing mounts) would be:
Host,Missing_mount
host1,/var
host3,/usr
How should I do this?
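A sketch of one approach (the index, sourcetype, and 15-minute window are assumptions): start from the lookup and subtract whatever the latest run of the script reported:

| inputlookup host_mount.csv
| rename Host as HostName, Mount_to_monitor as Mount
| search NOT [ search index=os sourcetype=mount_log earliest=-15m | stats count by HostName Mount | fields HostName Mount ]
| rename HostName as Host, Mount as Missing_mount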
↧
How to suppress search results when a certain condition is met?
I need help with a very basic search concept. I need a way to suppress search results if a certain condition is met. I have a CSV file (file.csv):
Maint
YES
I need the exact search that would follow this basic logic...
index=* (whatever the search) look at file.csv If Maint="YES" ensure search returns nothing, otherwise return as normal
Please provide an **actual working search**. (I have tried many ways and I am sure I am missing something small; I am not familiar enough with searches to fix minor issues.)
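A sketch of one pattern that matches that logic (assuming file.csv has the single Maint column shown): pull the flag into every result via an eval subsearch, then drop everything when it is YES:

index=* (whatever the search)
| eval maint_flag=[ | inputlookup file.csv | head 1 | return $Maint ]
| where maint_flag!="YES"
| fields - maint_flag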
↧
↧
_time and index time are different
How can I find out what is wrong when there is a big difference between _time and index time?
A sample from the results (173,518 events, from 2/20/13 5:27:50.000 PM to 1/1/18 12:00:00.000 AM):
_time idxtime offset _raw
2015-12-17 07:37:56.000 2018-08-14 04:54:59 83884623 timelag=423 messageId=1450337876eb4ae5bdd1fc7383fe8685 topicName=KistaTopicNC3 retryCount=0 [LogLevel=INFO] -- 2018/08/14 04:54:30 INFO Thread-5 com.apple.keystone.messaging.client.v2.impl.kafka.ReceivedMessagesProcessor - "Kafka consumer received message" timelag=353 messageId=0a9ec5de23bb4f32860895ae5474ea3e topicName=KistaTopicNC3 retryCount=0 [LogLevel=INFO] -- 2018/08/14 04:54:30 INFO Thread-5 com.apple.keystone.messaging.client.v2.impl.kafka.ReceivedMessagesProcessor - "Kafka consumer received message" timelag=257 messageId=228fd880217142c6806367ea28264c24 topicName=KistaTopicNC3 retryCount=0 [LogLevel=INFO] -- 2018/08/14 04:54:30 INFO Thread-5 com.apple.keystone.messaging.client.v2.impl.kafka.ReceivedMessagesProcessor - "Kafka consumer received message" timelag=162 messageId=5383df5980ba4f4882cd464c31ef64aa topicName=KistaTopicNC3 retryCount=0 [LogLevel=INFO] -- 2018/08/14 04:54:30 INFO Thread-5
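Incidentally, the pasted _raw shows several log records merged into a single event, which can itself skew timestamps. Either way, a sketch for surfacing the lag directly (_indextime is the built-in index-time field; the index name and one-hour threshold are placeholders):

index=your_index
| eval lag_sec=_indextime-_time
| where abs(lag_sec) > 3600
| eval idxtime=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time idxtime lag_sec host source sourcetype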
↧
transaction maxevents=2 returns 1 event, maxevents=3 returns 3
Hello, all,
I'm trying to find the elapsed time between two events: one containing the string "/makeCreditCardPaymentSD" and the one that follows it.
The transaction is grouped over a field called callid, which is correctly extracted.
The logs from which I'm pulling these events may have thousands of irrelevant events between any two for the same callid, but I'm assuming that doesn't matter.
This is what I came up with for a transaction clause:
| transaction callid startswith="/makeCreditCardPaymentSD" maxevents=2
It works... about 3/4 of the time. All the other times it extracts only one event, even though there are definitely more events in the transaction. For example, the search with that transaction clause returned this as one of the transactions (IP address redacted):
20180813,12:02:43.644,http-nio-7000-exec-193,INFO ,WebUtilities.getFileNoCache.119,prdvpsivr802-1124346-2018225185936 | FETCH http://###:8080/Postpaid_HostCall/vxml/jsp/makeCreditCardPaymentSD.jsp
If I change maxevents to 3, and change **nothing else** about the query or time range, I get three events in the transaction for that callid:
20180813,12:02:43.644,http-nio-7000-exec-193,INFO ,WebUtilities.getFileNoCache.119,prdvpsivr802-1124346-2018225185936 | FETCH http://###:8080/Postpaid_HostCall/vxml/jsp/makeCreditCardPaymentSD.jsp
20180813,12:02:47.263,http-nio-7000-exec-193,INFO ,WebUtilities.getFile.57,prdvpsivr802-1124346-2018225185936 | FETCH http://###8080/Payment_CCP/vxml/js/menus/PS4535_DM.js
20180813,12:03:09.899,http-nio-7000-exec-172,INFO ,JavaScriptEngine.log.27,prdvpsivr802-1124346-2018225185936 | DISCONNECT EVENT=connection.disconnect.hangup
I've tried a bunch of variations over keeporphans, keepevicted, maxopentxn, maxopenevents, and so on - nothing helps.
The one thing I've tried that does seem to get the right results is to reverse the incoming events and use endswith instead of startswith:
`| reverse | transaction callid endswith="/makeCreditCardPaymentSD" maxevents=2`
but then "reverse" seems to be using a huge amount of memory.
Any suggestions on how to fix this?
Much obliged,
Sean
ETA: I've managed to mitigate the maxevents conflict by setting startswith AND endswith conditions on the transaction, such that a transaction starts with any event containing /makeCreditCardPaymentSD and ends with any event that doesn't contain it:
| transaction callid startswith=eval(if(searchmatch("/makeCreditCardPaymentSD"),true(),0)) endswith=eval(if(searchmatch("/makeCreditCardPaymentSD"),0,true())) maxevents=2 unifyends=true
with the unifyends seemingly necessary to keep other events from elbowing in.
I'm not sure this is capturing all the events I want, though - there are a smaller number of transactions showing up than I expected. I'll keep testing to make sure.
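An alternative sketch that sidesteps transaction's eviction behavior entirely (the field names are from the post; like() uses SQL-style % wildcards): sort ascending once, pair each event with its predecessor per callid via streamstats, and keep only the pairs whose first event is the payment call:

index=your_index callid=*
| sort 0 callid _time
| streamstats current=f window=1 last(_time) as prev_time, last(_raw) as prev_raw by callid
| where like(prev_raw, "%/makeCreditCardPaymentSD%")
| eval elapsed=_time-prev_time
| table callid prev_time _time elapsed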
↧
Splunk alert if count is continuously 0 for five consecutive minutes within 10 minutes
I want to run a query every 10 minutes, but it should alert only when the count is continuously 0 for 5 consecutive minutes.
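A sketch of the shape such a search could take (the index and sourcetype are placeholders): bucket the last 10 minutes into 1-minute bins, slide a 5-bin window across them, and trigger when any full window sums to zero. Schedule it every 10 minutes with an alert condition of "number of results > 0":

index=your_index sourcetype=your_sourcetype earliest=-10m@m latest=@m
| timechart span=1m count
| streamstats window=5 sum(count) as count_5m, count as bins
| where bins=5 AND count_5m=0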
↧
Difference between field extraction and writing regex in search to extract a field?
Can anybody tell me what the major difference is between extracting a field from the event and extracting a field using regex in search? And which is more efficient?
↧
↧
How to use a lookup table to identify new open ports based on source IP
I have NMAP data in Splunk that reports on open ports associated with a list of IP addresses. I'd like to create a lookup that I can then use to query against and alert/report on in a new query that runs every night. Any suggestions on how to structure the lookup and/or the resulting query?
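One possible shape, as a sketch (the index and the dest_ip/dest_port field names are assumptions): snapshot the known open ports into a shared lookup file, then have the nightly search flag anything not in the snapshot.
Baseline (run once, or refreshed on a schedule):

index=nmap earliest=-30d
| stats count by dest_ip dest_port
| fields dest_ip dest_port
| outputlookup known_open_ports.csv

Nightly alert:

index=nmap earliest=-24h
| stats count by dest_ip dest_port
| lookup known_open_ports.csv dest_ip dest_port OUTPUT dest_ip as known
| where isnull(known)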
↧
Archiver reporting messages regarding ops.json
splunkd.log is reporting messages regarding the ops.json file. I cannot find any references to what this file is used for.
Should I be concerned about the size of the file and the archiver performing work on it? Based on the logs, the messages started appearing one month ago.
INFO Archiver - Archiving large_file=/opt/splunk/etc/system/replication/ops.json of size_in_bytes=60693118 (exceeding threshold=52428800)
4x clustered search heads
10x indexers
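If it helps to quantify the growth, a sketch that charts the reported size over time from the internal index (assuming the key=value pairs in the message auto-extract):

index=_internal sourcetype=splunkd component=Archiver large_file=*ops.json
| timechart span=1d max(size_in_bytes) as max_size_bytes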
↧
As which user does Splunk start after restarting it from the deployment server?
Hi,
When we restart a Splunk forwarder from the deployment server, does it start:
1) based on the user defined in the boot script, or
2) based on the user ID under which it is installed?
Suppose Splunk is installed under the user "splunk" and the boot script is defined to start it as root.
So when Splunk is restarted by the DS, does it run as root or as the splunk user?
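A quick way to check empirically: a DS-triggered restart re-executes splunkd in place, so the process should keep whatever user it was already running as, with the boot script not involved. A Linux sketch:

# Show the user splunkd is currently running as, before and after the DS restart
ps -o user,pid,cmd -C splunkd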
↧
Error when loading LDAP module
Hi.
When I try to use this add-on, I see this error in myldap.py.log (in debug mode):
myldap:63 - ERROR: LDAP modul load failed with error libsasl2.so.2: cannot open shared object file: No such file or directory!
It happens on these operating systems:
Red Hat Enterprise Linux Server release 7.4 (Maipo)
CentOS Linux release 7.4.1708 (Core)
Windows Server 2012 R2
My Splunk Enterprise version is 7.0.4.
My Python version is 2.7.5.
How do I correct this issue?
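For what it's worth, RHEL/CentOS 7 ship libsasl2.so.3 (from cyrus-sasl-lib) rather than the libsasl2.so.2 the module appears to have been built against. A commonly suggested workaround is a compatibility symlink; this is a sketch, so verify the paths on your system first:

# Confirm which libsasl2 versions are present
ls -l /usr/lib64/libsasl2.so*
# If only .so.3 exists, point the expected .so.2 name at it
ln -s /usr/lib64/libsasl2.so.3 /usr/lib64/libsasl2.so.2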
↧
↧
What is the difference between extracting field from an event and extracting a field using regex in a search?
Can anybody tell me what the major difference is between extracting a field from an event and extracting a field using regex in a search? And which is more efficient?
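To make the two concrete, a sketch of the same extraction done both ways (the sourcetype and regex are illustrative only).
Ad hoc, inside the search:

index=web_logs
| rex field=_raw "user=(?<user>\w+)"

Persistent, in props.conf (applied automatically at search time for the sourcetype):

[access_combined]
EXTRACT-user = user=(?<user>\w+)

Both run at search time; the props.conf version simply makes the field available to every search and every user without retyping the regex.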
↧
Lower memory usage
I have a query that is being blocked from retrieving all relevant data due to a policy that keeps queries under 500 MB. Is there any way I could optimize this query?
index=Nitro_server=xs_json earliest=-48h
| rename hdr.nitro as nitro_loc
| join type=inner
[ inputlookup nitro_loc.csv
| search TimeZone="C" OR "CDT"
| eval nitro_loc=case(len(STORE)==4,STORE,len(STORE)==3,"0".STORE,len(STORE)==2,"00".STORE,len(STORE)==1,"000".STORE) ]
| search Model="*v10*" nitro_loc="*" FirmwareVersion = *
| dedup "Mac_Address"
| stats count by FirmwareVersion TimeZone
Any suggestions would be appreciated!
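A sketch of one direction to try (the first line is a guess, since index=Nitro_server=xs_json looks garbled above): push the filters into the base search, dedup before the lookup work, reverse the zero-padding on the event side so the CSV's STORE column can key the lookup, and replace the join (which materializes the whole subsearch in memory) with a lookup:

index=Nitro sourcetype=xs_json earliest=-48h Model="*v10*" FirmwareVersion=*
| rename hdr.nitro as nitro_loc
| dedup Mac_Address
| eval STORE=ltrim(nitro_loc, "0")
| lookup nitro_loc.csv STORE OUTPUT TimeZone
| where TimeZone="C" OR TimeZone="CDT"
| stats count by FirmwareVersion TimeZone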
↧
LDAP connection invalid
Hi.
When I try to use this add-on, in one specific case, it shows me this error in splunklib.log:
2018-08-14 16:54:01,748, Level=ERROR, Pid=64693, Logger=splunklib, File=search_command.py, Line=971, LDAPError at "/opt/splunk/etc/apps/TA-pyLDAP/bin/ldap/ldapobject.py", line 106 : LDAP connection invalid
Traceback:
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/splunklib/searchcommands/search_command.py", line 771, in _process_protocol_v2
self._execute(ifile, None)
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/splunklib/searchcommands/generating_command.py", line 196, in _execute
self._record_writer.write_records(self.generate())
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/splunklib/searchcommands/internals.py", line 519, in write_records
for record in records:
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/ldapquery.py", line 93, in generate
result_type, result_data = l.result(result_id, 0)
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/ldap/ldapobject.py", line 503, in result
resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/ldap/ldapobject.py", line 507, in result2
resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/ldap/ldapobject.py", line 514, in result3
resp_ctrl_classes=resp_ctrl_classes
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/ldap/ldapobject.py", line 521, in result4
ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
File "/opt/splunk/etc/apps/TA-pyLDAP/bin/ldap/ldapobject.py", line 106, in _ldap_call
result = func(*args,**kwargs)
The add-on can connect to my OpenLDAP server (I captured the packets with tcpdump, and in Wireshark I can see that the connection works).
Can someone help me with this issue? I'm running Red Hat Enterprise Linux Server release 7.4 (Maipo).
↧