Hello Experts,
I am trying to read the text from the last square bracket (which is TestModelCompany,en_US)
21:11:31,367 INFO [TestBenuLogger] [155.56.208.68] [716057] [-] [TestModelCompany,en_US] No 1 XX_TimeStep="10" XX_TimeQuery="10" XX_HTTPSession="1398708550-1911P0" XX_QuerySession="null" XX_TimeStamp="2020-02-09T20:11:31.358Z-PY" XX_Company="Model Company" XX_QueryMode="STANDARD" XX_Agent="Model"
Starting Model API :
Mode : Standard
Query Operation : QUERY
Company : Model Company
New Snapshot Calculation
I wrote a regular expression to extract the content from the last bracket:
(?<=\[)[^\[\]]*(?=][^\[\]]+$)
It works well. However, I am unable to integrate it into Splunk.
This is my existing Splunk query:
sourcetype=text XX_Company="*" last_modified_on index="*_test_application" | rex field=_raw "last_modified_on.*?to_datetime\('(?<lmo_date>.*?):\d\d\w\'" | eval lmo_date_converted=strptime(lmo_date,"%Y-%m-%dT%H:%M") | eval daysDiff=(_time-lmo_date_converted)/86400 | rex field=_raw "(?<=\[)[^\[\]]*(?=][^\[\]]+$)" | where daysDiff > 90 | stats avg(daysDiff) as "Last Modified On average days in past", max(daysDiff) as "Max Value Of Last Modified On" by XX_Company XX_Mode | sort -"Last Modified On average days in past"
This is a working Splunk query. With it, I would like to display the content from the last bracket as a column. Could you guide me?
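For reference, here is a rough sketch of how I imagine it could be wired in; last_bracket is just a placeholder field name I made up, and as far as I can tell rex only creates a new column when the pattern contains a named capture group:
sourcetype=text XX_Company="*" last_modified_on index="*_test_application"
| rex field=_raw "last_modified_on.*?to_datetime\('(?<lmo_date>.*?):\d\d\w\'"
| eval lmo_date_converted=strptime(lmo_date,"%Y-%m-%dT%H:%M")
| eval daysDiff=(_time-lmo_date_converted)/86400
| rex field=_raw "\[(?<last_bracket>[^\[\]]+)\][^\[\]]*$"
| where daysDiff > 90
| stats avg(daysDiff) as "Last Modified On average days in past", max(daysDiff) as "Max Value Of Last Modified On" by XX_Company XX_Mode last_bracket
| sort -"Last Modified On average days in past"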
↧
Search from Last Occurrence of a string
↧
Analyzing HEC response times on idle
Hi,
thanks to the wonderful website_monitoring app, I see some interesting but unexplained tidbits.
We have two indexers with HEC configured. Because of project delays those HEC inputs are idle.
I use
_https://splunk-index1:8088/services/collector/health_
for the query in website_monitoring.
And at least once a day I get a 5-second response time on one of the indexers, but not the other. Usually it is less than 20 ms.
Checking _index/_audit for anything happening in parallel, I found nothing so far that would explain this monster increase.
It is not linked to specific times.
If I only query the port itself, the peaks are only up to 60 ms worst case, but that gives me an ugly 404 error, so I figured I might as well use a proper endpoint.
Any ideas?
thx
afx
↧
↧
Does this add-on work with GitHub's SaaS solution?
I'm curious whether this add-on will work with the GitHub SaaS solution. It looks like it's been a while since it was updated, so just curious. If not, do you know of an add-on that does?
↧
No route to host at 8089 cluster
My indexer cluster is down except for 1 out of 6 indexers. Port 8089 suddenly stopped working for the indexers and for CM<>indexer communication, and I get the error messages below. It is a multi-site indexer cluster. I have run telnet and curl commands against 8089 on the indexers but am still unable to connect to all but 1 of the 6. Also, the deployment server is not accessible. The CM cannot reach the indexers on 8089, the indexers cannot talk to each other on 8089 either, and the DS is not able to connect to my indexers on 9996.
----------------------------------------
FYI, custom SSL is enabled on 8089, but I don't see it as the cause of this connectivity issue.
I have checked with the networking team, who say it is an application issue and not an iptables/routing issue on the server like I suspected. Please help.
IDX:
02-10-2020 03:19:20.324 +0000 WARN CMSlave - Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=myCM:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=myidx mgmtport=8089 (reason: http client error=No route to host, while trying to reach https://myidx:8089/services/cluster/config). [ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=EF3B7708025567663732F8D6B146A83 add_type=Clear-Masks-And-ReAdd base_generation_id=2063 batch_serialno=1 batch_size=1 forwarderdata_rcv_port=9996 forwarderdata_use_ssl=1 last_complete_generation_id=0 latest_bundle_id=EF3B77080255676637732F8D6B146A83 mgmt_port=8089 name=EEC311D7-7778-44FA-B31D-E66672C1D568 register_forwarder_address= register_replication_address= register_search_address= replication_port=9100 replication_use_ssl=0 replications= server_name=myidx site=site3 splunk_version=7.2.6 splunkd_build_number=c0bf0f679ce9 status=Up } ].
CM:
02-07-2020 18:00:41.497 +0000 WARN CMMaster - event=heartbeat guid=BDD6A029-2082-48ED-96F3-21BD624D94CD msg='signaling Clear-Masks-And-ReAdd' (unknown peer and master initialized=1
02-07-2020 18:00:41.911 +0000 WARN TcpOutputFd - Connect to myidx:9996 failed. No route to host
02-07-2020 18:00:41.912 +0000 WARN TcpOutputProc - Applying quarantine to ip=myidx port=9996 _numberOfFailures=2
02-07-2020 18:00:42.013 +0000 WARN TcpOutputFd - Connect to myidx:9996 failed. No route to host
02-07-2020 18:00:42.323 +0000 WARN CMMaster - event=heartbeat guid=44AF1666-AB56-4CC1-8F01-842AD327CF79 msg='signaling Clear-Masks-And-ReAdd' (unknown peer and master initialized=1
02-07-2020 10:36:54.650 +0000 WARN CMRepJob - _rc=0 statusCode=502 transErr="No route to host" peerErr=""
02-07-2020 10:36:54.650 +0000 WARN CMRepJob - _rc=0 statusCode=502 transErr="No route to host" peerErr=""
DS trying to connect to indexers:
02-07-2020 11:56:12.097 +0000 WARN TcpOutputFd - Connect to idx2:9996 failed. No route to host
02-07-2020 11:56:12.098 +0000 WARN TcpOutputFd - Connect to idx3:9996 failed. No route to host
02-07-2020 11:56:13.804 +0000 WARN TcpOutputFd - Connect to idx1:9996 failed. No route to host
↧
configure Splunk to parse and index JSON data - line break issue
I have a custom-crafted JSON file that holds a mix of data types. I'm a newbie with Splunk administration, so bear with me.
This is the file I want to parse:
{
"data": [
{
"serial": [
0
],
"_score": null,
"_type": "winevtx",
"_index": "xxx",
"_id": "xxx,
"_source": {
"process_id": 48,
"message": "",
"provider_guid": "xxx",
"log_name": "Security",
"source_name": "Microsoft-Windows-Security-Auditing",
"event_data": {
"TicketOptions": "xxx",
"TargetUserName": "xxx",
"ServiceName": "krbtgt",
"IpAddress": "::ffff:",
"TargetDomainName": "xxx",
"IpPort": "53782",
"TicketEncryptionType": "0x12",
"LogonGuid": "xxx",
"TransmittedServices": "-",
"Status": "0x0",
"ServiceSid": "xxx"
},
"beat": {
"name": "xxx",
"version": "5.2.2",
"hostname": "xxx"
},
"thread_id": 1016,
"@version": "1",
"@metadata": {
"index_local_timestamp": "2019-07-20T06:27:21.23323",
"hostname": "xxxDC",
"index_utc_timestamp": "2019-07-20T06:27:21.23323",
"timezone": "UTC+0000"
},
"opcode": "Info",
"@timestamp": "2019-07-20T06:25:33.801Z",
"tags": [
"beats_input_codec_plain_applied"
],
"type": "wineventlog",
"computer_name": "xxx",
"event_id": 4769,
"record_number": "198",
"level": "Information",
"keywords": [
"Audit Success"
],
"host": "xxx",
"task": "Kerberos Service Ticket Operations"
}
},
{
"serial": [
1
],
"_score": null,
"_type": "winevtx",
"_index": "xxx-xxx",
"_id": "==",
"_source": {
"event_data": {
"SubjectDomainName": "-",
"LogonType": "3",
"LogonGuid": "{xxx}",
"SubjectUserSid": "S-1-0-0",
"LogonProcessName": "Kerberos",
"TargetDomainName": "xxx",
"AuthenticationPackageName": "Kerberos",
"ProcessName": "-",
"SubjectLogonId": "0x0",
"TargetUserName": "xxx",
"ProcessId": "0x0",
"TargetLogonId": "",
"IpAddress": "::1",
"LmPackageName": "-",
"ImpersonationLevel": "%%1833",
"IpPort": "0",
"SubjectUserName": "-",
"TargetUserSid": "S-1-5-18",
"KeyLength": "0",
"TransmittedServices": "-"
},
"provider_guid": "{xxx}",
"beat": {
"name": "xxx",
"version": "5.2.2",
"hostname": "xxx"
},
"@metadata": {
"index_local_timestamp": "2019-07-20T06:34:21.23323",
"hostname": "xxx",
"index_utc_timestamp": "2019-07-20T06:34:21.23323",
"timezone": "UTC+0000"
},
"opcode": "Info",
"@timestamp": "2019-07 -20T06:33:40.262Z",
"thread_id": 52,
"event_id": 4624,
"record_number": "123",
"level": "Information",
"log_name": "Security",
"source_name": "Microsoft-Windows-Security-Auditing",
"@version": "1",
"process_id": 48,
"host": "xxx",
"type": "wineventlog",
"computer_name": "xxx",
"version": 1,
"tags": [
"beats_input_codec_plain_applied"
],
"keywords": [
"Audit Success"
],
"task": "Logon",
"message": ""
}
}
]
}
This is valid JSON. As far as I understand, I need to define a new line-breaking definition with a regex so Splunk can parse and index this data correctly with all fields. Can you suggest what a good regex definition would be? I tried a few and nothing worked; maybe other settings need to be applied as well? Please advise.
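For what it's worth, this is the kind of props.conf stanza I have been experimenting with, without success so far; json_winevtx is just a placeholder sourcetype name, and the line breaker is my own guess at splitting the "data" array into one event per element:
[json_winevtx]
SHOULD_LINEMERGE = false
# break between array elements: "}," followed directly by an object starting with "serial"
LINE_BREAKER = \}(,\s*)\{\s*"serial"
# take the event time from the @timestamp key inside each element
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
KV_MODE = json
TRUNCATE = 0
Even if the breaking works, I assume the first and last events would still carry the surrounding {"data": [ and ]} wrapper text, which probably breaks the automatic JSON field extraction for those two; I have not figured out whether a SEDCMD or some pre-processing is the right way to strip that.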
Berry
↧
↧
kvstore lookups from database.
Hi
Please give me any feedback or ideas as to whether I am following the best course of action.
I have a database table that is occasionally updated or added to. I would like to start using this information in searches as a lookup.
What is the best action to take here?
I had thought of running a search and outputting the data to a KV store lookup. I have tried this, but as any record in the table could be updated, I am not clear on how to use the key_field / _key value to pick up the updates.
I have also seen examples using a CSV lookup and joins to merge the old and new data, then writing out a new file.
Which method is best for picking up changes that may occur in any field from the database? The records do have a fixed identity field, which may help.
Is anyone able to recommend the best method, with an example?
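For context, this is roughly what I have tried so far; the connection, table, and lookup names are placeholders, and I am assuming DB Connect's dbxquery is the right way to pull the table in. My understanding is that if _key is set from the table's identity column, outputlookup with append=true should update matching records instead of duplicating them, but I may be wrong about that:
| dbxquery connection="my_db_connection" query="SELECT id, name, status, updated_at FROM my_table"
| eval _key = id
| outputlookup my_kvstore_lookup append=true
This also assumes the KV store collection and its lookup definition already exist (collections.conf / transforms.conf).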
↧
Unable to upload dSYM file
I'm trying to upload a dSYM file from the UI https://mint.splunk.com/dashboard/project/XXX/settings/dsyms but I'm getting an error:
"Access to XMLHttpRequest at 'https://ios.splkmobile.com/api/v1/dsyms/upload' from origin 'https://mint.splunk.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource."
Also, the list of dSYMs shows only 4 dSYM files from 2017, but we upload a new one together with the build every month.
I'm using fastlane to upload the dSYM file; more details:
https://github.com/fastlane/fastlane/blob/master/fastlane/lib/fastlane/actions/splunkmint.rb
It looks like fastlane uses the same API (https://ios.splkmobile.com/api/v1/dsyms/upload) as the Splunk Mint UI.
According to the logs, fastlane uploaded the file successfully, but the file does not appear in that dSYM list.
↧
Using result fields for earliest/latest time in secondary search
I have an existing search that finds fields named "RunDate", "StartTime", and "EndTime" stored as part of test run summaries. The search then converts those time values into usable Unix timestamps via strptime:
index="IDX1" sourcetype="SRC" ProjectName="PRJ" | eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p") | eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p") | table RunDate stime etime | sort RunDate desc
Now is the tricky part...
I would like a 4th column that uses the time frame in each row to perform a calculation on values coming from a different index/source.
index="IDX2" "HOST" "data.metricId" IN (1234) | stats avg("data.metricValues{}.value") as average | eval total=average/100
Somehow, this needs to be time constrained by "earliest=stime" & "latest=etime" for each RunDate (the results should be a series)
Is this possible? To run a secondary search/eval, using calculated values from the primary search as the earliest and latest time constraints?
I attempted to do this with a map search, but it seems that for a map search to work properly there must be an overlapping field. In this case, the only things that overlap between the two searches are the time parameters.
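For completeness, this is roughly how I tried it with map; the quoting, maxsearches, and the RunDate pass-through are my own guesses (my understanding is that map substitutes $stime$ and $etime$ from each row of the outer search, so no overlapping field should be needed):
index="IDX1" sourcetype="SRC" ProjectName="PRJ"
| eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p")
| eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
| table RunDate stime etime
| sort RunDate desc
| map maxsearches=100 search="search index=IDX2 \"HOST\" \"data.metricId\" IN (1234) earliest=$stime$ latest=$etime$ | stats avg(\"data.metricValues{}.value\") as average | eval total=average/100 | eval RunDate=\"$RunDate$\""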
↧
How to check for updated apps without an online connection
Our Splunk cluster has no Internet connection by policy.
Any idea how to at least semi-automate update checks for Splunkbase apps?
thx
afx
↧
↧
How to combine rows with overlapping MV values
I have data from a couple of different sources that I am trying to combine into coherent results. The issue I am running into is that sometimes the data does not line up perfectly. Both data sources report on a user and try to list all their email aliases, but sometimes the lists are incomplete and only partially overlap. So we end up with multiple rows that represent the same user and have most of the same values for the email field, but because they are not **exactly** the same, grouping by email address doesn't work out how I would hope.
I included some example SPL below to illustrate what the data looks like. There are also some other fields in the results, but those cannot be used for merging either, as the email address of the user is the only field present in both data sets.
| makeresults
| eval email =split("1@example.com,2@example.com;2@example.com,3@example.com;4@example.com;5@example.com", ";")
| mvexpand email
| eval email=split(email, ",")
| streamstats count as orig_row
![alt text][1]
So I am wondering if there is any way to combine rows #1 and 2 in the example results while leaving rows 3 and 4 intact?
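The closest I have gotten is something along these lines, tacked onto the example above; it merges one level of overlap by giving every row the lowest orig_row that shares an email with it, though I suspect longer chains of overlap would need the eventstats pair repeated:
| mvexpand email
| eventstats min(orig_row) as grp by email
| eventstats min(grp) as grp by orig_row
| stats values(email) as email by grp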
Thanks!
[1]: /storage/temp/282602-capture.png
↧
How to trim everything from a field after a comma
I have a field that contains:
CN=Joe Smith,OU=Support,OU=Users,OU=CCA,OU=DTC,OU=ENT,DC=ent,DC=abc,DC=store,DC=corp
I'd like to trim off everything after the first comma.
This information can always be changing, so there is no set number of characters.
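For what it's worth, something like this is what I had in mind, assuming the field is called user_dn (a name I made up here):
| eval cn=mvindex(split(user_dn, ","), 0)
A rex such as | rex field=user_dn "^(?<cn>[^,]+)" would presumably work just as well.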
Thanks.
↧
How to convert JSON into specific table format
This is what we have in the logs: ```index="xyz" INFO certvalidationtask```
And this prints a list of commonName/expirationDate pairs:
Stage.env e401a4ee-1652-48f6-8785-e8536524a317 [APP/PROC/WEB/0] - - 2020-02-10 16:09:01.525 INFO 22 --- [pool-1-thread-1] c.a.c.f.c.task.CertValidationTask : {commonName='tiktok.com', expirationDate='2020-05-21 17:50:20'}{commonName='instagram.com', expirationDate='2020-07-11 16:56:37'}{commonName='blahblah.com', expirationDate='2020-12-08 11:30:42'}{commonName='advantage.com', expirationDate='2020-12-10 11:41:31'}{commonName='GHGHAGHGH', expirationDate='2021-05-19 08:34:03'}{commonName='Apple Google Word Wide exercise', expirationDate='2023-02-07 15:48:47'}{commonName='some internal cert1', expirationDate='2026-06-22 13:02:27'}{commonName='Some internal cert2', expirationDate='2036-06-22 11:23:21'}
I want a table with two columns: Common Name and Expiration Date, where if the expiration date is less than 30 days from the current date we show it in red, less than 90 days in yellow, and everything else in green.
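This is as far as I have gotten on the extraction side; the color coding itself would presumably still need table cell formatting in the dashboard, and the field handling below is my best guess:
index="xyz" INFO certvalidationtask
| rex max_match=0 field=_raw "commonName='(?<commonName>[^']+)',\s*expirationDate='(?<expirationDate>[^']+)'"
| eval pair=mvzip(commonName, expirationDate, "|")
| mvexpand pair
| eval commonName=mvindex(split(pair, "|"), 0), expirationDate=mvindex(split(pair, "|"), 1)
| eval daysLeft=round((strptime(expirationDate, "%Y-%m-%d %H:%M:%S") - now()) / 86400)
| eval color=case(daysLeft < 30, "red", daysLeft < 90, "yellow", true(), "green")
| table commonName expirationDate daysLeft color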
Many thanks in advance.
↧
How to calculate the percentage difference between two Status values for each merchantID
Here I have 3 fields: "Status", merchantID, and count. I am trying to find the percentage difference between "CONFIRMED" and "REJECTED" (these are values of "Status") for each merchantID. I mean the calculation would be ((REJECTED-CONFIRMED)/CONFIRMED)*100, but this should be at the merchantID level. I am kind of new to Splunk and stuck; I could only come up with the below:
==============
index=apps
sourcetype="pos-generic:prod" Received request to change status CONFIRMED OR REJECTED
partner_account_name="Level Up"
| stats count by status, merchantId
============================================
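Something like this is what I am picturing, if I understand chart correctly; whether the field is status or Status, and the exact message text, are guesses on my part:
index=apps sourcetype="pos-generic:prod" "Received request to change status" partner_account_name="Level Up" (status=CONFIRMED OR status=REJECTED)
| chart count over merchantId by status
| eval pct_diff=round(((REJECTED-CONFIRMED)/CONFIRMED)*100, 2)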
↧
SSO on OKTA using SAML error message: "**Saml response does not contain group information**"
Hi all,
I have the following problem:
We configured SSO with Okta using SAML. When authenticating, we receive the following error message from Splunk: "Saml response does not contain group information".
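For reference, the role mapping part of our authentication.conf looks roughly like this (the group names are placeholders). My understanding is that the error appears when the SAML assertion from Okta does not carry a group/role attribute matching these mappings, so the Okta application would also need a group attribute statement configured, but I am not certain about that:
[roleMap_SAML]
admin = Splunk Admins
user = Splunk Users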
↧
↧
Result Token not displaying in email message
I have a scheduled PDF in which I need to display the dates the report was run for. Unfortunately, I just learned that tokens will not display in the scheduled PDF the way they do when I open the dashboard and export a PDF manually.
My other option is to display the dates the report was run for in the email message. I went through the Splunk documentation and several other posts but cannot figure out why my result token still comes across as blank when the email gets sent out.
Here is the search I created to establish the start and end date for the report. It is the first search and only returns one row of results. I removed |table based on suggestions in other posts and am using |fields instead. I even tried removing |fields altogether and still have no results in my email message.
| makeresults
| eval start = strftime(relative_time(now(), "-1w@w0"),"%m-%d-%Y")
| eval end = strftime(relative_time(now(),"@w6"),"%m-%d-%Y")
| eval message="Report was generated from "+start+" through "+end
| fields message
This search is not contained in a panel. It was set up outside any panels, and a token is then used to display the message as the title of an HTML panel.
In the PDF schedule, under Message, I simply put in $result.message$, but it comes through completely blank when emailed. I even tried $message$, since that was set up as the token name, but of course that didn't work either.
Does anyone have any ideas as to why this isn't working, or another way to get the dates to display on my scheduled PDF?
↧
Execute SQL command in DB Connect
Hi
I have queries that do not run in DB Connect, but they do run on the Informix server and return results.
What is the reason?
![alt text][1]
![alt text][2]
Thanks
[1]: /storage/temp/283606-7e9e3bc8-3562-4ae8-8e58-467f4b3aee31.jpeg
[2]: /storage/temp/283605-abf7cf7a-f979-43dc-851a-9d7469c43e61.jpeg
↧
Missing events from Splunk Universal Forwarder
I have one missing event out of 168 events from our universal forwarder. I've already checked the internal logs and the file was processed ("Batch input finished reading file="), but I cannot find this source in my index. I also tried expanding the time range and nothing appears, and I checked whether the forwarder was restarted at the time the file was indexed, but it was not.
These are the settings on my forwarder:
**inputs.conf**
[batch://my_path]
move_policy = sinkhole
disabled = false
sourcetype = my_sourcetype
index = my_index
**outputs.conf**
[tcpout]
defaultGroup = default-autolb-group-forwarder
[tcpout:default-autolb-group-forwarder]
disabled = false
server = myIndexer:9997
useACK = true
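These are the internal-log checks I have been running so far to look for blocked or failing output around the time of that file (my_forwarder is a placeholder host name); blocked=true on the output queues or TcpOutputProc warnings would at least point to where the event went missing:
index=_internal host=my_forwarder source=*metrics.log* group=queue blocked=true
index=_internal host=my_forwarder sourcetype=splunkd component=TcpOutputProc (WARN OR ERROR)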
↧