Hi,
I have a number of pre-existing date fields from Nessus that are reported in epoch format. I'd like to add a new field that translates that field into Julian format. How would I do that?
This link describes the same issue, but I don't see an answer there. I know this can be done at search time, but I want it done automatically, retaining the original field and adding a new one with the converted date.
https://answers.splunk.com/answers/499710/how-to-convert-epoch-to-human-readable-in-kv-mode.html
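One way to get this automatically (without changing each search) is a calculated field in props.conf. A minimal sketch, assuming the epoch field is named `last_seen` and the sourcetype is `nessus:scan` — both are placeholders for your actual names:

```
# props.conf -- sourcetype and field names are placeholders
[nessus:scan]
# %j is the day-of-year ("Julian date" in the common logging sense)
EVAL-julian_date = strftime(last_seen, "%Y.%j")
# If instead you want the astronomical Julian day number:
# JD = epoch/86400 + 2440587.5
EVAL-julian_day = last_seen / 86400 + 2440587.5
```

This keeps the original epoch field untouched and surfaces the converted value as a new field at search time.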
What is the best method to add a field based upon another field?
Where can I see xml/html code for the reports page and the datasets page?
Hello,
Where can I see the source code for the Reports page and the Datasets page (the Reports and Datasets tabs that appear at the top)? Am I able to change the source code of these pages at the app level?
Thanks.
How do I change the timezone offset for events that appear to be from the same host, when the real host and timezone are contained in the event?
**RAW EVENTS:**
Event 1:
host=HOSTA
real_event_host=HOSTX
real_event_time=2018-09-25T06:39:03:142-06:00
Event 2:
host=HOSTA
real_event_host=HOSTY
real_event_time=2018-09-25T08:40:03:142-04:00
**Here is how the above events get loaded:**
Event 1:
_time=25/09/2018 06:39:03.000 **(What I want is for this to switch to the indexer's timezone of -04:00, i.e. 25/09/2018 08:39:03.142)**
host=HOSTA
real_event_host=HOSTX
real_event_time=2018-09-25T06:39:03:142-06:00
Event 2:
_time=25/09/2018 08:40:03.321 **(For this one the timezone is the same so the times should be the same)**
host=HOSTA
real_event_host=HOSTY
real_event_time=2018-09-25T08:40:03:321-04:00
**How do I either use real_event_time as the _time and convert it to the indexer's timezone, OR at the very least make the _time reflect the timezone of the event?**
HOSTX is in the -06:00 timezone offset.
HOSTY is in the -04:00 timezone offset.
Both events appear to come from HOSTA, which is in the -04:00 timezone offset, because HOSTA is a log aggregator.
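If real_event_time reliably appears in every event, one approach is to have Splunk parse the timestamp, including its embedded offset, from that field at index time. A sketch, assuming a placeholder sourcetype name and that the value always follows `real_event_time=`:

```
# props.conf on the indexer / heavy forwarder -- sourcetype is a placeholder
[aggregated:events]
TIME_PREFIX = real_event_time=
# the sample events use a colon before the milliseconds, hence %H:%M:%S:%3N
TIME_FORMAT = %Y-%m-%dT%H:%M:%S:%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

Because the offset is read from the event itself, _time is stored normalized (UTC internally) and rendered in each viewer's timezone, which covers both HOSTX and HOSTY without per-host TZ stanzas.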
How to leverage the ExportResultsDialog.js file
In the **/opt/splunk/share/splunk/search_mrsparkle/exposed/js/views/shared/jobstatus/buttons** folder, there is a file named **ExportResultsDialog.js**. Is it possible to edit this file to add some text (e.g., a warning) which will appear on all invocations of this dialog?
If so, would it only require a refresh/reload/restart of Splunk to implement the changes?
We are currently using Splunk Enterprise 6.6.2.
Is there an alternative to JOIN when there are more than 50,000 records?
**Scenario** - I have two indexes: index1 and index2.
Inner Query: I need to compare the two indexes (index1 and index2) on the Group Number and CORP_ENT_CD combination. If there is a match, I extract the DCNs of the matching rows.
Outer Query: I need to compare index1 against the result of the above set and display the unmatched rows.
**Query**:
index=index1
| rex "DCN (?<DCN>.*)-SL:(?<SL>.*)-TS:(?<TS>\d+)"
|join type=left DCN [ search
index=index1
| rex "DCN (?<DCN>.*)-SL:(?<SL>.*)-TS:(?<TS>\d+)"
|eval dummy=GRP_NBR + CORP_ENT_CD
|join dummy [search index=index2 earliest=-15d@d
|rename "Group Number" as Group
|search Group=*
|eval dummy= Group + CORP_ENT_CD
]
|fields DCN |table DCN
|eval matched="Yes"
]
|fillnull value="No" matched
|search matched="No"
---------
Is there any other way to increase the performance?
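A common way to avoid join's 50,000-row subsearch limit is a single combined search with stats over the shared key. A rough sketch, assuming the field names from the query above (the regex group names SL/TS are guesses from the event layout):

```
(index=index1) OR (index=index2 earliest=-15d@d "Group Number"=*)
| rex "DCN (?<DCN>.*)-SL:(?<SL>.*)-TS:(?<TS>\d+)"
| rename "Group Number" as Group
| eval key = coalesce(GRP_NBR, Group) + CORP_ENT_CD
| stats values(index) as sources values(DCN) as DCN by key
| where mvcount(sources) < 2 AND sources="index1"
| table DCN
```

The idea: every event contributes its key, `stats` collects which indexes each key appeared in, and keys seen only in index1 are the unmatched ones — no subsearch, so no row cap.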
custom dashboard
I am trying to hide the 2nd panel unless there is a click on the 1st panel, which supplies a time range input. I am able to pass the earliest and latest tokens, but I am unable to hide the panel. Here is the source I have; please help.-24h@h now Test1 host="My_host" TransactionId=TID*
| eval Status=if(like(_raw, "%POSTING:SUCCEEDED%"), "2.Successful transactions" , "1.Rejected Transactions")
| timechart count by Status$field1.earliest$ $field1.latest$ $earliest$ $latest$ open for click/hideTest Panel 2 token($click_earliest$,$click_latest$) host="My_host" TransactionId=TID* "processingStatusCode":"REJECTED"
| rex field=_raw max_match=0 "errorCode\\\\\":\\\\\"(?<error_code>\d+)\\\\\""
| rex field=_raw max_match=0 "responseCode\":\"(?<response_code>\w+)"
| eval error_code1 = if(isnotnull(error_code) AND error_code!="", error_code,response_code)
| stats count by error_code1
| lookup CSA_Error.csv CSA_Code as error_code1 OUTPUT Description | table Description count | where Description!= " "$click_earliest$ $click_latest$
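The XML tags in the pasted source did not survive posting, but the usual pattern for this is a drilldown on panel 1 that sets tokens, plus a `depends` attribute on panel 2. A minimal Simple XML sketch, assuming the token names $click_earliest$ / $click_latest$ seen above:

```xml
<panel>
  <chart>
    <search><query>... first search ...</query></search>
    <drilldown>
      <set token="click_earliest">$earliest$</set>
      <set token="click_latest">$latest$</set>
    </drilldown>
  </chart>
</panel>
<panel depends="$click_earliest$,$click_latest$">
  <table>
    <search>
      <query>... second search ...</query>
      <earliest>$click_earliest$</earliest>
      <latest>$click_latest$</latest>
    </search>
  </table>
</panel>
```

The second panel stays hidden until both tokens are set, i.e. until the first panel is clicked.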
After trying to add a new member to a search head cluster, why am I getting the following "Failed to proxy call to member" error?
I am trying to add a new member to an existing cluster but it is showing the following error :`"Failed to proxy call to member https://xxx:80809"`
I tried both of the following, with the help of the Splunk docs:
splunk add shcluster-member -current_member_uri https://xxxx:8089
splunk add shcluster-member -new_member_uri https://xxxx:8089
My pass4SymmKey is the same in both places, but I am still facing the issue. Can anyone help me fix it?
thanks in advance
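One thing worth double-checking: the quoted error shows port `80809`, which is not a valid port number, so the new member's advertised management URI may be mistyped. A sketch of what to verify in server.conf on the new member (all values are placeholders):

```
# server.conf on the new member
[shclustering]
pass4SymmKey = <same plaintext key as the existing members>
mgmt_uri = https://newmember.example.com:8089
```

Note that pass4SymmKey is hashed on disk after a restart, so compare the plaintext you originally set on each node rather than the stored hashed values, and restart after correcting the URI.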
How do I create a dashboard in Splunk that shows CPU and memory utilization for the devices installed on the server (Cisco Prime application)?
I need to create a dashboard in Splunk that gives information on the CPU and memory utilization of the devices installed on the server (Cisco Prime application).
How do I compare avg of first 10 results to avg of last 10 results and apply a calculation?
I need to return the average of the earliest 10 results **(OG)** in an index and the average of the latest 10 results **(FG)** in the same index. I then need to apply a calculation to get the result **(ABV)** -ie:
**ABV = ([average of earliest 10 results] minus [average of the latest 10 results]) multiplied by 131.25**
I can calculate **OG** by using this search:
**| streamstats window=10 earliest(SG) as SGStart | stats avg(SGStart) as OG**
..and I can calculate **FG** by using this search:
**| streamstats window=10 latest(SG) as SGEnd | stats avg(SGEnd) as FG**
..and I can also calculate **ABV** by appending:
**| eval stepG = 'OG'-'FG' | eval ABV=stepG*131.25 | table ABV**
...but obviously some events are lost in the pipeline due to filtering and I can't figure out how to put it all together.
Any help would be greatly appreciated!
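One way to put it all together in a single search, so that both windows come from the same filtered event set (a sketch — `index=foo` is a placeholder, and it assumes ascending `_time` order is what "earliest/latest" means):

```
index=foo SG=*
| sort 0 + _time
| streamstats count as n
| eventstats max(n) as total
| stats avg(eval(if(n <= 10, SG, null()))) as OG
        avg(eval(if(n > total - 10, SG, null()))) as FG
| eval ABV = (OG - FG) * 131.25
| table OG FG ABV
```

Numbering the events once and averaging the first and last 10 inside one `stats` avoids the lost-events problem of running two separate `streamstats` pipelines.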
How do I get the logs from the servers into Splunk?
Dear All,
I am new to Splunk. I just installed Splunk on my servers. Kindly let me know how I can start receiving logs from the other servers.
Thanks & Regards
Siraj
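The usual starting point is: enable a receiving port on the Splunk server, install a Universal Forwarder on each source server, and point it at that port. A minimal sketch (host names, paths, and the index are placeholders):

```
# On the Splunk server (indexer): enable receiving on TCP 9997 via
# Settings > Forwarding and receiving > Configure receiving, or:
#   $SPLUNK_HOME/bin/splunk enable listen 9997

# On each source server, in the Universal Forwarder's outputs.conf:
[tcpout]
defaultGroup = my_indexers
[tcpout:my_indexers]
server = splunk-server.example.com:9997

# ...and in inputs.conf, the logs to collect:
[monitor:///var/log]
index = main
```

After restarting the forwarder, events from the monitored paths should start arriving in the chosen index.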
Where do I add domain controllers in Splunk App for Windows Infrastructure?
I installed and configured Splunk App for Windows Infrastructure.
Along with it I installed: the Splunk Add-on for PowerShell, the Splunk Supporting Add-on for Active Directory (and configured it; "Connection test for default succeeded"), the Splunk Add-on for Microsoft Active Directory, the Splunk Add-on for Microsoft Windows DNS, and the Splunk Add-on for Microsoft Windows.
When I configure it and complete all the requirements, I see only one server (the Splunk server itself), but I don't see any domain controllers.
Where must I add the domain controllers?
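For what it's worth, the app only lists domain controllers that are actually sending in data, so each DC typically needs a Universal Forwarder with the Splunk Add-on for Microsoft Windows enabled. A sketch of the relevant inputs.conf stanzas deployed to each DC (stanza choice is an assumption about which data the app needs from your environment):

```
# inputs.conf in the Windows add-on on each domain controller (sketch)
[WinEventLog://Security]
disabled = 0

# Active Directory monitoring input
[admon://default]
disabled = 0
```

Once the DCs forward Security and AD data, they should appear in the app alongside the Splunk server.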
In transforms.conf, can you help me filter out wineventlog eventcode 4656 account names?
I am trying to figure out how to filter out account names that end in $ for EventCode 4656. I am currently using the following in transforms.conf:
REGEX = (?ms)(.*EventCode=4656.*)(Subject:.*Account Name:(\s*\w+\$))
DEST_KEY = queue
FORMAT = nullQueue
I have tried multiple combinations of the above, but it never filters the events out.
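For reference, a nullQueue transform only takes effect if props.conf routes the sourcetype through it, and it must live on the indexer or heavy forwarder (with a restart afterwards). A sketch with placeholder stanza names — note the REGEX runs against _raw, so capture groups are unnecessary:

```
# props.conf
[WinEventLog:Security]
TRANSFORMS-null_4656 = drop_4656_machine_accounts

# transforms.conf
[drop_4656_machine_accounts]
REGEX = (?ms)EventCode=4656.*Account Name:\s+\w+\$
DEST_KEY = queue
FORMAT = nullQueue
```

If it still doesn't filter, test the regex against a raw 4656 event copied verbatim, since the multiline event contains "Account Name:" more than once.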
How do I get data from two indexes?
Good day everyone,
I am dealing with an issue that I haven't been able to find an answer for so far. Here is the problem:
I have two indexes collecting data. One index collects from DHCP, which has the Client_IP address that has been assigned to a machine; the other index is DNS, which collects clients' internet queries. The DNS index has the same Client_IP field. Now I want to take the Client_IP from the DNS search, find the hostname recorded in DHCP, and create a table that includes the time, the Client_Name (from the DHCP index), and the Client_IP that matches the time of the DNS query. The DHCP record needs to be the one closest in time to the DNS query, since DHCP can assign the same IP to a different client later.
I'd really appreciate any help with this issue.
Thanks,
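One common pattern for "most recent lease before the query" is to combine both indexes, sort by time, and carry the latest DHCP hostname forward per IP with streamstats. A sketch, assuming indexes named dhcp/dns and the field names above:

```
(index=dns) OR (index=dhcp)
| eval dhcp_name = if(index=="dhcp", Client_Name, null())
| sort 0 Client_IP _time
| streamstats current=t last(dhcp_name) as Client_Name by Client_IP
| where index=="dns"
| table _time Client_IP Client_Name
```

Because `last()` ignores null values, each DNS event picks up the hostname from the most recent preceding DHCP event for that IP, which handles leases being reassigned over time.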
Help with XML Field extraction
I have a log file that outputs different formats depending on the portion of the application doing the logging. Some of the events output XML like the sample data shown here. I'd like to find some way to extract the kv pairs out of it. If a transforms/props configuration can be put in place that recognizes events like this one and extracts the fields I need, without interfering with the other single-line machine data and JSON entries in the log, that would be nice. If multiple stanzas can be entered to account for both XML and JSON, even better.
Honestly, I'd be happy with an inline solution using |extract, |xmlkv, |xpath, or something like that.
Otherwise I will be forced to write some pretty nasty rex statements for each field.
Thanks!
Looking for:
Clientid 11111111
MemberFirstName Jane
MemberLastName Doe
Gender FEMALE
DOB 11/11/1911
EmployeeIDNum xxxxx
MentorFirstName
MentorLastName
Event samples:
2018-09-25 12:48:23,599 [tp-bio-8001-exec-151] [ STANDARD] [ ] [ PHSInt:01.01] (og.Domain_FW_Apollo_Int_.Action) INFO hostname01.domain.com|10.200.200.200|HTTP|AssessmentServices|Services|SaveAssessmentAnswers|AD0A0F376B08E09090B78F37816A41733 - INSERTING INTO SERVICE REQUEST LOG:--SERVICEREQUESTTYPE -->:SaveAssessmentAnswers--SERVICEREQUESTSTATUS -->:--TRANSACTIONID-->:3740e6fc-99xx-43f2-ba47-4630da0aaeda--MEMBERELIGID-->:--PID-->:--PARTICIPANTID-->:--DEBUGMESSAGE-->:[hostname01.domain.com] --REQUEST-->:3740e6fc-59ee-43f2-ba47-4630da0aaeda 11111111 931 ImpersonatorDetail MEMBER MemberFirstName Jane MemberLastName Doe Gender FEMALE DOB 01/01/1911 EmployeeIDNum 35121212121212 --RESPONSE-->:3740e6fc-59ee-43f2-ba47-4630da0aaeda Message We’re sorry, we’re not able to verify your account information. Please contact your benefits administrator. --REFERENCEID-->:
2018-09-25 12:47:21,248 [tp-bio-8001-exec-177] [ STANDARD] [ ] [ PHSInt:01.01] (og.Alere_FW_Apollo_Int_.Action) INFO hostname.domain.com|10.214.6.60|HTTP|AssessmentServices|Services|SaveAssessmentAnswers|A6E53D8C7F19456C1484D3F2307AB5FDB - INSERTING INTO SERVICE REQUEST LOG:--SERVICEREQUESTTYPE -->:SaveAssessmentAnswers--SERVICEREQUESTSTATUS -->:--TRANSACTIONID-->:a8667bd9-2be5-4655-8d9a-dd47e8111ce4--MEMBERELIGID-->:--PID-->:--PARTICIPANTID-->:--DEBUGMESSAGE-->:[hostname.domain.com] --REQUEST-->:axx67bd9-2be5-4655-8d9a-dd47e8111ce4 11121212 931 ImpersonatorDetail PARENT MentorFirstName Jane MentorLastName Doe MemberFirstName Aiden MemberLastName Doe Gender MALE DOB 01/1/2001 EmployeeIDNum 351111111111 --RESPONSE-->:axx67bd9-2be5-4655-8d9a-dd47e8111ce4 Message We’re sorry, we’re not able to verify your account information. Please contact your benefits administrator. --REFERENCEID-->:
Other samples in the log that are not XML:
2018-09-252018-09-25 13:17:4613:17:46,,541541 [ [tp-bio-8004-exec-171tp-bio-8004-ex ] [ STANDARD] [ ] [ PHSInt:01.01] (lo_Data_System_BatchLog.Action) INFO hostname06.domain.com|10.200.200.200|HTTP|HealthIndicatorsInt|Services|saveHealthData|A30AC19E66FD562E79942068C75D03XXF - In UpdateBatchLog:ID=20001,Type=ProcessEvent,Action=P-212799085,Status=INFO,Message=Processing of EventNew HD,Exception=
JSON, I think:
2018-09-25 13:17:45,929 [ PegaRULES-Batch-18] [ STANDARD] [ ] [ ApolloCCBatch:01.01] (on.Domain_FW_Apollo_Int_.Action) INFO - INSERTING INTO SERVICE REQUEST LOG:--SERVICEREQUESTTYPE -->:MPEAPI--SERVICEREQUESTSTATUS -->:200--TRANSACTIONID-->:DOE--MEMBERELIGID-->:99999999--PID-->:999999999--PARTICIPANTID-->:JOHN--DEBUGMESSAGE-->:[hostname04.domain.com] OK [Time Elapsed=697.0ms]--REQUEST-->:{ "MemberProductEligibilityRequest":{ "requestHeader":{ "applicationName":"APPLICATION", "transactionId":"bc99999b547b64cf99a01cabd625e0bc7" }, "consumerDetails":{ "firstName":"JOHN", "lastName":"DOE", "dateOfBirth":"1900-05-09T00:00:00Z", "searchId":"999999999", "contractNumber":"999999" }, "filteringAttributes":{ "includeExtendedAttributes":"true", "applyFilters":"true" }, "requestDetails":{ "requestType":"BIG5", "searchType":"ALL" } }}--RESPONSE-->:{"MemberProductEligibilityResponse":{"responseHeader":{"transactionId":"bc2706b547b64cf99a01cabd625e0bc7"},"consumerDetails":[{"demographics":{**** Section suppressed for logging ****},"contactDetails":{**** Section suppressed for logging ****},"idSet":{**** Section suppressed for logging ****},"populationDetails":{"populationEffectiveDate":"2018-01-01T00:00:00Z","populationCancelDate":"9999-12-31T00:00:00Z","populationId":"POP33477","populationDateAssigned":"2017-12-12T00:00:00Z","populationBrandingType":"Optum 
Logo","populationBrandingEffectiveDate":"2018-01-01T00:00:00Z"},"coverageDetails":{"recordType":"HEALTH_COVERAGE","employeeStatus":"A","contractNumber":"0999999","eligibilitySourceSystem":"CS","planVariation":"0106","reportingCode":"0106","customerName":"TESLA","coverageType":"M","coverageEffectiveDate":"2018-01-01T00:00:00Z","hireDate":"2001-01-04T00:00:00Z","stateOfIssue":"CA","legalEntity1":"20020","marketSite":"0004422"},"extendedAttributes":{"ecExtended":[],"elExtended":[],"euExtended":[{"typeCode":"EU3","value":"0004422","effectiveDate":"2001-01-01T00:00:00Z","cancelDate":"9999-12-31T00:00:00Z"},{"typeCode":"EU3","value":"0004422","effectiveDate":"2001-01-01T00:00:00Z","cancelDate":"9999-12-31T00:00:00Z"}],"cuExtended":[],"suExtended":[],"muExtended":[]},"productDetails":{"product":[{"source":"Optum","productEvent1":"Productname for Life","productEffectiveDate":"2018-01-01T00:00:00Z","productTerminationDate":"2199-12-31T00:00:00Z"}]}}]}}--REFERENCEID-->:999999
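As an inline starting point, one option is to isolate the fragment between the REQUEST and RESPONSE markers with rex and hand it to spath, which understands both XML and JSON. A sketch (the index name is made up, and the exact output paths depend on the element names in the real XML, since the tags were stripped from the samples above):

```
index=app_logs "INSERTING INTO SERVICE REQUEST LOG"
| rex field=_raw "--REQUEST-->:(?<request_body>.+?)--RESPONSE-->:"
| spath input=request_body
| table ClientId MemberFirstName MemberLastName Gender DOB EmployeeIDNum MentorFirstName MentorLastName
```

Because the rex only matches events containing the marker pair, the single-line machine data and JSON entries pass through untouched; a separate `spath` over the RESPONSE fragment could handle the JSON events the same way.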