Hello everyone, I am new to the Splunk world and stuck with a query. Can you please help me find the solution to the following problem?
I am trying to create a new column whose value increases by 1 whenever there is a change in the limit column.
Here is the code that I tried:
| sort localisation _time
| streamstats range(_time) as Duration window=2
| eval Duration1 = Duration/60
| eval limit = if(Duration1 < 1,1,2)
| autoregress limit as limit_old
| eval change=0
| autoregress change as change_old
| eval change = if(limit=limit_old, change_old, change_old+1)
| table limit change
"Changes i get" is the column which is getting populated and "Expected changes" is what i am looking for. Every time the value in limit column changes, i want the column to increase values by 1 or else stay the same.
I tried the answer from this [Post][2] but its is not working for me
| Limit | Change I get | ExpectedChange |
|-------|--------------|----------------|
| 1     |              | 0              |
| 1     | 0            | 0              |
| 2     | 1            | 1              |
| 2     | 0            | 1              |
| 1     | 1            | 2              |
| 2     | 1            | 3              |
| 1     | 1            | 4              |
| 2     | 1            | 5              |
| 1     | 1            | 6              |
| 2     | 1            | 7              |
| 1     | 1            | 8              |
| 2     | 1            | 9              |
| 2     | 0            | 9              |
| 2     | 0            | 9              |
| 2     | 0            | 9              |
| 2     | 0            | 9              |
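For reference, the kind of logic I think I need is roughly the following; this is just an untested sketch using the same limit field as above, not something I have confirmed works:
| sort localisation _time
| streamstats current=f last(limit) as limit_old
| eval change_flag = if(isnull(limit_old) OR limit == limit_old, 0, 1)
| streamstats sum(change_flag) as ExpectedChange
| table limit ExpectedChange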
Thank you in advance.
[2]: https://answers.splunk.com/answers/675583/how-to-increment-the-field-based-on-the-previous-v-1.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
↧
How do I create a new column with incremental change based on another column?
↧
Output multiple fields from query to csv via output lookup
I have created a query that will extract specific information from my Active Directory logs, and output it into a nicely labeled table.
I would like to know if it is possible to push these fields into an outputlookup command that will create a lookup file that is usable elsewhere.
I have a query that looks similar to this:
index="Microsoft_Active_Directory" sourcetype="Active_Directory"`
| dedup AccountName
| rex field=manager "CN\x3d(?:[AP]\x2d)?(?P<manLast>[^\x5c]+)\x5c\x2c\s(?P<manFirst>[^\x2c]+)\x2cOU"
| rex field=name "(?P<nameLast>^[^\x2c]+)(?:\x2c\s(?P<nameFirst>.*))?"
| rex field=whenCreated ".*?(?P<Created>\d+\x2f\d+\x2f\d+)"
| eval Manager = toString(manFirst) + " " + toString(manLast)
| eval "Employee Name" = toString(nameFirst) + " " + toString(nameLast)
| rex field=Manager mode=sed "s/Null\sNull/N\/A/g"
| rex field=employeeID mode=sed "s/x+/N\/A/g"
| rex field="Employee Name" mode=sed "s/Null\s//g"
| rename AccountName AS Username, description AS "Job Title", physicalDeliveryOfficeName AS Location, l as City, st AS State, userPrincipalName AS "Employee E-mail", employeeID AS "Employee ID", telephoneNumber AS "Phone Number"
| table Username "Employee Name" "Job Title" Location City State "Employee E-mail" "Employee ID" "Phone Number" Manager Created
I am hoping that this query can be used to create a lookup file with these 11 fields populated that can then be used to query against in other use cases. I have tried finding my answer online before coming here and asking a question, but I cannot seem to find the correct way to do this.
And then assuming that the above is all possible, it would also be awesome to know how to make this happen dynamically so that the lookup file will be updated periodically (a dynamic lookup), say weekly, with the current AD information.
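For what it's worth, my best guess so far is that I just need to append something like this to the end of the search (the lookup file name is only a placeholder) and save the whole thing as a scheduled report that runs, say, weekly:
| outputlookup ad_user_info.csv
and then read it back elsewhere with | inputlookup ad_user_info.csv, but I'd like confirmation that this is the right approach.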
Any help given is certainly appreciated, thanks.
↧
↧
Best way to publish a live dashboard that does not allow the logged in user to search other data, and has a persistent login.
I'm trying to put a dashboard on a TV in a high-traffic hallway, where the people walking by aren't allowed to search the other information in an index.
1) How can I publish a live dashboard that doesn't give someone the ability to click the magnifying glass and search the rest of the data, or manipulate the search or dashboard panels in any way?
2) How can I set up a persistent login for one service account?
Thanks
↧
Navigation Bar/header
Is it possible to convert the XML file of the Navigation Bar to HTML view so I can edit it my own way?
For example, I'm trying to change the nav bar from horizontal to a vertical/side bar. I would like to add my own jQuery, JavaScript, and HTML. How can I find the HTML version of it? If that's not possible, can I attach/link an external file to it?
Thanks in advance.
↧
Getting the warning "Splunk has found # orphaned searches owned by # unique disabled users", but no results displayed
We are running Splunk Enterprise v. 7.0.4 on our search head cluster.
Recently we have started to get the following warning:
"Splunk has found 4 orphaned searches owned by 1 unique disabled users.Click to view the orphaned scheduled searches. Reassign them to a valid user to re-enable or alternatively disable the searches."
but the click would take us to a search that won't produce any results.
Strange, that running Health Check on Splunk DMC server doesn't show any scheduled orphaned searches on the same search heads.
Any ideas?
↧
↧
Query to combine two sources from the same index
Hello, I have an index with two different sources and the same sourcetype. I want to combine them and output the fields ip address and id from search1 and the field description from search2, using id (search1) / instances{}.id (search2) as the common field.
search1
index=main sourcetype=description source="us-east-1:ec2_instances" id=111
search2
index=main sourcetype="description" source="us-east-1:ec2_security_groups" "instances{}.id"=111
I want to combine the above two queries, which share id / "instances{}.id" as the common field, and output something like the below:
IP ID Description
10.100.47.198 111 PrivateGroup
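I imagine something along these lines might work, but I'm not sure, and the ip_address field name here is just a guess at what my data calls it:
index=main sourcetype=description (source="us-east-1:ec2_instances" OR source="us-east-1:ec2_security_groups")
| eval common_id=coalesce(id, 'instances{}.id')
| stats values(ip_address) as IP, values(description) as Description by common_id
| rename common_id as ID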
↧
How can I get my inputlookup to ignore the time within the timestamp?
Here's what I have to fix but haven't yet figured out how.
In this search
index=dev_tsv "BO Type"="assessments"
|dedup "Workflow Name" "Workflow Phase" "Workflow Process Due Date" "Workflow Process Name" "Workflow Process Sort Order" "Workflow Started Date" "Workflow Step Name" "Step Due Date" "Workflow Step Sort Order"
| table "Workflow Name" "Workflow Phase" "Workflow Process Due Date" "Workflow Process Name" "Workflow Process Sort Order" "Workflow Started Date" "Workflow Step Name" "Step Due Date" "Workflow Step Sort Order"
| convert timeformat="%Y-%m-%d %H:%M:%S.%6N" mktime("Step Due Date") AS cumulativeDueDate
| eval dueDateRange=mvrange(cumulativeDueDate,now(),86400)
| mvexpand dueDateRange
| convert ctime(dueDateRange) timeformat="%+"
| where NOT match(dueDateRange,"(Sun|Sat).*") AND NOT
    [ | inputlookup exclude_holidays
      | eval holiday_date=strptime(holiday_date, "%Y-%m-%d %H:%M:%S")
      | rename holiday_date as dueDateRange
      | eval dueDateRange=strftime(dueDateRange, "%+") ]
The initial value of dueDateRange here is formatted like
"%Y-%m-%d %H:%M:%S.%6N"
after the inputlookup it is
"%Y-%m-%d %H:%M:%S"
but the "%H:%M:%S" portion is always "00:00:00" in the inputlookup values, so I don't think the inputlookup portion is excluding anything, since the timestamps in the actual data have real "%H:%M:%S" values,
for example:
"2018-07-04 07:51:25.966000"
These are the values in the inputlookup file:
holiday_date
2018-01-01 00:00:00
2018-01-15 00:00:00
2018-02-19 00:00:00
2018-05-28 00:00:00
2018-07-04 00:00:00
2018-09-03 00:00:00
2018-10-08 00:00:00
Any thoughts on how I can get this to work without using the time portion of the inputlookup timestamps, so the comparison would effectively be on %Y-%m-%d only?
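What I'm imagining is keeping dueDateRange as an epoch value and comparing on the day only, i.e. replacing the final convert and where lines with something like this untested sketch:
| eval dueDateDay=strftime(dueDateRange, "%Y-%m-%d")
| eval dayOfWeek=strftime(dueDateRange, "%a")
| search NOT (dayOfWeek="Sat" OR dayOfWeek="Sun") NOT [ | inputlookup exclude_holidays | eval dueDateDay=strftime(strptime(holiday_date, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d") | fields dueDateDay ]
Is that a reasonable direction, or is there a cleaner way?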
↧
How to extract the .csv file as key-value pairs in Splunk?
I am receiving .csv file data from the forwarder into Splunk. The .csv will be rolled and a new .csv file created, which has the header (line 1, shown below) as the field names, and the second event gives the values for those header fields. How can we create a search that gives key-value pairs?
e.g., uid = 868B17148C74F8E000C221DE19880DE24CB0DB18, id = 1530219670, and so on...
1.uid;pid;id;setup_start_ts;state_msg;state_details;setup_time;call_time;src_uri;src_ip;src_mac;src_port;src_ua;dst_uri;dst_ip;dst_mac;dst_port;dst_ua;ruri;callid;from_tag;to_tag;otg;dtg;MOS;src_codecs;dst_codecs;media_leg_locations;pai;privacy;sip_reason_protocol;sip_reason_cause;sip_reason_text;q850_cause;diversion_uri;diversion_params;acct_status_type;sequence_number;ingress_devs;egress_devs;init_devs;term_devs;trav_devs;mec_ids;realm_ids;megaco_gateway;mgcp_gateway;src_user;src_user_pref_tag;dst_user;dst_user_pref_tag;avg_mos;avg_rtcp_delay;max_rtcp_delay;sip_code;media_types;caller_ip;callee_ip;setup_delay;setup_delay_type;max_jitter;avg_jitter;max_packet_loss;avg_packet_loss
868B17148C74F8E000C221DE19880DE24CB0DB18;1530219670;137;1535135041;Finished;;115;12997;sip:****;*****;00:00:00:00:00:00;5060;Twilio Gateway;"sip:****;user=app";****;00:00:00:00:00:00;5060;ININ-TsServer/17.2.10.6;sip:****;106450a01608d3bf7dcd6c8b157b232b;36787548_6772d868_735cfbe8-0c8a-4a12-87d7-fae3a224a123;2Am7hhq;;;1.000000;PCMU,telephone-event;PCMU,telephone-event;ml_0_1_26_178251_176,ml_0_1_26_178431_179,ml_0_1_26_178614_1061;;;Q.850;16;;;sip:***;reason=unconditional;;1;;5;5;;;;;;;****;;**1;;3.552500;;;200;audio;****;***;115.970000;Successful;0;0;0;0
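If it helps, the direction I've been looking at is defining the sourcetype as structured CSV data in props.conf on the forwarder, roughly like this; the sourcetype name and the timestamp field are guesses on my part and I haven't confirmed these settings:
[my_semicolon_csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ;
HEADER_FIELD_DELIMITER = ;
TIMESTAMP_FIELDS = setup_start_ts
SHOULD_LINEMERGE = false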
↧
Help with Drilldown and setup.xml
I have a bit of an odd problem here. I'm trying to set up a drilldown/workflow action with a custom URL/POST action that sends username, password, and server information. This is within an app where these can be variable. Here is an example of how the URL will look:
https://server/capme/index.php?sip=10.10.10.1&spt=4242&dip=10.10.10.2&dpt=80&ts=2012-11-27%2005:34:00&usr=paulh&pwd=aBcDeF
Is there any way I can pull the username, password, and server information from a setup.xml file?
↧
↧
Basic query works in search, but not in dashboard panel =(
When I run this basic query in search, I get results:
![alt text][1]
When I run the same exact query in a dashboard panel, I get no results:
"Search did not return any events."
![alt text][2]
[1]: /storage/temp/254796-fake-data-returns-data.png
[2]: /storage/temp/254797-fake-data-not-returning.png
↧
How do I line break after a particular word?
Hello
Below is one sample event, which starts with ####### and ends with "* All done!". How do I break the events correctly?
Thanks in advance
#######################
Program: FADBDataLoader.pl
Begin Processing: Mon Aug 27 09:38:15 2018
Executing from: linuxbatch-prod
Working Directory: /opt/apps/FADB/Support
Command: FADBDataLoader.pl
No override date provided from command-line.
Will use database override OR date card file instead.
Connecting to fadbprd as ... success!
Calling 'GMRUSER.CHECK_OVERRIDE_DATE' to get possible override process date
execProcCursor(): Executing GMRUSER.CHECK_OVERRIDE_DATE with 3 parameters
(FADB_UX_SS_MMKT_FNAV_LOAD, SSB, )
Procedure returned value:
No date found in database table...
Process date file: /opt/apps/FADB/Support/process_date_am.dat
Process date: 08/27/2018
Control Date will be checked against Processing Date
Validating data file with LMFadbUtils::validate_ssmmkt_fnav_intraday routine...
Calling 'GMRUSER.SSB_MMKT_FNAV_LOAD' to load data.
ExecProc(): Executing GMRUSER.SSB_MMKT_FNAV_LOAD with 4 parameters
(20180827, SSB, FADB_UX_SS_MMKT_FNAV_LOAD, )
Parsing the database output...
* Begin Money Market FNAV load into FADB.TBL_FNAV_INTRADAY
* from GMRUSER.STAGE_MMR_FNAV
* of 4 stage records for SSB (2)
* dated 27 Aug 2018
* Updated 4 of 4 custodian IDs. within staging table.
* Strike time found: 0800
* Updating control date in stage table...
* Updated 4 of 4 Account IDs within staging table.
* Updated 4 of 4 instrument IDs. within staging table.
* Number of prior run records to be deleted: 0
* Inserted 4 of 4 rows into FADB.TBL_FNAV_INTRADAY.
Sending Rejections
execProcCursor(): Executing GMRUSER.GET_REJECTIONS with 4 parameters
(08/27/2018, FADB_UX_SS_MMKT_FNAV_LOAD, 2, )
No Rejections Found.
ExecProc(): Executing GMRUSER.BO_REPORT_JOB_STATUS_INS with 2 parameters
(08/27/2018, FADB_UX_SS_MMKT_FNAV_LOAD)
execProcCursor(): Executing GMRUSER.BO_REPORT_JOB_STATUS_CHECK with 3 parameters
(08/27/2018, FADB_UX_SS_MMKT_FNAV_LOAD, )
ExecProc(): Executing GMRUSER.BO_REPORT_JOB_STATUS_UPD with 2 parameters
(08/27/2018, FADB_UX_SS_MMKT_FNAV_LOAD)
Calling 'FADB.GET_FEED_FILE_ARCH_FLAG' to get mult. archive flag
execProcCursor(): Executing FADB.GET_FEED_FILE_ARCH_FLAG with 2 parameters
(FADB_UX_SS_MMKT_FNAV_LOAD, )
--------------------------------------------------------------------------------
### DATABASE LOGGING INFORMATION ###
Job Name: FADB_UX_SS_MMKT_FNAV_LOAD
Batch ID: SSB
Process Date: 08/27/2018
JOB_ID: 741
RUN_NUMBER: 1
* Inserting log for STDOUT output to database...
* Inserted 99 rows into the table for run number 1
* Inserting log for STDERR output to database...
* Inserted 21 rows into the table for run number 1
* All done!
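For context, my current best guess is a props.conf stanza along these lines, breaking on the run of # characters; the sourcetype name is a placeholder and I haven't verified that this works:
[fadb_loader_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)#{10,}
TIME_PREFIX = Begin Processing:\s
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 40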
↧
↧
Could not use strptime to parse timestamp
I have been getting the warning "Could not use strptime to parse timestamp from Token TOKEN = DD215569A74FB06F5BC0C966CF60AD86:2018-08-27 14:28:06,382, Failed to parse timestamp, defaulting to file modtime". Please advise.
Log:
INFO:SESSION TOKEN = DD215569A74FB06F5BC0C966CF60AD86:2018-08-27 14:28:06,382
INFO:REQUEST:2018-08-27 14:28:15,000
INFO:
Props.conf
[ wsa:splunkalert:log ]
CHARSET=UTF-8
LINE_BREAKER=([\r\n]+)(\w+\:\w+\s\w+\s\=\s\w+\:\d+\-\d+\-\d+\s\d+\:\d+\:\d+\,\d+)
MAX_TIMESTAMP_LOOKAHEAD=30
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=false
TIME_FORMAT= %H-%m-%d %H:%M:%S,3N
TIME_PREFIX=\s
disabled=false
pulldown_type=true
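In case it matters, I'm starting to suspect my TIME_FORMAT and TIME_PREFIX are simply wrong for a timestamp like 2018-08-27 14:28:06,382, and that something closer to this untested guess is what I should be using:
TIME_PREFIX = :(?=\d{4}-\d{2}-\d{2})
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N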
↧
How do you configure remote KV store on Universal Forwarder?
I have an application which uses the KV store to store the application's state. When installing it on a universal forwarder, I get errors saying "HTTP 503 Service Unavailable -- KV Store status is unknown".
I understand that there is no support for local KV stores on universal forwarders. However, is there any mechanism available which allows us to force queries to a remote KV store?
I tried editing the server.conf file with the following, but it didn't appear to have any effect:
[kvstore]
replication_host =
↧
↧
Possible KV Extraction Issue?
I have some JSON events that are fairly long (10K-20K characters). Most events come through fine, except that in some events, fields towards the end of the event are not being automatically extracted. I ran a couple of searches and found that the maximum raw length of the events with no issues is 10,197, while the minimum raw length of the events with the extraction issues is 10,251.
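For reference, the check I ran was roughly along these lines, where lateField stands in for one of the fields near the end of the event that sometimes isn't extracted, and the index name is a placeholder:
index=myindex sourcetype=test:json
| eval raw_len=len(_raw)
| eval extracted=if(isnotnull(lateField), "yes", "no")
| stats min(raw_len) max(raw_len) by extracted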
This led me to believe that I needed to bump up maxchars in the kv stanza in limits.conf, since it has a default limit of 10,240. However, upon making the change, the extraction issues remain. Here are my configurations:
On the forwarder:
props.conf:
[test:json]
KV_MODE = json
TIMESTAMP_FIELDS = Date
TRUNCATE = 0
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
On the search head:
limits.conf:
[kv]
limit = 200
maxchars = 25000
maxcols = 1024
Any ideas?
Thanks!
E
↧
What is the best way to get the running average and standard deviations for HTTP request character length?
What would be the best way to search for anomalies/outliers in HTTP request character length by source IP? I'm looking for HTTP requests whose length is enough standard deviations away from the average to indicate a potential hack/indicator of compromise.
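For context, the rough direction I had in mind is the following sketch, where the index, sourcetype, and uri field are placeholders for wherever the HTTP request string actually lives:
index=web sourcetype=access_combined
| eval req_len=len(uri)
| eventstats avg(req_len) as avg_len, stdev(req_len) as stdev_len by src_ip
| eval z=abs(req_len - avg_len) / stdev_len
| where z > 3
i.e. flag requests more than three standard deviations from that source IP's average; I assume swapping eventstats for streamstats would turn it into a true running average, but I'm not sure what best practice is.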
Thx
↧