How do I write a cron schedule to execute every 5 minutes between 7 AM and 12 midnight?
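A minimal sketch, assuming standard five-field cron syntax and taking "12 midnight" as the end of the day; the script path is a placeholder. This fires every 5 minutes from 07:00 through 23:55 (add a separate `0 0 * * *` entry if a run exactly at 00:00 is also needed):

*/5 7-23 * * * /path/to/script.sh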
↧
What is the best way to generate a timechart count and overlay a trendline of expected goal growth?
Hi, I want to generate a timechart count of actual values and overlay a trendline of expected goal growth. Basically I want to trend how my data is growing over time with a visual of how I hoped it grew over time.
index=myDownloads
| timechart span=1w count as Downloads
Above is an example of the base query. I tried appending static data using a lookup table with prepopulated weekly download numbers, to no avail. Before I go too far down this road, is there an easier way I can plot a simple slope of static values growing 1% each week?
Thank you!
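A minimal sketch of the static-slope approach, assuming a hypothetical starting goal of 1000 downloads in the first week and 1% compound growth per week; streamstats numbers the weekly buckets and pow compounds the goal, so no lookup table is needed:

index=myDownloads
| timechart span=1w count as Downloads
| streamstats count as week_num
| eval Goal=round(1000*pow(1.01,week_num-1),0)
| fields - week_num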
↧
How do I convert the following data into a pivot table?
With the following search
index=msperf sourcetype="perfmon_processor_xml"
| xpath outfield=Architecture "//COMMAND/RESULTS/CIM/INSTANCE/PROPERTY"
| mvexpand Architecture
| rex field=Architecture "^[^=\n]*=\"(?P<PropertyName>[^\"]+)[^=\n]*=\"(?P<PropertyType>[^\"]+)[^<\n]*<\w+>(?P<PropertyValue>[^<]+)"
| table PropertyName PropertyValue
| where PropertyName in ( "Description", "DeviceID", "Name", "NumberOfCores", "NumberOfLogicalProcessors")
| dedup PropertyValue
| sort PropertyName PropertyValue
I've got the following result:
| PropertyName | PropertyValue |
| --- | --- |
| Description | Intel64 Family 6 Model 45 Stepping 7 |
| DeviceID | CPU0 |
| DeviceID | CPU1 |
| Name | Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz |
| NumberOfCores | 6 |
and I would like to convert it to the following format:

| DeviceID | Description | Name | NumberOfCores |
| --- | --- | --- | --- |
| CPU0 | Intel64 Family 6 Model 45 Stepping 7 | Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz | 6 |
| CPU1 | Intel64 Family 6 Model 45 Stepping 7 | Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz | 6 |
Help please?
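A minimal sketch, assuming (as in the sample above) that Description, Name, and NumberOfCores are shared by all CPUs, so the rows can be collapsed to one and then fanned back out per DeviceID; append this after the existing sort:

| eval {PropertyName}=PropertyValue
| stats values(Description) as Description values(Name) as Name values(NumberOfCores) as NumberOfCores values(DeviceID) as DeviceID
| mvexpand DeviceID
| table DeviceID Description Name NumberOfCores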
↧
Is there a way to avoid the join command between indexes?
I'm trying to get my head around the alternatives, but can't see how I could get rid of the `join` in the following query:
index="docverificationengine" "Issuing country does not match WR records for Sender" | rex field=_raw "records for Sender \[(?P[^\]]+)\]" | table senderId | join senderId [ search index="senderverification" Verification "DocumentType\\\":2" | rex field=_raw "queue: {\\\\\"SenderId\\\\\":(?\d+)," | table senderId]
I have to admit, though, that I don't have a clear idea of what good performance would look like. It takes around 4.5 seconds to run with fewer than 2k events in the "docverificationengine" index but over 300k in the "senderverification" one.
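A minimal join-free sketch using the usual stats pattern: search both indexes at once, apply both extractions from the question (each event matches only one of the two rex patterns), and keep only senderIds seen in both indexes:

(index="docverificationengine" "Issuing country does not match WR records for Sender") OR (index="senderverification" Verification "DocumentType\\\":2")
| rex field=_raw "records for Sender \[(?P<senderId>[^\]]+)\]"
| rex field=_raw "queue: {\\\\\"SenderId\\\\\":(?<senderId>\d+),"
| stats dc(index) as index_count by senderId
| where index_count=2
| fields senderId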
↧
Need to use these 2 searches because of multikv with 1 table
So here is my search
index=someindex sourcetype=somesourcetype source="someloglocation*" eventtype="nix_kernel_attached" "\"outcome\":\"success\""
| multikv
| mvexpand _raw
| rex field=_raw "\"userId\":\"(?<userinfo>[^\"]+)\""
| eval eventtype=mvindex(eventtype,1)
| eval LoginType=case(eventtype == "nix_kernel_attached", "WebUI")
| search userinfo=userid* "\"message\":\"login\"" eventtype=nix_kernel_attached LoginType=*
| join type=inner max=0 userinfo
[search index=someindex sourcetype=somesourcetype source="someloglocation/*" eventtype="nix-all-logs" "\"outcome\":\"success\""
| multikv
| mvexpand _raw
| rex field=_raw "\"userId\":\"(?<userinfo>[^\"]+)\""
| eval eventtype=mvindex(eventtype,0)
| eval LoginType=case(eventtype=="nix-all-logs", "CLI")
| search userinfo=someuserid* "\"message\":\"login\"" eventtype=nix_kernel_attached LoginType=*]
I need to display the UserID and the LoginType in a table so that we can show how the user came in.
I've been messing with this for a while. One of the problems is that some of these events have an eventtype with two different values for the same event, hence the mvindex command to yank out the one that doesn't pertain to that particular search.
If there is a better method than working with mvindex, I am all ears. The problem is that if a user logs in with the CLI tools, it shows up in both eventtypes, but if they log in with the UI, it only registers with one eventtype, as you can tell from what my search is doing.
By the way, this join is "working" in that it does return results, but I don't trust the results because of the eventtype issue. It also looks like it's bringing back duplicates, which I could eliminate with a dedup, but I'm hoping there is a more sane method to this madness.
Oh, I also don't have access to the backends, so I can't make any changes to the way the data is being indexed.
Thank you all very much for your help.
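For what it's worth, a join-free sketch built on the observation above (CLI logins carry both eventtype values, UI logins only nix_kernel_attached): classify each event by whether the multivalue eventtype contains nix-all-logs, then collapse to one row per user, which also removes the duplicates. Whether every filter from the original searches carries over exactly is an assumption:

index=someindex sourcetype=somesourcetype source="someloglocation*" (eventtype="nix_kernel_attached" OR eventtype="nix-all-logs") "\"outcome\":\"success\"" "\"message\":\"login\""
| multikv
| rex field=_raw "\"userId\":\"(?<userinfo>[^\"]+)\""
| eval LoginType=if(isnotnull(mvfind(eventtype, "nix-all-logs")), "CLI", "WebUI")
| stats values(LoginType) as LoginType by userinfo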
↧
regex for counting fields
Hi,
I have one question: is it possible to count the number of events using a regex written in transforms.conf?
↧
rex field extraction not working as expected / mishandling ")" in regex
Hi
I have a field with the following value:
16/08/2018 03:04:11 - Christian (Work notes) Remote Desktop Notes: - still unable to remote in to the machine 10/08/2018 07:11:53 - Christian (Work notes) Remote Desktop Notes: - machine is offline - 08/08/2018 01:11:53 - Sam (Work notes) Remote Desktop Notes: - machine is comprimised
These are all job comments related to the work, and I want to get only the last comment on the job, which is the string between the first and second timestamps:
- Christian (Work notes) Remote Desktop Notes: - still unable to remote in to the machine
I tried the following regex in regex101.com and it seems to work fine:
^\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s-\s(?<lastcomment>.+?(?=\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s-\s))
But when I put the rex into the query, it does not return anything:
... | rex field=work_notes "^\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s-\s(?<lastcomment>.+?(?=\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s-\s))" | table number lastcomment
So I did some testing and found that the problem seems to be Splunk mishandling the ")", because if I run the following query:
... | rex field=work_notes "^\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s-\s(?<lastcomment>.*)" | table number lastcomment
it returns:
Christian (Work notes)
instead of the whole string that ".*" should match:
Christian (Work notes) Remote Desktop Notes: - still unable to remote in to the machine 10/08/2018 07:11:53 - Christian (Work notes) Remote Desktop Notes: - machine is offline - 08/08/2018 01:11:53 - Sam (Work notes) Remote Desktop Notes: - machine is comprimised
And if I put a space between * and ), like below:
...| rex field=work_notes "^\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2}\s-\s(?<lastcomment>.* )" | table number lastcomment
it returns:
Christian (Work
Sorry for the long post. Any suggestions on what is going on there?
↧
file without line feeds and carriage returns
Hi all,
I have a file without CR or LF to divide events.
I have usually parsed files like this without problems (e.g., SAP logs), but this time I can't figure out why it doesn't work!
This is an example of my file:
141.146.8.66 - - [13/Jan/2016 21:03:09:200] "POST /category.screen?category_id=SURPRISE&JSESSIONID=SD1SL2FF5ADFF3 HTTP 1.1" 200 3496 "http://www.myflowershop.com/cart.do?action=view&itemId=EST-16&product_id=RP-SN-01" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.38 Safari/533.4" 294&&&130.253.37.97 - - [13/Jan/2016 21:03:09:185] "GET /category.screen?category_id=BOUQUETS&JSESSIONID=SD7SL2FF1ADFF8 HTTP 1.1" 200 2320 "http://www.myflowershop.com/cart.do?action=changequantity&itemId=EST-12&product_id=AV-CB-01" "Opera/9.20 (Windows NT 6.0; U; en)" 361&&&141.146.8.66 - - [13/Jan/2016 21:03:09:167] "GET /product.screen?product_id=RP-LI-02&JSESSIONID=SD9SL9FF8ADFF1 HTTP 1.1" 200 3855 "http://www.myflowershop.com/cart.do?action=changequantity&itemId=EST-20&product_id=RP-LI-02" "Googlebot/2.1 ( http://www.googlebot.com/bot.html) " 929&&&
The end of an event is `&&&`.
I tried with SHOULD_LINEMERGE = true and false.
I tried with LINE_BREAKER, MUST_BREAK_AFTER, BREAK_ONLY_BEFORE_DATE, and BREAK_ONLY_BEFORE.
I tried to replace `&&&` with `\n`, but every time I still end up with a single undivided event.
Where am I wrong? I know it's probably something simple, but I'm going mad!
Thank you in advance.
Bye.
Giuseppe
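A minimal props.conf sketch (the sourcetype name is a placeholder), relying on LINE_BREAKER's capture group to consume the `&&&` delimiter; the TIME_FORMAT matches the bracketed timestamps in the sample above:

[my_custom_weblog]
SHOULD_LINEMERGE = false
LINE_BREAKER = (&&&)
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y %H:%M:%S:%3N
MAX_TIMESTAMP_LOOKAHEAD = 30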
↧
Chart only values 15% above calculated average response
I've created a chart that only shows run times above a 60-day average and its corresponding average, which works perfectly. However, now my users want to narrow these to occurrences that are 15% or more above that average; evidently it's too difficult to pick them out of the numbers I am already presenting. Any suggestions based on my existing working search?
index=global_foo sourcetype=prd_global_bar_log firm_name="*" start_time="*" firm_number="*"
| strcat firm_name " - Firm Number: " firm_number AS Firm
| bin _time span=60d
| eventstats avg(duration_minutes) as avg_time by Firm
| where duration_minutes > avg_time
| eval date_wday_new=if(date_wday="sunday","1. Sunday",if(date_wday="monday","2. Monday",if(date_wday="tuesday","3. Tuesday",if(date_wday="wednesday","4. Wednesday",if(date_wday="thursday","5. Thursday",if(date_wday="friday","6. Friday",if(date_wday="saturday","7. Saturday","unknown")))))))
| chart values(duration_minutes) as run_time by Firm date_wday_new
| appendcols
[ search index=global_foo sourcetype=prd_global_bar_log firm_name="*" start_time="*" firm_number="*"
| stats avg(duration_minutes) as Average by firm_name]
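If "15% and higher than said average" means at least 1.15 times the average (my assumption), a minimal tweak is to tighten the existing where clause:

| where duration_minutes >= avg_time * 1.15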
↧
Unset inputs link list
Hello community, can you please give me some help?
I have three different link list inputs in my dashboard, each with different options. My question:
Is it possible to make it so that when I select any option, the previously selected option is deselected? That is, even though there are three different link list inputs, only a single option is ever selected across them.
For example, if I click on the latency option, the rest are deselected.
How can I do this?
![alt text][1]
[1]: /storage/temp/255812-inputlinklist.png
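A minimal Simple XML sketch, with hypothetical tokens tok_latency, tok_errors, and tok_traffic (one per link list): each input's <change> handler unsets the other two form tokens, so at most one selection survives. Repeat the pattern symmetrically in the other two inputs:

<input type="link" token="tok_latency">
  <label>Latency</label>
  <choice value="latency">Latency</choice>
  <change>
    <unset token="form.tok_errors"></unset>
    <unset token="form.tok_traffic"></unset>
  </change>
</input>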
↧
What is the max value for maxHotSpanSecs?
The manual says not to go below an hour, but I am getting:
Invalid key in stanza [main] in /opt/splunk/etc/system/local/server.conf, line 46: maxHotSpanSecs (value: 31536000).
so it sounds like there is a max, too.
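One hedged observation: the error points at server.conf, and maxHotSpanSecs is documented as a per-index setting in indexes.conf, so the "Invalid key" may be complaining about where the setting lives rather than its value. A sketch:

# indexes.conf, not server.conf
[main]
maxHotSpanSecs = 31536000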
↧
How do I check through Splunk whether an account or username is locked? This is not related to Windows login or Unix login.
We have been having issues where the application stops responding when a particular account gets locked.
I would like to create an alert to catch this.
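A sketch under heavy assumptions, since the application and its log format aren't specified: if the application writes a lockout message to a log that Splunk indexes, an alert can be a scheduled search over that message (the index, sourcetype, message text, and user field below are all hypothetical):

index=app_logs sourcetype=my_app "account locked"
| stats count by user
| where count > 0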
↧
HttpListener - Socket error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Hi,
I started to get the error below after my Splunk instance was updated:
HttpListener - Socket error from 127.0.0.1 while idling: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
I thought it was some 'garbage' from the previous version, but even on a fresh install the logs still show the same problem. I found this error while troubleshooting an issue with the Splunk Kafka connector, which is no longer sending messages to Splunk.
I saw a couple of discussions with similar errors but couldn't find anything that solved my problem.
Thanks.
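For what it's worth, "wrong version number" usually means plain HTTP was spoken to a TLS-enabled port (or the reverse). Assuming the Kafka connector sends to HEC on port 8088, a hedged check is that the connector's URL scheme matches HEC's SSL setting:

# inputs.conf on the Splunk side
[http]
disabled = 0
enableSSL = 1

With enableSSL = 1 the connector must target https://<host>:8088; with enableSSL = 0, http://<host>:8088.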
↧
excludeFromUpdate is not ignoring the directories.
I am trying to push a specific .conf file as part of the /local directory of myApp from the Deployment Server.
I have set the excludeFromUpdate attribute to ignore a couple of directories in the app that I do not want to be impacted. But when the app is pushed, I see on the forwarders the /local directory that I pushed, while the two "to be excluded" directories mentioned in my serverclass.conf are removed.
In my serverclass.conf I have:
[serverClass:all_forwarders:app:introspect]
excludeFromUpdate = $app_root$/default, $app_root$/bin
restartSplunkd = 1
I am trying to push the "introspect" app from the DS with a /local directory containing a .conf file.
But when the app is pushed, it removes the /default and /bin directories on the UF, which I don't want to be impacted.
I have verified the versions of my DS and UFs; they are above 6.2.
Any ideas on what I am missing here?
↧
colorPalette help
Hi,
I use this colorPalette code in my XML, but my threshold values have to be in %.
I tried 10%,20% but it doesn't work.
Any ideas, please?
Code:
[#DC4E41,#EC9960,#53A051] 10,20
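A minimal sketch of the Simple XML table formatting this snippet usually belongs to (the field name is hypothetical); the values in <scale type="threshold"> must be plain numbers, so if the column holds strings like "15%", keep a numeric copy of the field in the search and color on that:

<format type="color" field="pct_used">
  <colorPalette type="list">[#DC4E41,#EC9960,#53A051]</colorPalette>
  <scale type="threshold">10,20</scale>
</format>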
↧
forward events to different indexes
Hi everyone,
I have web server events.
I want to route events that contain the status code 404 to index1 and the remaining events to index2.
Below is an example event:
12.130.60.4 - - [13/Jan/2016 21:03:09:149] "GET /category.screen?category_id=GIFTS&JSESSIONID=SD9SL6FF8ADFF9 HTTP 1.1" **404** 3585 "http://www.myflowershop.com/category.screen?category_id=GIFTS" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.38 Safari/533.4" 976
Please advise.
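A minimal index-time routing sketch (the sourcetype name is a placeholder; the regex assumes the status code sits between the closing quote of the request and the byte count, as in the sample). Events matching the transform go to index1; set index = index2 on the input so everything else lands there:

# props.conf
[my_web_sourcetype]
TRANSFORMS-route_by_status = send_404_to_index1

# transforms.conf
[send_404_to_index1]
REGEX = "\s404\s
DEST_KEY = _MetaData:Index
FORMAT = index1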
↧
Disk usage of /apps is 100 percent on an indexer
On one of the indexers, the /apps filesystem usage is at 100%. How can I find the root cause, i.e. which app is using the most space?
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/apps-apps 5.3T 5.3T 1.1G 100% /apps
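A quick way to narrow it down, assuming GNU du and sort are available on the box: list the largest subtrees under /apps, staying on one filesystem with -x:

du -xh --max-depth=2 /apps | sort -rh | head -20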
↧
Sysmon props.conf _time extraction is working but isn't adding the milliseconds that it should from the UtcTime value
My props.conf time extraction looks like this; it gets the data into Splunk and extracts the time, but the milliseconds from the UtcTime value aren't making it into _time.
I added it for both the XML source and WinEventLog.
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
BREAK_ONLY_BEFORE=
MAX_TIMESTAMP_LOOKAHEAD=23
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3Q
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
BREAK_ONLY_BEFORE=
MAX_TIMESTAMP_LOOKAHEAD=23
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3Q
![alt text][1]
[1]: /storage/temp/254786-sysmon.jpg
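A hedged guess at the missing piece: without a TIME_PREFIX anchored at the UtcTime value, the timestamp extractor may lock onto a different time in the event (the WinEventLog header time has no milliseconds). A sketch, assuming the usual rendering of the Sysmon UtcTime field in each format:

# in the [WinEventLog://...] stanza
TIME_PREFIX = UtcTime:\s+
# in the [XmlWinEventLog:...] stanza
TIME_PREFIX = Name='UtcTime'>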
↧
Question about perfmon:logicaldisk
Hello,
I want to monitor the free megabytes and the free space of my logical disks.
So in inputs.conf I have:
[perfmon://LogicalDisk]
index = perfmon
counters = Free Megabytes;% Free Space;%
disabled = 0
instances = *
interval = 30
But when I do this search: index="perfmon" source="perfmon:logicaldisk" instance="C:", the names of my counters in the events are counter="Mégaoctets libres" instead of counter="Free Megabytes" and counter="% d'espace libre" instead of counter="% Free Space". What is wrong, please?
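A hedged sketch, assuming the host runs Windows with a French locale: the perfmon input has a useEnglishOnly setting that forces English counter names (when it is set, the counters must also be specified in English; the stray trailing ";%" from the question is dropped here on the assumption it was a typo):

[perfmon://LogicalDisk]
index = perfmon
counters = Free Megabytes; % Free Space
instances = *
interval = 30
useEnglishOnly = true
disabled = 0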
↧