How do I write a regular expression to extract a field which has characters, numbers and also special characters, sometimes with spaces in between?
I tried this: `rex "(?\w+[A-Z0-9][^-])"` --- to include characters and the hyphen,
but it doesn't work.
Thanks in advance!!
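For reference, `rex` needs a named capture group, and a character class can whitelist the letters, digits, spaces and specials expected in the field. A minimal sketch, assuming the value runs up to a comma (the field name `my_field` and the delimiter are placeholders to adjust to your data):

```
... | rex "(?<my_field>[A-Za-z0-9 ._-]+),"
```

Putting the hyphen last in the class keeps it literal.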
↧
Regular expression
↧
Help on NOT function and WHERE function
Hi
I use the 2 eventtypes below in a search
eventtype="TotalSpace" OR eventtype="DiskHealthSize"
I need to apply a `NOT host=E*` to the 2 eventtypes.
Is it enough to do `eventtype="TotalSpace" OR eventtype="DiskHealthSize" NOT host=E*`, or do I have to do it for each of the 2 eventtypes?
I also have to apply a `| where Value <15`, but only for the second eventtype.
I would like to do something like `(eventtype="DiskHealthSize" | where Value <15)`, but it doesn't work....
Finally, I have to apply a `where Free_Space <15` at the end of the query below, but I get no results even though there are matching events....
Where do I have to put this piece of code?
eventtype="TotalSpace" OR eventtype="DiskHealthSize" NOT host=E*
| eval time = strftime(_time, "%m/%d/%Y %H:%M")
| eval Value = round(Value, 1). " %"
| eval TotalSpace = TotalSpaceKB/1024
| eval TotalSpace = round(TotalSpace/1024,1). " GB"
| stats latest(Value) as Free_Space latest(TotalSpace) as TotalSpace by host | where Free_Space <15
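For context, a parenthesized sketch of the grouping, with the eventtype-specific threshold expressed as a filter that only constrains `DiskHealthSize` events (field names as in the question):

```
(eventtype="TotalSpace" OR eventtype="DiskHealthSize") NOT host=E*
| search NOT (eventtype="DiskHealthSize" AND Value>=15)
```

One thing to note about the final `where Free_Space <15`: the earlier `eval Value = round(Value, 1). " %"` turns Value into a string like "12.3 %", so a numeric comparison on `Free_Space` can no longer match; comparing before appending the " %" suffix, or stripping it with something like `tonumber(replace(Free_Space, " %", ""))`, is one way around that.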
Thanks for helping me please
↧
merging events with unique entries
Hi everyone,
I have a data set for incoming emails through our mail gateway. The problem is that it sends one log with the sender address (src_user) and another log with the recipient (recipient). I want to know how many emails are being sent to internal email addresses by the same sender.
I've had a look at the events and can see there is a matching string in the message field. Example logs below:
sender event
<141>Feb 15 10:22:05 mail.server.corp filter_instance1[27702]: rprt s=2qkyvtba71 m=1 x=2qkun2vf0b-1 mod=session cmd=data from=fake.user@domain.com
recipient event
<141>Feb 15 08:49:04 mail.server.corp filter_instance1[25779]: rprt s=2qkun2vf0b m=1 x=2qkun2vf0b-1 mod=session cmd=data rcpt=user@company.co.uk
So the only matching string is the x=2qkun2vf0b, which links the two emails together. If the same sender sends another mail to the same recipient, this obviously changes, so it's getting a bit difficult to come up with something!
What I really want is a query that will show how many emails a recipient address has received from the same sender. Is this possible with my current event log state?
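For what it's worth, since both events carry the same x= value, one sketch is to extract it and pivot the two event types onto it before counting (the field name `xid` is a placeholder; `src_user` and `recipient` are from the question):

```
... | rex "x=(?<xid>\S+)"
| rex "from=(?<src_user>\S+)"
| rex "rcpt=(?<recipient>\S+)"
| stats values(src_user) as src_user values(recipient) as recipient by xid
| stats count by src_user recipient
```

The first stats stitches the sender and recipient events of one delivery together; the second counts deliveries per sender/recipient pair.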
Thanks
↧
issue in rex query
Hi Guys,
I have a log as below;
server1;443 status= running.
server2;443 status= running.
server3;443 status= running.
server4;443 status= running.
In this I need to create a field named "ServerName" by targeting all the server names as its values, for example server1, server2, server3, etc.
I am running the rex query as below;
|rex "(?.*);443"
Now I am getting the expected result, but additional text is added onto the values as a prefix, as below:
Message=server1
Message=server2
Message=server3
Message=server4
Any idea where this "Message=" prefix is coming from, and how can we remove it?
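A sketch of a named-group version that also excludes `=` from the capture, so that if the raw text in front of the server name carries a prefix like `Message=`, it stays out of the value (`ServerName` as requested in the question):

```
| rex "(?<ServerName>[^;=]+);443"
```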
↧
Send only few events
Hello Splunk Support,
we have the following problem:
- We must send a log file to different receivers:
-- a Splunk server, and the Splunk server needs ALL events
-- a non-Splunk server, but only a few events, so a whitelisting solution
I found the following documentation
https://answers.splunk.com/answers/9076/how-to-configure-a-forwarder-to-filter-and-send-only-the-events-i-want.html
Now my questions:
- Could I combine both solutions – all events to one server and a few events to another?
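The documentation you found describes the usual route-and-filter pattern: one transform routes everything to the Splunk output group and a second routes only matching events to the third-party receiver. A sketch under assumed names (sourcetype, group names, regex and addresses are all placeholders, and regex-based routing needs a heavy forwarder rather than a universal one):

props.conf:

```
[my_sourcetype]
TRANSFORMS-routing = route_all_to_splunk, route_subset_to_syslog
```

transforms.conf:

```
[route_all_to_splunk]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = splunk_indexers

[route_subset_to_syslog]
REGEX = ERROR|CRITICAL
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party
```

outputs.conf:

```
[tcpout:splunk_indexers]
server = indexer.example.com:9997

[syslog:third_party]
server = receiver.example.com:514
```

Because the two transforms write to different routing keys (`_TCP_ROUTING` vs `_SYSLOG_ROUTING`), they do not overwrite each other, so all events reach the indexers while only the matching subset goes to the syslog receiver.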
↧
eval output is incorrect when comparing two fields with numeric values
I have a query that has an eval statement that assigns 1 to field 'isTrue' if field 'value1' is greater than field 'value2', otherwise assign 0.
My problem is that if field value1 has, say, a value of 300 and I am comparing it to field value2, which has a value of 0.00, the 'isTrue' field says '0' instead of '1'.
However, what confuses the heck out of me is when value2 is non-zero, isTrue field is assigned the correct value!
And as if I wasn't confused enough, if I use makeresults to fake the values, the isTrue field gets assigned the right value when comparing field 'value1', which has a greater-than-zero value, against field 'value2', which has a value of 0.00.
Can someone out there please help? What am I missing here? I tried adding quotes and double quotes on the field names, but to no avail.
Here is my full query. The eval statement is at the bottom.
index=uc sourcetype=rcd
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
| stats count(eval(like(Variable10,"Tx%|U|%"))) as U_Count by _time
| streamstats count as pri_key
| streamstats avg(U_Count) as avg, stdev(U_Count) as stdev
| eval avg=round(avg,2)
| eval stdev=round(stdev,2)
| eval lowerBound=(avg-stdev*2)
| eval upperBound=(avg+stdev*2)
| eval time_5m_value=if(pri_key=4,'U_Count',"")
| eval time_15m_prev_upperBound=if(pri_key=3,'upperBound',"")
| eval time_15m_prev_lowerBound=if(pri_key=3,'lowerBound',"")
| eval time_15m_prev_avg=if(pri_key=3,'avg',"")
| eval time_15m_prev_stdev=if(pri_key=3,'stdev',"")
| stats values(time_5m_value) AS value1 values(time_15m_prev_upperBound) AS value2 values(time_15m_prev_lowerBound) AS time_15m_prev_lowerBound values(time_15m_prev_avg) AS time_15m_prev_avg values(time_15m_prev_stdev) AS time_15m_prev_stdev
| eval isTrue=if(value1 > value2, 1, 0)
And here is the makeresults statement that I was testing with that is working just fine when comparing value1 that is greater than 0 against value2 field that is 0.00:
| makeresults count=1 | eval value1=300, value2=0.00, time_15m_prev_lowerBound=0.00, time_15m_prev_avg=0.00, time_15m_prev_stdev=0.00| fields - _time
| eval isTrue=if(value1 > value2,1,0)
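One difference between the real query and the makeresults test is the `if(pri_key=3, 'upperBound', "")` pattern. Filling the other rows with `""` means `values()` can return a multivalue field (the empty string plus the number), and a `>` comparison against a multivalue or string field quietly evaluates to false. A sketch of the usual workaround, using `null()` so `values()` only ever sees the real number, and forcing a numeric comparison:

```
| eval time_15m_prev_upperBound=if(pri_key=3, 'upperBound', null())
...
| eval isTrue=if(tonumber(value1) > tonumber(value2), 1, 0)
```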
Thank you in advance!!
↧
Dashboard layout - want two visualizations in single panel and condition also.
Hi All,
I want the following layout :
![alt text][1]
- I am able to achieve **Status Overview** layout by :
`
`
- But I am not able to create the **Component 2 Status** panel layout.
- The visualization is *"Single Value"* for both red sub parts.
- The first sub-part shows the percentage of queues with pending messages, and if (percentage > 0), or equivalently if (pendingMessages > 0), then
- the second sub-part is shown with the number of pending messages.
Please Help.
Thanks in advance!
[1]: /storage/temp/268695-layout.png
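For reference, Simple XML can drive the second sub-part from a token set in the first search's `<done>` handler, and hide it with `depends` (token name, queries and threshold are placeholders):

```
<panel>
  <single>
    <search>
      <query>... | stats count(eval(pending>0)) as queues_pending</query>
      <done>
        <condition match="$result.queues_pending$ > 0">
          <set token="show_pending">true</set>
        </condition>
        <condition>
          <unset token="show_pending"></unset>
        </condition>
      </done>
    </search>
  </single>
  <single depends="$show_pending$">
    <search>
      <query>... | stats sum(pending) as pending_messages</query>
    </search>
  </single>
</panel>
```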
↧
Default colors on a pie chart
Hello,
I'm looking for a way to add a default color to a pie chart.
To be more specific, I have a pie chart showing the version dispatch of a specific application.
I already have custom colors via the following config,
but I cannot find a way to set a default color for every other version that matches neither 4.5.2 nor 5.1.2.
Following the documentation (https://docs.splunk.com/Documentation/Splunk/6.1.3/Viz/Chartcustomization#Chart_colors) I do not find any clue.
Any Ideas ?
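For what it's worth, per-value colors for a pie chart come from `charting.fieldColors`, and slices not listed there fall back to the `charting.seriesColors` palette, so reducing that palette to a single colour may act as the default (hex values here are examples):

```
<option name="charting.fieldColors">{"4.5.2": 0x1E93C6, "5.1.2": 0xF2B827}</option>
<option name="charting.seriesColors">[0xCCCCCC]</option>
```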
Many thanks
↧
tuning machine learning toolkit
I installed the MLTK app and the PSC add-on, but I don't know how to tune it with my own data, since it uses its own lookups. How can I define models and use them based on my network info?
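For context, MLTK models are built from your own searches with `fit` and reused with `apply`; the bundled lookups are just sample data. A sketch under assumed names (index, fields and model name are placeholders):

```
index=my_network sourcetype=my_traffic
| fit LinearRegression response_time from bytes_in bytes_out into my_model
```

and later, against new data:

```
index=my_network sourcetype=my_traffic | apply my_model
```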
↧
Efficient and "correct" way to counting stats based on a *sequence* of events within a rolling timeframe
Creating stats count based on a **sequence** of events **within a timeframe**. For example, count the unique sessions, within a 6-hour timeframe, that resulted in 1- Failures without Success, 2- Success, or 3- Failures followed by Success:
SessionID Time Action
Abcd 12:03:11 Failure
Abcd 12:04:19 Failure
m 12:05:49 Failure
XXXXX 12:06:20 Failure
XXXXX 12:07:34 Failure
Abcd 12:10:11 Failure
Abcd 12:23:12 Success
ZZ 12:28:10 Success
XXXXX 12:31:00 Failure
Abcd 21:03:11 Success
m 22:03:11 Failure
m 22:03:12 Success
Produces:
Failure_no_success | Success | Failure_then_success
2 | 2 | 2
Where Failure_no_success is the three XXXXX and the first m sessions, Success is the ZZ session and the last Abcd session, and Failure_then_success is the four Abcd and the last two m sessions.
There are multiple inefficient ways to solve this, like combining many subsearches, outputting some of the data to a lookup table and reading it back, etc. But is there a "correct" and scalable way to perform this count?
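One possible direction, sketched: classify each session from its failure and success timestamps, then count the classes. This version ignores the rolling-window subtlety (it assumes at most one failure/success sequence per session within the search window); `streamstats` with `time_window=6h` could refine it:

```
... | stats min(eval(if(Action="Failure", _time, null()))) as first_fail
        max(eval(if(Action="Success", _time, null()))) as last_success by SessionID
| eval outcome=case(isnull(last_success), "Failure_no_success",
                    isnotnull(first_fail) AND last_success - first_fail <= 21600, "Failure_then_success",
                    true(), "Success")
| stats count by outcome
```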
↧
Max warm settings exceeded, but cold is still empty
I am looking through my indexes, and I see that my busiest one is not responding at all how I thought I had it configured.
I am hoping I have some sort of settings precedence overriding the behavior I expected....
----------------------------
indexes.conf
---------------------------
#Unlimited storage overall
maxTotalDataSizeMB = 1000000000
#Once my hot/warm index reaches 500GB, send them off to cold
homePath.maxDataSizeMB = 500000
#Purge data older than 5.1 years
frozenTimePeriodInSecs=160833600
[volume:hot]
path = E:/splunk-hot
[volume:cold]
path = F:/splunk-hot
[busyIndex]
repFactor = auto
homePath = volume:hot/busyIndex/db
coldPath = volume:cold/busyIndex/colddb
The problem:
Looking at my IndexDetail page from the splunk monitoring console, I see that:
Warm Index Size = 552GB -- Why did it not start rolling already? It has exceeded homePath.maxDataSizeMB.
Cold Index Size = 0
Total buckets: 1747 (Max buckets is 300, per this same page) -- Why did it not start rolling already?
Cold Path -- I have checked, and it seems fine. The dummy folders have been created by Splunk, so it has permissions. Per the "Index Detail" page, maxColdDb is 0 (for unlimited!)
The settings from my indexes.conf are reflected properly in this "Index Detail" screen, so I assume my indexes.conf has valid stanzas.
Second question.....
My goal:
For each index, store 500GB of data on hot storage before pushing off to cold, where it will sit. Overall data will be purged after 5.1 years.
I think my settings are not at all in line with this, though. If my max bucket size is not configured, it defaults to "auto" (750MB); with the default warm bucket count of 300, that means that no matter how high I set my homePath.maxDataSizeMB, the warm store can never exceed ~230GB.
So, I need to:
1. Change my max bucket count to 675 (leaving bucketSize at auto 750)
2. Change my homePath.maxDataSizeMB to something much larger, because it applies to all indexes as a group, not a single index
Correct?
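For comparison, a sketch of how the stated goal is often expressed per index (values from the question; `maxWarmDBCount` is the warm-bucket count cap that defaults to 300, which appears to be what is capping the warm store here):

```
[busyIndex]
repFactor = auto
homePath = volume:hot/busyIndex/db
coldPath = volume:cold/busyIndex/colddb
homePath.maxDataSizeMB = 500000
maxWarmDBCount = 675
frozenTimePeriodInSecs = 160833600
```

With `maxDataSize = auto` (750MB buckets), 675 warm buckets is roughly the 500GB target.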
↧
Why UF consuming so much swap
Hello
I'm trying to figure out why my UF is consuming 37GB of swap space.
I ran some commands; here are the results:
[splunk@server07 ~]$ free -h
total used free shared buffers cached
Mem: 94G 93G 1.3G 46G 252M 49G
-/+ buffers/cache: 43G 51G
Swap: 57G 53G 4.2G
The swap calculations by Splunk process:
[splunk@server7 ~]$ grep --color VmSwap /proc/100427/status
VmSwap: 4180 kB
[splunk@server7 ~]$ grep --color VmSwap /proc/100423/status
VmSwap: 37438788 kB
Does anyone have any ideas why it's consuming so much swap?
This doesn't seem normal.
Thanks for the thoughts!
↧
Splunk Add-on for Apache Web Server not working
Hello all,
I have installed the app "Splunk Add-on for Apache Web Server" from Splunk Web. Unfortunately, when I try to create a data input I am unable to select the source type (e.g. apache:access), and when I try to launch the app it says "page not found". Please help. I also tried to download the add-on and uploaded the tar file to update it, but it does not work. Please suggest.
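For reference, once the add-on is installed, a monitor input with the add-on's sourcetype can also be created directly in inputs.conf; a sketch (path and index are placeholders):

```
[monitor:///var/log/apache2/access.log]
sourcetype = apache:access
index = web
```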
↧
Splunk add-on for Microsoft Cloud Services - No handlers could be found for logger "msrestazure.azure_active_directory"
Hello everyone
I have installed this add-on and, even before I configure it, it is throwing these errors on a constant basis:
02-15-2019 16:48:06.847 +0100 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_audit.py" No handlers could be found for logger "msrestazure.azure_active_directory"
02-15-2019 16:48:06.955 +0100 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_resource.py" No handlers could be found for logger "msrestazure.azure_active_directory"
Has anyone faced this issue before?
If I run the script manually, I get the same error:
./splunk cmd python /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_audit.py
No handlers could be found for logger "msrestazure.azure_active_directory"
I already removed and reinstalled the add-on and restarted Splunk, but the error persists.
Thanks
↧
Splunk problem with fast and smart mode
Hello community,
I am facing a problem: I have an instance of Splunk installed on a Linux server, and I am trying to make a copy of this instance on my localhost, which runs on a Windows machine. So I copied all my apps and indexes. But I found that, when running my searches, commands like stats, timechart, etc. work only in verbose mode and return no data in fast and smart mode.
Any help please,
N.B.: Splunk version on Linux: 7.2.0; Splunk version on my local machine: 7.2.3
↧
Warning after configured HEC (Http Event Collector) on Ansible playbooks when pointing to HEC collector
I configured the device to use HEC. The logs are now being ingested into Splunk, but I receive a warning after running Ansible playbooks and pointing to the HEC collector. Has anyone else seen this warning?
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin (): Connection to proxy failed
↧
Why doesn't this javascript work to apply a click event to the Water Gauge visualization?
As a followup to "Water Gauge visualization Drilldown??"
Why would this code not apply the click event to my dashboard when it loads? If I run the JavaScript directly in the browser console, the click event is attached, and all is well.
Dashboard:
| makeresults | eval hi=55 | fields hi 0 |
Note the required "drilldown" option entry in the Water Gauge XML to define the URL:
If the option is not found, the JavaScript will just return false and no action will be taken. Notice that this option is customized to my environment and may throw an error if it is not updated to match your environment.
JavaScript:
var components = [
"splunkjs/ready!",
"splunkjs/mvc/simplexml/ready!",
"jquery"
];
require(components, function(
mvc,
ignored,
$
) {
$('.splunk-water-gauge').click(function(el) {
    // Find the enclosing dashboard element; bail out if there isn't one
    var parent = $(this).parents('div[class^="dashboard-element viz"]')[0];
    if (typeof(parent) == 'undefined') {
        return false;
    }
    var viz_details = mvc.Components.getInstance(parent.id);
    var url = viz_details.options.reportContent["display.visualizations.custom.TA-ctl_splunk_it_one_viz.water_gauge.drilldown"];
    if (typeof(url) == 'undefined') {
        return false;
    }
    window.open(url, '_blank');
});
});
↧
SendResults - use multiple line body and add signature
Is it possible to have multiple lines for the email_body, and to include the email signature as set in system email settings?
One line of text is not sufficient for some cases; specifically, I am sending an email based on events in another application to encourage the recipient to go to that system for corrective action. I would like to send the links they need in the email, but on separate lines from the message. These links are also already included in the email signature I have set up in Splunk.
↧
REST error in DMC Index Detail
In the DMC, I am seeing errors like below when looking at Index Detail.
[] REST Processor: Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/data/indexes/?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.
and
[subsearch]: [] REST Processor: Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/data/indexes-extended/?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API
I don't see any reference to `/services/data` in the REST endpoints. I am not sure what could be wrong. Is this telling me that, on that Splunk server, it is not seeing that index?
↧