Filter attempts (whitelist or blacklist) on Message key value data appear to behave differently when renderXml = True compared to when renderXml = False.
Taking the following Event Message data for example:
fragment_begin C:\Windows\System32\ping.exe fragment_end
When renderXml = False, the following expression succeeds in filtering events:
blacklist = Message=".*\\(calc|ping).exe"
However, when renderXml = True, the same expression fails to filter events.
After trying various filtering strategies on this Message key/data when renderXml = True, it appears that matching fails whenever XML character entities (quote, ampersand, single quote, greater-than, less-than) are included in the matching pattern.
I've tried escaping these characters in various ways (backslash, named entity, decimal entity) without success.
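For reference, the relevant piece of my inputs.conf looks roughly like this (the channel name here is a stand-in; only renderXml and the blacklist line matter):
[WinEventLog://Security]
renderXml = true
blacklist = Message=".*\\(calc|ping).exe"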
Can anyone think of a workaround?
↧
Unable to filter WinEventLog inputs with RenderXml and XML character entities within pattern
↧
permanently extracting a field
Hi, I am using regex to extract a field. However, I need to make it permanent so that I don't have to use the regex in future searches. The regex is:
rex field=message "(?<fieldname>(\w{3,5}\s+)+)"
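If it helps frame an answer, my understanding is that the permanent version would live in props.conf as a search-time extraction, something like this (the sourcetype and field name are placeholders):
[my_sourcetype]
EXTRACT-myfield = (?<fieldname>(\w{3,5}\s+)+) in message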
I would really appreciate any help! I hope I've provided sufficient information.
↧
Checkbox and dropdown
Below is my dashboard code. When the checkbox is selected, I want only splunkd* to be shown in the dropdown, otherwise all sourcetypes.
The code below seems to detect that the checkbox is selected and goes into the if branch, but sourcetype like "splunkd*" always evaluates to true/false, and like doesn't seem to work. Any ideas?
index=_internal | eval field1= if(replace("$applyFilter$",".*(\d)","\1")="1",sourcetype like "splunkd*", ,sourcetype)
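For what it's worth, the logic I'm trying to express, rewritten with eval's like() function (which, as I understand it, wants SQL-style % wildcards), would be something like:
index=_internal | where if(replace("$applyFilter$",".*(\d)","\1")="1", like(sourcetype,"splunkd%"), true()) | stats count by sourcetype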
--Sree
↧
Extracting fields from SNMP GETBULK data
Hi everyone,
I need to extract fields from data continuously polled for via SNMP Modular Input. Each event looks like this:
IF-MIB::ifInOctets."1" = "1587826952" IF-MIB::ifOutOctets."1" = "3472375195" IF-MIB::ifInOctets."2" = "0" IF-MIB::ifOutOctets."2" = "0" IF-MIB::ifInOctets."3" = "0" IF-MIB::ifOutOctets."3" = "0" IF-MIB::ifInOctets."4" = "0" IF-MIB::ifOutOctets."4" = "0" IF-MIB::ifInOctets."5" = "0" IF-MIB::ifOutOctets."5" = "0" IF-MIB::ifInOctets."6" = "50036733" IF-MIB::ifOutOctets."6" = "3575426650" IF-MIB::ifInOctets."7" = "0" IF-MIB::ifOutOctets."7" = "0" IF-MIB::ifInOctets."8" = "657176060" IF-MIB::ifOutOctets."8" = "2715199686"
I've tried using the extractions in snmp_ta, but they didn't work for this particular set of SNMP poll data. They managed to create "*ifInOctets*" and "*ifOutOctets*" fields but did not separate them by interface ID.
I've then resorted to index-time extraction.
My **transforms.conf**:
[snmp_ifOctets_extraction]
REGEX=IF-MIB::(.+?)\.\"((?:\d\.?)+)\"\s=\s\"(.*?)\"
FORMAT=$1.$2::$3
WRITE_META = true
REPEAT_MATCH = true
My **props.conf**:
[snmp_ifoctets]
TRANSFORMS-ifoctets = snmp_ifOctets_extraction
That didn't turn out so well either. It extracted just one field, "*ifInOctets.1*". The regex didn't seem to repeat itself throughout the event even though REPEAT_MATCH is set. Anyone have any ideas why this is happening?
PS I'm also open to any other ideas on how to parse this set of data. I'm thinking of splitting each poll result into individual events next if index-time extraction isn't workable.
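If index-time extraction is a dead end, my fallback idea is a search-time rex with max_match so the pattern repeats across the event, then pairing the values back up per interface, roughly:
... | rex max_match=0 "IF-MIB::(?<counter>\w+)\.\"(?<if_index>\d+)\"\s=\s\"(?<octets>\d+)\"" | eval pair=mvzip(mvzip(counter, if_index), octets) | mvexpand pair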
↧
How to search by the old sourcetype after log monitoring has been disabled
Hi,
We have the following configuration:
1. **source**: <Path>/access.log
2. **sourceType**:AccessLogs
3. **Index**: AccessLog
Now, we need to create a new sourcetype (and also a new index) as per requirements, and disable the old input (it shouldn't monitor the logs from now on). But the old data that exists up to now still needs to be searchable using the old sourcetype. How do we configure this?
Can an index/sourcetype exist without any source to monitor?
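My guess at the inputs.conf side, with the path kept as a placeholder, is simply disabling the old stanza; as far as I know, data that is already indexed keeps its sourcetype and stays searchable until it ages out of the index:
[monitor://<Path>/access.log]
disabled = 1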
Thanks,
Ramu
↧
How to split one line into multiple lines at search time
In reference to my other post
https://answers.splunk.com/answers/337397/how-to-break-xml-in-search-time.html
I am asking the question another way.
I have the entire XML data in one field, like below.
GREAT SOUTHERN WOOD PRESERVING INC 1100 HIGHWAY 431 NORTH ABBEVILLE AL REMBRANDT FOODS- ABBEVILLE 496 INDUSTRIAL PARK RD ABBEVILLE AL 36310 RITE AID #7092 514 KIRKLAND STREET ABBEVILLE AL 36310-2700 31.56149
I need to break the entire field into multiple rows, like below:
----------------------------------------------------------------------------------------
GREAT SOUTHERN WOOD PRESERVING INC 1100 HIGHWAY 431 NORTH ABBEVILLE AL
----------------------------------------------------------------------------------------
REMBRANDT FOODS- ABBEVILLE 496 INDUSTRIAL PARK RD ABBEVILLE AL 36310
----------------------------------------------------------------------------------------
RITE AID #7092 514 KIRKLAND STREET ABBEVILLE AL 36310-2700 31.56149
Please let me know how I could do it. I tried rex, but I don't think it can produce multiple rows from one.
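From other answers I've skimmed, the usual trick seems to be turning the field into a multivalue and then expanding it into rows; a rough sketch, where the field and tag names are pure guesses since my real data is XML:
... | rex field=xml_field max_match=0 "<row>(?<location>.*?)</row>" | mvexpand location | table location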
↧
Pass String Field from Outer Search into Inner Map Search
My search looks like this:
index=index_name source="Source A.csv" | eval Start2=strptime(Start, "%m/%d/%Y%H:%M") | eval End2=strptime(End, "%m/%d/%Y%H:%M") | map maxsearches=99999 search="search index=index_name earliest=$Start2$ latest=$End2$ source=\"Source B.csv\" | eval Problem2=\""$Problem$\"" | stats values($Problem2$) as Problem3, avg(Data) as Average, min(Data) as Min, max(Data) as Max, stdev(Data) as Stdev" | table Average Min Max Stdev Problem3
Problem is a field in Source A of the form XX003 or X2999: a letter or two, then three or four numbers. I am using the Start and End fields from Source A to search Source B's Data field and calculate stats for each Problem in Source A. I can't seem to get the Problem to pass through the map search. Help!
I have tried eval Problem=$Problem$ (like some other examples)
Problem="$Problem$"
Problem=\"$Problem\"
and the form shown in the code above. I have a nearly identical search with a numeric field, i.e. OtherProblem=2.9, that works great.
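My best guess at the intended quoting, for anyone who can confirm it: inside map's search string the token needs escaped quotes around it, and Problem2 should then be referenced without the $...$ once it exists as a real field, e.g.:
| map maxsearches=99999 search="search index=index_name earliest=$Start2$ latest=$End2$ source=\"Source B.csv\" | eval Problem2=\"$Problem$\" | stats values(Problem2) as Problem3, avg(Data) as Average, min(Data) as Min, max(Data) as Max, stdev(Data) as Stdev"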
Help!
↧
LDAP errors after an employee is replaced and ownerships were edited in $SPLUNK_HOME/etc/<app>/metadata/local.meta
We had an employee replacement, and the ownership of the saved searches and objects was changed in the $SPLUNK_HOME/etc/<app>/metadata/local.meta file, followed by a splunkd restart. All the objects are accessible and the reports are running fine under the new employee's ID. But I still see LDAP error logs saying "User not found", even though he's still in LDAP. I believe the errors in splunkd are caused by $SPLUNK_HOME/etc/users/<Employee-ID>. If I delete that folder, what is the impact?
↧
How to schedule SOS dashboard view?
When I run the SOS dashboard Disk Usage from <URI:8000>/app/sos/index_disk_usage as a search, I get all the data but am unable to get the dashboard view as seen in SOS. Is there any way to schedule the dashboard view of SOS directly?
↧
How to calculate the duration between the last 2 events in a transaction?
I would like to calculate the duration between the last two events in a transaction. An example transaction looks something like:
2015-12-31 13:03:03,695 Outgoing UserId="99999999999" MsgType="Menu" Internal="START"
2015-12-31 13:03:15,437 Incoming UserId="99999999999" MsgType="Refresh"
2015-12-31 13:03:19,847 Incoming UserId="99999999999" MsgType="Key" Key="1"
2015-12-31 13:03:20,238 Outgoing UserId="99999999999" MsgType="Menu"
How can I calculate the duration between the last Incoming and the last Outgoing event?
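One shape of answer I've been toying with, using searchmatch() to pick out each direction (the index name is a placeholder):
index=my_index ("Incoming" OR "Outgoing") | stats max(eval(if(searchmatch("Outgoing"), _time, null()))) as last_out, max(eval(if(searchmatch("Incoming"), _time, null()))) as last_in by UserId | eval duration=last_out-last_in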
↧
DBConnect indexing field with backslash character
I'm indexing a field with DBConnect that contains the backslash character, e.g. \, used to escape quotation marks and hyphens within the data. This has the side effect of breaking the field extraction after the first \ character. Has anyone encountered this problem, and if so, how do you work around it?
↧
Regarding autoextraction of Fields in Splunk
Hi,
I was able to run search queries in Splunk, and fields were automatically extracted into the "Interesting Fields" list; based on those, we were able to refine our queries.
But currently, while I can run the queries and get results, the fields are no longer listed under "Interesting Fields".
I am not able to figure out the issue. Have any settings changed? Please share some thoughts on this.
Regards,
Pradipta
↧
How to embed image and apply dynamic data
Hi Experts,
We are developing a data center layout with dynamic data such as power consumption and temperature.
Is it possible to embed such a layout in a Splunk dashboard?
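What I picture, if it's at all possible, is a Simple XML html panel combining a static layout image with search-driven tokens; the image path and token names below are invented for illustration:
<dashboard>
  <label>Data Center Layout</label>
  <row>
    <panel>
      <html>
        <img src="/static/app/my_app/dc_layout.png"/>
        <p>Rack A power: $rack_a_power$ W, temperature: $rack_a_temp$ C</p>
      </html>
    </panel>
  </row>
</dashboard>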
↧
DBConnect V2 indexing timestamp
I have DBConnect V2 running on Splunk 6.3.1, and it was working fine until the new year. All records were indexing correctly before 01/01/2016 00:00:00, but since then they are indexing against 01/01/2015.
I was alerted to this by a lack of records in the index, and when I ran the following, I discovered them under 01/01/2015:
index=app_agg source=Oracle sourcetype=LUW_VARIABLES_HOME latest=-10mon@mon | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S") | where indextime>"2015-12-31 23:59:00"
Any help would be most appreciated.
↧
Log Aggregation for Checkpoint Firewall Logs
I am going slightly over my license limit from time to time because of the Checkpoint firewall logs. Is there a way to aggregate some of the firewall logs before they are indexed into the Splunk indexers? Or is the only option to add another 20GB of license to Splunk?
↧
Field extractor bug in space-delimited timestamp
Since we're using Splunk Free I can't open a case to submit a bug report. Hopefully someone here can either tell me this is expected behavior or reproduce the bug and file it.
I was working on adding fields to our access-combined sourcetype and noticed that the first extraction I completed returned incorrect data. For logs from today (Jan 1) the fields were correct, but logs from yesterday (Dec 31) and before were all off by one position. Since I'm not concerned about perfect extraction, I had set up the new fields with space-delimited separators. In trying a different field extraction with a sample date *before* today, I found that the fields lined up fine for all prior days.
I switched to raw view in Search, and it looked like today's and yesterday's log entries are identical in their space-delimited breakouts. Heading back into Field Extractor, I previewed Jan 1 entries versus Dec 31 entries and found that the datetime fields between the two aren't consistent. For instance:
Dec[single space]31[single space]23:31:39
Jan[double space]1[single space]12:06:15
It looks like Field Extractor was trying to compensate for the length difference between the "1" and "31" day fields. But, in doing so, it assumed the extra space was another field delimiter and moved everything over by one place.
Has anyone else seen this behavior, or are you able to reproduce it? I've set up an extraction based on the correct format, but since our installation is fairly new, I don't have enough indexed data to tell whether this affects all dates with single-digit days or if it's only in Field Extractor.
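For reference, the replacement extraction I set up collapses runs of spaces into a single delimiter so that single-digit days still line up (the field names are mine):
(?<month>\w{3})\s+(?<day>\d{1,2})\s+(?<time>\d{2}:\d{2}:\d{2})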
Thanks!
Alan
↧
Regex - Search for Number Range within Array/String
Sample Data:
ID | [[`Event1`,1435],[`Event2`,78],[`Event3`,142]] | etc.....
I want to build a query that will display the ID and the entire field of event data where 'Event2' is greater than (x).
I'm not overly familiar with regular expressions, so if anyone can point me in the right direction, I'd greatly appreciate it. As it is, I'm searching for ([`Event2`,1] OR [`Event2`,2] OR [`Event2`,3] . . .)
I'm trying to optimize my search here, and I've had one heck of a time trying to teach myself regex.
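In case it sharpens the question, the direction I've been poking at is extracting the number first and then comparing it, e.g. for Event2 greater than 100:
... | rex "\[`Event2`,(?<event2_value>\d+)\]" | where tonumber(event2_value) > 100 | table ID _raw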
↧
What is the framework in Splunk Enterprise 6.3?
I'm using Splunk Enterprise 6.3 and want to know how I can find the framework for this version of Enterprise.
Please point me down the right path.
↧
Search Head Cluster: Pre 6.3 we could run a higher number of scheduled searches concurrently
Prior to 6.3, quotas were enforced on a member-by-member basis; now, for an SHC, we have the system-wide quota working. Here is an example of this:
Let's assume that we have 2 members in the cluster set to run scheduled searches, with the other 10 members handling ad hoc searches only. Does the captain take into consideration the number of members in the cluster who can run scheduled searches?
((24 * 3) + 6) * 0.85 * 2 = 132
In this case the following settings were used, on 24-core machines:
2 machines total, and they are both considered job servers and run scheduled searches
1 of them is the captain
max_searches_per_cpu = 3
base_max_searches = 6
max_searches_perc = 85
auto_summary_perc = 85
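For completeness, my understanding is that these live in limits.conf as follows:
[search]
max_searches_per_cpu = 3
base_max_searches = 6
[scheduler]
max_searches_perc = 85
auto_summary_perc = 85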
When the system-wide quota is reached, the logs show the following message:
12-11-2015 11:20:40.790 -0500 WARN SavedSplunker - The maximum number of concurrent scheduled searches has been reached (limits: historical=132, realtime=132). historical=233, realtime=0 ready-to-run scheduled searches are pending.
host = iapp106.howard.ms.com-9000 source = /var/hostlinks/farm/splunk/prod/federated-shc-1/9000/home/var/log/splunk/scheduler.log sourcetype = scheduler
Since upgrading to Splunk version 6.3.1, we are seeing a lower number of concurrent scheduled searches.
Did something change in 6.3?
↧
Best practices to integrate ServiceNow with Splunk
Hello Splunkers
I am new to Splunk, and we are planning to integrate ServiceNow with Splunk. I have a few questions; they might be silly, but your answers mean a lot to me.
1. Will the Splunk App for ServiceNow automatically generate tickets?
2. How do we avoid a flood of tickets when something goes wrong but the issue is already known?
3. What are the best practices for installing this app?
Thanks
Jack
↧