Can you help me with a lookup table behavior question?
I have a lookup table that is giving me strange search results I can't figure out. I have a table that is a list of names and the team each person is on:
person1,team1
person2,team1
person3,team2
However, there are people in the data who may not be assigned to a team. I wanted to define them as "Other" so I could create searches for them without using NOTs. So, in my lookup definition I have Minimum Matches set to 1 and Default Matches set to Other. Automatic lookups are also turned on.
When I search like:
index=myindex
and drill into interesting fields, it shows a count of 239,824 in team Other
If I click on team Other, or search like:
index=myindex team=Other
Then it shows a count of 86,495.
Why would it show 239,824 on the more general search but 86,495 when searched for specifically, with everything else (including the time picker) being the same?
After a bit more testing, to rephrase the question:
If I do the automatic lookup, with a minimum match of 1 and the default match=Other set, I get a different count than running:
index=myindex | fillnull value=Other Team | search Team=Other
Shouldn't they be the same?
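For comparison, this is roughly the explicit form of what the automatic lookup plus default match should be doing (a minimal sketch, assuming the lookup is named team_lookup with input field name and output field Team; adjust to your actual definition):
```
index=myindex
| lookup team_lookup name OUTPUT Team
| fillnull value=Other Team
| stats count by Team
```
Comparing this against the automatic-lookup numbers can show whether the difference comes from the lookup itself or from how the interesting-fields panel counts events.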
↧
Can you help me compare the numerical values of fields?
My company gets a log file in which we are trying to compare a set of numbers to one another. These numbers have to be within ten of one another; otherwise we need to be alerted. I've been able to extract both numbers into fields (last_applied and last_received), and I am trying to run a search that compares the two but ONLY alerts when they are more than 10 away from each other. I've been trying the `diff` command, but am a bit stuck on which `eval` or other functions would be appropriate.
This is what I have so far:
index="mail" host="outlook.office365.com" last_applied=* last_received=* | diff attribute=last_applied attribute=last_received
Any help would be greatly appreciated.
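For what it's worth, `diff` compares two search results line by line rather than two field values, so an eval-based comparison may be closer to the goal; a minimal sketch, assuming "within ten" means a numeric difference of 10 (tonumber guards against the fields having been extracted as strings):
```
index="mail" host="outlook.office365.com" last_applied=* last_received=*
| eval gap=abs(tonumber(last_applied) - tonumber(last_received))
| where gap > 10
```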
↧
How can I get data searched per index?
Is there a way I can see how much data is being searched per index?
E.g., for one index, a user searched 10 GB of data in the last hour across 15 search queries.
An index has 100 GB of data, but in the last day a user's searches only touched 100 MB of it.
Or an index has 100 GB of data, but a user searched so often that a total of 120 GB was read.
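As a rough starting point, the _audit index records completed searches with a scan_count (events scanned, not bytes); a sketch, assuming you can read _audit (attributing the volume to a specific index would still require parsing the recorded search string):
```
index=_audit action=search info=completed
| stats count as searches sum(scan_count) as events_scanned by user
```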
↧
Why is depends not working when a Simple XML dashboard is converted to HTML?
I have a dashboard with some input fields. The dropdowns are dependent on the value selected in a radio group. This all works with the Simple XML, but when I convert the dashboard to HTML, the dependent inputs stop working: both dropdowns always show up regardless of which radio option I select. Can someone please help me figure out what is wrong? Is this a bug, or unsupported functionality?
Here is the Simple XML:
And here is the converted HTML:
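For reference, since the pasted XML and HTML did not survive, this is the kind of dependency being described; a minimal hypothetical sketch, not the original dashboard (all token and label names are made up):
```
<input type="radio" token="mode">
  <label>Mode</label>
  <choice value="a">A</choice>
  <choice value="b">B</choice>
  <change>
    <condition value="a"><set token="show_a">true</set><unset token="show_b"></unset></condition>
    <condition value="b"><set token="show_b">true</set><unset token="show_a"></unset></condition>
  </change>
</input>
<input type="dropdown" token="dropdown_a" depends="$show_a$">
  <label>Dropdown A</label>
</input>
```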
↧
How come the comment "macro" is not working?
I must be out of my mind. The comment macro, built in since version 6.5.0 as far as I can tell, gives me an error that the macro can't be found. I'm using the syntax found in the docs here, with my version of Splunk in the URL so it shows the page for my version.
https://docs.splunk.com/Documentation/Splunk/6.6.10/Search/Addcommentstosearches
index=* sourcetype=* `comment("THIS IS A COMMENT")`
This gives me an error:
> Error in 'SearchParser': The search specifies a macro 'comment' that cannot be found.
What could I be doing incorrectly?
Chris
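For what it's worth, on at least some versions that docs page walks you through creating the comment macro yourself rather than describing one that ships with Splunk, which would explain the SearchParser error; a minimal macros.conf sketch of such a macro (hedged, check the page for your version):
```
[comment(1)]
args = text
definition = ""
```
Because the definition never references $text$, the argument is simply discarded when the macro expands.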
↧
How do you filter custom events in Windows Security log using regular expression in blacklist?
Hello,
We have Splunk Enterprise 7.2 with the Deployment Server role, and a Splunk Universal Forwarder on a Windows SQL server.
The SQL server writes custom events to the Windows Security log.
Below is a portion of the event message.
I need to create a blacklist entry in the inputs.conf file to filter out events where two patterns match at the same time:
"class_type:LX" AND "server_principal_name:DOMAIN1\"
The second pattern occurs three lines below the first.
Any help will be greatly appreciated.
Thank you,
Joseph
session_id:174
server_principal_id:274
database_principal_id:0
target_server_principal_id:0
target_database_principal_id:0
object_id:0
user_defined_event_id:0
transaction_id:0
class_type:LX
permission_bitmask:00000000000000000000000000000000
sequence_group_id:A842D899-40A5-491E-886C-A8E7F7682BDD
session_server_principal_name:DOMAIN1\sqlservice
server_principal_name:DOMAIN1\sqlservice
server_principal_sid:010500000000000515000000093a2a243fad146207e53b2b2f0a0000
database_principal_name:
target_server_principal_name:
target_server_principal_sid:
target_database_principal_name:
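A hedged inputs.conf sketch of the kind of entry in question, assuming the standard Security event log input: the (?s) flag lets .*? run across the three lines between the two patterns, and the trailing backslash after DOMAIN1 is escaped for the regex:
```
[WinEventLog://Security]
blacklist1 = Message="(?s)class_type:LX.*?server_principal_name:DOMAIN1\\"
```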
↧
Spath key with period in it
Hi All,
I am dealing with a key that has a period in it. I am trying to figure out how to use spath to extract it, but I haven't had any luck. Here is an example of what the data looks like:
Event:
apples: {
granny.smith: {
color: "green",
crunchy: "true"
}
}
In this case I would like to extract "granny.smith" as a field but I am unable to.
Any help would be much appreciated.
Thanks!
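Since spath treats every dot in a path as a separator, one fallback is plain rex against the raw text; a sketch assuming events look like the sample above (the index name and the field names granny_smith and granny_smith_color are made up):
```
index=myindex
| rex "granny\.smith:\s*\{(?<granny_smith>[^}]*)\}"
| rex field=granny_smith "color:\s*\"(?<granny_smith_color>[^\"]+)\""
```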
↧
How do I use the same data model on multiple search heads?
Hi,
I have 2 independent Search Heads (SH) (no clustering) and they use the same indexers.
On the first SH I have a data model, and I want users on the 2nd SH to be able to query it.
But there is no way to share it in the settings.
Is it possible to do this? (In my mind, a data model stores its statistics in new .tsidx files on the indexers, available to both of my SHs.)
Thanks
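For what it's worth, the definition itself lives on the search head (a JSON file under the app's data/models directory, plus any datamodels.conf stanza), while only the acceleration summaries live on the indexers, so the usual approach is to ship the same definition to both heads; a hedged sketch, assuming an app named my_app and a model named My_DataModel:
```
# copy from SH1 to SH2, same app context:
#   $SPLUNK_HOME/etc/apps/my_app/local/data/models/My_DataModel.json
# and, if accelerated, the matching datamodels.conf stanza:
[My_DataModel]
acceleration = 1
acceleration.earliest_time = -7d
```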
↧
Add fields to the incoming Data
Hi,
I'm trying to load a CSV file using a UF. There are no headers in the CSV file; how can I give column names to the values in the file? Can I do it in props.conf? I don't want to use the extract-field option.
e.g., file.csv has values like:
1,2,3,4,5
6,7,8,9,10
11,12,13,14,15
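A sketch of the usual search-time approach: the UF only tags the sourcetype (in inputs.conf), and the column names are applied via props.conf/transforms.conf on the search head or indexer; the sourcetype my_csv and the column names below are placeholders:
```
# props.conf
[my_csv]
REPORT-csv_fields = csv_no_header

# transforms.conf
[csv_no_header]
DELIMS = ","
FIELDS = col1,col2,col3,col4,col5
```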
↧
Alternative to having three members in a Search Head Cluster?
I have deployed Splunk Search Head Cluster with two Search Head members and a Deployer.
I read here that the [captain election process has deployment implications][1]: a cluster must consist of a minimum of three members to participate in the dynamic captain election process.
If I don't have the option to add a third one, and a cluster cannot function without a captain, can I use the "static captain" option to overcome the problem? Or are there better alternatives or workarounds for this issue?
[1]: https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/SHCarchitecture
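For reference, a hedged sketch of the static-captain setup described on that page (hostnames are placeholders; check the docs for your version, and note that a static captain gives up automatic failover):
```
# on the member that should be captain:
splunk edit shcluster-config -mode captain -captain_uri https://sh1.example.com:8089 -election false

# on the other member:
splunk edit shcluster-config -mode member -captain_uri https://sh1.example.com:8089 -election false
```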
↧
Best approach for calculating a count and showing the most recent occurrence timestamp
Given data like:
_time, lastname
How would I do a count of lastname and display the most recent _time for that lastname on the same row of a results table?
--Mark
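A minimal sketch of one way to do it (the index name is a placeholder; strftime just makes the epoch readable):
```
index=myindex
| stats count latest(_time) as latest_time by lastname
| eval latest_time=strftime(latest_time, "%Y-%m-%d %H:%M:%S")
```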
↧
How to check CPU and memory utilization of my system?
Two panels should be developed on a single dashboard: one for CPU and one for memory monitoring of your local system. It should check the CPU and memory utilization of your system every minute. When utilization is below 80%, the color of the cell should be green; between 80% and 90%, yellow; and above 90%, red. When you click on a row, it should take you to drilldown panels that show the CPU/memory utilization history; for this you will need 2 additional dashboards. We should be able to select from a dropdown (time range picker) for what time range we want to see the history.
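A hedged starting point for the CPU panel's base search, assuming the Splunk Add-on for Microsoft Windows is collecting perfmon data (sourcetype and field names vary by add-on, version, and OS); the range field can then drive the green/yellow/red cell colors:
```
index=perfmon sourcetype="Perfmon:CPU" counter="% Processor Time"
| timechart span=1m avg(Value) as cpu_pct
| eval range=case(cpu_pct<80, "low", cpu_pct<90, "elevated", true(), "severe")
```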
↧
Upgrade Add-on in clustered environment
I am looking for options to automate the upgrade of an add-on in a clustered environment.
Say, for example, I have the add-on Splunk_TA_nix in a SH cluster (one deployer and 3 search head members).
Now I want to upgrade this add-on through automation without breaking any custom configuration (instead of backing up the folder, placing the new version of the app, and restoring the custom configuration into the new version).
Please help me identify the best automated way to upgrade an add-on.
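For what it's worth, the documented path in a SH cluster is to stage the new version on the deployer and push it to the members; how the deployer merges an app's local directory on push is worth reading up on before trusting it with custom configuration (paths and credentials below are placeholders):
```
# on the deployer:
tar -xzf splunk-add-on-for-unix-and-linux_*.tgz -C $SPLUNK_HOME/etc/shcluster/apps/
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```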
↧
Currently Open Tickets
Hi guys,
Tickets can have one of these states:
em7_state = Open
em7_state = In Progress
em7_state = Closed
Tickets are stored in the following format:
> date,time,em7_state,em7_description,em7_ticket_id
So it might happen that a ticket gets created with status Open:
> 2018-07-01,00:00:01,Open,em7_description,em7_ticket_id
Then it gets updated (to In Progress) at:
> 2018-09-03,20:00:01,In Progress,em7_description,em7_ticket_id
And it is not closed until today.
How do I search for tickets that are **currently** open?
If I do a simple search like:
> index=xxxx (em7_state="Open" OR em7_state="In Progress") | dedup em7_ticket_id
then my search is bound to the selected timeframe (say, last 24 hrs), so tickets created earlier won't show up (because no change in em7_state was logged in that window).
Thanks for your input
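One common pattern is to search over all time but keep only each ticket's most recent state (a sketch; earliest=0 widens the window regardless of the picker, which may be expensive on large indexes):
```
index=xxxx (em7_state="Open" OR em7_state="In Progress" OR em7_state="Closed") earliest=0
| stats latest(em7_state) as current_state latest(_time) as last_update by em7_ticket_id
| where current_state!="Closed"
```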
↧
Find duration between A and C events (startswith & endswith) where event B needs to be in the middle
Example:
Event A: LoggingAspect.BeforeController
Event B: Found in Cache
Event C: LoggingAspect.afterReturningController
I want to know the total execution time from A to C; the common field between all 3 events is traceId.
```
index=rs | transaction traceId startswith="LoggingAspect.BeforeController" endswith="LoggingAspect.afterReturningController" | timechart span=1h avg(duration)
```
But the above query also gives me transactions that don't include B. I want to add event B to the query and eventually see a response-time chart.
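Since transaction concatenates the member events' raw text, one way to require B is to filter the built transactions for its string (a sketch built on the search above):
```
index=rs ("LoggingAspect.BeforeController" OR "Found in Cache" OR "LoggingAspect.afterReturningController")
| transaction traceId startswith="LoggingAspect.BeforeController" endswith="LoggingAspect.afterReturningController"
| search "Found in Cache"
| timechart span=1h avg(duration)
```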
↧
Issue with the timechart latest() function
Hi,
I have data like the below:
**28-11-01 10:30:13,127 digits=30
28-11-01 07:20:08,240 digits=50
28-11-01 05:01:18,101 digits=60
28-11-01 12:12:22,127 digits=120
09-12-01 12:12:22,127 digits=180
10-01-01 05:01:18,101 digits=500**
I want to display the latest digits value on a timeline chart. I have written the query `| timechart latest(digits) as latestRecord` and it works, but when I run it a couple of times over a span of the last 3 months, the November output keeps changing: one time it displays 30, another time 50, 60, or 120.
Even if I run it multiple times, the expected output is:
**2018-11 30
2018-12 180
2019-01 500**
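One usual suspect for unstable latest() results is timestamp parsing: latest() picks by _time, not by input order, and the sample's two-digit date format is easy to misparse; a quick sketch to inspect what Splunk actually assigned (index name is a placeholder):
```
index=myindex digits=*
| eval parsed=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table parsed digits
```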
↧
How to find the common information between two users' activities?
How do I get the common information for two users from a proxy log?
For example, I would like to find the URLs that both users have accessed in a particular period of time.
user=ABC OR user=XYZ tag=proxy http_user_agent=mozilla* OR http_user_agent=firefox* | associate
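One common pattern for "accessed by both" is to count distinct users per URL (a sketch; the url field name is an assumption about the proxy sourcetype):
```
tag=proxy (user=ABC OR user=XYZ)
| stats dc(user) as user_count values(user) as users by url
| where user_count=2
```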
↧
Run a subquery for each row of a CSV file, passing the field into the search string
I want to run a Splunk query for all the values in the CSV file, substituting each value into a field in the query. I've imported the file into Splunk as an input lookup table and am able to view the fields using an inputlookup query, but I want to run it with all the subqueries where I'm fetching the maximum count per hour, per day, per week, and per month.
The input file is ids.csv, which has around 800 rows and just one column, like below:
1234,
2345
2346
4567
...
The query I'm using:
| inputlookup ids.csv | fields ids as id | [search index="abc" id "search string here" |bin _time span="1hour" | stats count as maxHour by _time | sort - count | head 1] |appendcols[search
index="abc" id "search string here" |bin _time span="1day" | stats count as maxDay by _time | sort - count |head 1 ]|appendcols[search
index="abc" id "search string here" |bin _time span="1week" | stats count as maxWeek by _time | sort - count | head 1 ]|appendcols[search
index="abc" id "search string here" |bin _time span="1month" | stats count as maxMonth by _time | sort - count | head 1]
I'm not getting the expected results. I'm expecting a tabular format where I get the count for each time range for a specific id, by passing the id field into the search subquery.
How can I solve this?
Thanks
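Rather than one subsearch per row, one sketch is to filter on the lookup once and let stats compute the per-id maxima, repeating the pattern for each span (this assumes the lookup column is actually named id, so the subsearch expands to id=1234 OR id=2345 OR ...; the map command is the alternative if a literal per-row search is required):
```
index="abc" [ | inputlookup ids.csv | fields id ]
| bin _time span=1h
| stats count by id _time
| stats max(count) as maxHour by id
```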
↧
Heavy forwarder sends data twice
The heavy forwarder sends data to a third party over TCP.
After processing, the data shows up duplicated.
The duplication usually appears when a file is created.
I set up the conf like this:
inputs.conf
[monitor:///xxx/header01/*]
disabled = 0
index = xxx02
sourcetype = xxx002
whitelist = \d+\-\d+\-\d+
blacklist = .*\.swp
ignoreOlderThan = 1d
crcSalt =
_TCP_ROUTING = pre_server_group2
outputs.conf
[tcpout:pre_server_group2]
disabled = false
sendCookedData = false
useACK = false
server = x.x.x.x:19000
maxQueueSize = 100MB
and the original raw file names are:
2019-01-22-10
2019-01-22-11
2019-01-22-12
2019-01-22-13
Why does the heavy forwarder send twice?
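One usual suspect with hourly-rolled file names like these is CRC handling: if the elided crcSalt value was <SOURCE>, any renamed or recreated path is treated as a brand-new file and resent in full. A hedged sketch of the stanza with that line dropped (everything else as in the original):
```
[monitor:///xxx/header01/*]
disabled = 0
index = xxx02
sourcetype = xxx002
whitelist = \d+\-\d+\-\d+
blacklist = .*\.swp
ignoreOlderThan = 1d
# crcSalt intentionally unset: with <SOURCE>, a new file name
# re-indexes the whole file even if its content was already sent
_TCP_ROUTING = pre_server_group2
```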
↧