Hello all,
We are replacing our single Splunk indexer with a pair of new indexers and have migrated all the indexes except those filled by syslog sources.
We know that sending syslog straight to an indexer is not best practice, so we are now looking at directing this traffic to syslog-ng first. However, we would like to use the old Splunk indexer server to take the output of syslog-ng and load balance it to the two new indexers.
What we don't understand is whether this is simply a matter of editing the old indexer's outputs.conf, or whether the instance will still need its various UDP data input ports configured so that each port is directed to the correct index.
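For reference, here is roughly what we imagine the configuration on the old box would look like (hostnames, ports, UDP port numbers and index names are placeholders, and this is untested):
# outputs.conf on the old indexer, forwarding and load balancing to the new pair
[tcpout]
defaultGroup = new_indexers
[tcpout:new_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
autoLB = true
# inputs.conf, one stanza per syslog UDP port, each routed to its index
[udp://10514]
sourcetype = syslog
index = network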
Thanks in advance!
↧
How do you convert an indexer into a heavy forwarder?
↧
How do I delimit multivalue fields?
Dear All,
We have a scenario where, for each *Application_ID*, the *Application_Name* field is multi-value and delimited.
We would like the data loaded into individual rows, in the following manner:
Example: *Application_Name* is multi-value and delimited (A:B:C)
Application_ID Application_Name
1 A:B:C
2 D:E:F
Desired Output:
Row 1: 1 A
Row 2: 1 B
Row 3: 1 C
Row 4: 2 D
Row 5: 2 E
Row 6: 2 F
How do I accomplish this?
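The closest I have come so far is something like the following (this assumes the delimiter is always a colon, and is untested):
... | makemv delim=":" Application_Name | mvexpand Application_Name
The idea is that makemv splits the delimited field into a multivalue field and mvexpand then creates one row per value.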
Thanks in Advance
Anil
↧
↧
Extracted field has many key value pairs
I have extracted a logmessage field that produces the output below. It works well in a table. This field contains many key-value pairs, but Address is the only one that is captured automatically. How can I use the values in this field to plot a time/scatter chart, or display some of the key-value pairs from this extracted field in a search? I would appreciate any suggestions.
MemberLeft notification for Member(Id=5, Timestamp=2001-10-05 18:33:38.274, Address=1.1.1.1:110, MachineId=1111, Location=machine:xxddxxdd,process:30451,member:AppName-xxxX-seeing-node1, Role=xxxxceServer) received from Member(Id=6, Timestamp=2017-09-08 08:07:48.209, Address=2.139.1.1:11, MachineId=50283, Location=machine:xxddxxdd,process:7577,member:AppName-Rxxx-Partitioned-node1, Role=AppServer)
Extend*TCP has marked TcpConnection(Id=0x0000015F9B3BBCF8C61322961FF72A8A51B57CC50E26A4CBxxxxxxxxxxx, Open=true, Member(Id=0, Timestamp=2000-11-01 05:45:04.37, Address=1.1.1.1:0, MachineId=0, Location=site:,machine:zzzzzxx123,process:29497,member:aggregator2DH, Role=WachoviaCibSptSingleAppContext), LocalAddress=1.1.1.1:79, RemoteAddress=1.1.1.1:11) as suspect: The connection has fallen 3736 messages (10001914 bytes) behind; the threshold is 10000 messages or 10000000 bytes.
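For context, this is the kind of thing I have been experimenting with; the extracted field name (MachineId) and the chart are just examples of what I am after:
... | rex field=logmessage "MachineId=(?<MachineId>\d+)" | timechart count by MachineId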
↧
Shell script execution after a search and outputcsv
Hi,
I'm trying to run the following query:
index=alerts Status="Open" AlertId="30822ac3b4a6138de30c5726e2e05931"|table _time, AlertId, host, user, AlertMsg, "Close", |head 1
|outputcsv updatedalert | movealert
movealert at the end of the query is a batch file hosted on my server.
If I run the first part of the command, it creates the updatedalert.csv file as expected.
If I run a search only with "| movealert" alone, the script executes and moves the files to my lookup directory.
But when I try to run both commands combined, neither of them executes, as if one were blocking the other.
Any idea if (and how) I can get this to work?
↧
What causes union'd data sets to be truncated?
Hi Splunk Experts--
I'm confused about the union command and am hoping you can
help. Specifically, I'm struggling to understand what causes the
"things that get unioned" to be truncated-- in my case to 50,000
records.
Here's an example of what confuses me:
Imagine three sets of data-- I've put them in three separate indexes
called union_1, union_2 and union_3. The data sets are very similar:
each has 60,000 records, each consisting of a timestamp, a color and a
hash. Each data set has exactly one event per second and each covers
the same 60,000 seconds (from 2017-01-01 00:00:01 to 2017-01-01
16:40:00). The color is random and the hash is unique across all
180,000 events (60,000 * three data sets).
Here's union_1:
time color hash
------------------------- ------ --------------------------------
2017-01-01 00:00:01 -0800 blue 08decd051408e648b941b5dbb9b1578c
2017-01-01 00:00:02 -0800 yellow 39d98f7f9a98920ee08631c9e6a4e867
2017-01-01 00:00:03 -0800 green 2b34449aae3a941c64dd76d33a6cfc04
...
2017-01-01 16:39:58 -0800 blue b2cc43ab839bf57711a00f8f7a622e97
2017-01-01 16:39:59 -0800 blue e26f577b10d0fa172c122deca813d38f
2017-01-01 16:40:00 -0800 blue c9b0b55e7513963f7b04cf3c424686f2
...and union_2:
time color hash
------------------------- ------ --------------------------------
2017-01-01 00:00:01 -0800 violet c8e68d6c154fc0ca88220a299dba7c55
2017-01-01 00:00:02 -0800 blue 3e18602a1d137ea4bf9157e67c4386ed
2017-01-01 00:00:03 -0800 violet ecdf61cd34cda950bd782e3a6ba51fd6
...
2017-01-01 16:39:58 -0800 violet 5c00f68da1aa343ec0944fbcd42775fc
2017-01-01 16:39:59 -0800 green 2c3a626ff26a05f9895dc1c9ae1d074e
2017-01-01 16:40:00 -0800 red 9b796de25b072d8a48d3e9a7a716c4e9
...and union_3:
time color hash
------------------------- ------ --------------------------------
2017-01-01 00:00:01 -0800 orange 772468eb812735bfa984b91477afe967
2017-01-01 00:00:02 -0800 violet 6d9ebc2ce8b1c79d42793d624daeb402
2017-01-01 00:00:03 -0800 red a31d8811b95b4597f943f268f4068fb0
...
2017-01-01 16:39:58 -0800 yellow 17b43d58e4920f1d2044552acdad5507
2017-01-01 16:39:59 -0800 violet 12425e908448371c38a1f0fe12aedf73
2017-01-01 16:40:00 -0800 indigo ea1fb54c5c2b5fd2161856ea6937226e
You get the idea... :)
Now let's run some SPL:
| union maxout=10000000
[ search index=union_1 ]
[ search index=union_2 ]
[ search index=union_3 ]
| stats count by index
This produces what I'd expect-- 60,000 records per "thing that got
unioned":
index count
------- -----
union_1 60000
union_2 60000
union_3 60000
But let's make things a bit more complicated:
| union maxout=10000000
[ search index=union_1 | head 60000 ]
[ search index=union_2 ]
[ search index=union_3 ]
| stats count by index
Wait, what? Adding a head command to the first search causes the
second and third to be truncated to 50000?
index count
------- -----
union_1 60000
union_2 50000
union_3 50000
How about this one?
| union maxout=10000000
[ search index=union_1 ]
[ search index=union_2 | head 60000 ]
[ search index=union_3 ]
| stats count by index
Hmmm... same result:
index count
------- -----
union_1 60000
union_2 50000
union_3 50000
What if we move the head command to the final search?
| union maxout=10000000
[ search index=union_1 ]
[ search index=union_2 ]
[ search index=union_3 | head 60000 ]
| stats count by index
Wow... now only the final search gets truncated:
index count
------- -----
union_1 60000
union_2 60000
union_3 50000
Notes that may or may not be relevant:
* Many commands have a similar effect (i.e. cause the same
truncations) as head-- in particular dedup and sort seem to cause
the same problems.
* I suspect that these commands (and presumably many others) cause
the subsearch to no longer qualify as a "streaming subsearch"--
(although honestly I can't imagine why head would do this) and
that this fact makes union behave much more like append.
* I believe (but am not sure) that the 50000 truncation limit is due
  to maxresultrows in limits.conf-- that value is currently 50000 for
  me (see the stanza below).
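For reference, this is the limits.conf stanza I believe is involved (based on my reading of the docs; I'm not certain it is the one that applies to union):
[searchresults]
maxresultrows = 50000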
For context, here's what I want to do:
* In general, get a better understanding of how union works and how
  it's different from append.
* Specifically, union a set of three searches that each produce substantially more
than 50000 records and not experience truncation.
Anybody willing to help me out with this? Would totally appreciate the
benefit of your wisdom :)
Thanks!
↧
↧
Splunk 7.0.0: How to get metrics in from collectd
Hi,
How do I add a tag (region) to a collectd-based metric from a host? For example, if we have two regions (us-east and us-west), how do we get this dimension from collectd into the Splunk metrics index?
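To illustrate what I am after, this is roughly the shape of the metric event I would like to land in the metrics index, based on my reading of the metrics data format documentation (all values here are made up):
{
  "time": 1510000000,
  "event": "metric",
  "source": "collectd",
  "host": "web-01",
  "fields": {
    "metric_name": "cpu.idle",
    "_value": 98.6,
    "region": "us-east"
  }
}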
Thanks
Suresh
↧
Summarizing data and storing it in a metrics index (Splunk 7.0.0)
In the metrics getting started documentation ( http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetStarted ) it says "Summary indexing does not work with metrics."
When I read the rest of the documentation I don't see any specific reason I couldn't craft my own data to fit the metrics format.
If I massage an event into having all the correct fields ( http://docs.splunk.com/Documentation/Splunk/7.0.0/Metrics/GetStarted#Metrics_data_format ) could I save that event to a metrics store?
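By "massage" I mean something like the following sketch; the aggregation and the metric name are just examples, and the only required fields I am relying on are metric_name and _value from the data format page:
index=web sourcetype=access_combined
| stats avg(bytes) as _value by host
| eval metric_name="web.bytes.avg"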
I am looking to leverage the speed increase in the metric store with data I already process and save into summaries.
My only concern is that the sourcetype would be stash, so I may need custom stash input parsing to make it "fit".
↧
Command 'search' can't compare two floating-point numbers
I am writing a saved search to trigger an alert when the difference between values is higher than a threshold. A simplified version of my search is below. The threshold is expected to be a floating-point number, and Splunk doesn't compare the values correctly:
| NOOP | stats count|eval var1=2.1|eval var2=2.0|search var1 > var2
==> No results found. Try expanding the time range.
| NOOP | stats count|eval var1=2.1|eval var2=2.0|search var1 < var2
==> count var1 var2
0 2.1 2.0
Did I do something incorrectly?
Thanks
↧
Search job execution in search head cluster environment
I have two questions about search job execution when a search head cluster is used.
1. In a search head cluster, when I access a specific search head from a browser and run a search there, is the job executed on the search head that I accessed? Or, if there is a member with a lighter load, will that member be responsible for processing it?
2. Is there a way to check which member ran a scheduled search?
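For question 2, this is the sort of search I was imagining, though I am not sure the scheduler log is the right place or that these are the right fields:
index=_internal sourcetype=scheduler savedsearch_name=* | stats count by host, savedsearch_name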
↧
↧
Splunk on Splunk - are there logs on Lookup csv file operations (add/delete)
Issue statement:
I have a lookup CSV file uploaded, with read/write permission for a user group. Today the file disappeared!
Question:
Is there a Splunk log of lookup CSV file operations (add/delete)? If yes, where is it? I can then ask the admin to make it searchable in Splunk.
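If such a log exists, I imagine the search would look something like this (the sourcetype and the endpoint name are my guesses, not confirmed):
index=_internal sourcetype=splunkd_access "lookup-table-files"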
Thanks.
↧
Unable to export csv using "reverse" in searching
Hi all! I tried to export a search result with "reverse" at the end of the query:
(u_id>=1177548 AND u_id<=1282822) (event=iap OR event=iaps) u_geo!=VN | dedup u_id | reverse
and received an error along the lines of: https://host-name/en-US/api/search/jobs/1510206165.2375/event?isDownload=true&timeFormat=%25FT%25T.%25Q%25%3Az&maxLines=0&count=0&filename=&outputMode=csv possibly temporarily unavailable or permanently moved to a new address.
Everything works properly without "reverse", and the result has only ~3,000 rows. I also tried exporting via "outputcsv", and that request completed successfully, but the file was exported to a folder on the Splunk server... That's not a problem for me, but it is a big problem for the analysts.
↧
choose all Multiselect values by default without using *
I have a multiselect input whose values are populated by a lookup. If the lookup has 3 rows, all three values show up.
I want this multiselect to have all the values chosen by default WITHOUT USING ALL (*).
All values should show up in the multiselect area as chosen.
If anyone has any leads, kindly help, or suggest another input type that can be used to accomplish this.
↧
How do I use the latest/newest value as the value?
I am trying to use the latest "Value" from the last added/updated registry key, but it took the oldest result instead... How do I fix this?
My intended result should be "TestData oh" in the first row, but it took the oldest data instead, which is "TestData".
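What I have been trying looks roughly like this (the field names are from my data and may not be exact):
... | stats latest(Value) as Value by RegistryKey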
![alt text][1]
[1]: /storage/temp/219716-result.png
↧
↧
How to get the login ID of the current user?
Hello
I want to set "User Search Filter" for LDAP
some tool support as like
"This filter is used to determine the LDAP entry for current user. For example: (&(uid={0})(objectclass=person)). In this example, {0} represents login name of current user."
In Splunk, how do I get the login ID of the current user?
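For context, this is the part of authentication.conf I have been looking at (the stanza name and values are placeholders):
[My_LDAP_Strategy]
userBaseDN = ou=People,dc=example,dc=com
userBaseFilter = (objectclass=person)
userNameAttribute = uid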
Have a nice day
↧
How do I extract a new field from the existing source field?
eg:
source = shuttle(Oct1-3).zip:./shuttle/5720/LOG/shuttle_log.20171002 ,shuttle_3.zip:./shuttle_3/5720/LOG/shuttle_log.20171011....etc
I want to extract folder_no: 5720
If possible, please tell me the regex expression for extracting the field.
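This is as far as I have got with the regex, based only on the two sample values above, so it may not cover every file name variant:
... | rex field=source ":\./[^/]+/(?<folder_no>\d+)/LOG/"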
Please help me.
Thanks
↧
HTML dashboard: event for when rendering is done
Hi all,
I have an HTML dashboard and I want to replace some values in a table with icons (like a green arrow, a warning sign, and so on).
The JavaScript itself is not a problem; where I struggle is finding out which event I can use to run the JavaScript (the page needs to be fully rendered / all panels fully rendered).
I tried using "search:done" for each search object, but once the page has more than one panel, the rendering still takes a while after the search is done, and it doesn't work.
A timeout worked with one panel, but with more I would need a bigger timeout, and so on.
Is there an event I can use to tell that the dashboard elements are fully rendered?
thanks a lot
↧
Use wildcard in source?
I have a directory C:\logs
in this directory I have multiple files:
#1: logging-projectname-0.log (There can be multiple files like *-1.log, *-2.log etc..)
#2: logging-projectname-batch-0.log (There can be multiple files like *batch-1.log, *batch-2.log etc..)
I only want to search the files like #1, so I tried: source="c:\\logs\\logging-projectname-[0-9]{1,}.log" SEARCH_STRING
It's not working. Can anyone suggest a fix?
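The only other approach I can think of is a plain wildcard with an exclusion, though I have not confirmed it behaves the way I hope:
source="C:\\logs\\logging-projectname-*.log" NOT source="*-batch-*" SEARCH_STRING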
Thanks in advance.
↧
↧
How to use a lookup without exact match (between values)
Hi all,
I have created a query that uses a couple of input lookups.
| inputlookup CSC_value | lookup CSC_posture_name _key as csc_posture_name_key output name as posture_name | lookup CSC_tree _key as csc_tree_key output name as tree_name
Output example:
creationdate csc_posture_name_key csc_tree_key posture_name tree_name value
1510132678 59e9ec538cb36149 59e9e6728cb Policy Defined test1 19
1510132888 59e9ec538cb36149 aee363bb0b1 Policy Impleme test2 43
1510132888 23a4cb4254bba123 aee363bb0b1 Policy Impleme test3 49
The result I get is OK, but the next step is to do a lookup, for each result from the above query, into another lookup (CSC_posture_value). There I have to match exactly on csc_posture_name_key (not the problem), but also find the row where the "value" from the above query falls between min and max, and return the name.
Here is what CSC_posture_value looks like:
csc_posture_name_key name min max
59e9ec538cb36149 A Low 0 19
59e9ec538cb36149 A Medium 20 39
59e9ec538cb36149 A OK 40 59
59e9ec538cb36149 A High 60 79
59e9ec538cb36149 A Critical 80 100
23a4cb4254bba123 B Low 0 19
23a4cb4254bba123 B Medium 20 39
23a4cb4254bba123 B OK 40 59
23a4cb4254bba123 B High 60 79
23a4cb4254bba123 B Critical 80 100
The next query matches on csc_posture_name_key, but it returns all names for that csc_posture_name_key.
| inputlookup CSC_value | lookup CSC_posture_name _key as csc_posture_name_key output name as posture_name | lookup CSC_tree _key as csc_tree_key output name as tree_name | lookup CSC_posture_value csc_posture_name_key output name
I want it to return only the name that matches the csc_posture_name_key AND where the value is between min and max.
I tried this; it doesn't work, but it should give you an idea of what I need.
| inputlookup CSC_value | lookup CSC_posture_name _key as csc_posture_name_key output name as posture_name | lookup CSC_tree _key as csc_tree_key output name as tree_name | lookup CSC_posture_value csc_posture_name_key, value>=min, value<= max output name
Any suggestions?
Thanks in advance!
↧
oneshot and delete re-index duplicate data help
Hi,
I have a file path source in the main index, and I want to re-index everything collected from it into a new index that I have created.
Problem:
1. I switched the indexing of new data from the main index to the new index, but noticed that the old data doesn't get re-indexed, as Splunk doesn't index duplicates.
2. I tried using btprobe to force re-indexing, which didn't work, so I proceeded to use the | delete command to remove all data from the file path source in the new index. Great, it's all cleaned up. (I should have made sure btprobe had worked :( )
3. No, it had not. I foolishly used the oneshot command in an attempt to re-index all the data from the old index into the new index.
4. Now when I run a search in the new index, I see different results and a different event count than in the old index. When the same query is run on both indexes, the results that come out are different.
5. I attempted to run the | delete command again in the new index, but it returns with zero events deleted.
6. Now the new index has all the data, but the event count and search results are still different from the old index.
I have some other event log sources in the new index, so I am unable to just delete the whole index.
Could I get some help on how to force-delete all data from only that file path source in the new index? And from there, can I re-index exactly the same data as in the old index? I don't mind losing the indexing of new data in the meantime.
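For the first part, I assume the search would be something like the following (the path is a placeholder, and I understand that | delete only hides events from search rather than reclaiming disk space):
index=new_index source="/path/to/file/*" | delete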
Thank you in advance!
↧
IOPS reported by bonnie++ and Splunk Monitoring console
One of our clients has 10K HDDs in RAID 10, and according to Bonnie++, Random Seeks (IOPS) come to approximately 1,500 IOPS. They want to build a dashboard for IOPS and disk usage, and I was thinking of reusing the Monitoring Console searches.
But when I look at the Monitoring Console (DMC), the results show some indexers at 6,000 IOPS, which should not be possible. Is this a problem with the Splunk API, or is RAM (caching) assisting here?
The query used in the DMC is:
| rest splunk_server_group=* splunk_server_group="*" /services/server/status/resource-usage/iostats | eval iops = round(reads_ps + writes_ps)
↧