Channel: Questions in topic: "splunk-enterprise"

Number of events found not matching number of events displayed

Our Splunk Enterprise deployment has started returning inconsistent results, and I've been unable to track down the source of the issue. In one example, Splunk reports that it found 34 results matching the search query, but the event viewer tab below only displays 9 of them. I ran this same query more than 10 times (with no changes to the search terms or time window) on our search head and received this inconsistent answer about 50% of the time. Here is a screenshot:

![alt text][1]

Additionally, some queries return with very large numbers of reported results but display no results at all in the event viewer. The large number of results is expected; the query is somewhat complex and broad. But this query structure has worked consistently for us for over a year, and suddenly it is producing these inexplicable results:

![alt text][2]

It is probably relevant to mention that we used to run two independent search heads connected to a pool of ~10 indexers. We have recently moved to a new pair of search heads connected to a new pool of ~10 indexers, all of which is mirrored at another site, where the original equipment is being used as a replicated backup.

[1]: /storage/temp/217830-34-results-showing-9.png
[2]: /storage/temp/217831-no-results-shown.png
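A hedged diagnostic sketch that sometimes helps narrow this kind of inconsistency down after an indexer migration: run the same base search and split the count by indexer, so you can see whether particular peers (old site vs. new site) drop in and out between runs. The index and search terms below are placeholders for the real query:

    index=your_index your_search_terms earliest=-4h
    | stats count by splunk_server, index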

Search to group by time range and ID

I have logs like this:

    10:40:00 AM: id=1,status=SUCCESS
    10:45:17 AM: id=2,status=SUCCESS
    11:00:23 AM: id=34,status=SUCCESS
    11:15:49 AM: id=1,status=SUCCESS
    11:20:59 AM: id=2,status=SUCCESS

I want to write a query that brings back only those records where I see logs for the same identifier within a short span of time. Look at this one:

    10:40:00 AM: id=1,status=SUCCESS
    10:40:02 AM: id=1,status=SUCCESS
    10:40:15 AM: id=1,status=SUCCESS
    10:45:17 AM: id=2,status=SUCCESS
    10:45:23 AM: id=2,status=SUCCESS
    11:00:23 AM: id=34,status=SUCCESS
    11:15:49 AM: id=1,status=SUCCESS
    11:20:59 AM: id=2,status=SUCCESS

In this sample there are 3 success states for id=1 at 10:40:00, 10:40:02 and 10:40:15, and 2 success states for id=2 at 10:45:17 and 10:45:23 AM. That is what I am interested in: I want to display repeated logs that happened within a short span of time. When I run the query, the output should be just the following:

    10:40:00 AM: id=1,status=SUCCESS
    10:40:02 AM: id=1,status=SUCCESS
    10:40:15 AM: id=1,status=SUCCESS
    10:45:17 AM: id=2,status=SUCCESS
    10:45:23 AM: id=2,status=SUCCESS

For id=1 and id=2 I see many records within seconds (the exact window is something I want to be able to specify in the query as well).
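A minimal sketch of one way to express this, assuming the id field is already extracted and using 60 seconds as the "short span" (index, sourcetype and the window are placeholders to adjust):

    index=your_index sourcetype=your_logs status=SUCCESS
    | transaction id maxspan=60s
    | where eventcount > 1

transaction groups events that share the same id and fall within maxspan of each other, and eventcount keeps only the groups that actually contain repeats.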

Can I use != in blacklist?

I only want to see cmd.exe and blacklist everything else for EventCode 4688.

    blacklist = EventCode="4688" Message="(?:New Process Name:).+(?:cmd.exe)"

will remove cmd.exe, but `Message!=` doesn't do the opposite.
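There is no != / negation operator in the whitelist/blacklist syntax, but the values are regular expressions, so one commonly suggested workaround is a negative lookahead that blacklists 4688 events whose process name is anything other than cmd.exe. A hedged inputs.conf sketch (untested; the stanza name and Message layout are assumptions to adapt):

    [WinEventLog://Security]
    # drop 4688 events where the text after "New Process Name:" does not contain cmd.exe
    blacklist1 = EventCode="4688" Message="New Process Name:\s+(?!.*cmd\.exe)"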

Use count from first search in the Where Clause of the subsearch

I want to use the count from the first search, "FilesImported", as criteria in the where clause of the subsearch. FilesImported is 0 and "File Missed" needs to be 1, but "File Missed" is currently returning 0, which shows me that the subsearch where clause is not working as I expected. So, how does one use the count of the first search as criteria in the where clause of the subsearch?

    source=*D:\\gfd\\import* source=*Daily\\Debug* Moved earliest=-36h@h
    | eval time=strftime(round(strptime(file_Time, "%I:%M:%S %P")), "%H:%M:%S")
    | eval dow=strftime(strptime(file_Date, "%m/%d/%Y"), "%A")
    | rex field=source "importhelpers\\\+(?<ClientID>[^\\\]+)"
    | where ClientID="NAB"
    | where (like(source,"%"."NAB"."%") AND (dow!="Sunday" AND dow!="Monday") AND (time>"07:57:00" AND time<"08:27:00") AND FileImported="Record")
    | stats count as FilesImported
    | appendcols
        [ search source=*D:\\gfd\\import* source=*Daily\\Debug* "Could not find a file in the" OR Moved earliest=-36h@h
        | eval time=strftime(round(strptime(file_Time, "%I:%M:%S %P")), "%H:%M:%S")
        | eval dow=strftime(strptime(file_Date, "%m/%d/%Y"), "%A")
        | rex field=source "importhelpers\\\+(?<ClientID>[^\\\]+)"
        | where ClientID="NAB"
        | where ((like(source,"%"."NAB"."%") AND FilesImported!=1) AND (dow!="Sunday" AND dow!="Monday") AND (time>"07:27:00" AND time<"08:27:00") AND (file_Missing="Position"))
        | stats count as "File Missed" ]
    | table "File Missed"
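One thing to note is that an appendcols subsearch runs independently and never sees fields such as FilesImported computed in the outer search, so FilesImported!=1 inside the brackets can never match. A hedged sketch of an alternative that computes both counts in a single pass and derives "File Missed" with eval afterwards (the filters are condensed from the question and may need adjusting):

    source=*D:\\gfd\\import* source=*Daily\\Debug* ("Could not find a file in the" OR Moved) earliest=-36h@h
    | eval time=strftime(round(strptime(file_Time, "%I:%M:%S %P")), "%H:%M:%S")
    | eval dow=strftime(strptime(file_Date, "%m/%d/%Y"), "%A")
    | rex field=source "importhelpers\\\+(?<ClientID>[^\\\]+)"
    | where ClientID="NAB" AND like(source, "%NAB%") AND dow!="Sunday" AND dow!="Monday"
    | stats count(eval(FileImported="Record" AND time>"07:57:00" AND time<"08:27:00")) as FilesImported
            count(eval(file_Missing="Position" AND time>"07:27:00" AND time<"08:27:00")) as MissedCandidates
    | eval "File Missed"=if(FilesImported=0 AND MissedCandidates>0, 1, 0)
    | table FilesImported "File Missed"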

How to migrate a clustered indexer peer to new hardware in a single-site cluster

Wondering if someone has gone through a hardware migration of a clustered indexer environment. Long story short, we want to move to a new platform and abandon the current hardware due to several issues we are having with it. All data is local to the indexers. We are running Splunk Enterprise 6.3.3.

This is what we would like to do:

- Start an rsync from peer A to the new host B for the warm and cold buckets (peer A online)
- Either:
  - I) Put the CM in maintenance mode, to avoid extra replication work since the new node will have the data; stop peer A (and remove it later, after the new peer has joined with peer A's data; then disable maintenance mode)
  - OR II) Remove peer A from the cluster: `splunk offline --enforce-counts`
- Final rsync job to sync the latest changes on the hot and cold volumes from A to host B
- Have a copy of the clustered indexes.conf loaded into etc/system/local/ on host B before starting it
- Install the same version of Splunk and start Splunk on host B
- Add host B to the cluster and restart host B
- Push a new outputs.conf to all UFs and HFs with the addresses of the new peer B, removing peer A
- If all goes well, permanently remove the old peer A: `splunk remove cluster-peers -peers`
- Repeat until all peers are removed from the cluster and moved to the new hosts.

Do you guys think this can work? Any suggestions / recommendations on this draft plan, especially regarding options I and II above? Thanks a lot!
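For reference, a hedged sketch of the CLI pieces that plan relies on (maintenance mode and peer removal run on the cluster master, offline runs on the peer; the peer GUID is a placeholder):

    # on the cluster master: pause bucket fix-up activity while data is copied
    splunk enable maintenance-mode

    # on peer A: option I is a plain stop/offline, option II enforces replication counts first
    splunk offline
    splunk offline --enforce-counts

    # on the cluster master, once peer B has joined and the cluster is healthy again
    splunk disable maintenance-mode
    splunk remove cluster-peers -peers <peer_A_guid>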

Splunk DB Connect: distribute events to different indexes

We have data in a database from which we get records with DB Connect. They contain, among other fields, a selector field. The events must be routed into different indexes based on that selector field:

    Event=1 Selector=a => index_a
    Event=2 Selector=b => index_b
    Event=3 Selector=a => index_a

There might be hundreds of different values in the selector field, and thus also hundreds of indexes. Using a separate saved search per index, or a search with a case statement, is not really an option. The events belong to different user groups and have to be distributed into different indexes to enforce access control. Using 'Restrict Search Terms' is not an option.

I was thinking about something like

    ...| collect index=index_<>

but that obviously does not work ;-) Does anyone have a hint?
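One index-time option, for whatever it is worth: if the DB Connect events pass through a parsing queue (for example on a heavy forwarder), a regex transform can copy the selector value into the index name. A hedged sketch, assuming the raw event text contains "Selector=<value>", that the sourcetype stanza matches what DB Connect writes, and that every target index already exists (events routed to a missing index are dropped):

    # transforms.conf
    [route_by_selector]
    REGEX = Selector=(\w+)
    DEST_KEY = _MetaData:Index
    FORMAT = index_$1

    # props.conf
    [your_dbconnect_sourcetype]
    TRANSFORMS-route_by_selector = route_by_selector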

Is this search accurate to measure how much data a search used the past week?

I have an event that is using X amount of space. The search is:

    index=network default send string

I'd like to find out how many gigabytes of license this event is using over the last week. Is there any way to do that with a search?
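License usage is based on the raw size of indexed events, so one rough way to measure it is to sum len(_raw) over the matching events; a minimal sketch using the search terms from the question:

    index=network "default send string" earliest=-7d@d
    | eval bytes=len(_raw)
    | stats sum(bytes) as bytes
    | eval GB=round(bytes/1024/1024/1024, 3)

For per-index or per-sourcetype totals, `index=_internal source=*license_usage.log* type=Usage` is more authoritative, but it is not broken down by individual search string.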

How to extract fields from JSON output and display them as a table

{ "ERROR_CODE" : "XXX-XXX-00000", "ERROR_DESC" : "Success." }, "accountBalances" : { "accountNumber13" : "22222222222", "siteId" : "200001005", "siteCode" : "HRD", "customerName" : "LiXX XXXXXX", "serviceAddress" : "XXXXXXXXXX, VA XXXXX-4849 ", "streetNumber" : "XXX", "streetName" : "XXXXXX", "city" : "CHESAPEAKE", "state" : "VA", "zip5" : "23320", "homeTelephoneNumber" : "XXX 000-0000", "acceptChecks" : "True", "acceptCreditCards" : "True", "pendingWODepositAmount" : "0.0", "statementInfo" : [ { "statementCode" : 1, "currentBalance" : "0.0", "serviceCategories" : [ "INTERNET", "CABLE", "TELEPHONE" ], "amountBilled" : "577.71", "minimumDue" : "270.6", "billDay" : "8", "statementDueDate" : "20171029", "totalARBalance" : "577.71", "ar1To30" : "307.11", "ar31To60" : "198.89", "ar61To90" : "71.71", "ar91To120" : "0.0", "ar121To150" : "0.0", "arOver150Days" : "0.0", "writeOffAmount" : "0.0", "totalUnappliedPayment" : "0.0", "totalUnappliedAdjustment" : "0.0", "depositDue" : "0.0", "depositPaid" : "0.0", "depositInterest" : "0.0", "totalMonthlyRate" : "174.23", "lastStatementDate" : "20171009" } ] }

How to make a search using the Splunk REST API

I have the following search query that I run in the Splunk search UI, and it works fine:

    index=cpaws source=PFT buildNumber=14 type=REQUEST
    | stats p98(wholeduration) as currentRunP98Duration
    | appendcols [search index=cpaws source=PFT buildNumber=13 type=REQUEST | stats p98(wholeduration) as previousRunP98Duration1]
    | appendcols [search index=cpaws source=PFT buildNumber=12 type=REQUEST | stats p98(wholeduration) as previousRunP98Duration2]
    | eval avgP98=(previousRunP98Duration1+previousRunP98Duration2)/2
    | eval success=if(currentRunP98Duration>=avgP98*0.1,"Good","BAD")
    | table success

To print out the "success" parameter, I was using the table command. Now I want to call the same query using the Splunk REST API and get the success parameter value in return. How can I do that? I went through the Splunk REST API documentation but couldn't find anything helpful. Please help me.
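A hedged sketch of one way to do it with the REST API's search jobs endpoint: posting the query with exec_mode=oneshot runs it synchronously and returns the result rows (including the success column) in the response body. Host, port and credentials are placeholders, the query is abbreviated here, and the query string must start with the word "search":

    curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs \
         -d exec_mode=oneshot \
         -d output_mode=json \
         --data-urlencode search="search index=cpaws source=PFT buildNumber=14 type=REQUEST | stats p98(wholeduration) as currentRunP98Duration | table currentRunP98Duration"

Without exec_mode=oneshot, the POST returns a search id (sid) instead, and the results are then fetched from /services/search/jobs/<sid>/results once the job completes.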

Can I use a lookup table of IP ranges + location names to add a location field to network traffic based on IP range?

I have a lookup table of IP ranges with location names. I'm trying to search network traffic and add a "location" field to the results based on which IP range the src_ip falls under. I do not have access to any of the configuration files and would like to know if I can do this within the search.

Example of my lookup table (range_location.csv):

    range            location
    50.106.56.0 /21  site_1
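This can be done if a lookup definition with CIDR matching exists for the file, and that definition can be created from the web UI (Settings > Lookups > Lookup definitions > Advanced options, match type CIDR(range)) without touching configuration files; the range values also need to be plain CIDR with no space, e.g. 50.106.56.0/21. A minimal search sketch, assuming such a definition named range_location:

    index=network_traffic sourcetype=firewall
    | lookup range_location range AS src_ip OUTPUTNEW location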

Bucket rolling issue

Our indexers have two volumes configured:

    [volume:cold_vol]
    path = /opt/splunk/var/lib/splunk_cold/colddb
    maxVolumeDataSizeMB = 70000000

    [volume:warm_vol]
    path = /opt/splunk/var/lib/splunk/warm_vol
    maxVolumeDataSizeMB = 358000

Here is the output of df:

    [root@security-splunk-indexer-01001 local]# df
    Filesystem      1K-blocks        Used   Available Use% Mounted on
    devtmpfs        263961732           0   263961732   0% /dev
    tmpfs           263978416          68   263978348   1% /dev/shm
    tmpfs           263978416     4211840   259766576   2% /run
    tmpfs           263978416           0   263978416   0% /sys/fs/cgroup
    /dev/sda4        10435584     6149488     4286096  59% /
    /dev/sda2          519852      170676      349176  33% /boot
    /dev/sda5       376918204   372549972     4368232  99% /opt/splunk/var/lib/splunk
    /dev/sdb1     76794778604   790009404 76004769200   2% /opt/splunk/var/lib/splunk_cold
    /dev/sda1          522984        9744      513240   2% /boot/efi
    tmpfs            52795684           0    52795684   0% /run/user/5905

For some reason, the warm bucket volume is at 99% utilization, but the buckets aren't rolling to the cold volume. The value of maxVolumeDataSizeMB is smaller than the total size of the volume. Any idea why these buckets aren't rolling?
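One thing worth checking is which indexes are actually filling the warm partition and whether their homePath really references volume:warm_vol, since maxVolumeDataSizeMB only manages buckets in indexes defined against that volume (indexes using absolute homePath values are not counted or rolled by it). A hedged diagnostic sketch:

    | dbinspect index=*
    | search state=hot OR state=warm
    | stats sum(sizeOnDiskMB) as MB by index, state
    | sort - MB

Comparing that with `splunk btool indexes list --debug` (to see each index's homePath) usually shows whether something outside the volume is eating the space.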

Send data to heavy forwarder to filter events AND change sourcetype - help please

Hello,

As the question states, I'm looking to send events from a universal forwarder to a heavy forwarder to have them filtered. Once filtered, I'd like to change the sourcetype. I have not implemented this yet; this is for me to propose to upper management to agree on. I want to make sure the props/transforms piece is correct. I think the filtering is good, but I just want to make sure the syntax is all good. I've listed my config and config details:

ON UNIVERSAL FORWARDER, inputs.conf:

    [monitor://c:\program files\app1\web.log]
    _TCP_ROUTING = filter_heavy_forwarders
    index = cmis_index
    sourcetype = app1_web_logs

ON UNIVERSAL FORWARDER, outputs.conf:

    [tcpout]
    defaultGroup=infosec_indexers

    [tcpout:infosec_indexers]
    autoLB = true
    server = infosec_server1:9997,infosec_server2:9997,infosec_server3:9997…,infosec_server16:9997

    [tcpout:cmis_indexers]
    autoLB = true
    server = cmis_server1:9997

    [tcpout:filter_heavy_forwarders]
    autoLB = true
    Server = filter_hvyfwd1:9998,filter_hvyfwd2:9998

ON HEAVY FORWARDER, props.conf:

    [app1_web_logs]
    TRANSFORMS-routing = app1_web_filter
    TRANSFORMS-changest = app1_cmis_web

ON HEAVY FORWARDER, transforms.conf:

    [app1_web_filter]
    REGEX = (Events|To|Filter)
    DEST_KEY = _TCP_ROUTING
    FORMAT = cmis_indexers

    [app1_cmis_web_st]
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::app1_cmis_web

ON HEAVY FORWARDER, outputs.conf:

    [tcpout]
    defaultGroup=none

    [tcpout:cmis_indexers]
    autoLB = true
    server = cmis_server1:9997

What range of UDP/TCP ports can be used for various log sources?

I have 3 different log sources sending logs to Splunk from a number of hosts on UDP 514. Breakdown: WLC (5-6 hosts), ESX (8), and EqualLogic (6). However, so far I am only getting data from the WLC hosts. I am thinking of assigning different UDP ports to the ESX and EqualLogic hosts to ease categorization in Splunk. What would be ideal ports for the above log sources? Please advise.
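Any free, non-conflicting port above 1024 works (ports above 1024 avoid needing root); what matters more is one port per technology so each input can carry its own sourcetype and index. A hedged inputs.conf sketch, with port numbers and sourcetype names as arbitrary examples:

    [udp://5141]
    sourcetype = vmware:esxlog
    index = network

    [udp://5142]
    sourcetype = dell:equallogic
    index = network

Many deployments also put a syslog server (rsyslog/syslog-ng) in front and have Splunk monitor the written files instead, which avoids losing UDP data during Splunk restarts.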

Rows with the same column value should be colored the same color

Let's say I have a table with fields A, B, C, D. I would like to color rows based on the values of column D: rows with the same value of column D should be in the same color. Is there a way this can be achieved in Splunk? Any help related to this will be appreciated!

How to retrieve search name by search id

My Splunk server has high CPU usage, and I see a bunch of splunkd search processes like the one below:

    search --id=admin__admin__search__search9_xxxxx.yyyyy --maxbuckets=0 --ttl=600 --maxout=500000 --maxtime=8640000 --lookups=1 --reduce_freq=10 --user=admin --pro --roles=admin:can_delete:power:user

These searches seem to run periodically. How can I look up the scheduled/ad-hoc search names from these search IDs and, furthermore, retrieve the search query content?
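The _audit index records the mapping from a search id to the user, the savedsearch name (if any), and the full search string; a minimal sketch, pasting the id from the process list as a raw term:

    index=_audit sourcetype=audittrail action=search "admin__admin__search__search9_xxxxx.yyyyy"
    | table _time user search_id savedsearch_name search

Ids of the form admin__admin__search__searchN are typically ad-hoc searches run from the UI by that user; scheduled searches usually carry "scheduler" and the saved-search name in the id itself.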

Rex has exceeded configured match_limit, consider raising the value in limits.conf

I am trying to extract about 20 fields from a log file; each line has about 800 characters. I can only extract the first 14 fields, then I get an error saying my rex has exceeded the configured match_limit and to consider raising the value in limits.conf.

First, based on the documentation the default value for match_limit is 100000, and I am nowhere close to that limit (I think). Second, I did try to change the configuration file, but it doesn't seem to work. I created a limits.conf file in ....\etc\system\local\limits.conf: doesn't work. I changed ...\etc\system\default\limits.conf: doesn't work. Any suggestions on what I am doing wrong? Is it possible that my rex (below) is not right?

    ^(.*?)"(?.+?(?="))(.*?){(?.+?(?=}))(.+?)(?.+?(?=<))(.+?)(?.+?(?=<))(.+?)(?.+?(?=<))(.+?)(?.+?(?=<))(.+?)(?.+?(?=<))(.+?)(?.+?(?=<))(.+?)(?.+?(?=<))(.+?)(?.+?(?=<))(.+?)"(?
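For what it is worth, match_limit counts internal regex-engine steps (backtracking), not characters, so a pattern full of (.+?) groups and lookaheads can easily burn through 100000 steps on an 800-character line; tightening the regex is usually the better fix. If raising the limit is still needed, the setting lives in the [rex] stanza of limits.conf on the search head; a hedged sketch:

    # $SPLUNK_HOME/etc/system/local/limits.conf (never edit the default/ copy); restart Splunk afterwards
    [rex]
    match_limit = 200000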

Setting the query start time and end time

I want to monitor my dashboard from 7 AM today to 5 AM tomorrow, and I don't want to set the time manually. FYI, my dashboard contains a list of jobs running from 7 AM to 5 AM the next day. I need to monitor the progress continuously, so I set up auto-refresh every 5 minutes. Now I want to set the time range so that on every refresh it takes the start time as 7 AM today and the end time as now (or 5 AM the next day). Please take a look and let me know the possibilities. Thanks in advance!!!
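One way to get a rolling window without touching the time picker is relative time modifiers on the panel searches (or in the dashboard's earliest/latest elements); a minimal sketch, assuming the dashboard is being watched after 7 AM on the start day (before 5 AM the next morning both modifiers would need a -1d in front):

    index=your_jobs sourcetype=job_status earliest=@d+7h latest=+1d@d+5h

@d+7h snaps to midnight today and adds 7 hours; +1d@d+5h moves one day forward, snaps to that midnight, and adds 5 hours.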

Timechart and overlay two columns?

I have a field outcomeIndicator in my data that holds the values 0, 1, 5, 8. 0 and 1 mean the event succeeded, and 5 and 8 mean it failed. Now, I want to use timechart count to plot these values over a month with a span of 1 day, i.e., the timechart must show the total events per day resulting in success and in failure, for the previous 30 days. This timechart must strictly be graphical and must show the trend for both failures and successes over the month.

![alt text][1]

[1]: /storage/temp/217833-success-and-failure.png

Here the green trend line represents successes per day over a month and the red trend line represents failures per day over a month. The image is just for illustration; I want to know the possibilities of achieving this. Thank you. Cheers.

-Snipedown21
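A minimal sketch of one way to do this, assuming outcomeIndicator is already an extracted field (the index is a placeholder); rendering the panel as a line chart gives the two trend lines:

    index=your_index earliest=-30d@d
    | eval outcome=if(outcomeIndicator=0 OR outcomeIndicator=1, "success", "failure")
    | timechart span=1d count by outcome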

Host name not showing correctly

I have several VM servers built from an image. The host names have been changed, but somewhere the old host name is still populating the messages file. When I monitor the messages file on all the hosts, they all have the same host name for that source: `OCT 13 08:02:29 OLDHOST fprintd ** Message: No device in use, exit`. Splunk sees this log as process fprintd coming from source "/var/log/messages" from host "OLDHOST". I have set the new host name in server.conf and inputs.conf, but Splunk is still pulling the host from the log file. Any help would be great.

Count combination of Multivalue Field

Hi, I wonder whether someone can help me please. I'm using the query below to extract the different actions performed for each submission by detail.id:

    `submissions_wmf(Submission)` detail.isManualChange=true NOT (
        detail.changeType=ChangeBank OR
        detail.changeType=ChangeBIK OR
        detail.changeType=ChangeOtherIncome OR
        detail.changeType=ChangeSocialSecurityBenefit OR
        detail.changeType=HaveBenefitsEnded OR
        detail.changeType=HavePartnerBenefitsEnded)
    | stats count list(detail.changeType) as ChangeType by detail.id
    | table ChangeType count

The query works fine and extracts data as per the attachment[1]. But I'd like to extend this by adding another total which counts the number of times the combination of values in the ChangeType column exists. So, using the attachment as an example, where Change A and Change B exist together this would be a count of **2**.

I've looked at streamstats and eventstats, and also tried changing the values to a string and counting that, but I can't pull both totals together in the same table. I just wondered whether someone could look at this please and offer some guidance on how I may go about this.

Many thanks and kind regards

Chris

[1]: /storage/temp/216806-changetype.png
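One way to get both totals onto the same table is to turn the multivalue ChangeType list into a canonical combination string per detail.id, and then use eventstats (rather than stats) to count ids per combination so the per-id rows are kept. A hedged sketch that continues the existing query (the NOT filters from the question are omitted here for brevity):

    `submissions_wmf(Submission)` detail.isManualChange=true
    | stats count list(detail.changeType) as ChangeType by detail.id
    | eval combo=mvjoin(mvsort(ChangeType), " + ")
    | eventstats count as ComboCount by combo
    | table detail.id ChangeType count ComboCount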