Channel: Questions in topic: "splunk-enterprise"

Best way to do this in Splunk? Tags? Lookup or perhaps something else?

Hello, I have a complex search that I need to do. An example is something like: CONDITION=(ip.dst = lots of different IPs && port = some interesting ports && ip.src != some more IPs). What I would like to know is when this condition is true. If I run the search over many events spanning a long period, it will take a long time. Is there any way I can tag my events as they are being indexed so that I can simply search on CONDITION=true? That way the search would only need to look up the meta field "CONDITION=true" rather than evaluate the whole condition against each event.
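If pre-classifying the condition is the goal, one common pattern is an eventtype plus a tag rather than anything written at index time (as far as I know, eventtypes and tags are evaluated at search time, not stored in the index). A minimal sketch with hypothetical field names (dest_ip, dest_port, src_ip) and placeholder values standing in for the real lists:

    # eventtypes.conf -- a reusable definition of the condition
    [interesting_traffic]
    search = (dest_ip=10.0.0.1 OR dest_ip=10.0.0.2) (dest_port=443 OR dest_port=8443) NOT src_ip=192.168.1.5

    # tags.conf -- tag every event that matches the eventtype
    [eventtype=interesting_traffic]
    condition_true = enabled

Events can then be retrieved with eventtype=interesting_traffic or tag=condition_true, and the definition can be changed later without reindexing. If the condition truly must be resolved at index time, the alternative would be an indexed field created via props.conf/transforms.conf on the parsing tier, at the cost of index size and flexibility.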

Field value containing extra double quotes

Hi All, I need help with the following scenario: index=test subject="hello world" returns results, but index=test subject="hello "world" test" does not, because of the additional double quotes that sometimes appear inside the field value. Any suggestion on how to match these values of the subject field?
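A couple of hedged options worth trying, assuming the stored value really is hello "world" test: escape the inner quotes with backslashes, fall back to wildcards so the quotes never need to be quoted, or filter with like():

    index=test subject="hello \"world\" test"

    index=test subject="hello *world* test"

    index=test | where like(subject, "hello %world% test")

The where/like form is the most literal match but scans more events, since the filtering happens after retrieval.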

Streamstats Output Truncation

Hi All, we are using the streamstats command in our query, but we are not getting all the results; the output is limited to 10000 rows and my results are truncated. I have updated the following stanza in limits.conf:

    [stats|sistats]
    maxresultrows = 10000000
    max_stream_window = 10000000

I also tried | sort 0 <field> as suggested in this link: https://answers.splunk.com/answers/36815/the-sort-command-is-truncating-output-to-10000-rows.html Please suggest how I can fetch all results, or which configuration I have to change to increase the default limit. Thanks!
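If it helps to confirm which limit is actually in effect (and which file it comes from), btool on the search head can print the merged limits.conf; this is only a diagnostic sketch, and limits.conf changes typically need a restart to take effect:

    $SPLUNK_HOME/bin/splunk btool limits list stats --debug
    $SPLUNK_HOME/bin/splunk btool limits list searchresults --debug

The [searchresults] stanza has its own maxresultrows setting that can also cap what a search returns, so it is worth checking alongside the stats settings.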

F5 Version 13.0

Dear All, I need to know how to configure F5 ASM version 13 to send logs to Splunk in the expected format. The link below describes the format for version 12 and does not cover version 13. https://docs.splunk.com/Documentation/AddOns/released/F5BIGIP/Setup Regards,

calculating SLA with unstructured date format

Hi guys, can you please help me convert the value **2019-01-28-20-32-49** to the format **2019-01-28 20:00:00**, and then calculate the time difference between two values in the **2019-01-28 20:00:00** format? Thank you in advance. @jkat54 @woodcock @vnravikumar
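A minimal sketch, assuming a hypothetical field raw_ts holding 2019-01-28-20-32-49 and a second hypothetical field other_ts already in %Y-%m-%d %H:%M:%S format; relative_time(..., "@h") snaps the parsed value down to the hour to get the 20:00:00 form:

    | eval ts_epoch = relative_time(strptime(raw_ts, "%Y-%m-%d-%H-%M-%S"), "@h")
    | eval ts_pretty = strftime(ts_epoch, "%Y-%m-%d %H:%M:%S")
    | eval other_epoch = strptime(other_ts, "%Y-%m-%d %H:%M:%S")
    | eval diff_seconds = other_epoch - ts_epoch

The difference comes out in seconds; divide by 3600 or 86400 for hours or days as needed.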

Why Are My Search Results Truncated?

Hello, I'm running into behavior I don't quite understand and was hoping someone might be able to shed some light on it.

1.) I'm running a search as an admin on a default install of Splunk 7.2.0 (no changes to limits.conf). I perform that search on an index that would return over 40k events if it were to return every matching result of the query.

2.) If I run that search as-is in the Splunk search bar, it shows the right number of events (as does the Job Manager). But if I try to navigate through all those results, on page 25 (listing 50 events per page) I get the following warning message in the pager: "Currently displaying the most recent 1250 events in the selected range. Select a narrower range or zoom in to see more events." I have no ability to navigate beyond page 25 at that point.

3.) If I run that search with "| head 12626", all 12626 events are returned and can be navigated (allowing me to go well beyond page 25).

4.) If I run that search with "| head 12627", I get the "most recent 1250 events" warning message.

5.) If I compare the search job log files for the "| head 12626" and "| head 12627" searches, they are essentially identical. There are no indications that anything was truncated in either case and no mention of any limits being exceeded. The "| head 12626" search actually ends up showing more memory used in the Job Manager.

6.) If I run that search using a SearchManager and put the results into a TableView on a custom Splunk dashboard, the results are also truncated, but differently. For instance, with "| head 12627" I can navigate to page 229 in my TableView (which is still short of the 12627 events but considerably more than 1250).

7.) If I check the SearchManager when results are truncated for the "| head 12767" search, I see: "eventCount: 12627", "eventIsTruncated: true", and "eventAvailableCount: 1227" (considerably less than the 11444 events that appear in my table).

I'm curious whether anyone knows why I am running into this behavior and whether there is anything I can do to get around it. I'm specifically hoping for a solution that allows me to display all the results of the search in the table on my custom dashboard. Thank you very much for any help you can provide.

Create periodic Dashboards/Reports for selected users and frequency from a CSV and forward them via Email

Hi Experts, I may be getting overambitious with Splunk, but I still have to ask: is it possible to schedule periodic reports/dashboards based on the information in a CSV table? The CSV table contains all the information for the report, including its frequency (weekly, daily, monthly) and the other details the report will be based on. Can I take the fields from the CSV and generate periodic reports/dashboards from them?
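A hedged sketch of one pattern for this: keep one scheduled search per frequency, read the CSV with inputlookup, and let map run a search per matching row. The lookup name report_schedule.csv and its columns (frequency, search_index) are hypothetical:

    | inputlookup report_schedule.csv
    | search frequency="weekly"
    | map maxsearches=20 search="search index=$search_index$ earliest=-7d@d | stats count by sourcetype"

Delivering the result to each recipient would still rely on the sendemail command or a scheduled alert action per frequency; the CSV itself cannot create or re-schedule saved searches.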

Exclude weekends when calculating expected end time

I am building a support-ticket report with 4 levels of severity. Level 1 expects the ticket to be resolved in 4 hours, Level 2 in 8 hours, Level 3 in 72 hours (3 days), and Level 4 in 120 hours (5 days). So when a ticket is raised on a **Thursday** with **level 4 severity**, it should be expected to be solved by **next Wednesday**. However, my code currently includes **Saturday and Sunday** in the calculation, resulting in it being expected to be resolved by **next Monday** instead. How do I exclude Saturday and Sunday when calculating the expected time?

    index="test" sourcetype="incident_all_v3"
    | eval check = strptime(strftime(_time , "%d/%m/%Y") , "%d/%m/%Y")
    | eventstats max(check) as checktime
    | where checktime = check
    | dedup 1 ticket_id sortby -_time
    | join ticket_id type=left
        [ search index="test" sourcetype="incident_assigned"
          | eval check = strptime(strftime(_time , "%d/%m/%Y") , "%d/%m/%Y")
          | eventstats max(check) as checktime
          | where checktime = check
          | eval move_datetime = strptime(move_datetime, "%Y-%m-%d %H:%M:%S")
          | dedup 1 ticket_id sortby -move_datetime
          | eval move_datetime = strftime(move_datetime, "%Y-%m-%d %H:%M:%S")
          | fields ticket_id move_datetime]
    | eval realtime = if(isnotnull(move_datetime), move_datetime, create_time)
    | eval create_time_epoch = strptime(realtime, "%Y-%m-%d %H:%M:%S")
    | lookup app_name.csv queue_name output vendor, app_name
    | search vendor = "Company" AND ticket_type = "Incident" AND app_name = "*"
    | eval diff_seconds = now() - create_time_epoch
    | eval diff_days = diff_seconds / 86400
    | eval status = if (ticket_state="Closed" OR ticket_state="Completed" OR ticket_state="For Verification" OR ticket_state="Verified", "resolved" , "unresolved")
    | where status = "unresolved" AND ticket_type = "Incident"
    | eval SEVERITY = case ( SLA == "SLA Level 1", "1", SLA == "SLA Level 2", "2", SLA == "SLA Level 3", "3", SLA == "SLA Level 4", "4")
    | eval SEVERITY = "Sev ".SEVERITY
    | lookup sev_target.csv SEVERITY output TARGET
    | eval SLA_DEADLINE = case(SEVERITY = "Sev 4", create_time_epoch + (TARGET*3600), SEVERITY = "Sev 3", create_time_epoch + (TARGET*3600), SEVERITY = "Sev 2", create_time_epoch + (TARGET*3600), SEVERITY = "Sev 1", create_time_epoch + (TARGET*3600))
    | eval SLA_DEADLINE = strftime(SLA_DEADLINE,"%Y-%m-%d %H:%M:%S")
    | table *

![SLA DEADLINE][1] So for this picture, on 2019-01-18 (Friday), the severity is level 4 and the deadline is 2019-01-23, which is not what I wanted because it includes Saturday and Sunday. It should be 2019-01-25 instead. [1]: /storage/temp/263763-sla.png
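A hedged sketch of one way to push the deadline past weekends, at whole-day granularity: generate candidate days after creation, drop Saturdays and Sundays, and keep the Nth remaining business day. It reuses the create_time_epoch, TARGET, and ticket_id fields from the search above and assumes TARGET is a whole number of 24-hour days (as it is for 72 h and 120 h):

    | eval business_days = ceil(TARGET / 24)
    | eval offsets = mvrange(1, business_days * 2 + 3)
    | mvexpand offsets
    | eval candidate = create_time_epoch + offsets * 86400
    | where NOT match(strftime(candidate, "%a"), "Sat|Sun")
    | streamstats count as business_day_rank by ticket_id
    | where business_day_rank = business_days
    | eval SLA_DEADLINE = strftime(candidate, "%Y-%m-%d %H:%M:%S")

The hour-granular SLAs (4 h and 8 h) and tickets created on a weekend would need extra handling; this is only meant to show the mvrange/mvexpand pattern for skipping weekend days.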

PREAMBLE_REGEX not working in the UI

Hello Im having a problem and my mind is already heated looking for the answer, here is a screenshot of what im trying to do PREAMBLE_REGEX is not working here are the examples of my logs. I even tried to test my regex here https://regex101.com/ and it is working perfectly fine. im trying to remove the 2nd line of the logs and it is said in the documentation that PREAMBLE_REGEX is the key to ignore preamble lines. here is the link for my reference in preamble_regex https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf?utm_source=answers&utm_medium=in-answer&utm_term=props.conf&utm_campaign=refdoc .Please help me where i am wrong. Im loosing my mind here. "cdrRecordType","globalCallID_callManagerId","globalCallID_callId","origLegCallIdentifier","dateTimeOrigination","origNodeId","origSpan","origIpAddr","callingPartyNumber","callingPartyUnicodeLoginUserID","origCause_location","origCause_value","origPrecedenceLevel","origMediaTransportAddress_IP","origMediaTransportAddress_Port","origMediaCap_payloadCapability","origMediaCap_maxFramesPerPacket","origMediaCap_g723BitRate","origVideoCap_Codec","origVideoCap_Bandwidth","origVideoCap_Resolution","origVideoTransportAddress_IP","origVideoTransportAddress_Port","origRSVPAudioStat","origRSVPVideoStat","destLegIdentifier","destNodeId","destSpan","destIpAddr","originalCalledPartyNumber","finalCalledPartyNumber","finalCalledPartyUnicodeLoginUserID","destCause_location","destCause_value","destPrecedenceLevel","destMediaTransportAddress_IP","destMediaTransportAddress_Port","destMediaCap_payloadCapability","destMediaCap_maxFramesPerPacket","destMediaCap_g723BitRate","destVideoCap_Codec","destVideoCap_Bandwidth","destVideoCap_Resolution","destVideoTransportAddress_IP","destVideoTransportAddress_Port","destRSVPAudioStat","destRSVPVideoStat","dateTimeConnect","dateTimeDisconnect","lastRedirectDn","pkid","originalCalledPartyNumberPartition","callingPartyNumberPartition","finalCalledPartyNumberPartition","lastRedirectDnPartition","duration","origDeviceName","destDeviceName","origCallTerminationOnBehalfOf","destCallTerminationOnBehalfOf","origCalledPartyRedirectOnBehalfOf","lastRedirectRedirectOnBehalfOf","origCalledPartyRedirectReason","lastRedirectRedirectReason","destConversationId","globalCallId_ClusterID","joinOnBehalfOf","comment","authCodeDescription","authorizationLevel","clientMatterCode","origDTMFMethod","destDTMFMethod","callSecuredStatus","origConversationId","origMediaCap_Bandwidth","destMediaCap_Bandwidth","authorizationCodeValue","outpulsedCallingPartyNumber","outpulsedCalledPartyNumber","origIpv4v6Addr","destIpv4v6Addr","origVideoCap_Codec_Channel2","origVideoCap_Bandwidth_Channel2","origVideoCap_Resolution_Channel2","origVideoTransportAddress_IP_Channel2","origVideoTransportAddress_Port_Channel2","origVideoChannel_Role_Channel2","destVideoCap_Codec_Channel2","destVideoCap_Bandwidth_Channel2","destVideoCap_Resolution_Channel2","destVideoTransportAddress_IP_Channel2","destVideoTransportAddress_Port_Channel2","destVideoChannel_Role_Channel2","IncomingProtocolID","IncomingProtocolCallRef","OutgoingProtocolID","OutgoingProtocolCallRef","currentRoutingReason","origRoutingReason","lastRedirectingRoutingReason","huntPilotPartition","huntPilotDN","calledPartyPatternUsage","IncomingICID","IncomingOrigIOI","IncomingTermIOI","OutgoingICID","OutgoingOrigIOI","OutgoingTermIOI","outpulsedOriginalCalledPartyNumber","outpulsedLastRedirectingNumber","wasCallQueued","totalWaitTimeInQueue","callingPartyNumber_uri","originalCalledPartyNumber_uri","fina
lCalledPartyNumber_uri","lastRedirectDn_uri","mobileCallingPartyNumber","finalMobileCalledPartyNumber","origMobileDeviceName","destMobileDeviceName","origMobileCallDuration","destMobileCallDuration","mobileCallType","originalCalledPartyPattern","finalCalledPartyPattern","lastRedirectingPartyPattern","huntPilotPattern" INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(128),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(64),VARCHAR(64),INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(50),VARCHAR(128),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(64),VARCHAR(64),INTEGER,INTEGER,VARCHAR(50),UNIQUEIDENTIFIER,VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),INTEGER,VARCHAR(129),VARCHAR(129),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(50),INTEGER,VARCHAR(2048),VARCHAR(50),INTEGER,VARCHAR(32),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(32),VARCHAR(50),VARCHAR(50),VARCHAR(64),VARCHAR(64),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(32),INTEGER,VARCHAR(32),INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(50),INTEGER,VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),INTEGER,INTEGER,VARCHAR(255),VARCHAR(255),VARCHAR(255),VARCHAR(255),VARCHAR(50),VARCHAR(50),VARCHAR(129),VARCHAR(129),INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50) 1,1,218478,18189622,1425443400,1,0,-1391850998,"257","",0,16,4,-1391850998,16386,4,20,0,0,0,0,0,0,"0","0",18189628,1,0,50989578,"215","720","",0,0,4,50989578,19460,4,20,0,0,0,0,0,0,"0","0",1425443433,1425443437,"742","441fd7b3-4f6f-44f5-90c3-d1a2d46e80cd","Internal_PT","Internal_PT","Internal_PT","Internal_PT",4,"SEPE0D1730BB1F1","TAP-CUC-VI1",12,0,5,5,2,15,0,"StandAloneCluster",5,"","",0,"",3,3,0,0,64,64,"","","","10.10.10.173","10.10.10.3",0,0,0,0,0,0,0,0,0,0,0,0,0,"",0,"",0,0,0,"15db15ce-0c12-2273-383c-ee7d6f30c842","720",7,"","","","","","","","",0,0,"","","","","","","","",0,0,0,"215","720","742","720" 1,1,218475,18189609,1425443314,1,0,420088330,"423","",0,16,4,420088330,16392,4,20,0,0,0,0,0,0,"0","0",18189610,1,0,1292503562,"505","505","",0,0,4,1292503562,16390,4,20,0,0,0,0,0,0,"0","0",1425443316,1425443444,"505","720051a4-73af-48e9-ae06-cb9c8ef24df8","Internal_PT","Internal_PT","Internal_PT","Internal_PT",128,"SEPE0D1730A8BDD","SEPE0D1730BB036",12,0,0,0,0,0,0,"StandAloneCluster",0,"","",0,"",3,3,0,0,64,64,"","","","10.10.10.25","10.10.10.77",0,0,0,0,0,0,0,0,0,0,0,0,0,"",0,"",0,0,0,"","",2,"","","","","","","","",0,0,"","","","","","","","",0,0,0,"505","505","505","" 1,1,218476,18189616,1425443354,1,0,-485881334,"222","",0,16,4,-485881334,16388,4,20,0,0,0,0,0,0,"0","0",18189617,1,0,1896483338,"652","652","",0,0,4,1896483338,16390,4,20,0,0,0,0,0,0,"0","0",1425443361,1425443468,"652","ed4a4431-446c-4cd3-a093-0a9ed85483e5","Internal_PT","Internal_PT","Internal_PT","Internal_PT",107,"SEP84802D768E0F","SEPE0D1730A8CDB",12,0,0,0,0,0,0,"StandAloneCluster",0,"","",0,"",3,3,0,0,64,64,"","","","10.10.10.227","10.10.10.113",0,0,0,0,0,0,0,0,0,0,0,0,0,"",0,"",0,0,0,"","",2,"","","","","","","","",0,0,"","","","","","","","",0,0,0,"652","652","652","" ![alt text][1] [1]: /storage/temp/263764-1.png
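For reference, a minimal props.conf sketch of the kind of stanza PREAMBLE_REGEX is normally combined with for structured data; the sourcetype name cucm_cdr is hypothetical, and the regex simply targets the second line, which begins with the SQL type list. It may also be worth confirming that the stanza is attached to the exact sourcetype the upload or monitor input is using:

    [cucm_cdr]
    INDEXED_EXTRACTIONS = csv
    HEADER_FIELD_LINE_NUMBER = 1
    PREAMBLE_REGEX = ^INTEGER,INTEGER,.*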

resize bars in a bar graph to a uniform size

I have a bar graph with 3 fields labelled Memory, CPU and Disk Space. When there is no Memory value, only CPU and Disk Space are shown. However, the bars become fatter as fewer fields are shown. For example, ![Thin][1] shows only Disk Space and CPU, vs ![fat][2] where there is only one field and the bar is bigger. How do I make the bars the same size no matter how many fields there are? This is the code for the graph: [1]: /storage/temp/263765-thin.png [2]: /storage/temp/263766-fat.png

No fields are extracted from custom unix app script output

Hello, I'm currently using the Unix App to show the disk space of some nodes. This works fine; however, for some nodes I'm only interested in one of the mounts. For this, I copied df.sh and modified it to my needs:

    . `dirname $0`/common.sh
    HEADER='Filesystem Type Size Used Avail UsePct MountedOn'
    HEADERIZE='{if (NR==1) {$0 = header}}'
    PRINTF='{printf "%-50s %-10s %10s %10s %10s %10s %s\n", $1, $2, $3, $4, $5, $6, $7}'
    if [ "x$KERNEL" = "xLinux" ] ; then
        assertHaveCommand df
        CMD='df -TPh'
        FILTER_POST='$7 !~ /cassandra_volume/ {next}'
    fi
    $CMD | tee $TEE_DEST | $AWK "$BEGIN $HEADERIZE $FILTER_PRE $MAP_FS_TO_TYPE $FORMAT $FILTER_POST $NORMALIZE $PRINTF" header="$HEADER"
    echo "Cmd = [$CMD]; | $AWK '$BEGIN $HEADERIZE $FILTER_PRE $MAP_FS_TO_TYPE $FORMAT $FILTER_POST $NORMALIZE $PRINTF' header=\"$HEADER\"" >> $TEE_DEST

I modified `FILTER_POST` so that the mount must contain `cassandra_volume`. Because this is a new script I added the default config for it in **default/inputs.conf**:

    [script://./bin/df-cassandra.sh]
    interval = 300
    sourcetype = df
    source = df
    index = os
    disabled = 1

And in **local/inputs.conf**:

    [script://./bin/df-cassandra.sh]
    disabled = false

as well as setting `disabled = true` for the `df.sh` script. And great, it works! I get the logs when I use this search query:

    index=os host=intcassandra*_datacenter2 sourcetype=df

There is one problem though. I changed this on one node and the others still use the default df.sh script, and for the logs collected from the node where I changed it to the custom script, no fields are extracted: ![example of logs][1] As you can see, intcassandra01_datacenter2 (the one I added the custom script on) DOES emit the log, but no fields are extracted, while the others (which use df.sh) do have the extracted fields. Details of the broken log: ![alt text][2] Details of a working log: ![alt text][3] Note that for the very same log (of the same mount) but from a different host, the custom script doesn't work while the regular one does. I have no idea what could cause this. I'm not entirely sure how the entire thing works either, so maybe I'm missing something. The file was temporarily edited on a Windows machine; could it be due to some kind of encoding difference or different treatment of spaces or something? [1]: https://i.gyazo.com/c4cab2b62cc88b5d0d3ea45da0b586f2.png [2]: https://i.gyazo.com/8ccd1ebb461c213f492b7378dc816a47.png [3]: https://i.gyazo.com/8a62925b2dec61e0f186101c657c55aa.png
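Since Windows editing is suspected, one quick, hedged check is whether the script picked up CRLF line endings; a trailing carriage return in the header or printf strings could plausibly change the emitted output just enough to break the df field extractions. Run on the forwarder; dos2unix may not be installed, but sed works either way:

    # show whether the file contains carriage returns / CRLF terminators
    file ./bin/df-cassandra.sh
    grep -c $'\r' ./bin/df-cassandra.sh

    # strip carriage returns in place, then restart the forwarder
    sed -i 's/\r$//' ./bin/df-cassandra.sh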

Run a Python script on a Universal Forwarder before taking input

I want to take input from a forwarder, but before that I want to filter the data with the help of a Python script. Just as with the normal monitoring option, where I used a script to monitor a folder, I want to monitor a folder on Box Drive but use a script to pre-filter the data before it reaches Splunk.
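A hedged sketch of a scripted input stanza in inputs.conf on the forwarder; the app, path, sourcetype, and index names are hypothetical. Note that the universal forwarder does not bundle a Python interpreter, so the script has to rely on a Python installed on the host (or be rewritten as a shell/batch wrapper):

    [script://$SPLUNK_HOME/etc/apps/my_filter_app/bin/filter_boxdrive.py]
    interval = 300
    sourcetype = boxdrive:filtered
    index = main
    disabled = 0

The script itself would read the files in the Box Drive folder, apply the filtering, and write only the events it keeps to stdout, which the forwarder then sends on for indexing.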

How to display the last 4 months in Splunk starting from the current month

How can I display the last 4 months in Splunk, starting from the current month? The required output is: January 2019, December 2018, November 2018, October 2018
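A minimal sketch of one way to generate that list without touching any index, using makeresults; the %B %Y strftime pattern produces names like January 2019:

    | makeresults count=4
    | streamstats count as offset
    | eval month = strftime(relative_time(now(), "-" . (offset - 1) . "mon@mon"), "%B %Y")
    | table month

The first row is the current month and each subsequent row steps one month back.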

Identifying Keywords from a .CSV and reporting them.

Hi all, I'm a bit of a Splunk newbie, so please bear with me! Our web filtering software is currently forwarding events to Splunk and works well. I'd really like to achieve what the title describes, but I'm not well versed in SPL yet, so I'm looking for some help. I have a list of keywords in a CSV file, and I'd like Splunk to identify those keywords in our web filtering events (a separate data source). For example, if I have the word "HELLO" in my CSV and an event (URL) contains it - https://somewebsite.com/hi/88HELLO88 - I'd like Splunk to report this to me. Could someone please give me some guidance on this? I'd really appreciate it! Thank you!
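A hedged sketch of the usual lookup-plus-subsearch pattern; the index name web_filter, the lookup name keywords.csv with its keyword column, and the url field in the events are all assumptions to adapt to the real data:

    index=web_filter
        [ | inputlookup keywords.csv
          | eval url = "*" . keyword . "*"
          | fields url ]
    | table _time url

The subsearch expands to an OR of wildcard filters such as url="*HELLO*", so any event whose URL contains one of the CSV keywords is returned; the result can then be saved as a scheduled alert or report.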

Discarding Events from cron.log

On my universal forwarder I have a repeated entry in my cron.log file that I would like to discard; however, I am not very familiar with regex terms. The entry in cron.log is:

    hostname CROND[27158]: (root) CMD (/bin/sh /etc/init.d/swiagentd swrestart > /dev/null 2&>1)

I have followed the instructions at https://docs.splunk.com/Documentation/Splunk/6.5.2/Forwarding/Routeandfilterdatad#Discard_specific_events_and_keep_the_rest and I am using the following:

props.conf

    [source::/var/log/cron]
    TRANSFORMS-null = setnull

transforms.conf

    [setnull]
    REGEX = swrestart
    DEST_KEY = queue
    FORMAT = nullQueue

I have restarted but I am still getting the message in my search. Do I have the correct regex? And is there a specific place in each .conf file where I should put the stanzas?

converting a value in a non-standard time format to a correct date format

Hi guys, can you please help me with a solution for this use case? I have been joining two queries and calculating the time difference. In the main search I get the time in the format **2019-01-28 20:00:00**, and in the subsearch I get the time in the format **2019-01-28-20-32-49**. Now I want to convert **2019-01-28-20-32-49** into a value like **2019-01-28 20:32:49** and calculate the time difference. The following is the query I am using, FYR:

    | inputlookup SLA.csv
    | table SOR_NAME SLA_THRESHOLD
    | join type=left SOR_NAME
        [ search index=xx source=xx
          | rex "info\s:\s\+{4}\s(?\w+)\s\+{4}\sJob run_ingest_(?\w+)_(?\d+-\d+-\d+-\d+-\d+-\d+)_"
          | where Datafeed_name!=""
          | rex field=Datafeed_name "^(?\w{2,5})_(?\w+)$"
          | fields SOR_NAME time_stamp]
    | dedup SOR_NAME
    | eval time_diff = (SLA_THRESHOLD - time_stamp)
    | table SOR_NAME SLA_THRESHOLD time_stamp time_diff

@jkat54 @woodcock
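A hedged sketch of the conversion and the difference, assuming time_stamp holds the 2019-01-28-20-32-49 string and SLA_THRESHOLD is stored as 2019-01-28 20:00:00 style text in the lookup (adjust the SLA_THRESHOLD parsing if it is already an epoch):

    | eval time_stamp_epoch = strptime(time_stamp, "%Y-%m-%d-%H-%M-%S")
    | eval time_stamp = strftime(time_stamp_epoch, "%Y-%m-%d %H:%M:%S")
    | eval threshold_epoch = strptime(SLA_THRESHOLD, "%Y-%m-%d %H:%M:%S")
    | eval time_diff = threshold_epoch - time_stamp_epoch

Subtracting epochs keeps time_diff in seconds, which avoids comparing the two values as strings.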

How to use a subsearch with 'table' command?

Hello, in order to detect unused workstations in our computer fleet, we are searching for all assets that have not connected to Active Directory (AD) AND to Ghost Solution Suite (GSS) for more than 90 days. We can easily perform the two searches independently; they are basically the same. First one:

    sourcetype=my_ad_sourcetype
    | eval it = strptime(ad_last_inventory,"%Y-%m-%d")
    | eval ot = strptime(nowstring,"%Y-%m-%d")
    | eval diff = (ot - it)
    | eval round = round(diff/86400, 0)
    | search round > 90
    | table ad_wks_name, ad_last_inventory

And the second one:

    sourcetype=my_gss_sourcetype
    | eval it = strptime(gss_last_inventory,"%Y-%m-%d")
    | eval ot = strptime(nowstring,"%Y-%m-%d")
    | eval diff = (ot - it)
    | eval round = round(diff/86400, 0)
    | search round > 90
    | table gss_wks_name, gss_last_inventory

What we can't do is combine those two searches. We tried to execute one of the two queries as a subsearch and perform a simple comparison at the end, like:

    | where gss_wks_name=ad_wks_name

But every time we face an issue: the main search is executed correctly, but the subsearch doesn't give the correct result. Instead it repeats the `_wks_name` and the `_last_inventory` date of the last workstation:

    wks_123 | 2018-10-20 23:12:00.0
    wks_123 | 2018-10-20 23:12:00.0
    wks_123 | 2018-10-20 23:12:00.0
    etc.

Do you have an idea what we're doing wrong? Thanks for the help! Alex.
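A hedged alternative that avoids the subsearch entirely: pull both sourcetypes in one search, coalesce the per-source fields, and keep workstations whose most recent inventory in either system is older than 90 days. It assumes the field names from the two searches above and swaps nowstring for now():

    (sourcetype=my_ad_sourcetype) OR (sourcetype=my_gss_sourcetype)
    | eval wks_name = coalesce(ad_wks_name, gss_wks_name)
    | eval last_inventory = coalesce(ad_last_inventory, gss_last_inventory)
    | eval age_days = round((now() - strptime(last_inventory, "%Y-%m-%d")) / 86400, 0)
    | stats min(age_days) as days_since_last_seen by wks_name
    | where days_since_last_seen > 90

min(age_days) keeps the most recent sighting across AD and GSS, so a workstation only survives the final where clause if it is stale in both systems.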

Loading screen on Splunkd Health Report feature?

Hi all. After upgrading to 7.2.* we found that the Health Report feature is not loading properly. I start it from "Settings" -> "Health Report Manager". The page has trouble rendering: only the Splunk logo appears in the left corner, and the middle of the screen just says "Loading...". The health.log does not show me any errors, and I can't find anything conclusive. I'm running Red Hat 7*. Any ideas where I can look further?

False alert - delay in log writing?

We are getting a random false alert from a Splunk (6.5.2) search that checks whether a certain string is not found in a logfile within the last 15 minutes. When we investigated and re-ran the search, the string was there for the alert period, so it shouldn't have triggered any alert. We couldn't find any relevant error in the splunkd log on the forwarder, but I did notice these two consecutive entries in metrics.log:

    1/25/19 4:55:01.800 PM 01-25-2019 16:55:01.800 +1100 INFO Metrics - group=per_source_thruput, series="/XXX/systemerr.log", kbps=10.196221, eps=0.193555, kb=316.072266, ev=6, avg_age=1389.166667, max_age=1667
    1/25/19 4:22:59.801 PM 01-25-2019 16:22:59.801 +1100 INFO Metrics - group=per_source_thruput, series="/XXX/systemerr.log", kbps=6.268667, eps=0.161285, kb=194.334961, ev=5, avg_age=211.600000, max_age=265

We got the false alert around 4:54, so if I understand correctly, looking at the time gap and the avg_age value, it is possible that the alert triggered because the data was only read after 4:55; there was no update (no new lines) to the file from 4:22 until 4:55. So the question is: is my understanding correct? Is the problem caused by a delay in writing the data to the source logfile, or by a processing delay in Splunk itself? I would appreciate any advice.

How is Splunk utilizing Map Reduce?

How is Splunk utilizing MapReduce, and does it use the same technique for SPL processing and for data compression?

