How to pull a report on Splunk alerts?
I'm an application analyst who monitors Splunk alerts. We monitor OOM, CPU usage, and other data, and we receive the alerts via MS Outlook.
Is there a way to pull reports on the Splunk alerts for the last 6 months? I'd like to see how many alerts we receive daily, monthly, etc. Is that possible via Splunk?
Here are the details of the Splunk version I am using: 6.6.1
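A minimal sketch of the kind of search that can report on this, assuming the alerts are fired by saved searches on this instance (triggered alerts are recorded in the _audit index as action=alert_fired):

    index=_audit action=alert_fired earliest=-6mon@mon
    | timechart span=1d count as alerts_per_day

and per alert name:

    index=_audit action=alert_fired earliest=-6mon@mon
    | stats count by ss_name

The scheduler logs in index=_internal carry similar detail, but _internal retention is often much shorter than 6 months, so _audit is usually the safer source for this time range.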
↧
Does the Forefront Threat Management Gateway TA work with the SQL Express Server logs?
The TMG instance we work with is configured to log to a SQL Express database and pushes out DB logs every day. Does the existing TA parse these files, or only the log files generated if we (re)configure TMG to produce file-system logs?
↧
How to send ESX logs via Splunk heavy forwarder in a Windows environment?
We have Splunk components (1 search head + 1 indexer + 2 heavy forwarders) installed in a Windows environment.
I would like to configure an ESX host to send logs to a Splunk heavy forwarder and be able to search the data through the search head.
However, the Splunk App for VMware only works on Splunk platform instances deployed in a *nix environment; Windows is not a supported operating system for this app.
Can someone please provide a solution for this?
Thanks in advance.
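One route that gets suggested independently of the VMware app is plain syslog from the ESXi host into a network input on the heavy forwarder. A minimal sketch, where the port, sourcetype, and host name are all assumptions:

    # inputs.conf on the heavy forwarder (port and sourcetype are placeholders)
    [udp://514]
    sourcetype = vmw-syslog
    connection_host = ip

    # on the ESXi host, point syslog at the forwarder (hf-host is a placeholder)
    esxcli system syslog config set --loghost='udp://hf-host:514'
    esxcli system syslog reload

This makes the raw ESXi logs searchable from the search head even though the VMware app itself is not supported on Windows.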
↧
Is there any way I can use a custom AMI for Splunk in AWS?
Can I use a custom AMI, hardened by a script, for a Splunk deployment in AWS? If it's possible, how can I do it? Please advise.
I know that just changing the AMI ID in the CloudFormation template does not work; there are some internal changes that need to be made.
Kindly help me with this. Thank you in advance.
↧
Dynamic search based on previous search output and an if condition
Hello Splunk Community,
Business requirements are pushing my knowledge of Splunk quite far... I'm wondering whether a Splunk query can be subdivided into methods/functions. In the scenario I'm trying to figure out, the third search/subsearch condition varies depending on the output of an earlier search, which can have multiple fields.
*Pseudo logic goes:*

    if precheck field outputs A
        search A1 cond, B1 cond, C1 cond
    else if precheck field outputs B
        search B1 cond, D1 cond, E1 cond
I'm already doing a join to arrive at the precheck output.
I tried this:
    ...| eval search1 = "Field1=Y AND Field 2="xxxx" AND Field 3="bbbbb""
    | eval search2 = "Field4=N AND Field5="zzzz""
    | eval filter=if (COND=1, search1, search2)
    | search filter
but I'm getting this error: **"Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if(bool expr, expr, expr)."**
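For the record, three separate things appear to be going wrong in that attempt: the inner quotes in the eval strings are not escaped, eval comparisons need == rather than =, and "| search filter" looks for the literal word filter instead of expanding the field's value. A minimal sketch of folding the whole condition into a single where clause instead (assuming COND and Field1-Field5 exist at that point in the pipeline):

    ...| where (COND==1 AND Field1="Y" AND Field2="xxxx" AND Field3="bbbbb")
            OR (COND!=1 AND Field4="N" AND Field5="zzzz")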
↧
*metrics.log
In a standalone environment, why does my Splunk Enterprise instance show no events for source=*metrics.log* at certain hours?
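A quick way to see the gaps (a sketch; metrics.log lives in the _internal index):

    index=_internal source=*metrics.log*
    | timechart span=1h count

If the count drops to zero for whole hours, it is worth checking whether splunkd was down or restarting during those windows, since metrics.log is written by splunkd itself.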
↧
Splunk crashes when trying to install an app from "Browse more apps" section
Our Splunk instance is running on RHEL 7 as a non-root user. Splunk is behind a firewall and I have configured proxy settings. As soon as I enter my splunk.com credentials to install any app from the "Browse more apps" section, Splunk crashes every time. I see the following in the crash log:
Received fatal signal 6 (Aborted).
Cause:
Signal sent by PID 18714 running under UID 40586.
Crashing thread: TcpChannelThread
Registers:
RIP: [0x00007F436326C1F7] gsignal + 55 (libc.so.6 + 0x351F7)
RDI: [0x000000000000491A]
RSI: [0x0000000000005DA9]
RBP: [0x00007F43633B7E68]
RSP: [0x00007F43475FCFC8]
RAX: [0x0000000000000000]
RBX: [0x00007F4364879000]
RCX: [0xFFFFFFFFFFFFFFFF]
RDX: [0x0000000000000006]
R8: [0x0000000000000148]
R9: [0xFEFEFEFF092D6364]
R10: [0x0000000000000008]
R11: [0x0000000000000202]
R12: [0x0000559FD2F96158]
R13: [0x0000559FD3074D40]
R14: [0x00007F43475FD1F0]
R15: [0x00007F43620F786A]
EFL: [0x0000000000000202]
TRAPNO: [0x0000000000000000]
ERR: [0x0000000000000000]
CSGSFS: [0x0000000000000033]
OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x00007F436326C1F7] gsignal + 55 (libc.so.6 + 0x351F7)
[0x00007F436326D8E8] abort + 328 (libc.so.6 + 0x368E8)
[0x00007F4363265266] ? (libc.so.6 + 0x2E266)
[0x00007F4363265312] ? (libc.so.6 + 0x2E312)
[0x0000559FD22C31C4] _ZN21HttpClientTransaction15_handleRedirectEv + 852 (splunkd + 0x11B31C4)
[0x0000559FD22C3932] _ZN21HttpClientTransaction18_finishTransactionEv + 354 (splunkd + 0x11B3932)
[0x0000559FD22C47C3] _ZN20HttpClientConnection10parseReplyEPKcS1_ + 1747 (splunkd + 0x11B47C3)
[0x0000559FD22C58C0] _ZN20HttpClientConnection13dataAvailableEv + 160 (splunkd + 0x11B58C0)
[0x0000559FD2360B88] _ZN11TcpOutbound6_do_ioE18PollableDescriptor + 440 (splunkd + 0x1250B88)
[0x0000559FD2361C01] _ZN11TcpOutbound11when_eventsE18PollableDescriptor + 33 (splunkd + 0x1251C01)
[0x0000559FD22AA9B6] _ZN8PolledFd8do_eventEv + 134 (splunkd + 0x119A9B6)
[0x0000559FD22AB8BB] _ZN9EventLoop3runEv + 651 (splunkd + 0x119B8BB)
[0x0000559FD23623B0] _ZN15TcpOutboundLoop3runEv + 16 (splunkd + 0x12523B0)
[0x0000559FD22C1712] _ZN21HttpClientTransaction22runSyncAndShutdownLoopEP15TcpOutboundLoop + 50 (splunkd + 0x11B1712)
[0x0000559FD22C17D5] _ZN21HttpClientTransaction7runSyncEv + 69 (splunkd + 0x11B17D5)
[0x0000559FD2235B41] _ZN18ApplicationUpdater20fetchUpdateFileByUriERK7FullUriRK3StrR28ApplicationUpdateTransaction + 145 (splunkd + 0x1125B41)
[0x0000559FD2235FB5] _ZN18ApplicationUpdater20fetchUpdateFileByUriERK7FullUriRK3StrR8Pathnameb + 357 (splunkd + 0x1125FB5)
[0x0000559FD20FF3AC] _ZN21LocalAppsAdminHandler13handleInstallER10ConfigInfoPK3StrS4_b + 2652 (splunkd + 0xFEF3AC)
[0x0000559FD20FF9ED] _ZN21LocalAppsAdminHandler12handleCreateER10ConfigInfo + 253 (splunkd + 0xFEF9ED)
[0x0000559FD1DF980C] _ZN14MConfigHandler14executeHandlerER10ConfigInfo + 620 (splunkd + 0xCE980C)
[0x0000559FD1E09C7D] _ZN14MConfigHandler2goER10ConfigInfo + 189 (splunkd + 0xCF9C7D)
[0x0000559FD1E0A844] _ZN29AdminManagerReplyDataProvider2goEv + 804 (splunkd + 0xCFA844)
[0x0000559FD1EA3078] _ZN33ServicesEndpointReplyDataProvider9rawHandleEv + 88 (splunkd + 0xD93078)
[0x0000559FD1E98B2F] _ZN18RawRestHttpHandler10getPreBodyEP21HttpServerTransaction + 31 (splunkd + 0xD88B2F)
[0x0000559FD22D9FE0] _ZN32HttpThreadedCommunicationHandler11communicateER17TcpSyncDataBuffer + 272 (splunkd + 0x11C9FE0)
[0x0000559FD19180B3] _ZN16TcpChannelThread4mainEv + 227 (splunkd + 0x8080B3)
[0x0000559FD2363440] _ZN6Thread8callMainEPv + 64 (splunkd + 0x1253440)
[0x00007F4363601E25] ? (libpthread.so.0 + 0x7E25)
[0x00007F436332F34D] clone + 109 (libc.so.6 + 0xF834D)
Linux / gssit-devops-qa / 3.10.0-693.1.1.el7.x86_64 / #1 SMP Thu Aug 3 08:15:31 EDT 2017 / x86_64
Last few lines of stderr (may contain info on assertion failure, but also could be old):
2017-10-05 16:38:25.291 -0400 Interrupt signal received
2017-10-05 16:40:18.548 -0400 splunkd started (build 4b804538c686)
2017-10-05 16:44:46.866 -0400 Interrupt signal received
2017-10-05 16:46:48.163 -0400 splunkd started (build 4b804538c686)
2017-10-05 22:55:50.030 -0400 Interrupt signal received
2017-10-05 22:57:42.807 -0400 splunkd started (build 4b804538c686)
splunkd: /home/build/build-src/kimono/src/util/HttpClientRequest.cpp:1860: void HttpClientTransaction::_handleRedirect(): Assertion `_redirectReply == REPLY_EATING_NORMAL' failed.
2017-10-05 23:02:25.471 -0400 splunkd started (build 4b804538c686)
splunkd: /home/build/build-src/kimono/src/util/HttpClientRequest.cpp:1860: void HttpClientTransaction::_handleRedirect(): Assertion `_redirectReply == REPLY_EATING_NORMAL' failed.
↧
VNX App - No data in the lun performance section
We have the VNX App version 1.2 deployed with our Splunk Enterprise installation (version 6.5.1). When we try to generate the "Heat Map - LUN Throughput (IOPs)" panel, the "LUN Throughput (IOPs)" column shows no data.
Please advise how to get data populated into the "LUN Throughput (IOPs)" column.
↧
Joining/Appending queries
Hi guys,
Quick question here. I have the following queries:
Q1: sub-search, which provides the UserID
Q2: main search, which provides the Username and Department
Currently I can get a table with UserID, Username & Department.
I would like to also include each user's last-access timestamp in the result table, but this field lives in the sub-search's index. What is the best approach to achieve that?
Table:
UserID | Username | Department | Last Access
Thank you.
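A minimal sketch of one join-based way to get there (the index names, shared key, and time field are placeholders):

    index=main_index
    | stats latest(Username) as Username, latest(Department) as Department by UserID
    | join type=left UserID
        [ search index=access_index
          | stats latest(_time) as "Last Access" by UserID ]
    | table UserID, Username, Department, "Last Access"

Depending on data volume, a single stats over both indexes (index=main_index OR index=access_index ... | stats ... by UserID) tends to scale better than join.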
↧
How to make a div that can be folded/unfolded on a dashboard
I'd like to make a div that can be folded/unfolded on a dashboard.
I tried to implement something simple using HTML, but it does not work.
How can I make this possible on a Splunk dashboard?
Please let me know if anyone knows how to do this.
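Scripts get stripped from Simple XML HTML panels, so the usual workaround is a token toggle rather than raw HTML/JS. A minimal sketch using a link input and a panel with depends (the token and labels are made up):

    <form>
      <fieldset submitButton="false">
        <input type="link" token="fold">
          <label>Details</label>
          <choice value="show">Show details</choice>
          <choice value="hide">Hide details</choice>
          <default>hide</default>
          <change>
            <condition value="show"><set token="show_details">true</set></condition>
            <condition value="hide"><unset token="show_details"></unset></condition>
          </change>
        </input>
      </fieldset>
      <row>
        <panel depends="$show_details$">
          <html><p>Foldable content goes here.</p></html>
        </panel>
      </row>
    </form>

Choosing "Hide details" unsets the token, and any panel carrying depends="$show_details$" collapses.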
↧
Dashboard creation for log validation per environment from different indexes
I want to create a dashboard that shows the per-day log ingestion for each environment, i.e. the count of logs per day broken down by the environment they belong to, but I am facing issues while creating the dashboard. Please help. Thanks in advance.
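A minimal sketch of the base search such a panel could use, assuming one index per environment and a naming convention like env_* (both are placeholders):

    index=env_* earliest=-30d@d
    | timechart span=1d count by index

If the environment is a field on the events rather than an index, replace "by index" with "by <your environment field>".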
↧
Can we monitor/push CA certificate expiry logs from different servers through Splunk, so that we can create alerts and notables for these events?
Hello all,
We have a requirement to monitor certificate-expiry logs and data through Splunk. SCOM currently manages the monitoring of these. I was curious whether these expiry logs can be fetched from the different servers into Splunk so that we can create the relevant alerts/notables.
Is there any documentation available for configuring this?
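If each server can be made to write its expiry information to a file (for example via a scheduled script), a universal forwarder can pick it up with an ordinary monitor stanza. A sketch where the path, sourcetype, and index are all placeholders:

    # inputs.conf on each server's universal forwarder
    [monitor:///var/log/cert_expiry.log]
    sourcetype = cert:expiry
    index = certs
    disabled = 0

From there, the alert/notable side is just a scheduled search over that index comparing each certificate's expiry date against now().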
↧
Unable to get output through the Windows add-on
Hi,
I want to collect some hardware information remotely from a list of Windows servers. I have downloaded the Windows add-on and created an inputs.conf in Splunk_TA_Windows, but I am still unable to get any output, and there is no error either. It seems no instance of this add-on is found when the forwarder runs on the test client.
![alt text][1]
[1]: /storage/temp/216727-splunk-windows.png
This is the link what i was following for this execution:
https://www.splunk.com/blog/2013/10/09/windows-host-monitoring-in-splunk-6.html
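For comparison, the host-monitoring inputs described in that blog post are the WinHostMon stanzas. A minimal inputs.conf sketch that should emit hardware-type events (the stanza name and interval are arbitrary):

    [WinHostMon://hardware]
    type = Computer;Processor;Disk;NetworkAdapter
    interval = 600
    disabled = 0

If nothing shows up, it is worth confirming that the file sits in the add-on's local directory on the forwarder itself and that the forwarder was restarted after the change.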
↧
Merge similar field values
Running the following query gives me results with different field values:

    index="XXXX" host="POLO*" | stats count by URL | sort - count
URL | count
/pup/folks/xy/hollow/yellow/red | 7
/pup/folks/xy/hollow/yellow/1234567/usage | 1
/pup/police/xy/laptop/MASTER/hollow/1234567 | 1
/pup/folks/xy/hollow/yellow/1234567/usage | 1
/pup/police/xy/laptop/MASTER/hollow/123456 | 1
/pup/folks/xy/hollow/yellow/12345/usage | 1
/pup/folks/xy/hollow/yellow | 1
/pup/police/xy/laptop/MASTER/hollow/12345 | 1
/pup/folks/xy/hollow/yellow/123456/usage | 5
/pup/folks/xy/hollow/yellow/123456/usage | 5
/pup/folks/xy/hollow/yellow/123456/usage | 5
/pup/police/xy/laptop/MASTER/hollow/123456 | 5
/pup/police/xy/laptop/MASTER/hollow/123456 | 5
/pup/folks/xy/hollow/yellow/123456/usage | 4
Is there a way to show them merged, like this?
/pup/folks/xy/hollow/yellow/*/usage | 22
/pup/police/xy/laptop/MASTER/hollow/* | 13
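A sketch that produces exactly that shape, assuming the variable segment is always the numeric part of the path:

    index="XXXX" host="POLO*"
    | eval URL=replace(URL, "\d+", "*")
    | stats count by URL
    | sort - count

replace() rewrites every run of digits to *, so all the /usage rows collapse into /pup/folks/xy/hollow/yellow/*/usage before the stats count.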
↧
Duplicate keys in events conflicting with the Splunk result
Unfortunately, I have been indexing events that contain a key named "source", and Splunk by default treats "source" as the source of the event.
Now, when I try to retrieve the values of the "source" key, it gives me the event source instead.
Is there any way to retrieve the source key's values from the events instead of the event sources (directories), or is this a bug/conflict?
Can anyone help me in this situation: how can I get the values without using regex/rex commands?
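If the events are structured (e.g. JSON), one rex-free sketch is spath with an output name that doesn't collide with the built-in field (event_source is an arbitrary name):

    ... | spath input=_raw output=event_source path=source
        | table event_source, source

path=source pulls the key out of the raw event itself, rather than from Splunk's default source metadata field.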
↧
Hello, can someone please guide me on how to set up Splunk to trigger alerts whenever I get an e-mail in Outlook?
I need to set up an alert for whenever I get a mail in my Outlook mailbox. Please help me with the detailed steps, because I'm new to Splunk; help is appreciated :)
I need a detailed "how to" type answer.
Thanks in advance!
↧
Can I use the table command instead of stats, and is there a better way to make this query more efficient?
My query is as follows:
    index=x source=y COMPLETED
    | stats values(process_key) as "Process Key",
            values(process_start_time) as "Process Start Time",
            values(job_key) as "Job Key",
            latest(job_status) as "Job Status",
            latest(process_status) as "Process Status",
            values(total_run_duration) as "Process Duration",
            values(duration_process_inprogress) as "InProgress Duration",
            values(job_detail_lastupdate) as "Last Update Time"
            by process_key, process_name, job_detail_key
    | rename process_name as "Process Name"
    | table "Process Start Time", "Last Update Time", "Process Name", "Process Key", "Job Key", "Process Status", "Job Status", "Process Duration", "InProgress Duration"
    | sort - "Process Start Time"
    | fillnull value="Opened" "Process Duration"
↧
Adding hosts to Splunk
I have installed universal forwarders on all of the servers I want to monitor with Splunk. If I go on the Splunk server to "Settings" -> "Add Data" -> "Forward", I find all but one of the servers in that list. Let's call that server serverx.
If I go from the Splunk dashboard to "Search & Reporting" and search for the server that is missing from the forwarders list (host=serverx), I do find data on it.
Is there some way I can get serverx into the list of forwarders so I can define monitoring parameters for the host? Or is there another way of doing this altogether?
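One sketch for checking what the indexer actually sees connecting, using splunkd's own metrics.log:

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(_time) as last_seen by hostname, fwdType

If serverx shows up here with fwdType=uf, the data path is fine and the missing entry in the "Forward" list is likely a deployment/registration issue rather than a connectivity one.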
↧
Notable Review time
Hi all,
I need to create a dashboard which can provide me the total review time taken by the analyst. I have created the following query:
    | datamodel Incident_Management Notable_Events search
    | stats earliest(_time) as _time by rule_id
    | `drop_dm_object_name("Notable_Events")`
    | `get_correlations`
    | join rule_id
        [| datamodel Incident_Management Incident_Review search
         | stats earliest(_time) as reviewtime by Incident_Review.rule_id, Incident_Review.reviewer_realname
         | `drop_dm_object_name("Incident_Review")`]
    | eval tot=reviewtime-_time
    | stats count, avg(tot) as avg_tot, max(tot) as max_tot, min(tot) as min_tot by reviewer_realname
    | sort - avg_tot
    | `uptime2string(avg_tot, avg_tot)`
    | `uptime2string(max_tot, max_tot)`
    | `uptime2string(min_tot, min_tot)`
    | rename *_tot* as *_time_to_review*
    | fields - *_dec
This is working fine and giving me results close to my expectations. However, I don't want to include off-business hours in the review time. For example, if I acknowledge an alert today and close it tomorrow, the total review time should not include the off-business-hours time in between (possibly 8-10 hours); that should get subtracted.
Can anybody help me here on this issue ?
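A very rough sketch of the subtraction piece, assuming an 8-hour business day (so 16 off-hours per midnight crossed) and ignoring weekends; the field names match the query above:

    ...| eval tot=reviewtime-_time
       | eval midnights_crossed=floor(reviewtime/86400)-floor(_time/86400)
       | eval tot=tot-(midnights_crossed*57600)

A fully correct version has to walk each calendar day in the interval and skip weekends/holidays, which usually ends up as a custom eval macro or a lookup-driven calculation.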
↧
Detect/handle parsing errors and log format changes
Hi,
I have been asked about log parsing and parser-error detection in Splunk. The questions are, in general:
- How can and should I detect parsing errors in Splunk (e.g., a new version of a log source rolled out without any notification to the Splunk admin)?
- How should I handle a new log format? There is already data in the index with the old sourcetype. If I modify the sourcetype definition, won't that break the search-time field extractions for the old data? Should I clone and modify the sourcetype instead?
I can't find a guide or best practice in the docs...
Thanks,
István
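On the detection side, splunkd reports its own parsing complaints to the _internal index. A sketch of a health search that could be scheduled as an alert (this component list covers the usual date-parsing and line-breaking trouble, but it is not exhaustive):

    index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR)
        (component=DateParserVerbose OR component=LineBreakingProcessor OR component=AggregatorMiningProcessor)
    | timechart span=1h count by component

A sustained jump in any of these right after a log-source upgrade is a decent proxy for "the format changed underneath us".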
↧