Channel: Questions in topic: "splunk-enterprise"

Kubernetes Logging to Splunk through Fluentd

We’re looking to get our Kubernetes logs into Splunk, and the best (most cloud-native) way to do that appears to be forwarding the logs from Fluentd to the Splunk HEC (HTTP Event Collector). There are a number of plugins that people have developed for Fluentd for this use case, see: [Fluentd Plugins][1]. Could you tell us whether any of these were developed by Splunk employees or are officially vetted/supported?

![Fluentd Plugins for Splunk][2]

Does Splunk have another cloud-native solution that they recommend instead? Don't say the UF (Splunk Universal Forwarder). I also found [this][3] Splunk Answers post on the same topic, for a bit of background on what others are doing cloud natively.

Thanks for any assistance with this question.

Best regards, Matt

[1]: https://www.fluentd.org/plugins
[2]: /storage/temp/216674-screen-shot-2017-10-03-at-120152-pm.png
[3]: https://answers.splunk.com/answers/525617/how-can-we-log-and-containerize-the-logs-using-kub.html
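Whichever Fluentd plugin is chosen, the receiving side is just an HEC token. A minimal inputs.conf sketch of what the Splunk end might look like (the token name, GUID, index, and sourcetype below are placeholders, not anything Splunk ships):

    # inputs.conf on the HEC-enabled instance -- a sketch; all values are placeholders
    [http]
    disabled = 0

    [http://k8s_fluentd]
    token = 00000000-0000-0000-0000-000000000000
    index = kubernetes
    sourcetype = kube:container
    disabled = 0

The Fluentd side then only needs an output plugin pointed at `https://<splunk>:8088/services/collector` with that token.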

Search backwards matching on value in current search result

Hello - I have a logging event like this one. We are searching on "Threshold Exceeded" AND "225":

[9/26/17 13:45:18:690 EDT] 000215d9 SystemOut O 4580330012 [SIBJMSRAThreadPool **: 764**] ERROR com.hdx.routing.saf.SafUtils - ** SAF THRESHOLD EXCEEDED ** currently SAF count is: 100 for Node : BJH/BJC/225/302/4.0 and route info:

When we hit on this, we need to search backwards over one minute looking for the same ThreadPool ID as in the error above (here it is 764):

[9/26/17 13:45:18:675 EDT] 000215d9 SystemOut O 4580329994 [SIBJMSRAThreadPool **: 764**] WARN com.hdx.routing.delivery.DeliveryEventHandlerSafV1 - **SAF** Failed sending to node 840153625 at TCPfalsefalse**64.46.236.20****10202**03ACK with RLogPK

From that earlier result we need to pull out the IP / port and generate an alert. I have not extracted any fields yet. We are still very new to Splunk. Thanks in advance for the help. Carl
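A rough SPL sketch of the kind of correlation described above, assuming both messages live in the same index (the index name, the maxspan, and the IP/port regex are guesses against the garbled sample and would need adjusting):

    index=your_index ("SAF THRESHOLD EXCEEDED" OR "Failed sending to node")
    | rex "SIBJMSRAThreadPool\s*:\s*(?<thread_id>\d+)"
    | rex "(?<dest_ip>\d{1,3}(?:\.\d{1,3}){3})\D{0,10}(?<dest_port>\d{4,5})"
    | transaction thread_id maxspan=1m
    | search "SAF THRESHOLD EXCEEDED"
    | table _time thread_id dest_ip dest_port

Here transaction groups events sharing the same thread_id within one minute, the final search keeps only groups that contain the threshold message, and the two rex extractions pull out the thread ID and the IP/port to alert on.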

Passing an argument to the shell script of a custom alert action

Hey there, I've created a custom alert action in Splunk. This is my directory structure:

    /apps
        /bin                    [shell script]
        /default                app.conf, alert_actions.conf
            /data/ui/alerts     [html file]
        /appserver/static       [png file]
        /README                 alert_actions.conf.spec

I'm making use of all 8 arguments that Splunk provides; however, I also want to pass the shell script an argument that the user supplies in the UI. Please help.
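One way this is commonly done (a sketch with assumed stanza and parameter names, not your actual files): declare the user-facing value as a `param.*` setting in alert_actions.conf and expose it in the HTML form; Splunk then delivers it to the script in the JSON payload on stdin rather than as a ninth positional argument.

    # default/alert_actions.conf -- sketch; "myaction" and "custom_arg" are placeholders
    [myaction]
    is_custom = 1
    label = My Shell Action
    payload_format = json
    param.custom_arg =

    # default/data/ui/alerts/myaction.html would contain a form field named
    #   action.myaction.param.custom_arg
    # and the shell script reads the value from the JSON payload on stdin
    # (the "configuration" object of the payload).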

How can I extract fields as an array?

Dear friends, I have an event in my log file from which my user wants to extract fields as an array. The event is:

    RequestTime="14 Sep 2017 23:59:47.819" RequesterIP="10.108.18.9" HTTPDThreadID="@http-0.0.0.0-3043-6" RequestType="AddMeterReadJob" RequestLen="2755" RequestTimeUnix="1505433587.819" ResponseType="JobIDResponse" ResponseLen="3652" ResponseTimeUnix="1505433587.866" ResponseTime="14 Sep 2017 23:59:47.866" ElapsedTime="0.047" OpenThreadCount="0" RequestXML='MeterInquiry_MeterRead_SYNC00:13:50:03:00:42:3a:ce00:13:50:03:00:48:05:1e00:13:50:03:00:49:88:7a00:13:50:03:00:48:31:bc00:13:50:03:00:48:15:2100:13:50:03:00:42:2c:3f00:13:50:03:00:48:14:a900:13:50:03:00:48:3f:2f00:13:50:03:00:44:0d:1600:13:50:03:00:47:fb:8f00:13:50:03:00:48:fc:6300:13:50:03:00:43:8c:3c00:13:50:03:00:3b:14:d300:13:50:03:00:48:fb:8c00:13:50:03:00:3a:ef:9000:13:50:03:00:3b:10:3200:13:50:03:00:3e:b1:c300:13:50:03:00:3b:0b:9000:13:50:03:00:48:f6:6100:13:50:03:00:3d:1a:d900:13:50:03:00:61:fa:f900:13:50:03:00:48:fc:3800:13:50:03:00:3c:23:7400:13:50:03:00:3e:5e:b300:13:50:03:00:49:05:4500:13:50:03:00:3b:14:a700:13:50:03:00:42:bf:0900:13:50:03:00:42:6f:bd00:13:50:03:00:47:70:a000:13:50:03:00:3e:a1:2700:13:50:03:00:43:8c:4600:13:50:03:00:42:c9:c800:13:50:03:00:48:fb:e800:13:50:03:00:40:d2:1100:13:50:03:00:48:e1:ae00:13:50:03:00:41:fd:6f00:13:50:03:00:48:fb:65JOB_OP_REGISTER_CURR_READ0JOB_PRIORITY_HIGH208115248'

I need to be able to get a list of the **urn:DeviceMacID** values and manipulate them as a field. Is there any way? Thank you, Gerson Garcia
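A sketch of one way to do this with `rex max_match=0`, which returns every match as a multivalue field (the pattern assumes the 8-octet MAC form shown in the sample; if the XML tags are intact in the raw event, spath or xpath on RequestXML would be an alternative):

    index=your_index RequestType="AddMeterReadJob"
    | rex field=_raw max_match=0 "(?<DeviceMacID>(?:[0-9a-f]{2}:){7}[0-9a-f]{2})"
    | table _time RequestType DeviceMacID
    | mvexpand DeviceMacID

DeviceMacID comes back as a multivalue field; mvexpand then turns it into one row per MAC so each value can be filtered or joined like any other field.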

Best Methods to Improve Performance of Dashboard

I have a dashboard with ~38 panels, each with 2 joins. I'm curious which is most costly for dashboard performance: the large number of panels, or the number of joins in each panel? What are some common ways to improve the performance of a dashboard?

Below is an example of one of my panels. I am doing some odd things with my location info because using the default value setting in my lookup table was throwing a strange error.

    index=example date_month=August date_year=2017 (assignment_group="*")
    | dedup number
    | fillnull value="UNKNOWN" location
    | eval regionblank= "UNKNOWN" | eval countryblank= "UNKNOWN" | eval locationblank="UNKNOWN"
    | lookup CurrentSiteInfo.csv location
    | eval site=coalesce(location2,locationblank) | eval Region=coalesce(Region,regionblank) | eval Country=coalesce(Country,countryblank)
    | search ((Region="*") (Country="*") (site="*"))
    | stats count as Tickets by contact_type
    | join overwrite=false contact_type
        [search index=example earliest="6/01/2017:00:00:00" latest="12/31/2017:24:00:00" (assignment_group="*")
        | dedup number
        | fillnull value="UNKNOWN" location
        | eval regionblank= "UNKNOWN" | eval countryblank= "UNKNOWN" | eval locationblank="UNKNOWN"
        | lookup CurrentSiteInfo.csv location
        | eval site=coalesce(location2,locationblank) | eval Region=coalesce(Region,regionblank) | eval Country=coalesce(Country,countryblank)
        | search ((Region="*") (Country="*") (site="*"))
        | bucket _time span=1mon
        | stats count as Tickets by contact_type _time
        | stats avg(Tickets) as Baseline by contact_type
        | eval Baseline = round(Baseline,0)]
    | eval "Baseline Variance" = Tickets - Baseline
    | join overwrite=false contact_type
        [search index=example earliest=-3mon@mon (assignment_group="*")
        | dedup number
        | fillnull value="UNKNOWN" location
        | eval regionblank= "UNKNOWN" | eval countryblank= "UNKNOWN" | eval locationblank="UNKNOWN"
        | lookup CurrentSiteInfo.csv location
        | eval site=coalesce(location2,locationblank) | eval Region=coalesce(Region,regionblank) | eval Country=coalesce(Country,countryblank)
        | search ((Region="*") (Country="*") (site="*"))
        | bucket _time span=1mon
        | stats count as Tickets by contact_type _time
        | stats avg(Tickets) as Average by contact_type
        | eval Average = round(Average,0)]
    | eval "Average Variance" = Tickets - Average
    | table contact_type Tickets Baseline "Baseline Variance" Average "Average Variance"
    | addcoltotals
    | sort 0 Tickets
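As a rough illustration of one common optimization, here is a sketch of how the two joins could collapse into a single pass using eval flags plus stats (index and field names are carried over from the panel above; the month boundaries and the final variance math are assumptions that would need tuning before replacing the real panel):

    index=example earliest=-6mon@mon latest=now assignment_group="*"
    | fields _time number contact_type location
    | lookup CurrentSiteInfo.csv location
    | bin _time span=1mon
    | eval InCurrent  = if(_time >= relative_time(now(), "@mon"), 1, 0)
    | eval InBaseline = if(_time >= strptime("06/01/2017", "%m/%d/%Y"), 1, 0)
    | eval InAverage  = if(_time >= relative_time(now(), "-3mon@mon"), 1, 0)
    | stats dc(eval(if(InCurrent=1,  number, null()))) as Tickets
            dc(eval(if(InBaseline=1, number, null()))) as BaselineTickets
            dc(eval(if(InAverage=1,  number, null()))) as AverageTickets
            by contact_type _time
    | stats max(Tickets) as Tickets avg(BaselineTickets) as Baseline avg(AverageTickets) as Average by contact_type
    | eval Baseline = round(Baseline,0), Average = round(Average,0)
    | eval "Baseline Variance" = Tickets - Baseline, "Average Variance" = Tickets - Average
    | table contact_type Tickets Baseline "Baseline Variance" Average "Average Variance"

The general pattern: read the widest time range once, flag each event for the population(s) it belongs to, and let one stats split the counts, instead of re-reading the index once per join. A Simple XML base search with post-process panels attacks the "38 panels" half of the problem in the same way.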

Discussion: Practical cases of going against Splunk Best Practices

Hello all, Potentially a bit of a sensitive topic, but I wanted to see what others thought. Splunk Best Practices are *great* and really help installations go smoothly and work optimally, but I can think of at least one case where it's not always practical to follow them.

My example is something I have done on all of my ES deployments: installing DBX on the ES search head when needed (best practice is to have no additional apps installed on the ES SH). I do this because some environments use DBX to collect asset data and, while you could index it, it's much simpler to just write directly to a CSV using a scheduled search. Asset data is a type of data where (when using a well-made search) the old data is of no actionable value, because the newest data should be a complete picture of your environment, so installing DBX on a forwarder and indexing it is a waste of storage space (regardless of how small) and adds complexity that does not need to be there.

I understand the reasoning behind "no additional apps on the ES SH" is to prevent bloat and avoid taking precious resources away from a very hungry system, but I treat this best practice as a rule of thumb that should be approached on a case-by-case basis. Having a single search run at 1 AM every day is going to have exactly zero performance impact, and if it does, you've got bigger problems. I've never had any issues doing this, until recently when someone was told to remove DBX from the ES SH because it wasn't a best practice, which caused a few headaches and, in my opinion, created more problems by fixing an issue that didn't exist.

---

What are your thoughts on this? Do you have any other examples of best practices being a great guideline, but not a rule of law?

How to resolve error message after indexer went down: "too many tsidx files in bucket"?

One indexer just went down. As it came back up, we saw the following message on a couple of the indexers:

    throttled: idx= Throttling indexer, too many tsidx files in bucket='/SplunkIndexData/splunk-indexes//db/hot_v1_1519'. Is splunk-optimize working? If not, low disk space may be the cause.

What does this mean exactly?
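A small diagnostic sketch, assuming nothing beyond access to the `_internal` index: check whether splunk-optimize activity and the throttle messages are showing up on the affected indexers (the text filters are guesses and may need loosening):

    index=_internal sourcetype=splunkd ("too many tsidx files" OR "splunk-optimize" OR splunk_optimize)
    | stats count by host component log_level
    | sort - count

If the only hits are the throttle messages themselves, checking free disk space on the hot/warm volume is the next step the message itself suggests.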

Empty result subsearch in eval/case

I am trying to eval a new field based on matching several subsearches. The issue is that these subsearches can potentially return an empty result, which breaks the syntax of the eval command. Example:

    index=A loglevel=error
    | eval group=case(
        [search "search string 1" | fields correlationField], "group 1",
        [search "search string 2" | fields correlationField], "group 2",
        1=1, "other")

In this example, if "search string 1" is not found in the time range, then an empty result is substituted as the condition for group 1, which causes an error:

    Error in 'eval' command: The expression is malformed. An unexpected character is reached at ')
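One workaround sketch that tolerates empty subsearches: move the correlation out of eval and into left joins, then case() on plain flag fields (the field name and search strings are copied from the example above; dedup keeps the subsearch results small):

    index=A loglevel=error
    | join type=left correlationField
        [search "search string 1" | dedup correlationField | fields correlationField | eval g1=1]
    | join type=left correlationField
        [search "search string 2" | dedup correlationField | fields correlationField | eval g2=1]
    | eval group=case(g1=1, "group 1", g2=1, "group 2", 1=1, "other")

An empty subsearch now just leaves g1 or g2 null instead of injecting broken text into the eval expression.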

What's the Splunk-wmi.path script that points to splunk-wmi.exe? Is this custom?

I'm trying to account for a number of Splunk configurations on a domain controller, and I was trying to figure out what the splunk-wmi.path script is that points to splunk-wmi.exe. I wasn't sure if this was something Splunk configured automatically or something the sysadmins before me wrote as a custom input. I'm trying to figure out whether I need to account for this configuration or not. Thanks!


Applying Field Extractions across similarly named servers

Hey Gang, Here are the basics: we are running Splunk Enterprise 6.5.1. I have a distributed architecture with two separate search heads, 4 indexers with AutoLB (but no clustering), and a deployment server (all 6.5.1, running on RedHat).

Now for the actual question. We have 70 or more WebSphere servers that are all similarly named (i.e. prdwas01, prdwas02, prdwas03... tstwas01, tstwas02, tstwas03... stgwas01, stgwas02, stgwas03... etc.). I have a series of extracted fields that I pull from the "source" value (see below):

    EXTRACT-sourcefields = (?<WAS_Cluster_All>(?<=logs\/)[a-zA-Z0-9\.]++) in source
    EXTRACT-sourcefileds = \/logs\/(?<WAS_JVM_name>[a-zA-Z0-9]++)_(?[a-zA-Z0-9]++) in source

Basically, I'm pulling some cluster and JVM characteristics out of the file path in source. Now, I would like to apply this across all 70 WebSphere servers. As near as I can tell, you can only specify field extractions per host, source, or sourcetype. I want to be able to pull these values across 70 or more different servers without having to enter over 70 separate stanzas. If I could apply it based on index I'd be fine, but that's not an option.

I have seen some articles on Answers that reference using regex as part of the stanza title for these extractions on source and sourcetype; I have attempted some of those and not gotten them to work. Ideally, I would like a stanza along the lines of:

    [host::.{3}was\d\d]

That would represent any hostname with 3 characters, then the letters 'was', then two digits, and it would then apply the included field extractions. I spoke with my sales engineer, and he said it isn't possible to use regex as part of the stanza header, and I wasn't able to get any of the examples from Answers to work, so I decided to ask a question that deals specifically with what I'm trying to do. Any thoughts or information would be much appreciated. Thanks, Matthew Granger
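For context, a props.conf sketch of the wildcard form that host:: stanzas do support (wildcards rather than regex; `*was*` is broader than the three-characters-plus-two-digits pattern wanted above, so treat it as an approximation, and the extraction line is copied from the question):

    # props.conf -- sketch; host:: stanza patterns accept wildcards (*), not regex
    [host::*was*]
    EXTRACT-sourcefields = (?<WAS_Cluster_All>(?<=logs\/)[a-zA-Z0-9\.]++) in source

The alternative that sidesteps host naming entirely is to attach the extraction to the sourcetype (or a `source::...` wildcard pattern) shared by all the WebSphere logs.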

How to Compare Relative Times

Hello, I received help building one of my searches, and I cannot figure out the syntax for comparing the time. I need help with the parts of the search below marked "test the date for if this event is in baseline/average". My average looks at the past 3 months, and my baseline looks at the period between 6/01/2017 and 12/31/2017. I tried using strftime and couldn't get it to work.

    | join overwrite=false contact_type
        [search index=example earliest=-6mon@mon latest=now (assignment_group="*")
        | fields contact_type ... whatever else you absolutely need...
        | eval _time = relative_time(_time,"@mon")
        | eval BaselineFlag = case(...test the date for if this event is in baseline...., 1)
        | eval AverageFlag = case(...test the date for if this event is in average...., 1)
        | rename COMMENT as "The above commands are streaming and distributable, so should be above the dedup unless you have LOTS of dups."
        | rename COMMENT as "By using dc instead of count, this stats eliminates the need for dedup."
        | stats dc(eval(case(BaselineFlag=1,number))) as BaselineTickets dc(eval(case(AverageFlag=1,number))) as AverageTickets by contact_type _time
        | stats avg(BaselineTickets) as Baseline avg(AverageTickets) as Average by contact_type
        | eval Baseline = round(Baseline,0)
        | eval Average = round(Average,0)

Essentially, the goal of the search is to look at the tickets by contact_type for the current month and then compare those against a baseline and an average. The part of the search included here is the comparison against the baseline and average.
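A sketch of what the two placeholder tests might look like, using epoch comparisons instead of strftime string formatting (the exact boundary dates and whether the current month belongs in the averages are assumptions to adjust):

    | eval BaselineFlag = case(_time >= strptime("06/01/2017", "%m/%d/%Y")
                           AND _time <  strptime("01/01/2018", "%m/%d/%Y"), 1)
    | eval AverageFlag  = case(_time >= relative_time(now(), "-3mon@mon")
                           AND _time <  relative_time(now(), "@mon"), 1)

strftime turns epoch time into a display string, which then compares alphabetically; keeping everything as epoch numbers (strptime / relative_time) makes the >= and < tests behave as expected.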

Create a new row in the table which is the sum of the existing rows

How do I add a row at the top of the table that adds up the rows below it? The consuming_app value should be "ALL" and the remaining fields should be the sums of the rows below.
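A sketch using appendpipe to compute the totals row and a throwaway sort key to float it to the top (your existing search goes first; `consuming_app` is the only field name taken from the question, the rest are assumed to be numeric columns):

    ... your existing search producing the table ...
    | appendpipe [ stats sum(*) as * | eval consuming_app="ALL" ]
    | eval sort_key=if(consuming_app="ALL", 0, 1)
    | sort 0 sort_key consuming_app
    | fields - sort_key

`stats sum(*) as *` inside appendpipe sums every numeric column into a single extra row, which the sort then places above the per-app rows.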

How to set user permission for "view source"?

The Splunk administrator in my organization removed some permissions from my role; the consequence is that I no longer have permission to run the "View Source" action. Please advise: what is the configuration item to add the "View Source" feature back to a role's permissions?


Is there any reason I shouldn't edit an add-on's bin directory files?

I want to add a few things to an app that sends off API commands when saved searches trigger. Basically a new field for the API, so a new GUI element to fill out during the alert trigger config, and then one new line in the API script that gets executed. Is there any reason I can't just go into that app's bin/ directory and edit the code right there? Will it break anything, or is there a process that needs to be followed during changes like this?

Why is my search showing the total column value per user rather than individual results?

I want to create a report that alerts on 7 or more failed TACACS+ authentication attempts in the past 10 minutes. I almost have it working, except that the "Total" column adds up every user that failed and shows that total next to each username. So, for example, say I have two users: UserA failed 4 times and UserB failed 3 times. The Total column shows 7 next to both UserA and UserB instead of 4 and 3. Below is my syntax:

    index=cisco_ise Protocol=Tacacs AuthenticationResult=Failed Service=Login Type=Authentication
    | eventstats count as TOTAL_COUNT
    | stats latest(TOTAL_COUNT) as Total by user
    | where Total > 6
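For what it's worth, a sketch of the per-user version: `eventstats count` with no `by` clause counts every matching event, which is why both users show 7; a plain `stats count ... by user` keeps the counts separate (search terms copied from the question):

    index=cisco_ise Protocol=Tacacs AuthenticationResult=Failed Service=Login Type=Authentication
    | stats count as Total by user
    | where Total > 6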

How to monitor USB Registry with more information?

I have issues getting picture A's information into Splunk; only the vaguer events shown in picture B are forwarded. I want every parameter in picture A to be forwarded to Splunk. How do I do it? My current stanza:

    [WinRegMon://hklm_run]
    disabled = 0
    hive = \\REGISTRY\\MACHINE\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR\\.*
    proc = .*
    type = set|create|delete|rename
    index = usbregistry

![alt text][1] ![alt text][2]

[1]: /storage/temp/216679-picture-a.png
[2]: /storage/temp/216681-picture-b.png
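One thing worth trying (a sketch, under the assumption that the "missing" properties in picture A are values that already exist rather than values being changed): WinRegMon only emits events for the operations listed in `type`, so static device properties never appear; the `baseline` setting snapshots the existing keys and values under the monitored hive at startup.

    # inputs.conf -- sketch; baseline is a documented WinRegMon setting, the
    # stanza name is a placeholder
    [WinRegMon://usbstor]
    hive = \\REGISTRY\\MACHINE\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR\\.*
    proc = .*
    type = set|create|delete|rename
    baseline = 1
    index = usbregistry
    disabled = 0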

Can't get the refresh tag to work

I have a dashboard with several prebuilt panels and several non-prebuilt panels. At the top of the form I have a refresh dropdown with the choices: 1 minute, 5 minutes, 10 minutes, 15 minutes, Never, Never. For the value attribute I have tried integers (as above) and SPL time formats (e.g. 5m).

Within various searches in both prebuilt and non-prebuilt panels I have added a refresh element containing $refresh$, as shown in several online postings as well as the documentation for our version of Splunk: 6.6.2, build 4b804538c686. Nothing I have tried seems to work as I expect. The prebuilt panels run once and stop. The non-prebuilt panels refresh every minute, I think (it's a little hard to tell by hovering over the little controls).

Am I doing something wrong here? I have other dropdowns on the page that work, and the variables defined by those dropdowns interpolate in the prebuilt panels. If I add $refresh$ to a panel title, it shows up just as I expect. As far as I can tell the refresh tag does not work, or at least it does not work as I expect. I have tried constants in it and they don't seem to change anything. Then once in a while it surprises me in some non-reproducible way. Can someone tell me what I'm doing wrong?
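For reference, a Simple XML sketch of the shape being described (reconstructed from the stripped markup above, so the element names, values, and placement are assumptions; whether $refresh$ actually interpolates inside the refresh element of a prebuilt panel is exactly the open question):

    <input type="dropdown" token="refresh">
      <label>Refresh</label>
      <choice value="1m">1 minute</choice>
      <choice value="5m">5 minutes</choice>
      <choice value="10m">10 minutes</choice>
      <choice value="15m">15 minutes</choice>
      <default>5m</default>
    </input>
    <!-- ... inside a panel ... -->
    <search>
      <query>index=_internal | timechart count</query>
      <refresh>$refresh$</refresh>
      <refreshType>delay</refreshType>
    </search>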

Redrawing chart changes y axis maximum

I have a table panel ("Exchanges") whose drilldown changes a chart. The table's search runs with $time.earliest$ / $time.latest$, sample ratio 1, and $refresh$, and the drilldown passes $row.Exchange$ into the chart's $exchange$ token:

    `MS_DDI_Microservices` metric_name="Rate:Exchange:*" | rex field=metric_name ".*:(?<Exchange>[^:]*):(?<direction>[^:]*)$$" | chart avg(Average) by Exchange, direction

The chart panel, titled "$exchange$ Rate", uses the same time and refresh tokens and runs:

    `MS_DDI_Microservices` metric_name="Rate:Exchange:$exchange$:*" | rex field=metric_name ":(?<direction>[^:]*)$$" | timechart avg(Average) by direction
When the dashboard draws initially the chart has a y-axis that just includes the data (currently 7.5). When I select a row in the table the chart redraws with the y-axis up to 100, well over what is required. Selecting back to the original row keeps the y-axis maximum value of 100, which renders the data as a tiny curve at the very bottom of the chart. Any thoughts? I've left the y-axis max at the default, documented as auto. I think it works right the first time but not afterwards. Splunk Enterprise 6.6.2
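A possible workaround sketch, assuming the 0-100 scale comes from the chart re-initializing on drilldown rather than from the data: pin the y-axis explicitly via chart options instead of relying on auto (option names per the Simple XML charting reference; the value 10 is a placeholder):

    <option name="charting.axisY.minimumNumber">0</option>
    <option name="charting.axisY.maximumNumber">10</option>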