Channel: Questions in topic: "splunk-enterprise"

Properties/Arguments in Endpoint URL for REST Modular Input

Hi Splunkers. I'm trying to set up a REST input to bring back output from an API. These are the parameters used to form the API endpoint URL, i.e.

    https:///<1st_parameter>//token?api-version=

In the above example I need to supply parameters to build the entire endpoint URL, both before and after the "?", rather than hard-coding them in the endpoint URL field on the setup screen. The config screen in Splunk Web for the REST input provides areas for URL arguments and HTTP header properties, but nothing entered in either of those two areas seems to get substituted into the actual URL that Splunk calls when it tries to contact the endpoint. Any advice on where these parameters go so they can flesh out the endpoint URL when it's called? Note that the initial call to the API is a POST to get an access token, with all subsequent calls being GETs. Finally, in case it's relevant to answering the question: this input will be running on a HF. Cheers and thanks in advance.

Issue with Blacklist in Inputs.conf

Hi Experts, I have the following monitor stanza. I want to blacklist "data/xyz/logs/router.jar.log" but still monitor "/data/xyz/logs/abc/abc-router/abc-router.jar.log". Although I have specified router.*, it is still blacklisting "abc-router.jar.log". Please help.

    [monitor:///data/xyz/logs/]
    index = test
    sourcetype = test_st
    whitelist = \.jar\.log$
    blacklist = discovery.*|router.*|java.*
    disabled = 0
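
A minimal sketch of one way to narrow this, assuming whitelist/blacklist are unanchored regexes matched against the full file path (so `router.*` also matches `abc-router.jar.log` anywhere in the tree, and assuming the discovery/router/java files you want to drop sit directly under /data/xyz/logs/); anchoring the names to the `logs/` directory keeps the deeper abc-router file:

    [monitor:///data/xyz/logs/]
    index = test
    sourcetype = test_st
    whitelist = \.jar\.log$
    # [^/]* cannot cross a directory separator, so
    # /data/xyz/logs/abc/abc-router/abc-router.jar.log is still monitored
    blacklist = logs/(discovery|router|java)[^/]*\.jar\.log$
    disabled = 0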

Access Granted/Denied query

Hi, I have events where each row has a _time and either a usernameOK or a usernameFail field, for example:

    2017-09-28 00:10:00 usernameOK=robE
    2017-09-28 01:10:20 usernameFail=jonasH
    2017-09-28 02:20:23 usernameOK=timN
    2017-09-28 02:20:35 usernameOK=robE
    2017-09-28 02:30:46 usernameOK=robE

Basically I am trying to get the count of BOTH usernameOK and usernameFail, by time (bucketed 1h) and by user, akin to a pivot table, but my count command is coming back with an error ... Any ideas? Thank you.
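
A minimal sketch of one way to get that pivot, assuming each event carries either usernameOK or usernameFail (names as in the example above):

    ... | eval user=coalesce(usernameOK, usernameFail)
    | eval outcome=if(isnotnull(usernameOK), "granted", "denied")
    | bin _time span=1h
    | stats count(eval(outcome="granted")) as granted, count(eval(outcome="denied")) as denied by _time, user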

How do I color a single value based on a text value/on a different value than the one displayed?

I'm interested in coloring single value displays based on the text value of the single value, and/or based on a different value than the one displayed. I've seen the first part of this question around (at least [here][1] and [here][2]), but all answers point to range or rangemap and use deprecated features. For example, for the following result:

    text   value
    foo    12

I want the single value to display "foo" and choose a color based on the value 12. Also, I am looking for a way to have the single value work with results such as

    text   value
    foo    severe

and use a color mapping from text to color; the single value should use the color at the matching index to color "foo", or gray if none match (the same behavior as having more rangeValues than rangeColors). Better yet, I would like to provide a hex color value in SPL:

    text   value
    foo    0xd93f3c

that would then be applied to "foo" in the single value. Is any of this possible without resorting to custom JS/CSS?

[1]: https://answers.splunk.com/answers/103239/change-color-of-single-value-visualization.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
[2]: https://answers.splunk.com/answers/525809/change-single-value-color-based-on-textual-value.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev

Time_format_change_procedure

Hi guys, I am trying to build a use case for "date when any single user was created in AD". That part is done, but I need to change the time format to something readable. Right now it comes back like "20170905133223.0Z"; how can I convert it to "05-September-2017"? I tried the following eval, but it has no effect on the results.

Search:

    | ldapsearch domain=default search="(objectClass=user)"
    | table displayName, whenCreated
    | eval epochtime=strptime(whenCreated, "%Y %m %d %H:%M:%S")
    | eval desired_time=strftime(epochtime, "%d/%m/%Y")

Result: ABC|20170905133223.0Z
**Desired result:** ABC|05-September-2017
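
A minimal sketch of one possible fix, assuming whenCreated always looks like 20170905133223.0Z: the strptime format has to match the compact string (no spaces or colons), and %B in strftime gives the full month name. Taking the first 14 characters sidesteps the trailing ".0Z":

    | ldapsearch domain=default search="(objectClass=user)"
    | table displayName, whenCreated
    | eval epochtime=strptime(substr(whenCreated, 1, 14), "%Y%m%d%H%M%S")
    | eval desired_time=strftime(epochtime, "%d-%B-%Y")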

Many duplicate events since a major outage / corrupt buckets?

Hi guys, since I still can not open a support case, I can only try it here (I've tried so many times to get this issue resolved, but yeah, it's not like we're paying a lot of money for support). We already had some issues with duplicate events in the past and always resolved them, but not this time, it seems, and it is quite a problem now.

Recently there was a storage outage in our datacenter. The Splunk VMs were running the whole time (single-site cluster with its VMs in two datacenters), but there were quite a few disturbances and replication issues. We have two environments, and due to our many different (V)LAN zones we have a couple of Heavy Forwarders set up in each zone we need to collect data from.

**Now here's the problem:** In our testing environment there is a total of 4 HFs, but **only one** is sending duplicate data to our two indexers. This is not a lot of data, so I could live with it being resolved at a later point. In our productive environment, however, there is a total of five HFs, and now we have 80-100 hosts sending their data duplicated (in the last 15 minutes: 87 hosts from 15 different sources, into 37 different indexes).

I have been searching errors and warnings for some time now and found a few log events, googled them, but still have those problems. Restarting the indexer cluster, removing excess buckets from the master, or restarting the Heavy Forwarders did not resolve anything. No configurations have been changed (I tried disabling/re-enabling useACK on one HF but had no luck whatsoever).

Here are a few errors:

    SERVER2 14:13:25.737 +0200 ERROR S2SFileReceiver - event=statSize replicationType=eJournalReplication bid=_internal~380~B3A9C962-6814-49A6-A47E-593741B331A3 path=/opt/splunk/var/lib/splunk/_internaldb/db/380_B3A9C962-6814-49A6-A47E-593741B331A3/rawdata/journal.gz status=failed
    SERVER2 14:12:44.855 +0200 ERROR S2SFileReceiver - event=statSize replicationType=eJournalReplication bid=_internal~337~E432BEC8-63C5-4DCE-A500-90756157F30F path=/opt/splunk/var/lib/splunk/_internaldb/db/337_E432BEC8-63C5-4DCE-A500-90756157F30F/rawdata/journal.gz status=failed
    SERVER2 14:12:44.732 +0200 ERROR S2SFileReceiver - event=statSize replicationType=eJournalReplication bid=00_p_INDEX4_14~264~E432BEC8-63C5-4DCE-A500-90756157F30F path=/opt/splunk/var/lib/splunk/00_p_INDEX4_14/db/264_E432BEC8-63C5-4DCE-A500-90756157F30F/rawdata/journal.gz status=failed
    SERVER2 14:12:44.735 +0200 ERROR S2SFileReceiver - event=statSize replicationType=eJournalReplication bid=_audit~112~E432BEC8-63C5-4DCE-A500-90756157F30F path=/opt/splunk/var/lib/splunk/audit/db/112_E432BEC8-63C5-4DCE-A500-90756157F30F/rawdata/journal.gz status=failed
    SERVER2 14:12:44.718 +0200 ERROR S2SFileReceiver - event=statSize replicationType=eJournalReplication bid=60_p_INDEX1_14~96~E432BEC8-63C5-4DCE-A500-90756157F30F path=/opt/splunk/var/lib/splunk/60_p_INDEX1_14/db/96_E432BEC8-63C5-4DCE-A500-90756157F30F/rawdata/journal.gz status=failed
    SERVER2 14:12:44.844 +0200 ERROR S2SFileReceiver - event=statSize replicationType=eJournalReplication bid=00_p_INDEX2_14~301~E432BEC8-63C5-4DCE-A500-90756157F30F path=/opt/splunk/var/lib/splunk/00_p_INDEX2_14/db/301_E432BEC8-63C5-4DCE-A500-90756157F30F/rawdata/journal.gz status=failed
    SERVER2 14:12:44.825 +0200 ERROR S2SFileReceiver - event=statSize replicationType=eJournalReplication bid=00_p_INDEX3_14~533~E432BEC8-63C5-4DCE-A500-90756157F30F path=/opt/splunk/var/lib/splunk/00_p_INDEX3_14/db/533_E432BEC8-63C5-4DCE-A500-90756157F30F/rawdata/journal.gz status=failed

...and a few warnings:

    SERVER2 14:12:44.825 +0200 WARN S2SFileReceiver - unable to remove dir=/opt/splunk/var/lib/splunk/00_p_INDEX2_14/db/533_E432BEC8-63C5-4DCE-A500-90756157F30F for bucket=00_p_INDEX2_14~533~E432BEC8-63C5-4DCE-A500-90756157F30
    SERVER1 09-28-2017 14:13:25.737 +0200 WARN S2SFileReceiver - unable to remove dir=/opt/splunk/var/lib/splunk/_internaldb/db/380_B3A9C962-6814-49A6-A47E-593741B331A3 for bucket=_internal~380~B3A9C962-6814-49A6-A47E-593741B331A3
    SERVER2 09-28-2017 14:12:44.855 +0200 WARN S2SFileReceiver - unable to remove dir=/opt/splunk/var/lib/splunk/_internaldb/db/337_E432BEC8-63C5-4DCE-A500-90756157F30F for bucket=_internal~337~E432BEC8-63C5-4DCE-A500-90756157F30F
    SERVER2 14:12:44.844 +0200 WARN S2SFileReceiver - unable to remove dir=/opt/splunk/var/lib/splunk/00_p_INDEX1_14/db/301_E432BEC8-63C5-4DCE-A500-90756157F30F for bucket=00_p_INDEX1_14~301~E432BEC8-63C5-4DCE-A500-90756157F30F

These messages only occur after a rolling restart of the indexer cluster. Interestingly, the "Indexer Clustering Status" shows that everything is fine, and the Health Check does not find any issue at all. Does this mean all of those buckets are corrupt (around 20-30 different ones are listed)? That will be interesting to get explained by the storage people. But even if so: what does this have to do with new incoming data being duplicated?

**Edit**: Two indexers are clustered with RF=2, one indexer in datacenter X and one in datacenter Y, plus three Search Heads, with SF=2. The Search Heads seem to be working fine (2 in datacenter X, 1 in datacenter Y).

Skalli

Data Model: Change Root Event Constraint returns 0 results.

Hi all, I've been working on a Data Model and have a root event with the constraint `index=test_index`. Now, when I change the constraint to `index=prod_index`, nothing gets returned in the preview. **1) Can you change the index in the constraint? 2) Also, can you have a wildcard in the constraint, such as `index="*_index"`?** There is data in both indexes and I'm using Splunk Enterprise 6.4.2. Thanks all.
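
Not an answer to the data model questions themselves, but a quick sanity check (a sketch, assuming your role has read access to prod_index) is to run the constraints as ordinary searches over the same time range the preview uses, since a root event constraint is just a search string; each line below is a separate search:

    index=prod_index | stats count
    index=*_index | stats count by index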

sorting date/time

Hi, I have date/time values like the examples below:

    Mon 28 Dec 2015 06:26:19 PM ICT
    Mon 26 May 2014 04:52:02 PM ICT
    Fri 17 Feb 2017 04:01:59 PM ICT
    Wed 28 Jun 2017 05:49:04 PM ICT
    Wed 05 Oct 2016 06:46:30 PM ICT

I want to sort them by date, month, year... in the correct order. Could you please tell me how to do it?
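
A minimal sketch of one approach, assuming the values live in a field called mydate (a hypothetical name): convert the string to epoch time with strptime and sort on that. If %Z on your system does not accept the ICT abbreviation, strip the timezone from the string before parsing.

    ... | eval ts=strptime(mydate, "%a %d %b %Y %I:%M:%S %p %Z")
    | sort 0 ts
    | fields - ts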

JournalSliceDirectory: Cannot seek to rawdata offset 0, path="..." when running a search in Splunk Web on a non-clustered Splunk indexer

I am using Splunk 6.6.2. When I run a search in Splunk Web (`index="indextest"`) over a timeline of more than 30 days, I get this error:

    JournalSliceDirectory: Cannot seek to rawdata offset 0, path="/opt/splunk/var/lib/splunk/indextest/db/db_1502353482_1504459082_1/rawdata"

I have gone through some answers posted on Splunk Answers and tried a few fsck commands to repair. I ran the fsck scan command to identify the corrupted buckets, e.g.:

    splunk scan --all-buckets-all-indexes

Output in Unix:

    Operating on: idx=indextest bucket='/opt/splunk/var/lib//splunk/indextest/db/db_1502353482_1504459082_1/rawdata'
    JournalSliceDirectory: Cannot seek to rawdata offset 0, path="/opt/splunk/var/lib/splunk/indextest/db/db_1502353482_1504459082_1/rawdata"
    Corruption: corrupt slicesv2.dat or slices.dat

Then I tried to repair them:

    splunk repair --all-buckets-all-indexes
    splunk fsck repair --one-bucket --index-name=indextest --bucket-name=db_1502353482_1504459082_1 --try-warm-then-cold

Output in Unix:

    Operating on: idx=indextest bucket='/opt/splunk/var/lib/splunk/indextest/db/db_1502353482_1504459082_1/' (entire bucket)
    Rebuild for bucket='/opt/splunk/var/lib/splunk/indextest/db/db_1502353482_1504459082_1' took 64.23 milliseconds
    Repair entire bucket, index=indextest, tryWarmThenCold=1, bucket=/opt/splunk/var/lib/splunk/indextest/db/db_1502353482_1504459082_1, exists=1, localrc=7, failReason=No bloomfilter in finalDir='/opt/splunk/var/lib/splunk/indextest/db/db_1502353482_1504459082_1'

The issue was not resolved. Then I even tried disabling the index:

    /opt/splunk/bin/splunk disable index name_of_your_index

I started Splunk up, enabled the index from the web GUI, and restarted Splunk. Still the issue is not resolved. Any help and hints appreciated.

How to configure splunk to convert numeric data from English to Italian?

I followed the documentation to translate Splunk into a specific language: [http://docs.splunk.com/Documentation/Splunk/6.5.2/AdvancedDev/TranslateSplunk#Localize_dates_and_numbers][1]. Although I copied all the contents of the `locale/en_US` folder to `locale/it_IT` (for Italian) and restarted Splunk, the conversion didn't happen. Is there something I'm missing, or is there another way to do it?

[1]: http://docs.splunk.com/Documentation/Splunk/6.5.2/AdvancedDev/TranslateSplunk#Localize_dates_and_numbers

Graph from key/value pairs

Hello, I am extracting from a database the list of the 20 largest tables. The format is key=value pairs (table name = size), for example:

    TableSizeMB
    LargestTable=2012
    VeryLargeTable=2008
    SomeTable=500

Obviously the list is not fixed: some tables might become larger and make it onto the list while others drop off. Would it be possible to have a graph of these tables and their sizes? If yes, how should I define the search? Thank you in advance.
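
A minimal sketch of one way to chart this, assuming the name=size pairs appear in the raw event text; the field names pair, table, and size_mb are made up for the example:

    ... | rex max_match=0 "(?<pair>\w+=\d+)"
    | mvexpand pair
    | rex field=pair "(?<table>\w+)=(?<size_mb>\d+)"
    | eval size_mb=tonumber(size_mb)
    | stats latest(size_mb) as size_mb by table
    | sort - size_mb

Rendered as a bar chart (table on the x-axis, size_mb on the y-axis), the picture adjusts automatically as tables enter or leave the top-20 list; latest() keeps only the most recently reported size per table.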

Error in 'dbxquery' command: Invalid message received from external search command during setup, see search.log.

Hello, when I configured an input to get data from an Oracle DB, I got an error after executing the query:

    Error in 'dbxquery' command: Invalid message received from external search command during setup, see search.log.

The DB Connect version is 3.1.1. Please help me to solve this problem. Thanks!

Debugging app breakpoints fail in VS & PyCharm

I am working on setting up debugging for app development in Splunk 6.6.3. My challenge has been getting the breakpoints in the app to trigger. Following the blog post below, I have tried setting up both VS 2015 and PyCharm 2017.2.3, with success in starting SplunkWeb.

Post: "How to debug Django applications with pdb, PyCharm, and Visual Studio"

I can get a breakpoint to trigger in root.py, but not in my app. I start debugging, hit the root.py breakpoint, continue, and open a browser. When I perform an operation (save/delete of a KV store record), the expected breakpoints in the .py code do not trigger, but the operation completes and the KV store is updated. I am at a loss to understand why breakpoints set in myapp under "C:\Program Files\Splunk\etc\apps\myapp" will not trigger.

    VS project folder: C:\Projects\SplunkPython
    VS project home:   C:\Program Files\Splunk\
    VS startup file:   C:\Program Files\Splunk\Python-2.7\Lib\site-packages\splunk\appserver\mrsparkle\root.py
    VS working dir:    C:\Program Files\Splunk

I have also tried setting breakpoints in an installed 3rd-party app with the same result. Any suggestions from the developers out there?

How to make my search more efficient? Help to remove joins

My search is running pretty slow and I am looking to edit/remove the joins to make it run faster. It looks pretty messy; the reason I have weird things going on with the location information is that, for any location that does not match my lookup, I want location, Country, and Region filled in as UNKNOWN. Unfortunately, I was getting a weird error when using a lookup table with a default value, so I had to do it manually. I am using two joins because I am looking over three different time periods. Are there any ways to improve the efficiency of this search and make it run quicker?

    index=example date_month=August date_year=2017 (assignment_group="*")
    | dedup number
    | fillnull value="UNKNOWN" location
    | eval regionblank= "UNKNOWN"
    | eval countryblank= "UNKNOWN"
    | eval locationblank="UNKNOWN"
    | lookup CurrentSiteInfo.csv location
    | eval site=coalesce(location2,locationblank)
    | eval Region=coalesce(Region,regionblank)
    | eval Country=coalesce(Country,countryblank)
    | search ((Region="*") (Country="*") (site="*"))
    | stats count as Tickets by contact_type
    | join overwrite=false contact_type
        [search index=example earliest="6/01/2017:00:00:00" latest="12/31/2017:24:00:00" (assignment_group="*")
        | dedup number
        | fillnull value="UNKNOWN" location
        | eval regionblank= "UNKNOWN"
        | eval countryblank= "UNKNOWN"
        | eval locationblank="UNKNOWN"
        | lookup CurrentSiteInfo.csv location
        | eval site=coalesce(location2,locationblank)
        | eval Region=coalesce(Region,regionblank)
        | eval Country=coalesce(Country,countryblank)
        | search ((Region="") (Country="") (site=""))
        | bucket _time span=1mon
        | stats count as Tickets by contact_type _time
        | stats avg(Tickets) as Baseline by contact_type
        | eval Baseline = round(Baseline,0)]
    | eval "Baseline Variance" = Tickets - Baseline
    | join overwrite=false contact_type
        [search index=example earliest=-3mon@mon (assignment_group="*")
        | dedup number
        | fillnull value="UNKNOWN" location
        | eval regionblank= "UNKNOWN"
        | eval countryblank= "UNKNOWN"
        | eval locationblank="UNKNOWN"
        | lookup CurrentSiteInfo.csv location
        | eval site=coalesce(location2,locationblank)
        | eval Region=coalesce(Region,regionblank)
        | eval Country=coalesce(Country,countryblank)
        | search ((Region="*") (Country="*") (site="*"))
        | bucket _time span=1mon
        | stats count as Tickets by contact_type _time
        | stats avg(Tickets) as Average by contact_type
        | eval Average = round(Average,0)]
    | eval "Average Variance" = Tickets - Average
    | table contact_type Tickets Baseline "Baseline Variance" Average "Average Variance"
    | addcoltotals
    | sort 0 Tickets
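
Not a drop-in rewrite, but a minimal sketch of the usual pattern for dropping joins when the only real difference between the subsearches is the time range: search the widest window once, tag each event with the period it belongs to, and aggregate by that tag. The period boundaries and names below are illustrative, not your exact baseline window:

    index=example earliest=-6mon@mon (assignment_group="*")
    | dedup number
    | eval period=case(_time >= relative_time(now(), "@mon"), "current",
                       _time >= relative_time(now(), "-3mon@mon"), "last3mon",
                       true(), "baseline")
    | bin _time span=1mon
    | stats count as Tickets by contact_type period _time
    | stats avg(Tickets) as Tickets by contact_type period
    | eval Tickets=round(Tickets, 0)
    | chart values(Tickets) over contact_type by period

The location/UNKNOWN handling would then only need to be done once, before the first stats, instead of three times.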

Search logs show up only when I restart the UF on the DC

Hi guys, I have installed Splunk UF 6.3.3 on our 2k12 Domain Controller, and the following is my inputs.conf:

    [WinEventLog://Security]
    disabled = 0
    start_from = newest
    current_only = 1
    evt_resolve_ad_obj = 0
    checkpointInterval = 5
    # exclude these event IDs from being indexed.
    blacklist = 4634,4648,5156,4776,5145,4769,5158,5140,4658,4768,4661,4771,4672,5136,4770,4932,4933,4760,4625,4656,4663,4690,5154,4670,5152,5157,4724,4738,4931
    index = wineventlog
    renderXml = false

The issue: in the data summary I can see the count of logs for this sourcetype increasing in real time, i.e. events are getting indexed, but when I do a search it does not show any new events. Only when I restart the UF do I begin to see logs, which then stop again, and I have to keep restarting splunkd on the UF to see the new logs in search. Any help would be appreciated. Thanks in advance.

Splunk App Babel Fish - Anyone knows about it?

I'm at a .conf2017 session on Splunk NLP, and the demoed app is "App: Babel Fish" in a test environment. It converts natural language queries into SPL and presents visualizations; this could integrate really well with Alexa. Does anyone know where to get it?

How to extract a JSON part from an incoming journald stream to output only one value with /opt/splunk/etc/slave-apps/_cluster/local/transforms.conf

The JSON part to extract is MESSAGE. We created a regex which works in search, but it should also be added permanently to this transforms.conf file. Our solution, which didn't work, is:

    [journald_clean_index_k8s]
    REGEX = MESSAGE\":\"(?.*)\"
    DEST_KEY = MetaData:Message
    FORMAT = message:$1
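
For comparison, a minimal sketch of an index-time rewrite, assuming the goal is to keep only the MESSAGE value as the indexed event text and assuming a hypothetical sourcetype name (MetaData:Message is not a standard DEST_KEY; rewriting _raw is the usual way to drop the surrounding JSON at index time, applied on the parsing tier, which is why it lives under slave-apps on the indexers):

    # props.conf (the sourcetype name journald:k8s is an assumption)
    [journald:k8s]
    TRANSFORMS-keep_message = journald_clean_index_k8s

    # transforms.conf
    [journald_clean_index_k8s]
    # capture everything between "MESSAGE":" and the next unescaped quote
    REGEX = "MESSAGE":"((?:[^"\\]|\\.)*)"
    DEST_KEY = _raw
    FORMAT = $1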

stats count zeroes

I have the following search:

    ... | stats count(eval(action="failure")) as fails, count(eval(action="success")) as successes by user, host
    | stats list(host) as "Hosts Contacted", dc(host) as "Count of Hosts", list(fails) as "Fails per Hostname", count(fails) as "Total Fails", count(successes) as "Successful Logins" by user

I'm getting a table like the following:

    user        Hosts Contacted   Count of Hosts   Fails per Hostname   Total Fails   Successful Logins
    username1   somehost          2                1                    2             2
                somehost2                          1

As we can see, the query does not correctly determine the result of the login attempts. For comparison, if I add `list(fails)` to the final `stats` command, the values show up as 0s, but the Total Fails column still adds them up. Does my query count 0s as values and include them in the `count()` function, or am I missing something else here? The goal is to list the number of failed and successful logins, e.g. display the total number of failed logins per host and the number of successful logins, grouped by user. It's essentially the same for the successful logins: if I have 4 successful logins and 0 failed, both columns show 4.
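
A sketch of one likely fix, under the assumption that "Total Fails" is meant to add up the per-host failure counts: `count()` counts every non-null value, and 0 is a value, so each host row is counted whether or not a login actually failed. Replacing `count()` with `sum()` in the second stats adds the values instead:

    ... | stats count(eval(action="failure")) as fails, count(eval(action="success")) as successes by user, host
    | stats list(host) as "Hosts Contacted", dc(host) as "Count of Hosts", list(fails) as "Fails per Hostname", sum(fails) as "Total Fails", sum(successes) as "Successful Logins" by user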

Data retention of at least 6 months

Hello guys, I built this query; do you think it's reliable for checking which indexes should have their hot/cold sizes increased to retain at least 6 months of data?

    | tstats latest(_time) as latest, earliest(_time) as earliest WHERE index=* by index host source
    | eval lasttime=strftime(latest, "%Y-%m-%d")
    | eval firstevent=strftime(earliest, "%Y-%m-%d")
    | eval stoday=strftime(now(),"%Y-%m-%d")
    | eval months_ago=(now()-15552000)
    | eval diff=months_ago-earliest
    | eval resultat=if(match(diff,"-"),"- 6 mois","+ 6 mois")
    | sort index,host,source,firstevent
    | fields - latest lasttime stoday months_ago earliest diff

Thanks.
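
As a complementary check, a sketch using dbinspect, which reads bucket metadata rather than searching events and so also reflects data a tstats time range might miss (field names per the dbinspect documentation; worth verifying on your version):

    | dbinspect index=*
    | stats min(startEpoch) as oldest_event, sum(sizeOnDiskMB) as size_mb by index
    | eval days_retained=round((now() - oldest_event) / 86400, 0)
    | eval covers_6_months=if(days_retained >= 183, "yes", "no")
    | eval oldest_event=strftime(oldest_event, "%Y-%m-%d")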

Splunk and OSX High Sierra APFS

Splunk 7.0 doesn't start on the new macOS High Sierra with the APFS (Encrypted) filesystem. Is APFS not supported?

