Channel: Questions in topic: "splunk-enterprise"

change colon to dash in Search

First, let me say that I am not a programmer, a Splunk expert, or highly experienced with regex or SED. I mention this so that, if you offer an answer, you please do not leave any steps out expecting me to know what fills in the blanks. I receive MAC addresses in the format 00:00:00:00:00:00, but the logs I need to search use the format 00-00-00-00-00-00. I'm looking for a way for Search to take the input with colons and convert the colons to dashes before executing the search, so we don't have to change it manually every time.
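One possible approach — a minimal sketch, assuming the logs live in an index called `netlogs` (a hypothetical name) — is to normalize the pasted MAC address in a subsearch with `replace()` and `return`, so the dashed form is substituted into the outer search automatically:

```
index=netlogs
    [| makeresults
     | eval mac=replace("00:11:22:33:44:55", ":", "-")
     | return $mac]
```

The subsearch's `return $mac` expands to the dashed value as a search term, so only the pasted MAC (a placeholder here) has to change between runs.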

Field extraction stanza help in props.conf?

I have the following username field extraction in props.conf, which extracts the username:

```
[sourcetype_X]
EXTRACT-XYZ = username="(?<username>[^+\"]*)"
```

It extracts the field as follows:

```
x12345@abc-def-ghij-01.com
y67891@klm-def-ghij-01.com
z45787@abc-def-ghij-01.com
ABC-DEF
```

Now, what would the regex stanza be to extract the username from the above as follows?

```
x12345
y67891
z45787
ABC-DEF
```
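A sketch of one possible adjustment: adding `@` to the excluded character class stops the capture at the domain, while values without an `@` (like ABC-DEF) are still captured whole:

```
[sourcetype_X]
EXTRACT-XYZ = username="(?<username>[^+\"@]*)"
```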

How to save the CSV file to external location

I would like to save a CSV file to an external location. I am using the `| outputcsv` command, which saves the file on a Linux host, but I need the file to be picked up from there and moved to an external location such as WVDCCRVFASS\ETL\FlatFiles\Splunk. Can you please let me know how this can be done?
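For what it's worth, `outputcsv` writes to `$SPLUNK_HOME/var/run/splunk/csv/` on the search head. A minimal sketch, assuming the Windows share is mounted on the Linux host at `/mnt/etl` (a hypothetical mount point) and the default install path:

```
# hypothetical cron entry on the search head; paths and filename are assumptions
*/15 * * * * mv /opt/splunk/var/run/splunk/csv/mydata.csv /mnt/etl/FlatFiles/Splunk/ 2>/dev/null
```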

blocked=true messages

I noticed that my Splunk instance is logging messages like these:

```
02-07-2020 15:20:36.038 -0500 INFO Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=993, largest_size=993, smallest_size=993
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=2035, largest_size=2035, smallest_size=2035
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=auditqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=809, largest_size=809, smallest_size=809
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=998, largest_size=998, smallest_size=998
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=6144, current_size_kb=6143, current_size=99, largest_size=99, smallest_size=99
02-07-2020 15:21:35.038 -0500 INFO Metrics - group=queue, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=995, largest_size=995, smallest_size=995
```

How can I resolve this?
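These lines mean the named pipeline queues are full and back-pressure is building, often because something downstream (indexing disk I/O, a blocked output) is the bottleneck. A quick sketch for seeing which queues block most often, using the standard `_internal` metrics data:

```
index=_internal source=*metrics.log* group=queue blocked=true
| timechart count by name
```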

ERROR DeployedApplication - Failed to install app=/web/splunk/etc/master-apps/s; reason=Application does not exist

I am receiving the above error when trying to deploy updated and new apps from the cluster master to the indexer cluster. The apps do exist in the /web/splunk/etc/managed-apps directory structure on the cluster master, but I still get this error. I have reviewed several Answers posts and none of them have worked for us. We recently worked on indexes.conf to standardize some settings, and I'm wondering if that may have caused it. All we did was remove `repFactor = auto` from each index (there are about 35) and put it in the [default] stanza, and likewise move `maxTotalDataSizeMB` from each index configuration into the [default] stanza. Now when we push a bundle, we get the "Failed to install app" error with "Application does not exist". I would really appreciate any assistance you can give me.
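For reference, a sketch of the kind of [default] consolidation described — the app name, index name, and size value here are placeholders, not the poster's actual configuration:

```
# master-apps/<app>/local/indexes.conf — illustrative only
[default]
repFactor = auto
maxTotalDataSizeMB = 500000

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```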

How to set up a triggered alert on an index based on usage?

Hello, I would like to set up an ongoing alert that triggers any time an index ingests 20 GB of logs. This is to prevent a license violation caused by developers turning on debug mode and leaving it on after the issue is resolved, which produces a lot of unnecessary logs. Thank you!
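A minimal sketch of a scheduled alert search, using the license usage data recorded in `_internal` (run against the license master; `idx` and `b` are the standard fields in license_usage.log):

```
index=_internal source=*license_usage.log* type=Usage earliest=-24h
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024, 2)
| where GB > 20
```

Saved as an alert that triggers when results are returned, this lists any index over the 20 GB threshold in the last 24 hours.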

Feature Request: Epoch Time Correction

```
| makeresults
| eval time=-62167252739
| eval _time=time
| eval time_text=strftime(_time,"%c %::z")
```

`-62167252739` corresponds to "0000/01/01 00:00:00 +0000", but my result is:

```
_time                 time           time_text
0000/01/01 00:00:59   -62167252739   Sat Jan 1 00:00:00 0000 +09:18:59
```

My TZ is JST. Is this problem specific to JST? I don't think so. Hopefully it will be fixed.
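The odd +09:18:59 offset looks like the tzdata local-mean-time entry for Asia/Tokyo, which applies to dates before standard time was adopted — a plausible explanation for why a year-0 timestamp picks it up:

```
# from the tzdata 'asia' file (Asia/Tokyo zone, first rule line)
# Zone  Asia/Tokyo  9:18:59  -  LMT  1887 Dec 31 15:00u
```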

How to get the correct URL to the Splunk collector

While creating a Splunk collector connection, it shows "Please enter a valid URL beginning with https://", even though my URL format starts with https://.
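Assuming this refers to the HTTP Event Collector, the endpoint format is typically `https://<host>:8088/services/collector/event` (8088 is the default HEC port; yours may differ). A quick sanity check from the command line — host and token are placeholders:

```
curl -k https://<host>:8088/services/collector/event \
     -H "Authorization: Splunk <hec-token>" \
     -d '{"event": "connectivity test"}'
```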

Can Splunk find love?

Since Valentine's Day is near, and Splunk can search for everything, I thought it might find love. How?

Use of Timewrap command to control the time range

Hi, I am trying to plot a graph of response time over a period of time. I am using the timewrap command to plot it for yesterday, the day before yesterday, and the same day last week. The problem is that I only want a certain window within each day, for example between 12:00 PM and 10:00 PM (peak hours). I am snapping the time in the search itself, like this: `earliest=-7d@d+3h latest=@d`, but it is not working. On the x-axis the graph still plots from 12:00 AM, but what I want is from 12:00 PM. Any help is appreciated. (Screenshot: resp-time.jpg — the response-time chart whose x-axis starts at 12:00 AM.)
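Time modifiers only bound the overall search window; they can't carve a recurring daily slice out of it. One common sketch is to search the full window and filter on the hour of each event before wrapping (the index, sourcetype, and response-time field names here are assumptions):

```
index=web sourcetype=access_combined earliest=-8d@d latest=@d
| eval hour=tonumber(strftime(_time, "%H"))
| where hour >= 12 AND hour < 22
| timechart span=15m avg(response_time) as avg_resp
| timewrap 1d
```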

Line Break Assistance required

Hello Splunkers, I need your assistance with a line break at `],[` for the logs below:

```
{"time":1581014469,"states":[["4b1803","SWR55X ","Switzerland",1581014469,1581014469,8.7818,46.8227,6880.86,false,206.91,354.01,-7.8,null,7063.74,"1000",false,0],["3cf0a4","IFA509 ","Germany",1581014469,1581014469,7.9657,46.878,8534.4,false,143.86,32.44,0,null,8679.18,"5344",false,0],["3c6758","DLH1333 ","Germany",1581014469,1581014469,8.545,47.7009,11582.4,false,212.56,30.23,0,null,11681.46,"1030",false,0],["3c5442","DLH02J ","Germany",1581014469,1581014469,6.6594,46.3485,10363.2,false,226.41,39.01,0,null,10492.74,"1000",false,0],["3c658e","DLH15U ","Germany",1581014468,1581014469,9.0273,46.5254,10355.58,false,229.56,358.2,0,null,10347.96,"1000",false,0],["4a8159","SCW3P ","Sweden",1581014469,1581014469,6.9469,46.9315,8557.26,false,221.02,229.15,-10.08,null,8557.26,"0763",false,0],["440344","LDM74J ","Austria",1581014469,1581014469,10.1866,46.0682,5631.18,false,197.18,242.83,-14.96,null,5814.06,"4131",false,0]
```

The current props.conf used for these logs (a REST mechanism is used for data integration):

```
[geomonitor]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n,])\["
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
disabled = false
pulldown_type = true
```

Thanks in advance
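A sketch of one possible LINE_BREAKER: Splunk breaks events where the regex matches and discards the text in the first capture group, so capturing only the comma in `],[` leaves the closing `]` on one event and the opening `[` starting the next:

```
[geomonitor]
SHOULD_LINEMERGE = false
LINE_BREAKER = \](,)\[
```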

How often should I upgrade Splunk Enterprise?

The [software support policy for Splunk Enterprise](https://www.splunk.com/en_us/legal/splunk-software-support-policy.html) is now two years. My company has a policy of waiting a few releases before upgrading any software, to make sure new features are stable. But then we only have about a year before that version moves out of support. How do we find the sweet spot for Splunk Enterprise upgrades?

How to manage reports and alerts for 150+ indexes?

We have a ton of indexes and need to better understand which ones have stopped receiving events, so that we can report and alert on them. We have a Splunk Enterprise v7.3.3 distributed environment with multiple (non-clustered) indexers and non-pooled search heads configured in standalone mode. Our DSV, SH, and ES are each individual hosts, and our ES is configured as a secondary SH. We manage index changes via CLI edits of indexes.conf, a deployment app, and redeployment of server classes. We currently use the search below in a dashboard panel, which generates a list of all "zero-count" indexes that haven't received events in over 24 hours; but as a static list, it takes a lot of additional work to get a holistic view of what changed and when. I'd prefer query logic over a new app, as we're already hoping to pare down some of (our own) 'bloat.'

```
| tstats count where (index=* earliest=-24h latest=now()) by index
| append [| inputlookup index_list.csv | eval count=0]
| stats max(count) as count by index
| where count=0
```

Thanks in advance!
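One sketch that adds the "when" dimension: track the last event time per index instead of a binary 24-hour count, so the panel shows how stale each index is (index_list.csv is the same lookup as above):

```
| tstats max(_time) as lastEvent where index=* by index
| append [| inputlookup index_list.csv | eval lastEvent=0]
| stats max(lastEvent) as lastEvent by index
| eval hoursSince=if(lastEvent=0, "never", round((now()-lastEvent)/3600, 1))
| where lastEvent=0 OR (now()-lastEvent) > 86400
```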

Allowed characters for metadata fields source and sourcetype

My question is simple: which characters are allowed for the values of the metadata fields `source` and `sourcetype`? I could not find any documentation on this.

Arithmetic on multi field values

I am new to Splunk, and I need to perform arithmetic on some multivalue fields. What is the best way to do this? Here is an example of an event (where the "stuff" field is an array containing any number of objects with keys "A" and "B"):

```
event1 {
  name: foo
  stuff: [
    { A: 10, B: 220.0 },
    { A: 2,  B: 50.0 }
  ]
}

event2 {
  name: foo
  stuff: [
    { A: 2, B: 100.0 }
  ]
}
```

Here is the search I am using:

```
| mvexpand stuff{}
| rename stuff{}.* as *
| eval test=B/A
| table _time A B test
```

However, `test` is empty whenever there is more than one "stuff" entry in an event. In the example above: test=null, null, 50. My goal is to calculate `test` so that: test=22, 25, 50.
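`mvexpand` only expands one field, so after expanding `stuff{}` the A and B values are still multivalue and no longer pair up row by row. A common sketch is to zip A and B together first, expand the pairs, then split them apart again:

```
| eval pairs=mvzip('stuff{}.A', 'stuff{}.B', "|")
| mvexpand pairs
| eval A=mvindex(split(pairs, "|"), 0), B=mvindex(split(pairs, "|"), 1)
| eval test=round(B/A, 0)
| table _time A B test
```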

Make extractions in props.conf from search query

```
| makeresults
| eval _raw="Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"global\", \"origin\": \"dynstats\", \"values\": { } } Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"imuxsock\", \"origin\": \"imuxsock\", \"submitted\": 0, \"ratelimit.discarded\": 0, \"ratelimit.numratelimiters\": 0 } Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"action 0\", \"origin\": \"core.action\", \"processed\": 50996, \"failed\": 0, \"suspended\": 0, \"suspended.duration\": 0, \"resumed\": 0 } Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"action 1\", \"origin\": \"core.action\", \"processed\": 50996, \"failed\": 0, \"suspended\": 0, \"suspended.duration\": 0, \"resumed\": 0 }"
| makemv delim=" " _raw
| stats count by _raw
| rex "(?<json>{.*)"
| spath input=json
```

This query works fine. What settings do I need to do the same extraction via props.conf? I created the following, but I don't know the other settings; if possible, I'd prefer a solution that does not use SEDCMD.

```
TIME_FORMAT = %b %d %T
INDEXED_EXTRACTIONS = JSON
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
```

Is this it?

```
FIELD_HEADER_REGEX = ^.*?(?={)
```

cf. [Extract fields from files with structured data][1]

[1]: https://docs.splunk.com/Documentation/Splunk/8.0.1/Data/Extractfieldsfromfileswithstructureddata
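One hedged sketch: INDEXED_EXTRACTIONS = JSON (and FIELD_HEADER_REGEX) are meant for files that are entirely structured data, and KV_MODE = json likewise expects the whole event to be JSON; with a syslog prefix in front of the braces, neither applies cleanly. A search-time EXTRACT that captures the JSON body still needs `| spath input=json` in the search, but keeps props.conf simple (the sourcetype name is hypothetical):

```
[rsyslog_pstats]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %b %d %T
NO_BINARY_CHECK = true
KV_MODE = none
EXTRACT-json_body = rsyslogd-pstats:(?<json>\{.*\})
```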

Can't see newly created indexes on search head in distributed search

I have a single indexer and a single search head, with the indexer attached as a search peer, and I created one index called "winevent" on the indexer. I don't understand why the search head cannot see this index or auto-complete it when I type it in search. Is there another file I need to modify to make my search head aware of the indexes on an indexer? I haven't seen a really clear answer on this, and I am trying to expand my Splunk instance from all-in-one to a distributed architecture.
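A quick way to confirm what the search head can actually reach on its peers — a sketch using `eventcount`, which lists indexes per server without needing local indexes.conf entries:

```
| eventcount summarize=false index=*
| dedup index
| table index server
```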

Unable to login to freshly installed Splunk Enterprise (just reloads login page)

On a fresh Splunk Enterprise install, I cannot log in to the web GUI. When I get the password wrong, I am told it is wrong. When I get the password right, it just reloads the login page. Here are the symptoms:

- Fresh install of Splunk 8.0.1 (have also tried 8.0.0 and 7.3.4).
- During setup, I set a strong password (another post mentioned that 7.1+ enforces strong passwords, so I tried that).
- No errors on install.
- From the web GUI (localhost:8000), when I try my username and password, the login screen simply reloads. When I try a known incorrect password, I get a "Login failed" notification. So I am led to believe my password from install is actually recognized, since I am not told the login failed when I enter the correct password — but I can't get past the login page.

Here's what I have tried so far:

- Uninstalled and reinstalled Splunk Enterprise 8.0.1, 8.0.0, and 7.3.4.
- Tried weak and strong passwords.
- Tried the default admin/changeme combination. But again, when I get the password wrong, I am told it is wrong; when I get it right, the login page just reloads.
- Tried to reset the admin password by renaming /etc/passwd to /etc/passwd.bak and then creating a new file called user-seed.conf in /etc/system/local with:

```
[user_info]
PASSWORD = "my strong password"
```

- Also tried to reset the password with:

```
splunk cmd splunkd rest -noauth POST /services/admin/users/admin "password=changeme"
```

I am running on Windows 10. Of note, I have run Splunk 8.0.0 on this laptop before. I had the 60-day trial and switched to the free license group, but when I did that I had license violations left over from the trial. Since I was not able to search because of the violations, I re-installed from scratch (I didn't have much data ingested, so I didn't mind), but this is when I stopped being able to log in, and I have had this issue ever since. When I uninstall Splunk, I verify that there are no files left in C:\Program Files\ (i.e., no Splunk directory anymore). Has anyone seen this issue before? Anyone know a way past it?
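For reference, the documented user-seed.conf format includes a USERNAME line as well — worth checking, since without it the stanza may not seed the admin account (the password shown is a placeholder):

```
# %SPLUNK_HOME%\etc\system\local\user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = Chang3d!
```

user-seed.conf is only applied when etc/passwd is absent, and takes effect on the next splunkd restart.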

Subtract One Field from Another

Hi guys, I'm having trouble making a simple subtraction (well, I thought it would be simple!). Field1 is a number in string format; Field2 is a count of events. What am I doing wrong?

```
index=index_name
| convert num(Field1) as Field1Total
| stats count(Field2) as Field2Total
| eval Difference=Field2Total - Field1Total
| table Difference
```

Thanks for your help!
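One likely culprit: `stats` only keeps the fields it aggregates, so Field1Total no longer exists after the stats line and the eval yields null. A sketch that carries both values through stats (assuming Field1 holds the same total in every event):

```
index=index_name
| eval Field1Total=tonumber(Field1)
| stats count(Field2) as Field2Total, max(Field1Total) as Field1Total
| eval Difference=Field2Total - Field1Total
| table Difference
```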

Search heads cannot connect to web login after 8.0 upgrade

Hello. After upgrading from 7.3 to 8.0.1, my search heads no longer work: the search head web page will not load. Any ideas? If I restore from a backup to 7.3, all of my indexed data is there and still working, and I can search it all. If there are no ideas: is there a way to keep the config but apply it to a new search head, effectively rebuilding the search head from scratch but on an 8.0.1 build? Either will suffice. Many thanks in advance.
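If it comes to a rebuild, a rough sketch of carrying the configuration onto a fresh install (paths assume a *nix install at /opt/splunk; back up first, and expect app-compatibility work, since 8.0 moves apps to Python 3):

```
# on the new 8.0.1 search head, with Splunk stopped
cp -r /backup/splunk/etc/apps/.          /opt/splunk/etc/apps/
cp -r /backup/splunk/etc/users/.         /opt/splunk/etc/users/
cp -r /backup/splunk/etc/system/local/.  /opt/splunk/etc/system/local/
/opt/splunk/bin/splunk start
```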