Channel: Questions in topic: "splunk-enterprise"

Config Explorer error

Hi all, I tried to use the Config Explorer app on a standalone Splunk server (on Italian Windows 10), but when I open it I get the following error message: "An error occurred! init: An exception of type TypeError occurred. Arguments: ('environment can only contain strings',)" and after that I don't have any function, dashboard, or interface. Is there an installation or configuration step to perform to get the app started? Cheers. Giuseppe

Logs not received into Splunk

Hi Team, a HF (heavy forwarder) has been installed on a server and connectivity to Splunk has been set up, but we are not able to see any logs in Splunk. We have two different hosts: for one of the hosts we are able to see the logs, but we cannot see the logs for the other host. Note: Host2 is using the same index name, and its log files are placed in the same path as on Host1.
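Editor's sketch (not from the original post): a first check is whether the forwarder on the second host is reading the files at all. Its own `_internal` events should be visible on the search head; `host2` below is a placeholder for the real hostname:

    index=_internal host=host2 source=*splunkd.log* (component=TailingProcessor OR component=TailReader OR component=WatchedFile)
    | stats count by component, log_level

No results at all suggests the forwarder is not connecting; results with WARN/ERROR lines usually point at the file monitor itself.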

Need to add 45 days to a field

I have a field "add_time" with values like "05-27-2020 08:57:34.024". I want to create a field that shows the time 45 days ahead of the given time, i.e. the output should be "07-11-2020 08:57:34.024". Please help me write this SPL. Thanks in advance.
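A minimal sketch of one way to do this: parse the string into epoch time, shift it with relative_time, then format it back. The format string assumes the values always look like the sample above:

    ... | eval add_time_epoch = strptime(add_time, "%m-%d-%Y %H:%M:%S.%3N")
    | eval add_time_plus45 = relative_time(add_time_epoch, "+45d")
    | eval add_time_45 = strftime(add_time_plus45, "%m-%d-%Y %H:%M:%S.%3N")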

Don't Expire Alerts

Hello All, Sorry to ask a silly question; I had a look around but was unable to find a solution. When we set up an alert in Splunk, there is an "Expires" parameter. I understand this is the TTL for the alert (sorry if I have misunderstood it). I don't want my alert to expire. How can I achieve this, please? If there is no way to achieve it, is there a way to trigger a notification when the alert is about to expire? I tried a couple of options in the alert settings to see if Splunk triggers a notification when an alert expires, but I'm afraid no notification was triggered. For example, I set "Trigger Condition" and "Trigger Time" and set the alert to expire in 10 minutes. The alert expired, but no notification was triggered via email. I had a feeling it wouldn't work, since the trigger condition is the condition that triggers the alert and not the alert expiry - but I tried my luck! Best Regards,
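For context (an editor's note, not from the thread): the "Expires" setting corresponds to alert.expires in savedsearches.conf and controls how long the record of a triggered alert is kept. A hedged workaround is simply to set a very long TTL; the stanza name and value below are illustrative:

    [My Alert Name]
    alert.expires = 3650d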

Matching fields from different indices to return another field

Hi, I have two different indexes where I need to match a field and, if it matches, return another field.

First search (Index1):

    FileName       DeviceName
    explorer.exe   myserver.test.com
    processor.dll  anothersystem.xyz.abc
    third.exe      yetanother.aaa.bbb
    another.exe    myserver.test.com

Second search (Index2):

    HostName             Owner
    MYserver.test.com    bob@sample.com
    nonEXistent.abc.ccc  larry@sample.com
    yetANOTHER.aaa.bbb   charlie@sample.com

Desired search result:

    DeviceName          FileName      Owner
    myserver.test.com   explorer.exe  bob@sample.com
                        another.exe
    yetanother.aaa.bbb  third.exe     charlie@sample.com

A couple of things to notice:
- I need to show results where DeviceName and HostName match. The two fields may differ in case, so case-insensitive matching is required.
- If DeviceName==HostName, I need the Owner field returned from Index2.
- One DeviceName/HostName may have many FileNames under it, and I need to display all of them (explorer.exe + another.exe).

I've been tinkering around and am having a hard time finding the right query. Here's where I'm at:

    (index=index1 sourcetype=type1 FileName=somecondition*) OR (index=index2 sourcetype=type2)
    | fields FileName, DeviceName, Owner, HostName
    | eval magic=case(DeviceName==HostName, Owner)
    | stats list(FileName) as FileName, list(magic) as SysOwner by DeviceName

It doesn't work, though. I tried variations of the eval statement using `if`, `coalesce`, and a few other solutions from other questions, but I believe the case difference between the two fields is what is hindering me. I'm still new to Splunk and any help would be appreciated! :)
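One hedged way around the case difference is to normalize both fields into a shared key before aggregating. This sketch assumes Owner only appears on Index2 events and FileName only on Index1 events:

    (index=index1 sourcetype=type1 FileName=somecondition*) OR (index=index2 sourcetype=type2)
    | eval match_key=lower(coalesce(DeviceName, HostName))
    | stats values(FileName) as FileName, values(Owner) as Owner by match_key
    | where isnotnull(FileName) AND isnotnull(Owner)
    | rename match_key as DeviceName

The final where clause keeps only keys that matched on both sides, which mirrors the desired output above.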

ProcessRunner: No such file or directory

Hello! I'm working on streaming telemetry data to Splunk. I use Splunk Universal Forwarder v7 x86_64 to capture and stream data to Splunk Enterprise 8. I use `script://` inputs to capture data and run them at specified intervals. The data is being streamed to the server successfully. But intermittently `splunkd` (the UF) crashes, and I see the following errors in my `splunkd.log`:

    06-02-2020 17:12:27.975 -0700 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
    06-02-2020 17:12:27.993 -0700 INFO WatchedFile - Will begin reading at offset=1182 for file='/opt/splunkforwarder/var/log/splunk/splunkd-utility.log'.
    06-02-2020 17:12:56.832 -0700 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views
    06-02-2020 17:30:37.696 -0700 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
    06-02-2020 17:53:37.315 -0700 ERROR ProcessRunner - Error from ProcessRunner helper process: ERROR - Failed opening "": No such file or directory
    06-02-2020 17:53:37.316 -0700 ERROR ProcessRunner - Error from ProcessRunner helper process: terminate called after throwing an instance of 'EventLoopException'
    06-02-2020 17:53:37.316 -0700 ERROR ProcessRunner - Error from ProcessRunner helper process: what(): Main Thread: about to throw an EventLoopException: error from EventLoop poll: No such file or directory
    06-02-2020 17:53:37.676 -0700 FATAL ProcessRunner - Unexpected EOF from process runner child!

I have tried to dig through Splunk Answers and Google, but I couldn't find much documentation on what file `ProcessRunner` was trying to open. Could someone help me, or point me to the right channel, to understand how I can fix this issue? Here are the script stanzas from my `inputs.conf`:

    [script://$SPLUNK_HOME/bin/scripts/.py]
    source = source-one
    sourcetype = source-one

    [script://$SPLUNK_HOME/bin/scripts/.path]
    source = source-two
    sourcetype = source-two
    interval = 60

    [script://$SPLUNK_HOME/bin/scripts/.path]
    source = source-three
    sourcetype = source-three
    interval = 1800

    [script://$SPLUNK_HOME/bin/scripts/.path]
    source = source-four
    sourcetype = source-four
    interval = 1800

Thank you!

Can we delete frozen data in Splunk?

Recently we encountered a problem: the /opt file system on the indexer server reached 100%, due to which users were unable to search. We found that the /opt/splunk/archive/main folder is consuming most of the disk space (499 GB out of 500 GB). This is the folder which contains frozen data; note the coldToFrozenDir line in the configuration of the main index in indexes.conf:

    [main]
    coldPath = $SPLUNK_DB/defaultdb/colddb
    bucketRebuildMemoryHint = 0
    compressRawdata = 1
    syncMeta = 1
    frozenTimePeriodInSecs = 15552000
    enableOnlineBucketRepair = 1
    homePath = $SPLUNK_DB/defaultdb/db
    enableDataIntegrityControl = 0
    coldToFrozenDir = /opt/splunk/archive/main
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb
    enableTsidxReduction = 0
    maxTotalDataSizeMB = 50000

Can we delete frozen data?

Splunk DB Connect app running on Windows with Python 3 is not working

Hi Splunkers, We have the following environment:
• Splunk - 8.0.0
• OS - Windows Server 2016
• Splunk db_connect app - 3.2.0/3.3.1
• Python - Python 3
• JRE - 1.8

NOTE: the machine has the timezone variable (TZ) set. With the above configuration, the DB Connect app throws an exception in the UI, **"Not able to communicate with task server"**, and this is due to the fact that **dbx_logging_formatter.py** throws an exception while calling the line *os.unsetenv('TZ')*. The exception is: **"module 'os' has no attribute 'unsetenv'"**. After looking at the Python 3 SDK, we found that the *unsetenv* method is not present in the *os* module on this platform. This particular piece of code would have to be replaced, e.g. by *os.putenv('TZ', None)*, when running with Python 3. Please let us know if this is a known issue or whether there is a workaround. We cannot unset 'TZ' as a workaround, and we cannot downgrade from Python 3 to Python 2. TIA Hanika

How to use a .json file as input in a POST call to the REST API

I'm trying to update a role in our environment via the Splunk REST API, using a Postman-like app with an input file which holds several parameter changes for the specified role. The POST call looks like this:

    POST https://localhost:8089/services/authorization/roles/rl_user
    Authorization: Bearer xxx
    < ...\input.txt

And this is the content of the input file:

    srchIndexesAllowed=main;srchJobsQuota=3;srchDiskQuota=300

Even though this works quite well, editing the input file manually like this is really impractical. What I would like to know is whether there is an option to first call the API to export the role in Atom (XML) or JSON format, take that export, update the values I need, and then import the file again via a POST call to change the parameters. I've been playing with this for quite some time but no luck. Any advice is appreciated. Thanks.
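As a hedged pointer (editor's note, not from the thread): the endpoint can return JSON via the documented output_mode parameter, in the same request style used above:

    GET https://localhost:8089/services/authorization/roles/rl_user?output_mode=json
    Authorization: Bearer xxx

Note, though, that POSTs to these endpoints expect form-encoded key=value pairs rather than a raw JSON body, so the exported JSON would still need to be translated back into key=value form before re-importing.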

Corrupted fields problem

I have a problem with this search over the last 25 days:

    index=syslog Reason="Interface physical link is down" OR Reason="Interface physical link is up" NOT mainIfname="Vlanif*" "nw_ra_a98c_01.34_krtti"

Normally the field7 values look like these:

    Region  field7                  Date   mainIfname             Reason                         count
    ASYA    nw_ra_m02f_01.34pndkdv  may 9  GigabitEthernet0/3/6   Interface physical link is up  3
    ASYA    nw_ra_m02f_01.34pldtwr  may 9  GigabitEthernet0/3/24  Interface physical link is up  2

But recently they were like this:

    00:00:00.599 nw_ra_a98c_01.34_krtti
    00:00:03.078 nw_ra_a98c_01.34_krtti

I think the problem may be related to this: it started to happen after a disk free alarm ("-Cri- Swap reservation, bottleneck situation, current value: 95.00% exceeds configured threshold: 90.00%. : 07:17 17/02/20"). In particular, this is not about disk; it's about swap space: the application runs out of memory and then goes to swap. Memory was increased before, but it was obviously insufficient, as it is switching to swap again. I need to understand: why do they use so many resources?

Check Deployer and search head status in internal logs

I am trying to monitor the deployer and search head service status using _internal logs. Which fields should I consider to determine whether the Splunk service on the deployer and search heads is up and running? Note: I am building a dashboard to monitor Splunk service status.
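One hedged approach (an editor's sketch; the host names and the 5-minute threshold are placeholders): since every running Splunk instance forwards its own _internal events, the absence of recent _internal data from a host is a reasonable proxy for the service being down:

    index=_internal source=*splunkd.log* (host=deployer01 OR host=sh01 OR host=sh02)
    | stats latest(_time) as last_seen by host
    | eval status=if(now() - last_seen > 300, "down", "up")
    | fieldformat last_seen=strftime(last_seen, "%F %T")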

What is the use of the command modifier, in simple terms?

In layman's terms, what is the use of a command modifier, please? I don't know what it does beyond the understanding that it modifies commands.

String matches

I have an event for each device with multiple checks, as below, and I want to find the count of devices that have "Pass" in all the fields and the count of devices that have "Fail" in even one field.

    Device1  check1: Pass  check2: Fail  check3: Pass
    Device2  check1: Pass  check2: Pass  check3: Pass
    Device3  check1: Fail  check2: Fail  check3: Pass

I'm looking for something similar to this:

    Healthy_Device_Count = 1
    Un_Healthy_Device_Count = 2
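A minimal sketch, assuming check1/check2/check3 are extracted fields and each device emits a single event carrying all three checks:

    ... | eval healthy=if(check1="Pass" AND check2="Pass" AND check3="Pass", 1, 0)
    | stats count(eval(healthy=1)) as Healthy_Device_Count, count(eval(healthy=0)) as Un_Healthy_Device_Count

If a device can emit several events, swap count(eval(...)) for dc(eval(if(healthy=1, Device, null()))) to count distinct devices instead.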

Search with parameters

Hello, I have this query:

    index=prod eventtype="csm-messages-dhcpd-lpf-eth0-listening" OR eventtype="csm-messages-dhcpd-lpf-eth0-sending" OR eventtype="csm-messages-dhcpd-send-socket-fallback-net" OR eventtype="csm-messages-dhcpd-write-zero-leases" OR eventtype="csm-messages-dhcpd-eth1-nosubnet-declared"
    | transaction maxpause=2s maxspan=2s maxevents=5
    | eval Max_time=(duration + _time)
    | eval Min_time=(_time)
    | table _time, eventcount, eventtype, Min_time, Max_time, tail_id, kafka_uuid
    | foreach eventtype [eval flag_eventtype=if(eventcount!=5,"no", "yes")]

Now I have a lookup table, and I want to set parameters in my query that are taken from the lookup table. For example, instead of searching for the five eventtype values listed above, I want to take the eventtype values from the lookup table. How can I do that? Thanks.
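A hedged sketch using a subsearch: inputlookup reads the table and format turns the values into an OR expression. The lookup name dhcp_eventtypes.csv and its eventtype column are assumptions for illustration:

    index=prod [| inputlookup dhcp_eventtypes.csv | fields eventtype | format]
    | transaction maxpause=2s maxspan=2s maxevents=5

The subsearch expands to ( ( eventtype="..." ) OR ( eventtype="..." ) ... ), after which the rest of the pipeline stays unchanged.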

The scripts of the Splunk Add-on for Unix and Linux are pending in the ps queue

I run Universal Forwarder 8.0.3 and the Splunk Add-on for Unix and Linux 8.0.0 on AIX 7.1, and I found no events coming into index=OS. After I ran ps -ef | grep splunk, I found some scripts (e.g. iostat.sh, cpu.sh, ...) pending in the queue. After we killed those jobs, the events came into the index and the scheduled scripts were running again. How can I troubleshoot this issue?

Two overlays, using different time spans

I have the following timechart, which I display in a column chart, where I use the average value as an overlay:

    ... | timechart span=1d avg(time), count

However, if possible, I'd like a second overlay that shows a flat line with the average time over the entire search period, not just per daily span of the timechart.
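A hedged sketch: eventstats after the timechart adds the overall value to every row, which charts as a flat line. Note this computes the mean of the daily means; if you need the event-weighted overall average, compute it with eventstats before the timechart instead:

    ... | timechart span=1d avg(time) as avg_time, count
    | eventstats avg(avg_time) as overall_avg_time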

Search only displaying 24 hours of data

1. There are approximately 1.5 billion ingested entries from 40 forwarders.
2. Performing a search with any criteria on Windows hosts lists all events (All Time).
3. Performing the same search on Linux hosts only returns 24 hours of data regardless of the time/date ranges supplied. Each day, the data only covers the last 24 hours.

What settings could be causing this?
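One hedged diagnostic (editor's suggestion): compare event time against index time for the Linux hosts; a large or negative lag usually points at timestamp parsing or timezone settings rather than retention. index=main host=linux* is a placeholder for the real scope:

    index=main host=linux* | eval lag_seconds=_indextime - _time
    | stats min(_time) as earliest_event, max(_time) as latest_event, avg(lag_seconds) as avg_lag by host
    | convert ctime(earliest_event) ctime(latest_event)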

"too_small" sourcetype gets appended in some Splunk versions.

I have added a monitor stanza for a log folder containing log files that I want to ingest into Splunk. I have set a sourcetype for each log file in props.conf, but in some Splunk versions (like 7.3.3, 8.0.0, 8.0.1) this is not working, and Splunk sets the sourcetype for those log files to one of the following:

1) log_file_name-too_small
2) log_file_name-{digit} (like log_file_name-2, log_file_name-4)

I have read in some answers that this happens because of the small size of the log file, but it is not an issue in some Splunk versions (like 8.0.4), and it happens on both Windows and Linux (mostly on Windows). I have tried the approaches below in props.conf, but none of them seem to work:

    1) [source::.../etc/apps//local/logs/log_file.log(.\d+)?]
       sourcetype =
    2) [source::...*etc*apps**local*logs*log_file.log(.\d+)?]
       sourcetype =
    3) [source::....*etc.*apps.*.*local.*logs.*log_file.log(.\d+)?]
       sourcetype =
    4) [source::...(.)*etc(.)*apps(.)*(.)*local(.)*logs(.)*log_file.log(.\d+)?]
       sourcetype =
    5) [source::...(.*)etc(.*)apps(.*)(.*)local(.*)logs(.*)log_file.log(.\d+)?]
       sourcetype =
    6) [source::...\\etc\\apps\\\\local\\logs\\log_file.log(.\d+)?]
       sourcetype =
    7) [source::...\etc\apps\\local\logs\log_file.log(.\d+)?]
       sourcetype =
    8) [source::C:\\Program Files\\Splunk\\etc\\apps\\\\local\\logs\\log_file.log(.\d+)?]
       sourcetype =
    9) [source::C:\Program Files\Splunk\etc\apps\\local\logs\log_file.log(.\d+)?]
       sourcetype =

I tried all of these because I thought it might be an issue with the Windows path separator, but none of them work. I did find one solution that works and gives the right sourcetype:

    [source::...log_file.log(.\d+)?]
    sourcetype =

But I don't want to rely on this approach, because the same log file name might be present in some other app and would be matched the same way, and it is also time-consuming since it matches the file across all folders. I tried to solve this another way, by providing a sourcetype stanza for the "log_file_name-too_small" sourcetype and rewriting the sourcetype with the help of transforms.conf. This works for "log_file_name-too_small", as below.

In props.conf:

    [log_file_name-too_small]
    TRANSFORMS-remove_too_small_sourcetype = remove_too_small

In transforms.conf:

    [remove_too_small]
    DEST_KEY = MetaData:Sourcetype
    REGEX = .*
    FORMAT = sourcetype::

But as mentioned above, the sourcetype value might be "log_file_name-{digit}", so I would need to solve that the same way (by specifying, e.g., [log_file_name-2]). I don't think that is the right way, as the digit may be anything, so I tried a regex (log_file_name-*) in the sourcetype stanza of props.conf, but it does not work, maybe because sourcetype stanzas do not allow regex. It would be great if anyone were able to solve this problem. Regards.
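An editor's hedged note: the learned "-too_small" and "-N" sourcetypes come from Splunk's automatic sourcetype classification, which only runs when the input itself does not carry an explicit sourcetype. One way to sidestep props source:: matching entirely is to pin the sourcetype on the monitor stanza in inputs.conf; the path and sourcetype name here are placeholders:

    [monitor://$SPLUNK_HOME/etc/apps/my_app/local/logs/log_file.log]
    sourcetype = my_log_sourcetype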

I want to route my unwanted logs to nullQueue, but no luck

Sample events:

    ####
    ####
    ####
    ####
    2020-05-12 14:34:52,060
    2020-05-12 14:34:52,060
    2020-05-12 14:34:52,060

I want to remove the ####< lines from my events, so I used props.conf along with transforms.conf with the settings below, but ####< is still not removed from the events.

My props.conf:

    [hast_sourcetype]
    BREAK_ONLY_BEFORE_DATE =
    CHARSET = UTF-8
    DATETIME_CONFIG =
    LINE_BREAKER = ([\r\n]+)
    MAX_TIMESTAMP_LOOKAHEAD = 29
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    TRANSFORMS-remove-hash = include-date-item
    category = Custom
    description = hash_sourcetype
    pulldown_type = true

My transforms.conf:

    [eliminate-hash-item]
    DELIMS = ####<
    DEST_KEY = queue
    FORMAT = nullQueue

Please help me solve this issue.
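An editor's hedged observation, with a sketch of the documented nullQueue pattern: the transform name referenced in props.conf (include-date-item) does not match the transforms.conf stanza name (eliminate-hash-item), and queue-routing transforms select events with REGEX rather than DELIMS. Assuming the goal is to drop every event starting with ####, something like this should be closer:

    # props.conf
    [hast_sourcetype]
    TRANSFORMS-eliminate-hash = eliminate-hash-item

    # transforms.conf
    [eliminate-hash-item]
    REGEX = ^####
    DEST_KEY = queue
    FORMAT = nullQueue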

The REST API add-on works with version 1.5.3, but when I upgrade to 1.8.1 or 1.8.2 the data stops being ingested into Splunk. Any idea why?

I've got about 10 or 12 REST API inputs set up in the add-on that are all working fine with 1.5.3, but they stop working whenever I upgrade the add-on to 1.8.x. Is there anything I need to change to make it work? I'm currently on Splunk 7.3.1 with RHEL 7.4.