Channel: Questions in topic: "splunk-enterprise"

How to display more than 100 rows in a statistics table

Hi all, how can I display more than 100 rows in a statistics table? Thanks in advance :)

How to display more than 100 rows per page in a statistics table

Hi all, how can I display more than 100 rows per page in a statistics table? Thanks in advance :)
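
For anyone who lands here: in a Simple XML dashboard, the rows-per-page of a table panel is controlled by the `count` option. A minimal sketch (the search here is a placeholder):

    <table>
      <search>
        <query>index=_internal | stats count by sourcetype</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <!-- rows displayed per page -->
      <option name="count">100</option>
    </table>

Whether values above 100 are honored can vary by Splunk version, so treat this as a starting point.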

Single value chart default value

How do I pass a default value to a single value chart? I am not looking to search anything in the search query for now; for example, I just want the chart to display the number 1.10. Is this possible? I tried the following in the search query, but it doesn't work and returns no results: | eval myVal=1.10 | table myVal
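
A hedged note: the bare `| eval` returns nothing because there is no event for it to operate on, so a generating command is needed first. A minimal sketch:

    | makeresults
    | eval myVal=1.10
    | table myVal

`makeresults` emits a single placeholder event, giving the `eval` a row to attach the value to; the single value visualization then displays 1.10.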

Search with multiple trivial 'append' commands taking much longer in Splunk 7.1

We have a dashboard that lists a series of events representing alarms that need to be 'cleared' by the user as non-issues. We have a 'Clear All'-style button interface that clears multiple events at once matching a given field value; it's implemented in JavaScript and triggers a search similar to the one below:

    | inputlookup cleared.csv
    | append [| makeresults | eval Id="1000" | eval Reason="Low Level" | eval Timestamp=now()]
    | append [| makeresults | eval Id="1234" | eval Reason="Low Level" | eval Timestamp=now()]
    | append [| makeresults | eval Id="1301" | eval Reason="Low Level" | eval Timestamp=now()]
    ...
    | append [| makeresults | eval Id="1567" | eval Reason="Low Level" | eval Timestamp=now()]
    | table Reason,Id,Timestamp
    | sort Timestamp desc
    | outputlookup cleared.csv

i.e. the cleared events have their unique "Id" field appended to a lookup file, which is then used to hide them in the original search. We've been using this tool successfully for a couple of years now; usually the list of alarms is checked daily and around 20-30 events are cleared simultaneously with a few clicks. However, since upgrading to 7.1 we are finding that attempting to clear a large number of alarms causes hanging behaviour, and it can take tens of minutes for the clearing to complete. Further testing of the search above, with around 30 entries appended to the lookup table, shows that the search can take an extremely long time (over 30 minutes) in Splunk 7.1, while it runs in 2 seconds in 7.0. Also, when it eventually completes, the job inspector in 7.1 erroneously reports that the search took only seconds. It would be simple enough to manually run the search 30 times with a single 'append' each time, but that would be a massive change to the JavaScript we put together to run the search as it is now. I've not seen anything in the release notes to suggest what might be causing this. Is anyone else having similar problems? Should this be reported as a bug?
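
One workaround worth trying (a sketch, not a confirmed fix for the 7.1 slowdown): build all the new rows in a single subsearch instead of chaining one `append` per Id, e.g. by splitting a comma-separated Id list into a multivalue field and expanding it:

    | inputlookup cleared.csv
    | append
        [| makeresults
         | eval Id=split("1000,1234,1301,1567", ",")
         | mvexpand Id
         | eval Reason="Low Level", Timestamp=now()]
    | table Reason, Id, Timestamp
    | sort Timestamp desc
    | outputlookup cleared.csv

The JavaScript would then only need to join the selected Ids into one string rather than emit an `append` clause per Id, which keeps the code change small while reducing the search to a single subsearch.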

Can't select an action in the "Execute external workflow action" window

I have different alert actions in my EWA (External Workflow Actions) setup for testing, and all of them are enabled. I checked the alert_action column against the names used in alert_actions.conf. When I use the Execute EWA action, the dropdown appears to be disabled. I don't know what else to check; thanks for reading. ![EWA window][1] [1]: /storage/temp/251053-captura.png

Azure AD log missing

Hi there, I followed the installation instructions to install and configure the Microsoft Azure Active Directory Reporting Add-on for Splunk on a heavy forwarder. Sign-in activity logs are being collected from Azure AD; however, about 90% of the logs are missing when compared with the Azure portal. Does anyone have an idea what might cause this? Thanks in advance. Cheers, Ray

Authentication error in the Splunk SDK for C#

    System.Net.Http.HttpRequestException: An error occurred while sending the request
     ---> System.Net.WebException: Error: SecureChannelFailure (One or more errors occurred.)
     ---> System.AggregateException: One or more errors occurred.
     ---> System.Security.Authentication.AuthenticationException: A call to SSPI failed, see inner exception.
     ---> Mono.Security.Interface.TlsException: Unknown Secure Transport error `PeerProtocolVersion'.
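
The `PeerProtocolVersion` error usually points to a TLS version mismatch between Mono's TLS stack and the Splunk management port. One hedged thing to try (assuming a Mono build with TLS 1.2 support) is forcing TLS 1.2 before the SDK opens its connection:

    // Force TLS 1.2 for all subsequent HTTPS connections;
    // run this before constructing the Splunk SDK service object.
    System.Net.ServicePointManager.SecurityProtocol =
        System.Net.SecurityProtocolType.Tls12;

Whether this resolves it depends on the Mono version and which TLS provider it was built with, so treat it as a diagnostic step rather than a fix.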

Skipping indexing of internal audit events

I have 3 indexers in a cluster, and I recently changed the indexing path from the default to a different mount point. Initially everything was working fine, but after 2 days I started getting a pop-up on the search head with the errors below:

1. *Search peer Sample_Indexer03 has the following message: Audit event generator: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.*
2. *Search peer Sample_Indexer03 has the following message: Index Processor: The index processor has paused data flow. Too many tsidx files in idx=_introspection bucket="/media/data/hot/_introspection/db/hot_v1_11", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.*

From the internal logs I can see the following errors:

3. *ERROR SplunkOptimize - (child_18426__SplunkOptimize) optimize finished: failed, see rc for more details, dir=/media/data/hot/_introspection/db/hot_v1_11, rc=-13 (unsigned 243), errno=2 host = Sample_Indexer03 source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd*
4. *ERROR SplunkOptimize - (child_18426__SplunkOptimize) merge failed for path=/media/data/hot/_introspection/db/hot_v1_11 rc=-13 wrc=-13 errno=2 file=/media/data/hot/_introspection/db/hot_v1_11/1530062373-1530062373-7514170618120332262.tsidx hint=invalid magic*

I have checked the other 2 indexers and they are fine; only this one is affected. The settings are the same on all 3, and there isn't any disk space issue. Any help would be appreciated.
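
For anyone triaging something similar: errno=2 is ENOENT ("No such file or directory"), which after a path change typically points at a missing directory or a permissions problem on the new mount point. A quick sketch to confirm the failures are confined to the one indexer (field names assume the standard splunkd log extractions):

    index=_internal sourcetype=splunkd log_level=ERROR component=SplunkOptimize
    | stats count by host

If only the one host reports errors, comparing ownership and permissions of the new indexing path across the indexers is a sensible next step.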

ERROR StreamGroup - failed to drain remainder total_sz and other errors

Hi, we are seeing strange events in our internal index and suspect we are losing data. Could someone please help us find the cause? We can provide the following logs:

    06-26-2018 10:58:07.829 +0200 ERROR StreamGroup - failed to drain remainder total_sz=4 bytes_freed=1123 avg_bytes_per_iv=280 sth=0x7f1840bffd70: [1530003401, /hot-index/local/fw_vpn/db/hot_v1_369, 0x7f1806d64ca0] reason=st_sync failed rc=-6 warm_rc=[-4,17]
    06-26-2018 10:13:07.674 +0200 ERROR StreamGroup - failed to add corrupt marker to dir=/hot-index/local/_internal/db/hot_v1_2211 errno=File exists
    06-26-2018 09:38:29.530 +0200 ERROR StreamGroup - unexpected rc=-8 from IndexableValue->index
    06-26-2018 04:39:54.208 +0200 ERROR StreamGroup - failed to add corrupt marker to dir=/hot-index/local/windows_server_security/db/hot_v1_764 errno=File exists
    06-26-2018 11:25:10.106 +0200 WARN HttpListener - Socket error from 127.0.0.1 while idling: error:1409441B:SSL routines:ssl3_read_bytes:tlsv1 alert decrypt error
    06-26-2018 10:49:26.112 +0200 WARN HttpListener - Socket error from 160.xx.xxx.xx while accessing /services/streams/search: Broken pipe

We are using Splunk 6.6.4 (build 00895e76d346). If you need more info, please ask. Thank you in advance for any help on this. Best wishes, Ron
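
Since several of these errors name specific hot buckets, one hedged triage step is to check whether any buckets are actually flagged as unhealthy, per index:

    | dbinspect index=_internal
    | search state=hot
    | table bucketId, path, state, eventCount

Repeating this for fw_vpn and windows_server_security would show whether the corrupt-marker errors correspond to real bucket damage; damaged buckets can then be repaired offline with the `splunk fsck` CLI on the affected indexer.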

Splunk Apps

Hello all, I am currently looking into using Splunk to achieve the requirements below.

Smart proactive monitoring: One of our major requirements is smart proactive monitoring. We already have proactive monitoring that oversees our customers' connections and our infrastructure links, but given the ratio of monitored items to monitoring team members, we are always overwhelmed with the reported alarms, and consequently we respond to only around 10% of them. What we are looking for is an intelligent proactive monitoring system with comprehensive algorithms that lets us increase our response rate with the same human resources.

Alarms prioritization: Mapping critical services to custom KPIs and using AI and machine learning to adapt thresholds.

Event correlation: Utilizing AI and machine learning to detect patterns and correlate events to improve our troubleshooting and root-cause-analysis time. That includes environmental alarms, link states, etc., plus end-to-end transaction tracking across our entire infrastructure.

Can anyone pinpoint which application I need to achieve these requirements?

Post DB query output and logs to Splunk HEC

Hi All, I need some assistance: I have a requirement to send Oracle DB query output data/logs to Splunk HEC, and some log files also need to be sent to HEC. Could you please let me know how we can achieve this? Thanks! Pavan
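
Whatever produces the query output, the HEC side is a plain HTTPS POST. A minimal sketch (host, port, token, index, and sourcetype are placeholders for your own HEC input):

    curl -k https://hec.example.com:8088/services/collector/event \
      -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
      -d '{"event": {"table": "ORDERS", "row_count": 42}, "sourcetype": "oracle:query", "index": "main"}'

A scheduled script that runs the Oracle query and POSTs each row (or batch of rows) in this format is a common pattern; for the plain log files, a universal forwarder monitoring them is usually simpler than pushing to HEC yourself.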

Using the Splunk REST API to fetch the app name/ID

Hello fellow Splunkers, I am using the following query to fetch the Splunk app name on a standalone search head:

    | rest /services/search/jobs splunk_server=local
    | addinfo
    | where sid = info_sid
    | rename eai:acl.app as app_name
    | fields + app_name

However, this same query is not working in a search head cluster; it shows *No results found*. Any suggestions would be appreciated. Thanks!
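
One hedged guess at the cause: in a search head cluster the job can be dispatched on a different member than the one answering the REST call, so `splunk_server=local` filters it out. A sketch worth trying drops the server filter (and uses `==`, the eval-style equality operator, in the `where` clause):

    | rest /services/search/jobs
    | addinfo
    | where sid == info_sid
    | rename eai:acl.app as app_name
    | fields + app_name

This queries the jobs endpoint across all reachable servers, at the cost of a heavier REST call.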

Multiselect not working

I'm trying to create a dashboard with 3 different inputs: the first is a dropdown, the second is a multiselect, and the last is a text search. The dropdown and text inputs seem to be working fine, but I'm having issues with the multiselect. Here is the search I'm using to dynamically populate the multiselect field (this is working how it should):

    index=cms_vm
    | eval StorageArray=upper(StorageArray)
    | eval StorageArray=replace(StorageArray,"^[^_]*_[^_]*\K.*$","")
    | table StorageArray
    | dedup StorageArray
    | sort StorageArray

However, when I select an option from this multiselect, either no results are displayed in my statistics table or I get the error message "Search is waiting for input". Below is my search for the statistics table:

    (index=cms_vm) $datacenter$ $arrayfield$ DatastoreName=$lun$
    | dedup VM
    | eval VM=upper(VM)
    | eval DatastoreName=replace(DatastoreName,".+_(\d+)$","\1")
    | eval StorageArray=upper(StorageArray)
    | eval StorageArray=replace(StorageArray,"^[^_]*_[^_]*\K.*$","")
    | join type=outer VM [search index="cms_app_server" | fields VM Application]
    | table VM OperatingSystem_Code Datacenter StorageArray DatastoreName Application
    | rename OperatingSystem_Code AS "Operating System", StorageArray AS "Storage Array", DatastoreName AS "LUN"

When selecting an option from the multiselect, only the options whose names are affected by the line `| eval StorageArray=replace(StorageArray,"^[^_]*_[^_]*\K.*$","")` are not appearing in the statistics table below.
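
A common cause of this symptom is the multiselect token expanding into something that isn't valid search syntax: the bare `$arrayfield$` needs to become e.g. `(StorageArray="A" OR StorageArray="B")`. A Simple XML sketch of the token options that would do that (token and field names match the question; the rest is assumed):

    <input type="multiselect" token="arrayfield">
      <label>Storage Array</label>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>StorageArray="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>StorageArray</fieldForLabel>
      <fieldForValue>StorageArray</fieldForValue>
      <search>
        <query>index=cms_vm | eval StorageArray=upper(StorageArray) | eval StorageArray=replace(StorageArray,"^[^_]*_[^_]*\K.*$","") | dedup StorageArray | sort StorageArray | table StorageArray</query>
      </search>
    </input>

Note the populating search shortens StorageArray with `replace`, while the main search filters raw events before its own `replace` runs, so shortened values will never match, which fits the symptom described; a wildcard value suffix (`<valueSuffix>*"</valueSuffix>`) is one hedged workaround.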

Tyco (Software House) CCURE event collection

Is anyone collecting Audit and Activity events from the CCURE 9000 application? The logs are in a SQL DB, so I assume using the DB Connect 2 app is the way to go. I am interested in any advice on what data to collect and how to collect it, as well as any information on the impact to the application caused by collecting the data. Thanks, Ken

Splunk - Return events from a different time range dependent upon field value

Completely new to Splunk, and hoping to find help with a search I'm using for a dashboard. I am using the following search to return a table of events based on the "BKSTAT" field, which I set up as a field extraction (it is basically the value "Successful", "Failed", etc. for a backup job in the "BackupLogs" sourcetype), and output a table by host name:

    sourcetype=BackupLogs BKSTAT=Successful OR BKSTAT=Canceled OR BKSTAT=Failed
        [search * | eval earliest=if(lower(strftime(now(),"%A"))="monday", "@w5", "-1d") | return earliest]
    | stats latest(BKSTAT) by host

Table output:

    Host     BKSTAT
    ServerA  Successful
    ServerB  Successful
    ServerC  Failed
    Server1  Successful  (want to include this server with logs from a different lookup date)

The above search works fine: if it runs on a Monday it captures the logs from before the weekend, otherwise it captures the logs from the previous day, as needed for several hosts that back up daily. We have another server, let's call it "Server1", whose backup logs populate the same sourcetype, but this server only backs up on a Friday. I need to modify this search so events for "Server1" are always returned looking back to the previous Friday, i.e. earliest set to "@w5" solely for this server. Is there any way to incorporate this server's events in the table as per the example above alongside the existing search, while specifying the different time-range lookup for just this host?
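
One hedged way to do this (a sketch against the search above) is to keep the existing search for every host except Server1 and `append` a second search that covers only Server1 with its own inline `earliest`:

    sourcetype=BackupLogs host!=Server1 (BKSTAT=Successful OR BKSTAT=Canceled OR BKSTAT=Failed)
        [search * | eval earliest=if(lower(strftime(now(),"%A"))="monday", "@w5", "-1d") | return earliest]
    | append
        [search sourcetype=BackupLogs host=Server1 (BKSTAT=Successful OR BKSTAT=Canceled OR BKSTAT=Failed) earliest=@w5]
    | stats latest(BKSTAT) by host

`earliest=@w5` snaps back to the most recent Friday at midnight, so Server1's Friday backup stays in scope whichever day the dashboard is loaded.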

Please suggest alerts to set up other than the default DMC alerts

What alerts can we set up in the Distributed Management Console (DMC) in a large organization to monitor our whole Splunk deployment, other than the alerts the DMC ships with? We are using Splunk version 7.1.

Report delivery to a file

I need to deliver PDF reports externally. Is there a way to have a report generated on a schedule and the resulting PDF file written to a location on a filesystem?
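
There is no built-in "save PDF to disk" action, but one approach (a sketch; the endpoint parameters are assumptions to verify against your version's REST API documentation) is to script a call to the `pdfgen/render` endpoint on the management port and schedule it externally, e.g. with cron:

    curl -k -u admin:changeme \
      "https://splunk.example.com:8089/services/pdfgen/render" \
      --data-urlencode "input-dashboard=my_report_dashboard" \
      --data-urlencode "namespace=search" \
      -o /var/reports/my_report.pdf

A custom alert action wrapping the same call would let the report's own schedule drive the file drop instead of cron.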

Sort question

I am trying to get the highest process CPU percentage by user, but I am unable to sort by the field I want:

    index=os sourcetype=top host=hostname
    | chart sum(pctCPU) as CPU_USAGE by USER,COMMAND
    | sort sum(pctCPU) desc
    | head 5

This produces a table, but I'd like the chart to show only the top 5 users and the commands they are running, sorted by their CPU_USAGE.
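
A hedged note on why the `sort` does nothing here: after `chart ... as CPU_USAGE`, no field named `sum(pctCPU)` exists any more, and `chart ... by USER,COMMAND` pivots COMMAND into columns, leaving no single usage column to sort on. A sketch that keeps one row per user/command pair instead:

    index=os sourcetype=top host=hostname
    | stats sum(pctCPU) as CPU_USAGE by USER, COMMAND
    | sort - CPU_USAGE
    | head 5

`sort - CPU_USAGE` sorts descending on the renamed field, so `head 5` then returns the five heaviest user/command combinations.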

How to ignore days with no data in timechart?

Hello, I want to be able to ignore days when data was not collected. I am using the following search:

    index="x" | timechart span=1d count(Number)

What command can I use to ignore the days with no data?
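
`timechart` fills empty day buckets with a count of 0 rather than dropping them, so one minimal sketch is to name the aggregate and filter out the zero rows afterwards:

    index="x"
    | timechart span=1d count(Number) as day_count
    | where day_count > 0

The chart will then simply omit those days rather than draw them at zero.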

Index only new lines

Hi. I have a requirement from a client: he has a file that is indexed every day, but the file is modified at different times. For example, lines 8 and 10000 are modified at 20:00, then lines 2 and 10100 at 22:00; after 02:00 the file no longer changes. Is it possible to index only the lines that have been modified?