Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Can a search head cluster be implemented without a deployer?

I have a standalone search head connected to only one search peer. I am now introducing another search head to the environment and trying to implement a search head cluster with two search heads. Can I achieve that without integrating these search heads with a deployer instance, or is a deployer mandatory for implementing a search head cluster?
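For context, when a deployer is used, each member is initialized pointing at it via `conf_deploy_fetch_url`. A sketch of the standard member setup (hostnames, ports, and credentials below are placeholders):

```shell
# Run on each prospective cluster member; all values are placeholders.
splunk init shcluster-config -auth admin:changeme \
    -mgmt_uri https://sh1.example.com:8089 \
    -replication_port 9887 \
    -conf_deploy_fetch_url https://deployer.example.com:8089 \
    -shcluster_label shcluster1
splunk restart
```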

I want help with a regular expression.

I have the expression below, which contains keys, and I want to check whether the same keys match. Please help me build a regular expression for it: ":\"aerfsdn:awfsdsdf:kfgms:us-asa-1:13v6030114722:key/rwefnsdlk8-9bbnf8-fsdiufsdk8-9e04-faljfkld95\"
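A sketch using `rex`, assuming the value always has this colon-separated, ARN-like shape (all capture-group names here are my own invention):

```spl
... | rex "(?<prefix>\w+):(?<service>\w+):(?<product>\w+):(?<region>[\w-]+):(?<account>\w+):key/(?<key_id>[\w-]+)"
```

With `key_id` extracted, matching keys can then be checked with something like `| stats count by key_id | where count > 1`.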

Set a value if nothing is found

Hi, I'm running Splunk 6.6 and I'd like to set something like a "default" value for the case where the SPL query finds nothing. The result I get is:

SystemA_primary 4000
SystemA_secondary 100
SystemB_secondary 3000

But I'd like to get something like this:

SystemA_primary 4000
SystemA_secondary 100
SystemB_primary 0
SystemB_secondary 3000
SystemX_primary 0
SystemX_secondary 0

I tried it with the following query, without success:

index=log 'gateway' | rex field=source "\/\w+\/\w+\/log\/(?<Env>\w+)\/\w+\_(?<instance>\w+)\/.*" | eval Inst=Env+"_"+instance | stats count as connections by Inst | makecontinuous source | fillnull value=0 connections

Thanks for your tips and answers.
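One common workaround is to append the full list of expected instances with a zero count and then take the maximum per instance. A sketch, assuming a hypothetical lookup `expected_systems.csv` that lists every `Inst` value you expect:

```spl
index=log "gateway"
| rex field=source "/\w+/\w+/log/(?<Env>\w+)/\w+_(?<instance>\w+)/.*"
| eval Inst=Env."_".instance
| stats count as connections by Inst
| append [| inputlookup expected_systems.csv | fields Inst | eval connections=0]
| stats max(connections) as connections by Inst
```

Real rows keep their counts; instances that exist only in the lookup surface with 0.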

High disk usage in /opt/splunk/var/run/splunk/srtemp

Hello, I have a Splunk search head installed on a Linux server. I received an alert for high disk space usage. While troubleshooting, I found that **/opt/splunk/var/run/splunk/srtemp** contained two directories which were consuming almost 95% of the disk space. Searching for an answer, I found one cause: dashboards and reports can generate charts, and those may be stored in **/opt/splunk/var/run/splunk/srtemp**. An hour later, I found usage had dropped to 56%, as one of the two directories had been removed. I have two questions about this:

1. Is the directory removed automatically from **/opt/splunk/var/run/splunk/srtemp**, since it is a temp directory?
2. If I need to do it manually, what is the best practice for limiting the disk size of **/opt/splunk/var/run/splunk/srtemp**, so that I can avoid such high disk usage alerts?

I would appreciate your help.
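Before deciding whether to clean up manually, it can help to see which subdirectories are responsible. A quick sketch, using the path from the question:

```shell
du -sh /opt/splunk/var/run/splunk/srtemp/* | sort -rh | head
```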

How to pass a token from a pie chart in a dashboard to another dashboard, in addition to $click.value$

I have a panel with a pie chart which has a drilldown. Below is the code on that panel.

Title: Platform Error Distribution - $tokPanel1$

Search:

index=app host="prod*" error $tokPanel1_release_timerange$ | eval layer="Application" | append [search index=app host="prod*" MQ _raw="*ERROR*" $tokPanel1_release_timerange$ | eval layer="Queue"] | append [search index=app host="prod*" dataservice $tokPanel1_release_timerange$ | eval layer="DataService"] | stats count by layer

I want the drilldown to choose the link based on $click.value$, which could be 'Queue', 'DataService', or 'Application'. Along with $click.value$, I also want the $tokPanel1$ value from the title to be passed to the next dashboard. Is this achievable?
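A sketch of the kind of drilldown this usually takes in Simple XML; the target dashboard name and the `form.*` token names are placeholders:

```xml
<drilldown>
  <link target="_blank">
    /app/search/target_dashboard?form.layer=$click.value$&amp;form.tokPanel1=$tokPanel1$
  </link>
</drilldown>
```

The target dashboard would then declare matching `form.layer` and `form.tokPanel1` inputs to receive both values.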

How to use "where" and "not in" and "like" in one query

I have the following query:

sourcetype="docker" AppDomain=Eos Level=INFO Message="Eos request calculated" | eval Val_Request_Data_Fetch_RefData=round((Eos_Request_Data_Fetch_MarketData/1000),1)

which has three hosts: perf, castle, and local. I want to use the above query but exclude the hosts castle and local:

sourcetype="docker" AppDomain=Eos Level=INFO Message="Eos request calculated" | eval Val_Request_Data_Fetch_RefData=round((Eos_Request_Data_Fetch_MarketData/1000),1) | where host NOT like 'castle' AND 'local'

Will it work?
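As written, it will not parse: `like()` is an eval function that takes the field and a SQL-style pattern, so the condition has to be spelled out per pattern. A sketch:

```spl
sourcetype="docker" AppDomain=Eos Level=INFO Message="Eos request calculated"
| eval Val_Request_Data_Fetch_RefData=round(Eos_Request_Data_Fetch_MarketData/1000, 1)
| where NOT (like(host, "%castle%") OR like(host, "%local%"))
```

If the exact host names are enough, `host!=castle host!=local` in the base search is simpler and cheaper.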

How to ingest data into Splunk from different servers

While ingesting the data, all the logs from the server fall into a single sourcetype. Can anyone suggest how the data should be ingested so that the sourcetypes are classified?
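Sourcetypes are normally assigned at input time, one stanza per log path in `inputs.conf` on the forwarder. A sketch; the paths, sourcetype names, and indexes are placeholders:

```ini
# inputs.conf on the universal forwarder
[monitor:///var/log/app1/app.log]
sourcetype = app1:log
index = main

[monitor:///var/log/nginx/access.log]
sourcetype = nginx:access
index = web
```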

I want help building a query

I have the concern below to be solved, and a sed command.

Get metadata results as search events

I need to obtain the results generated by `| metadata` as search events, because I need to attach an alert to `hosts` whose `recentTime` is too old. What is the search corresponding to: | metadata type=hosts index=_internal
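Since `recentTime` from `metadata` is essentially the latest event time seen per host, one alert-friendly equivalent is `tstats` (a sketch; the one-day threshold is just an example):

```spl
| tstats max(_time) as recentTime where index=_internal by host
| where recentTime < now() - 86400
| convert ctime(recentTime)
```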

Regex parsing XML

Hi! I cannot extract three fields from XML using regex. Please tell me how it can be done. Thank you. P.S. There are also lines like this: Does it work for everything?
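Since the XML samples did not survive in the post, only a generic sketch is possible. For well-formed XML, `spath` is usually more robust than regex; the paths and output names below are placeholders:

```spl
... | spath input=_raw path=root.item.field1 output=field1
    | spath input=_raw path=root.item.field2 output=field2
    | spath input=_raw path=root.item.field3 output=field3
```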

Four single values in the same panel: is it possible to fix the alignment?

Hi all, I have a dashboard divided into three columns. In one of these columns I have a panel with four single values, two per row. I'd like to maintain this alignment with different monitors or window dimensions; in other words, I'd like the single values to resize while keeping their positions. Is this possible without using iframes, and how? I'm using Splunk 6.5.2. Thank you in advance. Bye, Giuseppe

Adding simple JavaScript, CSS, and HTML in a Splunk dashboard

Dear Splunkers, please check https://codepen.io/tieppt/pen/vKJNaE . My question is: can I have that sonar animation in a Splunk dashboard, using Splunk JS or any other method? Thanks in advance.
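Pens like that are usually pure CSS animation, so a plain `<html>` panel may already be enough, without Splunk JS. A sketch (class name, size, and color below are my own, not taken from the pen):

```xml
<row>
  <panel>
    <html>
      <style>
        .sonar { width: 80px; height: 80px; border-radius: 50%;
                 background: #65a637; animation: sonar-pulse 2s infinite; }
        @keyframes sonar-pulse {
          0%   { box-shadow: 0 0 0 0 rgba(101, 166, 55, 0.7); }
          100% { box-shadow: 0 0 0 40px rgba(101, 166, 55, 0); }
        }
      </style>
      <div class="sonar"></div>
    </html>
  </panel>
</row>
```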

Rearranging the columns

![alt text][1] I want to rearrange the columns of my query into a particular order, as shown below. Because of the date columns (01-jun-2017), the first part of the query works fine, but the other columns come after the dates (01-jun-2017, 02-jun-2017). So I am renaming them like 0.1_MTD_last_mon, 0.2_CSI_pre_year. Help me sort the table columns without using these 0.1, 0.2 prefixes.

Location MTD_Pre_mon MTD_last_mon CSI_pre_year CSI_last_year 01-jun-2017 02-jun-2017 03-jun-2017
abc 1 2 5.5 6.6 90 88 99

|chart sum(MTD) as MTD_Present_Month by Location
|chart sum(MTD) as 0.1_MTD_Last_Month by Location
|chart values(CSI) as 0.2_CSI_Present_Year by Location
|bin span=1d _time
|convert ctime(_time) timeformat="%d-%b-%y %A"
|chart sum(daily) over Location by _time limit=0

These are small parts of the query; the whole query is very long, with different indexes.

[1]: /storage/temp/212581-untitled2.png
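Column order can usually be forced at the end of the search with `table`, naming the fixed columns explicitly and letting a trailing wildcard pick up the date columns in their natural order (column names taken from the question):

```spl
... | table Location MTD_Pre_mon MTD_last_mon CSI_pre_year CSI_last_year *
```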

Unable to load Algorithm in Splunk ML Toolkit

I followed the link (http://docs.splunk.com/Documentation/MLApp/2.4.0/API/Registeranalgorithm) to load the MLPRegressor algorithm from scikit-learn into Splunk. I made the entry in algos.conf as "[MLPRegressor]", created a new file MLPRegressor.py in "SPLUNK_HOME\etc\apps\Splunk_ML_Toolkit\bin\algos", copied the algorithm code into it, and restarted Splunk. Now, when I apply the algorithm to Predict Numeric Fields, it gives the error below.

**SEARCH**

| inputlookup server_power.csv | fit MLPRegressor hidden_layer_sizes=1 activation=logistic

**ERROR**

09-13-2017 12:42:16.516 INFO ChunkedExternProcessor - Running process: "C:\Program Files\Splunk\bin\python.exe" "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\fit.py"
09-13-2017 12:42:17.028 INFO ChunkedExternProcessor - stderr: Running C:\Program Files\Splunk\etc\apps\Splunk_SA_Scientific_Python_windows_x86_64\bin\windows_x86_64\python.exe C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\fit.py
09-13-2017 12:42:18.650 ERROR ChunkedExternProcessor - Error in 'fit' command: Error while initializing algorithm "MLPRegressor": Failed to load algorithm "algos.MLPRegressor"
09-13-2017 12:42:18.650 INFO UserManager - Unwound user context: NULL -> NULL
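That error usually means fit.py could not find a usable class named `MLPRegressor` inside algos/MLPRegressor.py. Per the "Register an algorithm" pattern the linked page describes, the file is expected to define a wrapper class around the scikit-learn estimator, not a pasted copy of the estimator code. A sketch (exact base classes and helpers vary by MLTK version, and this only imports inside the ML Toolkit environment):

```python
# algos/MLPRegressor.py -- the class name must match the
# [MLPRegressor] stanza in algos.conf. Not runnable outside MLTK.
from sklearn.neural_network import MLPRegressor as _MLPRegressor

from base import BaseAlgo, RegressorMixin
from util.param_util import convert_params


class MLPRegressor(RegressorMixin, BaseAlgo):
    def __init__(self, options):
        self.handle_options(options)
        params = convert_params(
            options.get('params', {}),
            ints=['hidden_layer_sizes'],
            strs=['activation'],
        )
        self.estimator = _MLPRegressor(**params)
```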

Stats Values Into Timechart

Hi, I wonder whether someone could help me please. I've put together this query:

| multisearch [ search `frontenda_wmf(Payments)` detail.dueDate="2018-01-31"] [ search `frontendb_wmf(RequestReceived)` detail.queryString="*AUTHORISED*HM00*"]
| stats values(detail.dueDate) as due values(detail.queryString) as query values(auditSource) as auditSource values(auditType) as auditType by tags.IP
| where (auditSource="frontenda" AND auditSource="frontendb" AND auditType="Payments" AND auditType="RequestReceived")
| timechart span=1d count(due)

The problem I'm having is that I can't get the timechart to work. I've also tried just using chart, and that doesn't work either. Could someone possibly look at this and let me know where I've gone wrong? Many thanks and kind regards, Chris
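One likely culprit: `stats ... by tags.IP` discards `_time`, and `timechart` cannot work without it. Binning `_time` first and carrying it through the `stats` is the usual fix (a sketch):

```spl
| multisearch [ search `frontenda_wmf(Payments)` detail.dueDate="2018-01-31" ] [ search `frontendb_wmf(RequestReceived)` detail.queryString="*AUTHORISED*HM00*" ]
| bin _time span=1d
| stats values(detail.dueDate) as due values(detail.queryString) as query values(auditSource) as auditSource values(auditType) as auditType by _time tags.IP
| where auditSource="frontenda" AND auditSource="frontendb" AND auditType="Payments" AND auditType="RequestReceived"
| timechart span=1d count(due)
```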

Splunk Enterprise free download

Team, I've installed the Splunk Enterprise free version on my machine, since I am learning Splunk. The installation was successful, but I get an error whenever I launch Splunk:

Splunk> Another one.
Checking prerequisites...
Checking http port [8001]: open
Checking mgmt port [8080]: open
Appserver port 8001 conflicts with http port.
Would you like to change ports? [y/n]:

I also tried to change the port, but it did not help. Could you please help me here?
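If the prompt's automatic change does not stick, the ports can also be set explicitly from the CLI and the instance restarted. A sketch (the port numbers and credentials are examples; pick ports nothing else on the machine uses):

```shell
$SPLUNK_HOME/bin/splunk set web-port 8000 -auth admin:changeme
$SPLUNK_HOME/bin/splunk set splunkd-port 8089 -auth admin:changeme
$SPLUNK_HOME/bin/splunk restart
```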

Using if with AND

Hi, how can I use a combination of an IF statement along with AND? I'm looking to run a count where, IF the hour is greater than a certain time AND the server name matches a list, the server is not included in the results. I have something like this:

mysearch... | eval hour=tonumber(strftime(_time,"%H")) | if(hour>2 AND NOT (dest="server1" OR dest="server2" OR dest="server3")) | stats count by _time, hour, dest, status

Essentially, I don't want to include results for a server between certain hours. Any ideas? Thanks.
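`if()` is an eval function, not a standalone command, so the pipeline above will not parse; filtering is what `where` is for. A sketch of the exclusion described:

```spl
mysearch...
| eval hour=tonumber(strftime(_time, "%H"))
| where NOT (hour > 2 AND (dest="server1" OR dest="server2" OR dest="server3"))
| stats count by _time, hour, dest, status
```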

{"customized_settings"{}}

Hi, I have a fresh Splunk installation: one SH, which is also the master for an indexer cluster with two indexers. I just installed the Palo Alto Add-on and App on the SH, then deployed to my indexers as a configuration bundle. So far so good. Following the configuration guide, on my master I opened Manage Apps (https://localhost:8000/en-US/manager/search/apps/local), located 'Palo Alto Networks Add-on for Splunk', and clicked 'Set up'. When the set-up page loads, I see a single field with {"customized_settings"{}} ![alt text][1] I tried uninstalling the app/add-on and starting again, but there was no change. Any ideas what this could be, or how to start troubleshooting it? Thanks, Michael [1]: /storage/temp/212584-configscreen.png

How to rearrange table by values in a column

So I have the following data as output statistics from a search:

User Group Number
Andy A 123
Andy B 123
Andy C 123
Bob A 123
Bob B 123
Cam A 123
Cam B 123
Cam C 123

How can I rearrange it so that it becomes:

User A B C
Andy 123 123 123
Bob 123 123 0
Cam 123 123 123

Also, what is this rearranging called?
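This long-to-wide reshaping is usually called pivoting (or cross-tabulation); in SPL it maps onto `chart` or `xyseries`. A sketch using the field names from the question:

```spl
... | chart sum(Number) over User by Group
    | fillnull value=0
```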

Question about the Web data model

When I restart Splunk, the accelerated data in the Web data model is deleted. I update Web, and the model slowly rebuilds its data; then, if Splunk is restarted, the data is deleted again. Can anyone help me solve this problem? Thanks a lot! PS: I chose to keep the accelerated Web data model for 3 months. Other data models, such as Network Intrusion, work well: when Splunk is restarted, their accelerated data is kept.
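For comparison, persistent acceleration normally amounts to just this in `datamodels.conf` on the search head (a sketch; the stanza name is assumed to match the Web data model's id):

```ini
[Web]
acceleration = 1
acceleration.earliest_time = -3mon
```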