Channel: Questions in topic: "splunk-enterprise"

How to remove the VMware add-on in a clustered environment

I am trying to remove the add-on from the search head cluster but am unable to do so. The folder is recreated on its own even after deletion and a restart. Please help.

Single value with trend - why does it work like this?

Hi, in my dashboard I have a time picker and many single value components with trend. The query behind each single value uses a span of 24 hours, and the single value is compared to "Custom" with the "Auto" option. For example:

    my_query... | timechart span=24h count

I don't actually understand the single value behavior.

1. If I choose "Last Year" in the time picker, I see the value **0** (which is correct), but when I press on it and drill down, it opens the results in a new search with a **"Date time range" (9/3/18 3:00:00 AM to 9/4/18 3:00:00 AM) and not "Last Year"**.
2. If I choose "All time" in the time picker, I see the value **89** (which I don't know is correct, because in the last 24 hours I didn't ingest any events). When I press on it, it opens the results in a new search with a **"Date time range" (10/9/17 3:00:00 AM to 10/10/17 3:00:00 AM) and not "All Time"**.

Can someone please explain to me:

1. Why is the time range changed when I drill down?
2. Why do I get 89 and not 0?

Thank you
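A minimal sketch for inspecting this, assuming the query above: the single value visualization generally displays the last time bucket of the timechart and derives the trend by comparing it to the bucket before it, so looking at the final two rows directly shows where the displayed value and trend come from.

    my_query ...
    | timechart span=24h count
    | tail 2

If the last bucket shown here differs from what the single value reports, the discrepancy is likely in how the visualization picks its bucket rather than in the data itself.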

Is there any difference between top and stats in tstats?

I can see the same result from

    index=* ~~~ | top abc

and

    index=* ~~~ | stats count by abc | sort -count

(ignoring the percent column and so on), but I got totally different results between

    | tstats prestats=true ~~~ | top abc

and

    | tstats prestats=true ~~~ | stats count by abc | sort -count

Is there any critical difference between them in this case?
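For comparison, a hedged sketch of the two patterns (the `count` aggregation and the `where index=*` clause are assumptions about the parts elided with `~~~`): `prestats=true` emits partial results intended to be consumed by a follow-on reporting command such as stats, chart, or timechart, so feeding them to `top` is not equivalent to running stats over them.

    | tstats prestats=true count where index=* by abc
    | stats count by abc
    | sort -count

    | tstats count where index=* by abc
    | sort -count

The second form, without prestats, already returns final counts by abc and is usually the simpler way to get a top-like result.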

Figure out Pass/Fail status of test cases based on the info provided and display it in a bar chart

The input to Splunk is a CSV file with the columns TestCaseID, ExpectedTime (the time a test case is expected to take to execute), and ActualTime (the time the test case actually took to execute). If ActualTime <= ExpectedTime, the test case has passed. I want to figure out the pass/fail test case counts by comparing ActualTime with ExpectedTime and display them in a bar chart.
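A minimal sketch, assuming the CSV is ingested with a hypothetical sourcetype testcase_results and that ExpectedTime and ActualTime are numeric (for example, seconds):

    sourcetype=testcase_results
    | eval Status=if(ActualTime<=ExpectedTime, "Pass", "Fail")
    | stats count by Status

Rendering this with a bar or column chart gives one bar each for Pass and Fail.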

Synchronize many searches in a dashboard

Hello, for front-end purposes (I'm building a dashboard using the JS framework), I want to synchronize all my searches so that they start at the same time. But the refresh countdown seems to start when a search **gets** its results, so if one takes longer, it becomes delayed relative to the other searches on the dashboard (even though they have the same refresh interval). Do you know if there is a simple way to make the searches start at the very same time (I don't care if they get results at different times), or do I have to use JS timers? Thanks for your help :) Guillaume
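One possible approach, sketched under the assumption that the searches are SplunkJS SearchManager instances with known ids (`search1`, `search2`, `search3` are hypothetical): create them with `autostart: false` and dispatch them all from a single shared timer, so each cycle starts them together regardless of how long the previous run took.

    require([
        "splunkjs/mvc",
        "splunkjs/mvc/searchmanager",
        "splunkjs/mvc/simplexml/ready!"
    ], function(mvc, SearchManager) {
        // Hypothetical ids of the dashboard's search managers.
        // They are assumed to be created elsewhere with autostart: false, e.g.
        // new SearchManager({ id: "search1", search: "...", autostart: false });
        var ids = ["search1", "search2", "search3"];

        function startAll() {
            ids.forEach(function(id) {
                var mgr = mvc.Components.getInstance(id);
                if (mgr) {
                    mgr.startSearch();  // dispatch (or re-dispatch) the search now
                }
            });
        }

        startAll();                            // first cycle: everything starts together
        setInterval(startAll, 5 * 60 * 1000);  // shared refresh timer, 5 minutes as an example
    });

The key design choice is that the refresh interval belongs to one timer instead of to each search, so slow searches cannot drift out of step with the rest.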

Process delayed - perhaps system was suspended?

We have a search with some subsearches that runs for about 40 seconds: "This search has completed and has returned 11 results by scanning 6.296 events in 42,58 seconds". The total runtime of the search is 84 seconds, from 09:25:39.645 until 09:27:05.635. The last line in search.log is this:

    09-03-2018 09:27:05.635 INFO PipelineComponent - **Process delayed by 84.962 seconds, perhaps system was suspended?**

Further examination of search.log shows these lines:

    09-03-2018 09:25:39.861 INFO DispatchThread - Error reading runtime settings: File :/opt/splunk/var/run/splunk/dispatch/subsearch_tmp_1535959539.1/runtime.csv does not exist

and a number of these:

    09:25:47.590 ERROR DispatchThread - Failed to read runtime settings: File :/opt/splunk/var/run/splunk/dispatch/subsearch_subsearch_subsearch_subsearch_subsearch_tmp_1535959542.9_1535959545.20_1535959545.21_1535959546.23_1535959546.25/runtime.csv does not exist

In search.log there are a total of 276 of both the INFO and the ERROR messages mentioning runtime.csv in some directory. We are running Splunk 7.1.2 on an SH cluster with 2 indexer clusters. All machines run Linux and have SSDs, with plenty of free memory, no swapping, and plenty of free disk space, and the dispatch directory has about 2500 entries. The directory names are not too long for Linux.

Any ideas what we as Splunk admins can do to speed up the search? Eliminating the subsearches might solve the problem, but I would like to make sure this is not an "undocumented feature" or a misconfiguration on the server side. Until last month we were running on 6.6.2 and this did not occur, as far as we know.

Connection timeout error when installing a Splunk app

While installing DB Connect I am getting a connection timeout error from the Splunk UI (Find More Apps). Please help me understand why I am getting this error. Thanks

After downloading free Splunk, the license page keeps asking me to pick a license

After picking the free license at `/en-US/manager/system/licensing` I get redirected to the same page without any success or error message. This is preventing me from continuing development.

Loadjob loads oldest version of job instead of latest

Hi, I have noticed an issue where loadjob sometimes goes stale and loads the oldest version of a job instead of the latest. For small-interval searches this may not be a major issue, but it is certainly very dubious when you have a search spanning the last 24 hours, etc. Splunk Enterprise version 7.1.1, build 8f0ead9ec3db, with SHC. Is this fixed in a newer version, or has it already been raised as a bug? This was an issue in the past with version 6.4.4 and seems to have cropped up again!
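For reference, a minimal sketch of the pattern being described (the saved search name is hypothetical); loadjob with `savedsearch=` is expected to load the most recently completed artifact of that scheduled search, which is what appears not to be happening here:

    | loadjob savedsearch="admin:my_app:my_scheduled_search"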

How to work with the Splunk App for Unix and Linux

I have installed the 'Splunk App for Unix and Linux' in a non-distributed Splunk environment. How and where can I set the details of the host machine that I need to monitor and fetch data from?
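For context, a minimal sketch of what enabling data collection typically looks like, assuming the companion add-on (Splunk_TA_nix) is installed on the machine to be monitored (in a single-instance setup, that can be the Splunk server itself); the scripted inputs are enabled in a local inputs.conf, and the stanza names below are taken from the add-on's defaults while the intervals are illustrative:

    # $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
    [script://./bin/cpu.sh]
    interval = 30
    sourcetype = cpu
    source = cpu
    disabled = 0

    [script://./bin/df.sh]
    interval = 300
    sourcetype = df
    source = df
    disabled = 0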

Charting three things and something else...

G'Day. I'm trying to get a search and chart working, but it doesn't want to play. The events I'm using are generated hourly and look like this:

    TROLLY=1 TROLLY_SIZE=150 BAG=1 CONTENTS=15
    TROLLY=1 TROLLY_SIZE=150 BAG=2 CONTENTS=25
    TROLLY=1 TROLLY_SIZE=150 BAG=3 CONTENTS=10
    TROLLY=1 TROLLY_SIZE=150 BAG=4 CONTENTS=10
    TROLLY=1 TROLLY_SIZE=150 BAG=5 CONTENTS=15
    TROLLY=1 TROLLY_SIZE=150 BAG=6 CONTENTS=20
    TROLLY=1 TROLLY_SIZE=150 BAG=7 CONTENTS=25
    TROLLY=2 TROLLY_SIZE=100 BAG=1 CONTENTS=15
    TROLLY=2 TROLLY_SIZE=100 BAG=2 CONTENTS=15
    TROLLY=2 TROLLY_SIZE=100 BAG=3 CONTENTS=10
    TROLLY=2 TROLLY_SIZE=100 BAG=4 CONTENTS=10
    TROLLY=2 TROLLY_SIZE=100 BAG=5 CONTENTS=15
    TROLLY=2 TROLLY_SIZE=100 BAG=6 CONTENTS=20
    TROLLY=2 TROLLY_SIZE=100 BAG=7 CONTENTS=10

What I've got at the moment is something that draws an area-fill graph of the total contents of all the bags for the selected trolly. (At the point in time above, Trolly 1 holds 120 items and Trolly 2 holds 95 items.)

    | search TROLLY=$tk_trolly$ | chart sum(CONTENTS) over day_hour by BAG

What I want to add is a line that shows TROLLY_SIZE (basically a straight line at items=150 if Trolly 1 is selected and at 100 if Trolly 2 is selected). There may be more or fewer than 7 bags in a trolly. Any hints on how to do it? Charting avg(TROLLY_SIZE) gets the line repeated for each BAG, and sum(TROLLY_SIZE) gets me a line that's too big... Mik
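One possible approach, a sketch under two assumptions: the subsearch repeats whatever base search produces these events (shown as a placeholder below), and both result sets cover the same day_hour values in the same order so the rows line up. The idea is to chart the bag contents as before and bolt the trolly capacity on as an extra column with appendcols so it renders as its own series.

    | search TROLLY=$tk_trolly$
    | chart sum(CONTENTS) over day_hour by BAG
    | appendcols
        [ search <your base search here> TROLLY=$tk_trolly$
          | stats max(TROLLY_SIZE) as TROLLY_SIZE by day_hour
          | fields TROLLY_SIZE ]

With TROLLY_SIZE as its own column, the visualization can render that series as a line (for example via a chart overlay) while the BAG series remain the area fill.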

Splunk DB Connect: save result as json

I have several DB connections with inputs configured. Everything works very well, but there is one performance issue which I couldn't solve yet: Splunk search queries over this data take a long time to execute. Usually I make sure that all fields are indexed, and then it's possible to use | tstats and build super-fast dashboards. But I couldn't do the same for DB Connect data, as index-time field extractions don't work there. Is there a way to get the input data in JSON or CSV format so that Splunk indexes the fields automatically? After all, data coming from databases has predefined fields, which could be indexed immediately to improve query performance greatly. Thanks for your answer!

KVStore record deletion

Hello, a short background: one of the applications populates some IDs for deletion, of multiple types such as A, B, C, and D. There are four different applications to delete each type of ID. I am only interested in finding the type D IDs, so I created a KV store to hold all the IDs marked for deletion and then use a lookup to search for the type D IDs. Up to this point everything works fine. But as the data volume is huge, I want to delete the old records from the KV store that have already been searched and displayed on the dashboard. Please help me build the final KV store deletion query.

    index=xyz STATUS="DELETED"
    | lookup scheduled_lookup SCHE_ID AS DEL_ID OUTPUT SCHE_ID AS DELETED_ACCT
    | stats count(DELETED_ACCT) as DELETED_ACCT

I want to build a query like the one below (please ignore the syntax):

    | inputlookup scheduled_lookup
    | eval key=_key
    | where NOT key=[index=xyz | fields DEL_ID]
    | outputlookup scheduled_lookup
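A sketch of one way to do this, under the assumption that outputlookup on a KV store collection replaces its contents with the search results, so keeping only the rows that are *not* matched by the deletion search effectively removes the old records (field names are taken from the question):

    | inputlookup scheduled_lookup
    | search NOT
        [ search index=xyz STATUS="DELETED"
          | rename DEL_ID as SCHE_ID
          | dedup SCHE_ID
          | fields SCHE_ID ]
    | outputlookup scheduled_lookup

Note that the subsearch is subject to the usual subsearch result limits, so if the number of deleted IDs per run is very large this would need to be batched.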

Pass a token to a dashboard as the default of an input checkbox

Hi all, I have a main dashboard that calls a secondary dashboard where there is a checkbox input. I'm trying to pass the default value to the checkbox, and it works correctly if I pass only one default value, but it doesn't work if I pass more values; the problem seems to be the comma. This is my code.

Main dashboard drilldown:

    /app/my_app/home_page_overview_servers?TimeFrom=$Time.earliest$&TimeTo=$Time.latest$&Status=low,severe&System_Type=Server Windows

Secondary dashboard input checkbox (the XML markup was stripped when posting): choices Present, Missing, and Out of Perimeter; token $Status$; a value prefix of "Status=" and a delimiter of " OR ".

Has anyone encountered a similar problem? Bye. Giuseppe

AWS addon s3 generic error - TypeError: 'int' object is not iterable

Following a server crash (unknown reason), the collection of S3 logs has partially broken down. I am seeing the following in _internal (after enabling DEBUG logging for the S3 generic part). Any ideas on how to get it working again?

    2018-09-03 14:38:22,122 level=INFO pid=15104 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_do_index_data:104 | datainput="securitylogs_blahblah_staging" bucket_name="blahblah-staging-ext-securitylogs-20180215145qqqqqqq00000001", job_uid="757azzzz-xxxx-4a5b-b735-yyyyy31754e" start_time=1535978302 | message="Start processing."
    2018-09-03 14:38:22,123 level=DEBUG pid=15104 tid=Thread-4 logger=splunksdc.checkpoint pos=checkpoint.py:build_indexes:169 | datainput="securitylogs_blahblah_staging" bucket_name="blahblah-staging-ext-securitylogs-20180215145qqqqqqq00000001", job_uid="757azzzz-xxxx-4a5b-b735-yyyyye31754e" start_time=1535978302 | message="Key was set." pos=221 key="securitylogs/ls.s3.2cf5f2c6-xxxx-4947-bb01-yyyyy96c7e22.2018-08-01T22.30.part11921.txt.gz"

There are some more of the above DEBUG messages (different key, but same job_uid), and then it ends with:

    2018-09-03 14:38:22,123 level=ERROR pid=15104 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:index_data:90 | datainput="securitylogs_blahblah_staging" bucket_name="blahblah-staging-ext-securitylogs-20180215145qqqqqqq00000001" | message="Failed to collect data through generic S3." job_uid="757azzz-xxxx-4a5b-b735-yyyyy31754e" start_time=1535978302
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 85, in index_data
        self._do_index_data()
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 106, in _do_index_data
        self.collect_data()
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_data_loader.py", line 139, in collect_data
        index_store = s3ckpt.S3IndexCheckpointer(self._config)
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_checkpointer.py", line 141, in __init__
        config[asc.data_input], config[tac.checkpoint_dir]
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_checkpointer.py", line 80, in get
        S3CkptPool.S3CkptItem(ckpt_name, ckpt_dir)
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/generic_s3/aws_s3_checkpointer.py", line 65, in __init__
        self.idx_ckpt = LocalKVStore.open_always(ckpt_file_idx)
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/checkpoint.py", line 156, in open_always
        indexes = cls.build_indexes(fp)
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/checkpoint.py", line 163, in build_indexes
        for flag, key, pos in cls._replay(fp):
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/checkpoint.py", line 92, in _replay
        flag, key, _ = umsgpack.unpack(fp)
    TypeError: 'int' object is not iterable

Something seems to have gotten corrupted. I have tried to disable/enable the inputs from within the AWS add-on GUI, but with little success.

Splunk DB Connect

Hello, I am using Splunk Enterprise 7.1.2 with the Splunk DB Connect app installed on the same instance, but I am not able to establish a connection. Please share the documentation to install and configure it. Thanks

Course registration problem

I would like to register for the Splunk Enterprise Certified Admin courses (Fundamentals II, System Administrator, Database Administrator), but I get an error message at the last step of the payment method ("The request is missing a required field"), although I have entered all the required fields. Any help?

I have installed the Splunk Add-on for Microsoft Windows

I have installed the Splunk Add-on for Microsoft Windows. Now I want to monitor this server (name PROD-SQL10-011-A, port 6410) to see a utilization report (CPU, disk, I/O). Could you please help me and let me know how to configure this? Thanks in advance.
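For reference, a minimal sketch of what the performance-counter inputs typically look like, under the assumption that a universal forwarder with the Splunk Add-on for Microsoft Windows is installed on PROD-SQL10-011-A; the intervals and counter selection are illustrative:

    # Splunk_TA_windows/local/inputs.conf on the monitored host
    [perfmon://CPU]
    object = Processor
    counters = % Processor Time
    instances = _Total
    interval = 60
    disabled = 0

    [perfmon://LogicalDisk]
    object = LogicalDisk
    counters = % Disk Read Time; % Disk Write Time; Disk Reads/sec; Disk Writes/sec
    instances = *
    interval = 60
    disabled = 0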

Lookup between a value in a KV store

I have a number in my first lookup and I want to compare this number with a start and end number in another lookup. How do I do it?

    | inputlookup IPaddress
    | table IPtoNumber IPAddress UserID

Expected output in table format:

    IPtoNumber  IPAddress  UserID
    100         1.1.1.1    john

Based on the above result, I look up another table that has the country name:

    | inputlookup geoIP
    | table startNumber EndNumber Country

Expected output in table format:

    startNumber  EndNumber  Country
    1            10         Somewhere1
    11           90         Somewhere2
    91           100        Somewhere3

How do I pass the IPtoNumber from the IPaddress lookup into the geoIP lookup and get back the Country from geoIP?
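One possible approach, sketched under the assumption that both lookups are small enough to cross-join: the dummy `joiner` field only exists to give the join a key, `max=0` keeps every matching subsearch row, and the `where` clause then keeps the range that contains the number.

    | inputlookup IPaddress
    | eval joiner=1
    | join joiner max=0
        [ | inputlookup geoIP
          | eval joiner=1 ]
    | where IPtoNumber>=startNumber AND IPtoNumber<=EndNumber
    | table IPtoNumber IPAddress UserID Country

For the sample data above, the row with IPtoNumber=100 would land in the 91-100 range and come back with Country=Somewhere3.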

REST API Modular Input - 401 Client Unauthorized

I am trying to access Carbon Black via the REST API. As expected, this works in Postman. Console output (keys and tokens changed):

    GET https://api-prod06.conferdeploy.net/integrationServices/v3/event/
    Proxy: host:"127.0.0.1" port:9000 match:"http+https://*/*"
    Request Headers:
        x-auth-token:"CYLVZZZZZZZZZZZZZ/3IZNXXXX"
        cache-control:"no-cache"
        postman-token:"c092b323-fc8e-4c1d-9a6d-c6f042000000"
        user-agent:"PostmanRuntime/7.2.0"
        accept:"*/*"
        host:"api-prod06.conferdeploy.net"
        cookie:"__cfduid=ddfebf41ad8fce5ba32a3bd7b71e891e61535000000"
        accept-encoding:"gzip, deflate"
    Response Headers:
        content-type:"application/json;charset=ISO-8859-1"
        date:"Mon, 03 Sep 2018 09:21:47 GMT"
        server:"Apache-Coyote/1.1"
        content-length:"491061"
        connection:"Close"
    Response Body: 200 590 ms

In the REST API Modular Input, it starts off OK (screenshot: /storage/temp/255904-start.jpg) and then crashes out with a 401 error. Any help would be appreciated. Thanks in advance.

