Channel: Questions in topic: "splunk-enterprise"

How to use a Deployer to install and distribute add-ons?

We have 8 servers: 3 indexers, 3 search heads, a cluster master, and a deployer. We are currently running a single instance and want to move to this new HA (High Availability) environment. We currently have a lot of add-ons on that single instance. What is the proper way to install the add-ons onto the HA cluster? Or do you just have to go to each search head and indexer instance and install them from the web interface?
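
A minimal sketch of the usual distribution path, assuming default install paths and that the deployer and cluster master are already configured; the app name, host names, and credentials below are placeholders:

    # Search head cluster: stage the app on the deployer, then push the bundle
    cp -R Splunk_TA_example $SPLUNK_HOME/etc/shcluster/apps/
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

    # Indexer cluster: stage the app on the cluster master, then push the bundle
    cp -R Splunk_TA_example $SPLUNK_HOME/etc/master-apps/
    $SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes -auth admin:changeme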

Splunk App for AWS: Why is the Account ID on all resources "000000000000"?

We are facing an issue with our Splunk App for AWS configuration: all the data seems to be flowing in correctly, but the Account ID on every resource is "000000000000". This is similar to this Splunk Answers question: Splunk App for AWS: Why is all_account_ids.csv not being created? - https://answers.splunk.com/answers/337877/splunk-app-for-aws-why-is-all-account-idscsv-not-b.html I have gone through the troubleshooting steps that were provided in the answers on that page. The account IDs exist in the all_accounts_ids.csv lookup file, and I am not seeing any errors in the saas_app_aws.log file that suggest something in aws_account_handler.py is failing. What I would like to know is when and how the account ID value is appended to each resource before it is inserted into the data, so that I can figure out which piece is failing. I'm not seeing a lookup or macro that would insert the account_id value into each resource (EC2, EBS, etc.). Any help is appreciated! Thanks!

Jenkins dashboard load delay

Hi, we installed and configured the Splunk App for Jenkins a couple of weeks ago. The app works very well: all the data comes in and fills the dashboards, and the DevOps guys mention that it gives them new and better visibility into what is going on in their Jenkins environment. Nevertheless, there is one big issue: the dashboards load with a delay of 15 seconds. Only after that do the panels show and the dashboard fill. We tried it on several environments, but unfortunately the behavior is the same: for every dashboard there is a delay of 15 seconds. I suspect it has something to do with the fact that these are HTML dashboards (not XML) and that they wait 15 seconds for something I can't identify. I hope someone can help with that. Thanks, Omer Rudik.

Why am I unable to install the Splunk universal forwarder on Windows Server 2012 R2?

Hi, I am unable to install the Splunk universal forwarder on Windows Server 2012 R2. Please help me solve this issue. Logs:

04-04-2017 21:49:01.089 +0530 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
04-04-2017 21:49:01.089 +0530 INFO ServerConfig - Host name option is "".
04-04-2017 21:49:03.538 +0530 INFO loader - Running utility: "check-transforms-keys"
04-04-2017 21:49:03.538 +0530 INFO loader - Getting configuration data from: C:\Program Files\SplunkUniversalForwarder\etc\myinstall\splunkd.xml
04-04-2017 21:49:03.538 +0530 INFO loader - SPLUNK_MODULE_PATH environment variable not found - defaulting to C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:03.538 +0530 INFO loader - loading modules from C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:03.553 +0530 INFO loader - Writing out composite configuration file: C:\Program Files\SplunkUniversalForwarder\var\run\splunk\composite.xml
04-04-2017 21:49:05.363 +0530 INFO loader - Splunkd starting (build 67571ef4b87d).
04-04-2017 21:49:05.363 +0530 INFO loader - System info: Windows, TSMSRV2, 2, 6, x64.
04-04-2017 21:49:05.363 +0530 INFO loader - Detected 1 (virtual) CPUs, 1 CPU cores, and 16383MB RAM
04-04-2017 21:49:05.363 +0530 INFO loader - Maximum number of threads (approximate): 8191
04-04-2017 21:49:05.363 +0530 INFO loader - Arguments are: "rest" "--noauth" "POST" "/services/apps/local/SplunkUniversalForwarder/enable"
04-04-2017 21:49:05.363 +0530 INFO loader - Getting configuration data from: C:\Program Files\SplunkUniversalForwarder\etc\myinstall\splunkd.xml
04-04-2017 21:49:05.363 +0530 INFO loader - SPLUNK_MODULE_PATH environment variable not found - defaulting to C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:05.363 +0530 INFO loader - loading modules from C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:05.363 +0530 INFO loader - Writing out composite configuration file: C:\Program Files\SplunkUniversalForwarder\var\run\splunk\composite.xml
04-04-2017 21:49:05.379 +0530 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
04-04-2017 21:49:05.379 +0530 INFO ServerConfig - Host name option is "".
04-04-2017 21:49:05.394 +0530 WARN AuthenticationManagerSplunk - Seed file is not present. Defaulting to generic username/pass pair.
04-04-2017 21:49:05.410 +0530 WARN UserManagerPro - Can't find [distributedSearch] stanza in distsearch.conf, using default authtoken HTTP timeouts
04-04-2017 21:49:06.720 +0530 ERROR LimitsHandler - Configuration from app=SplunkUniversalForwarder does not support reload: limits.conf/[thruput]/maxKBps
04-04-2017 21:49:06.720 +0530 ERROR ApplicationUpdater - Error reloading SplunkUniversalForwarder: handler for limits (access_endpoints /server/status/limits/general): Bad Request
04-04-2017 21:49:06.720 +0530 ERROR ApplicationUpdater - Error reloading SplunkUniversalForwarder: handler for server (http_post /replication/configuration/whitelist-reload): Application does not exist: Not Found
04-04-2017 21:49:06.720 +0530 ERROR ApplicationUpdater - Error reloading SplunkUniversalForwarder: handler for web (http_post /server/control/restart_webui_polite): Application does not exist: Not Found
04-04-2017 21:49:06.720 +0530 WARN LocalAppsAdminHandler - User 'splunk-system-user' triggered the 'enable' action on app 'SplunkUniversalForwarder', and the following objects required a restart: default-mode, limits, server, web
04-04-2017 21:49:07.095 +0530 INFO loader - Splunkd starting (build 67571ef4b87d).
04-04-2017 21:49:07.095 +0530 INFO loader - System info: Windows, TSMSRV2, 2, 6, x64.
04-04-2017 21:49:07.095 +0530 INFO loader - Detected 1 (virtual) CPUs, 1 CPU cores, and 16383MB RAM
04-04-2017 21:49:07.095 +0530 INFO loader - Maximum number of threads (approximate): 8191
04-04-2017 21:49:07.095 +0530 INFO loader - Arguments are: "rest" "--noauth" "POST" "/servicesNS/nobody/SplunkUniversalForwarder/data/outputs/tcp/server" "name=192.168.6.74:9997"
04-04-2017 21:49:07.095 +0530 INFO loader - Getting configuration data from: C:\Program Files\SplunkUniversalForwarder\etc\myinstall\splunkd.xml
04-04-2017 21:49:07.095 +0530 INFO loader - SPLUNK_MODULE_PATH environment variable not found - defaulting to C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:07.095 +0530 INFO loader - loading modules from C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:07.095 +0530 INFO loader - Writing out composite configuration file: C:\Program Files\SplunkUniversalForwarder\var\run\splunk\composite.xml
04-04-2017 21:49:07.126 +0530 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
04-04-2017 21:49:07.126 +0530 INFO ServerConfig - Host name option is "".
04-04-2017 21:49:07.142 +0530 WARN UserManagerPro - Can't find [distributedSearch] stanza in distsearch.conf, using default authtoken HTTP timeouts
04-04-2017 21:49:07.563 +0530 INFO loader - Splunkd starting (build 67571ef4b87d).
04-04-2017 21:49:07.563 +0530 INFO loader - System info: Windows, TSMSRV2, 2, 6, x64.
04-04-2017 21:49:07.563 +0530 INFO loader - Detected 1 (virtual) CPUs, 1 CPU cores, and 16383MB RAM
04-04-2017 21:49:07.563 +0530 INFO loader - Maximum number of threads (approximate): 8191
04-04-2017 21:49:07.563 +0530 INFO loader - Arguments are: "rest" "--noauth" "POST" "/servicesNS/nobody/SplunkUniversalForwarder/admin/deploymentclient/deployment-client" "targetUri=192.168.6.74:8089"
04-04-2017 21:49:07.563 +0530 INFO loader - Getting configuration data from: C:\Program Files\SplunkUniversalForwarder\etc\myinstall\splunkd.xml
04-04-2017 21:49:07.563 +0530 INFO loader - SPLUNK_MODULE_PATH environment variable not found - defaulting to C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:07.563 +0530 INFO loader - loading modules from C:\Program Files\SplunkUniversalForwarder\etc\modules
04-04-2017 21:49:07.563 +0530 INFO loader - Writing out composite configuration file: C:\Program Files\SplunkUniversalForwarder\var\run\splunk\composite.xml
04-04-2017 21:49:07.563 +0530 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
04-04-2017 21:49:07.563 +0530 INFO ServerConfig - Host name option is "".
04-04-2017 21:49:07.594 +0530 WARN UserManagerPro - Can't find [distributedSearch] stanza in distsearch.conf, using default authtoken HTTP timeouts
04-04-2017 21:49:07.610 +0530 WARN DC:PhonehomeThread - Phonehome thread is now shutdown.
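
Not from the post, but a common way to get more detail when a Windows UF install fails is to run the MSI with verbose logging and pass the documented installer properties explicitly; the version number, paths, host addresses, and credentials below are placeholders to adapt:

    msiexec /i splunkforwarder-6.5.3-x64-release.msi /l*v C:\temp\uf_install.log ^
        AGREETOLICENSE=Yes RECEIVING_INDEXER="192.168.6.74:9997" DEPLOYMENT_SERVER="192.168.6.74:8089" /quiet

The verbose log written by /l*v usually pinpoints which custom action or property caused the failure.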

How to get the Splunk App for Salesforce to connect and integrate with Salesforce?

Hi Elias Haddad, we are trying to get the Salesforce & Splunk integration working using the Splunk App for Salesforce that you developed. We followed all the prerequisites and went through all the FAQs, but we are still not able to pull any data. Any insights would be greatly appreciated. Thanks, Ravi C

Calculating data with multiple transactions per order

Hi, I have data with the following columns: OrderNo, Transaction Start, Transaction Stop. I wrote a search by OrderNo to get the time difference for each order. The problem is that order number 333 below has multiple transactions, and I need to calculate the duration for every pair of start/stop values per OrderNo. ![alt text][1] [1]: /storage/temp/192180-1.png It works fine until I get to orders that have multiple transactions.

    index=myindex source=mysource Service=myservice OrderNo=*
    | eval start_time = strptime(transaction_start, "%Y-%m-%d %H:%M:%S")
    | eval stop_time = strptime(transaction_stop, "%Y-%m-%d %H:%M:%S")
    | stats earliest(start_time) as start_time earliest(stop_time) as stop_time by OrderNo, Service
    | eval duration = tostring(stop_time - start_time)
    | stats mean(duration) as avg_duration by Service
    | table Service, avg_duration

Is it possible to read through one OrderNo and split it up into its several transactions? It's obvious I shouldn't be using earliest, but I just realized some of the orders have multiple transactions, and after searching and coming up empty I ended up here. Thanks!
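
A minimal sketch, assuming each event carries both transaction_start and transaction_stop so a duration can be computed per transaction before averaging; field and source names are taken from the post:

    index=myindex source=mysource Service=myservice OrderNo=*
    | eval start_time = strptime(transaction_start, "%Y-%m-%d %H:%M:%S")
    | eval stop_time = strptime(transaction_stop, "%Y-%m-%d %H:%M:%S")
    | eval duration = stop_time - start_time
    | stats avg(duration) as avg_duration by Service

If the start and stop instead arrive as two separate events per transaction, one option is the transaction command (for example, | transaction OrderNo maxevents=2), which produces its own duration field between the first and last event of each pair that can feed the same stats.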

Comparing firewall IPs to an inputlookup of blacklisted IPs to display the count and source

Hi, I have a blacklist inputlookup CSV which contains 20,000 blacklisted IPs. I need to compare the inputlookup with the Fortinet firewall data and display the count of each destination IP along with the srcip. As of now I have a query which compares the firewall outbound traffic and displays any blacklisted IP which is present in the inputlookup:

    | inputlookup Blackipfortinet.csv
    | search [ search index=fortinet | dedup dstip | fields dstip ]

What I need is the count of the destination IP followed by the src IP and time. Is that possible?
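
A sketch that turns the comparison around so that the firewall events drive the stats; it assumes the lookup's IP column is named dstip (rename it in the subsearch if it is called something else):

    index=fortinet
        [ | inputlookup Blackipfortinet.csv | fields dstip ]
    | stats count, values(srcip) as srcip, latest(_time) as last_seen by dstip
    | convert ctime(last_seen)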

Why am I unable to upgrade the Splunk Add-on for Tenable?

I am trying to update the Splunk Add-on for Nessus from version 3.0.2 to 5.1.1, but I get the error message "An error occurred while installing the app: 400". Any ideas or suggestions? Thank you!
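
Not from the post, but when the web upload fails with a 400, one workaround is installing the downloaded package from the CLI, which usually prints a more specific error; the package path and credentials below are placeholders:

    $SPLUNK_HOME/bin/splunk install app /tmp/Splunk_TA_nessus-5.1.1.tgz -update 1 -auth admin:changeme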

Splunk Oracle TA: collecting Unified Auditing logs

I'm wondering what the best way is to collect events from the Oracle Unified Auditing database. I know that without OUA it was easy to install the Splunk UF on the server and collect the .aud logs. Is the same functionality there with Unified Auditing, where the UF monitors the files, or do we have to actually query the DB to get the logs?

Comparing variables in a table

I want to create a search that runs through a field containing many MAC addresses that correspond to a specific store number, and then compare it to another field that has MAC addresses and store numbers from a different source; many of them should be identical. I want them to show up in rows like this when they match:

    Store #    SCCM Store #    Mac Address       SCCM Mac Address
    1500       1500            10:20:15:02:01    10:20:15:02:01

and likewise when they don't match:

    Store #    SCCM Store #    Mac Address       SCCM Mac Address
    1500       1200            10:20:15:02:01    10:20:15:02:01

My current search:

    | inputlookup rnddata.csv
    | rename "Store #" as Store_Number
    | rename mac as Mac_Address
    | stats values(Mac_Address) as Mac_Address values("SCCM Store") as "SCCM Store" by Store_Number SCCM_MAC_ADDRESS
    | sort "SCCM Store" desc
    | table "SCCM Store" Store_Number Store_Desc Mac_Address SCCM_MAC_ADDRESS

Right now, when I do this without the SCCM data it works perfectly and shows me the MACs in a specific Store_Number, but when I try the comparison the numbers are all off.
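
A sketch of one way to line the two sources up per MAC address and flag mismatches; the index, sourcetype, and field names for the SCCM side are placeholders, since the post doesn't show where that data lives:

    | inputlookup rnddata.csv
    | rename "Store #" as Store_Number, mac as Mac_Address
    | join type=left Mac_Address
        [ search index=sccm sourcetype=sccm:inventory
          | rename store as SCCM_Store_Number, mac as Mac_Address
          | fields Mac_Address, SCCM_Store_Number ]
    | eval store_match = if(Store_Number == SCCM_Store_Number, "match", "mismatch")
    | table Store_Number, SCCM_Store_Number, Mac_Address, store_match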

Splunk UF clients phoned home but are not indexing logs

I've deployed the deployment app to the deployment client from the deployment server. The server appeared on the phone-home list, but its logs are not being indexed. The splunkforwarder logs don't show any errors, and the splunk user has permission to read the logs. Where can I check what is preventing the indexing?
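
A few generic checks that usually narrow this down. On the forwarder:

    $SPLUNK_HOME/bin/splunk list forward-server
    $SPLUNK_HOME/bin/splunk list monitor
    $SPLUNK_HOME/bin/splunk btool inputs list monitor --debug

On the search head, the forwarder's own internal logs often say why a file is skipped or why output is blocked (the host name below is a placeholder):

    index=_internal host=my_forwarder source=*splunkd.log* (TcpOutputProc OR TailReader OR WatchedFile)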

Single index, Role Based Access Control using Dashboard and Saved Searches

Hi All, below is my problem statement:

1. We receive all the events into a single index ("main"). We cannot change this, as it's a 3rd party sending these events.
2. We have different sourcetypes like syslog, applog, etc.
3. We have different roles like sysadmin, developer, etc.
4. We have separate users pertaining to each role, like sysad, dev, etc.

Now, I want to restrict the sysad from viewing the contents of dev, and vice versa. We receive all the events into "main". I have a user dev, and I removed dev's permission on the "main" index. I created a dashboard and gave dev read-only access to it. This dashboard uses a saved search, which also has read-only permissions for dev. I removed dev's permission on "main" because I want to restrict the dev user from the "search" facility available on the dashboard. Ideally, once I remove dev's permission on the "main" index, dev should not be able to find any events, but it does find all the events. Is there any other way to achieve role-based access control over the same index? Thanks and Regards, Abhay Dandekar
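
One common approach is to keep the index allowed but attach a search filter to each role in authorize.conf, so every search run by members of the role is silently constrained; a sketch, with role and sourcetype names as examples only:

    [role_dev]
    srchIndexesAllowed = main
    srchFilter = sourcetype=applog

    [role_sysadmin]
    srchIndexesAllowed = main
    srchFilter = sourcetype=syslog

srchFilter applies at search time to every search the role's members run, including searches launched from dashboards.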

How to get a count of stats lists that contain specific data?

Hi all, how do I get a count of stats lists that contain a specific value? The data is populated using stats and the list() function, split by date and user. There are at least 1000 data points. A sample is below:

    Date       User     list(data)
    3/31/17    user1    1, 2, 4
    3/31/17    user2    1, 3
    3/31/17    user3    8

Let's say I want to count users whose list(data) contains a number bigger than "1". Then the user count should be "3". I tried using

    | where 'list(data)' > 1 | chart count(user) by date

but it gives me a userCount of "1" for this case, as it ignores the lists that have 2 or 3 values.
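
A sketch that evaluates the condition before the values are rolled into a list, assuming data is numeric and the base search is whatever currently feeds the stats:

    ... your base search ...
    | stats list(data) as data, max(data) as max_data by date, user
    | where max_data > 1
    | stats dc(user) as userCount by date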

REX expression for multiple extractions in columns

Hello all, I was hoping I could get a bit of assistance figuring out a rex expression I could use to extract part numbers that are in columns. I have a sample data set below:

    part_num  serial_num  type
    abc       123         a
    bcd       234         a
    cde       456         b

Essentially I'm trying to extract all the "part_num" and "serial_num" values for "type" "a". I can extract the first part that matches, but I've been unable to figure out how to extract all the fields I need of type a from my events. Essentially it would look like this (FYI, I already have the host machine serial number extracted):

    rex ... | stats list(part_num) as part_num list(serial_num) as serial_num by host_machine

    host_machine    part_num    serial_num
    981-aabbc       abc         123
                    bcd         234

and this would display for all my machines. Thank you, and please let me know if there are any questions. I appreciate any help.
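
A sketch using rex with max_match to grab every type-a row and mvexpand to split the pairs back apart; the regex assumes whitespace-separated columns exactly as in the sample, so it will need adjusting to the real event layout:

    ... your base search ...
    | rex max_match=0 "(?<pair>\S+\s+\S+)\s+a(?:\s|$)"
    | mvexpand pair
    | rex field=pair "(?<part_num>\S+)\s+(?<serial_num>\S+)"
    | stats list(part_num) as part_num, list(serial_num) as serial_num by host_machine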

How do I replicate internal index data across a Hunk search head cluster with no indexers?

We currently have a search head cluster set up to use HDFS as a backend. We'd like the _audit index data to be the same on each box for statistics-gathering purposes, but the Splunk documentation on doing this all refers to pushing the data to Splunk indexers, which we don't use. Has anyone else solved this problem?

Splunk Predict - Is it possible to forecast future data (not from the last data point) using past data?

I have created a panel that predicts future ticket volume given past values over time, as shown below. From this panel, it can be observed that Splunk shows its future predictions (March 1 onwards) starting from the last data point (February 28). Is it possible for Splunk to predict future data not starting from the last data point selected? An example would be forecasting data for the month of April 2017 considering only the volumes of April from the previous years (e.g. April 2015, April 2016). ![alt text][1] [1]: /storage/temp/192182-capture.png Hoping someone could confirm. Responses are highly appreciated. Thank you.
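
The predict command always forecasts forward from the end of the series it is given, so the closest built-in route is to feed it several full years and let a seasonal algorithm weight the same month in past years; a sketch, with the index name, spans, and algorithm choice as assumptions:

    index=tickets earliest=-3y@y latest=@mon
    | timechart span=1mon count
    | predict count algorithm=LLP period=12 future_timespan=2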

Can I reload the values of a form input?

Hi, I have a form with a drop-down select which passes values (index) to a multiselect input (sourcetype). What I've noticed is that if I've chosen some sourcetypes and then decide to change the index, the sourcetypes remain in the multiselect input, and I have to delete them manually. Is there a way to reload the sourcetype input after the index selection changes? Hope this makes sense...
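
In Simple XML this is usually handled with a change block on the dropdown that unsets the multiselect's token so its selections are cleared; a sketch, where the token names are placeholders for whatever the dashboard actually uses:

    <input type="dropdown" token="index_tok" searchWhenChanged="true">
      ...
      <change>
        <unset token="form.sourcetype_tok"></unset>
        <unset token="sourcetype_tok"></unset>
      </change>
    </input>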

Problem with indexes.conf: splunkd not restarting!

Hello, I have a distributed Splunk architecture with a search head running Splunk ES and an indexer. I suddenly get this error message on the indexer and it is stopped. I get this error when I restart the splunkd service: "Problem parsing indexes.conf: default index disabled - quit! Validating databases (splunkd validatedb) failed with code '1'. Please file a case online at http://www.splunk.com/page/submit_issue". Please, I need help! :( Thank you very much for your help!
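
The message indicates that the index serving as the default destination (usually main) ends up disabled once all the indexes.conf files are merged. A generic way to see the merged view and find which app set disabled = true, assuming a Linux install with default paths:

    $SPLUNK_HOME/bin/splunk btool indexes list main --debug
    $SPLUNK_HOME/bin/splunk btool indexes list --debug | grep -i disabled

btool works even while splunkd is down, and the --debug column shows the file that each setting comes from.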

Time difference (practical values) between event-time and index-time in large clustered environments

I know it is a weird question (like "how long is a piece of string?"), but I'm after values from your experience in real, large clustered environments. We are estimating how fast Splunk can respond in near real time, but on analysing the difference between _time and _indextime, the values are much higher than I thought: around 300 seconds at the 90th percentile of the data.

- The data comes from syslog and from Universal Forwarders.
- No queueing/pipeline blocks.
- No delay from the source as such.

Just wanted to check how your systems are looking. Is 300 seconds too much, or good enough for most of the data?
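
For reference, a sketch of the measurement being described, so others can compare like for like; the time range and split fields are just an example:

    index=* earliest=-1h
    | eval lag = _indextime - _time
    | stats median(lag) as median_lag, perc90(lag) as p90_lag, max(lag) as max_lag by index, sourcetype
    | sort - p90_lag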

Diagnosing Issues with Python and Splunk Add-on for EMC VNX data_loader scripts "hanging"

We are trying to perform storage monitoring, and both the EMC VNX and EMC XtremIO add-ons seem to run python scripts that break after a period of time. I think it's due to sockets staying open or the .py scripts not ending cleanly, but I am not proficient enough in python to diagnose. This post is specific to the Splunk Add-on for EMC VNX. We have 2 heavy forwarders that run the Splunk Add-on for EMC VNX against several different arrays, and it consistently seems to be the python scripts that stay running; doing a splunk stop and splunk start fixes the issue for anywhere from half a day to several days. Here is the error from the VNX log:

    [splunk@log1 splunk]$ tail -f data_loader.log
    File "/opt/splunk/etc/apps/Splunk_TA_emc-vnx/bin/timed_popen.py", line 55, in timed_popen
        return _do_timed_popen(args, timeout)
    File "/opt/splunk/etc/apps/Splunk_TA_emc-vnx/bin/timed_popen.py", line 41, in _do_timed_popen
        sub = Popen(args, stdout=PIPE, stderr=PIPE)
    File "/opt/splunk/lib/python2.7/subprocess.py", line 710, in __init__
        errread, errwrite)
    File "/opt/splunk/lib/python2.7/subprocess.py", line 1335, in _execute_child
        raise child_exception
    OSError: [Errno 2] No such file or directory

It repeats this over and over in both heavy forwarders until Splunk is stopped and started. Running a Splunk restart from cron every morning at 5am did not work as a workaround for this issue. The healthy log looks like this (you can also see it ending from the Splunk stop):

2017-04-05 11:39:19,173 INFO 140455369918272 - Data loader is going to exit...
2017-04-05 11:39:19,173 INFO 140454121158400 - Worker thread Thread-16 going to exit
2017-04-05 11:39:19,174 INFO 140454146336512 - Worker thread Thread-13 going to exit
2017-04-05 11:39:19,174 INFO 140455200052992 - Worker thread Thread-1 going to exit
2017-04-05 11:39:19,174 INFO 140454137943808 - Worker thread Thread-14 going to exit
2017-04-05 11:39:19,174 INFO 140454154729216 - Worker thread Thread-12 going to exit
2017-04-05 11:39:19,175 INFO 140455191660288 - Worker thread Thread-2 going to exit
2017-04-05 11:39:19,175 INFO 140454129551104 - Worker thread Thread-15 going to exit
2017-04-05 11:39:19,175 INFO 140454691600128 - Worker thread Thread-5 going to exit
2017-04-05 11:39:19,175 INFO 140454683207424 - Worker thread Thread-6 going to exit
2017-04-05 11:39:19,175 INFO 140455174874880 - Worker thread Thread-4 going to exit
2017-04-05 11:39:19,175 INFO 140455183267584 - Worker thread Thread-3 going to exit
2017-04-05 11:39:19,176 INFO 140454658029312 - Worker thread Thread-9 going to exit
2017-04-05 11:39:19,176 INFO 140454666422016 - Worker thread Thread-8 going to exit
2017-04-05 11:39:19,176 INFO 140454649636608 - Worker thread Thread-10 going to exit
2017-04-05 11:39:19,176 INFO 140454674814720 - Worker thread Thread-7 going to exit
2017-04-05 11:39:19,176 INFO 140454641243904 - Worker thread Thread-11 going to exit
2017-04-05 11:39:19,178 INFO 140455369918272 - ProcessPool is going to exit...
2017-04-05 11:39:19,210 INFO 140454112765696 - Event writer thread is going to exit...
2017-04-05 11:39:19,229 INFO 140454104372992 - TimerQueue thread is going to exit...
2017-04-05 11:39:43,188 INFO 140321121437504 - thread_pool_size = 16
2017-04-05 11:39:43,190 INFO 140321121437504 - process_pool_size = 2
2017-04-05 11:39:43,807 INFO 140321121437504 - Get 0 ready jobs, next duration is 5.506924, and there are 12 jobs scheduling
2017-04-05 11:39:49,318 INFO 140321121437504 - Get 1 ready jobs, next duration is 3.996371, and there are 12 jobs scheduling
2017-04-05 11:39:49,321 INFO 140320742307584 - thread work_queue_size=0
2017-04-05 11:39:53,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 8.999508, and there are 12 jobs scheduling
2017-04-05 11:39:53,315 INFO 140320733914880 - thread work_queue_size=0
2017-04-05 11:40:02,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 0.999262, and there are 12 jobs scheduling
2017-04-05 11:40:02,315 INFO 140320725522176 - thread work_queue_size=0
2017-04-05 11:40:03,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 11.999513, and there are 12 jobs scheduling
2017-04-05 11:40:03,315 INFO 140320717129472 - thread work_queue_size=0
2017-04-05 11:40:15,315 INFO 140321121437504 - Get 2 ready jobs, next duration is 7.999429, and there are 12 jobs scheduling
2017-04-05 11:40:15,315 INFO 140320708736768 - thread work_queue_size=1
2017-04-05 11:40:15,315 INFO 140320700344064 - thread work_queue_size=0
2017-04-05 11:40:23,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 0.999395, and there are 12 jobs scheduling
2017-04-05 11:40:23,315 INFO 140320691951360 - thread work_queue_size=0
2017-04-05 11:40:24,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 3.999494, and there are 12 jobs scheduling
2017-04-05 11:40:24,315 INFO 140320205436672 - thread work_queue_size=0
2017-04-05 11:40:28,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 7.999428, and there are 12 jobs scheduling
2017-04-05 11:40:28,315 INFO 140320197043968 - thread work_queue_size=0
2017-04-05 11:40:36,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 0.999498, and there are 12 jobs scheduling
2017-04-05 11:40:36,315 INFO 140320188651264 - thread work_queue_size=0
2017-04-05 11:40:37,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 2.999524, and there are 12 jobs scheduling
2017-04-05 11:40:37,315 INFO 140320180258560 - thread work_queue_size=0
2017-04-05 11:40:40,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 95.000096, and there are 12 jobs scheduling
2017-04-05 11:40:40,315 INFO 140320171865856 - thread work_queue_size=0
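
Not from the post, but the OSError [Errno 2] raised inside Popen usually means the external command the script tries to launch cannot be found, rather than a socket problem. If the add-on in this setup shells out to the NaviSphere CLI, a quick check from the Splunk user's environment might look like the following (the binary name and path are assumptions to adapt):

    su - splunk -c 'which naviseccli'
    ls -l /opt/Navisphere/bin/naviseccli
    # and see whether old collector processes pile up between restarts
    ps -ef | grep -i 'Splunk_TA_emc-vnx' | grep -v grep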

