Channel: Questions in topic: "splunk-enterprise"

How do you join 2 tables while showing what's not in table 2?

This search successfully shows a combined table of the users that appear in both table1 and table2:

```
| inputlookup table1.csv | join type=inner userColumn [ inputlookup table2.csv ]
```

However, I want to show all users in table1 that are NOT in table2. How can I do that?
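One way to keep only the table1 users with no match in table2 is a left join plus a null test. A sketch, assuming `userColumn` is the shared field; `in_table2` is just a hypothetical marker field added for the test:

```
| inputlookup table1.csv
| join type=left userColumn
    [ | inputlookup table2.csv | eval in_table2=1 | fields userColumn in_table2 ]
| where isnull(in_table2)
| fields - in_table2
```

A subsearch form (`| inputlookup table1.csv | search NOT [ | inputlookup table2.csv | fields userColumn ]`) also works; note that both approaches are subject to subsearch limits on very large lookups.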

How do you generate a self-signed certificate for a Windows universal forwarder?

We have a requirement to enable TLS on traffic from a universal forwarder (UF) to a heavy forwarder. We will be using self-signed certificates for this. From the following Splunk documentation, we understand how to generate and apply certificates for heavy forwarders, but we are not clear on how to generate certificates for the UF (client.pem).

Splunk Universal Forwarder: v6.1.7.1
Splunk Enterprise: v6.6.5

Can you please point me in the right direction for generating client certificates?

https://docs.splunk.com/Documentation/Splunk/7.2.0/Security/Howtoself-signcertificates
http://docs.splunk.com/Documentation/Splunk/7.2.0/Security/HowtoprepareyoursignedcertificatesforSplunk
http://docs.splunk.com/Documentation/Splunk/7.2.0/Security/ConfigureSplunkforwardingtousesignedcertificates
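For self-signed setups, the UF's client.pem is produced the same way as the server certificate: generate a key and CSR, sign the CSR with the same root CA you created for the heavy forwarder, then concatenate cert, key, and CA cert. A sketch using the CA file names from the self-sign docs; the client file names are placeholders. You can run this on any host with OpenSSL (e.g. the heavy forwarder) and copy the resulting client.pem to the Windows UF:

```
# "splunk cmd" supplies Splunk's bundled OpenSSL and its library paths
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out myClientPrivateKey.key 2048
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key myClientPrivateKey.key -out myClientCert.csr
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in myClientCert.csr \
    -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial \
    -out myClientCert.pem -days 1095
# client.pem = client cert + client key + CA cert, concatenated:
cat myClientCert.pem myClientPrivateKey.key myCACertificate.pem > client.pem
```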

How can I search on a dashboard for all events related to a specific individual?

How can I search on a dashboard for all events related to a specific individual? I have searched this site and the web, with no luck (so far). Thanks. Mac
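One common approach is a dashboard text input whose token feeds every panel search. A minimal Simple XML sketch; `index=main` and the `user` field are placeholders for wherever your events keep the person's identifier:

```xml
<form>
  <label>User activity</label>
  <fieldset submitButton="true">
    <input type="text" token="user_tok">
      <label>Username</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <search>
          <query>index=main user="$user_tok$"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </event>
    </panel>
  </row>
</form>
```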

After trying to upgrade a cluster master to version 7.0.1, why am I getting the following error message: "Could not find new UI modules directory to install"?

Hi. Have any of you seen the message "Could not find new UI modules directory to install" after doing an upgrade of Splunk? I got it when trying to upgrade a cluster master to version 7.0.1. The install went (seemingly) OK, but when trying to start Splunk, it gets stuck after:

```
Copying '/opt/splunk/etc/myinstall/splunkd.xml.cfg-default' to '/opt/splunk/etc/myinstall/splunkd.xml'.
Deleting '/opt/splunk/etc/system/local/field_actions.conf'.
Could not find new UI modules directory to install
```

Thanks

After upgrading a single Splunk Enterprise instance type from 2vCPU to 4vCPUs, why isn't Splunk seeing additional CPUs?

I have a single Splunk instance on an Amazon AMI RHEL box. I upgraded instance type from 2vCPU to 4vCPUs, and Splunk for some reason cannot see the additional CPUs. However, it did pick up the memory upgraded (8 to 16). I have yet to find an answer to this online.
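splunkd detects hardware at startup, so make sure it was restarted after the resize. If the OS itself sees all four vCPUs (`nproc` or `/proc/cpuinfo` on RHEL), you can compare that against what splunkd reports via the server/info REST endpoint. A sketch; `numberOfVirtualCores` may not be present in every version:

```
| rest /services/server/info splunk_server=local
| fields numberOfCores numberOfVirtualCores physicalMemoryMB
```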

Can you help me with the following KV Store error: "Detected unclean shutdown - /home/dbindex/kvstore/mongo/mongod.lock is not empty"?

Hi, I have several errors related to KV Store:

- Failed to start KV Store process. See mongod.log and splunkd.log for details.
- KV Store changed status to failed. KVStore process terminated.
- KV Store process terminated abnormally (exit code 100, status exited with code 100). See mongod.log and splunkd.log for details.

When reviewing mongod.log, I see:

```
Detected unclean shutdown - /home/dbindex/kvstore/mongo/mongod.lock is not empty.
I STORAGE [initandlisten]
I STORAGE [initandlisten] ** WARNING: Readahead for /home/dbindex/kvstore/mongo is set to 4096KB
I STORAGE [initandlisten] **          We suggest setting it to 256KB (512 sectors) or less
I STORAGE [initandlisten] **          http://dochub.mongodb.org/core/readahead
I STORAGE [initandlisten] **************
old lock file: /home/dbindex/kvstore/mongo/mongod.lock. probably means unclean shutdown,
but there are no journal files to recover. this is likely human error or filesystem
corruption. please make sure that your journal directory is mounted. found 76 dbs.
see: http://dochub.mongodb.org/core/repair for more information
```

I tried to run `./mongod --dbpath /DB/kvstore/mongo --repair` and got this error:

```
./mongod: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
```

I need help to solve the problem!
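The libssl error suggests mongod was launched directly, so it cannot find the OpenSSL libraries bundled with Splunk. Wrapping the call in `splunk cmd`, which sets up Splunk's library path, is a reasonable first step. A sketch; note that the dbpath in your repair attempt (/DB/kvstore/mongo) differs from the one in the logs (/home/dbindex/kvstore/mongo), so use whichever is actually configured on your system:

```
# Stop Splunk first so nothing else holds the KV store files
$SPLUNK_HOME/bin/splunk stop
# Run the bundled mongod through "splunk cmd" so LD_LIBRARY_PATH points
# at Splunk's own OpenSSL libraries:
cd $SPLUNK_HOME/bin
./splunk cmd ./mongod --dbpath /home/dbindex/kvstore/mongo --repair
```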

On a modular input built with JavaScript SDK, how do I set and retrieve a password that is entered on creation of the input?

Hi, I have a custom modular input built with the JavaScript SDK. I am trying to set and retrieve a password that is entered on creation of the input. I am looking for assistance on how this endpoint works and where it should reside within the code.

In "validateInput":

```javascript
var obj = { username: username, password: password };
// Create a new storage password
sp.create(obj, function(err, storagePassword) {
    if (err) {
        /* handle error */
    } else {
        // Storage password was created successfully
        Logger.log(sp.properties());
    }
});
```

In "streamEvents":

```javascript
sp.fetch(function(err, storagePasswords) {
    if (err) {
        /* handle error */
    } else {
        Logger.log("Found " + storagePasswords.list().length + " storage passwords");
        // get the pwd value somehow?
        var list = storagePasswords.list();
        for (var i = 0; i < list.length; i++) {
            // Each storage password entry exposes the decrypted value
            // in its properties as "clear_password":
            Logger.log(list[i].properties()["clear_password"]);
        }
    }
});
```

How do I send an alert when a job does not start within the expected time?

If JOB1 doesn't start by 4:00 AM, an alert should trigger; if JOB1 starts before 4:00 AM, no alert is needed. Do we need to use a case statement for this?
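No `case` command is needed; this is usually done with a scheduled alert. A sketch, where the index, sourcetype, and start-message text are hypothetical stand-ins for however JOB1 logs its start: run the search a few minutes after 4:00 AM (cron `5 4 * * *`) over the midnight-to-4:00 AM window, and trigger when nothing was found.

```
index=app_logs sourcetype=job_scheduler job_name="JOB1" "job started"
    earliest=@d latest=@d+4h
| stats count AS starts
| where starts = 0
```

Set the alert's trigger condition to "number of results is greater than 0": the search returns a row only when no start event was seen before 4:00 AM.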

Why does one failed output stop all output routing when a third-party server goes down?

Hi, I am getting a weird issue. If the syslog server fails, it stops all data being indexed by the default TCP out, and then Splunk fills its buckets and falls over. Am I missing something to set it to continue if it can't connect to an output?

outputs.conf:

```
[syslog]
defaultGroup = xxxxx_indexers

[syslog:xxxxx_indexers]
server = xxx.xxx.xxx.xxx:9997
type = tcp
timestampformat = %Y-%m-%dT%T.%S
```

transforms.conf:

```
[mehRouting]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = xxx_cluster_indexers

[Routing_firewalls]
SOURCE_KEY = MetaData:Sourcetype
REGEX = (fgt_traffic|fgt_utm)
DEST_KEY = _SYSLOG_ROUTING
FORMAT = xxxx_indexers
```

props.conf:

```
[host::xxxxxxx1c]
TRANSFORMS-routing = mehRouting, Routing_firewalls

[host::xxxxxc]
TRANSFORMS-routing = mehRouting, Routing_firewalls
```
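When routed outputs share the pipeline, one blocked destination can back-pressure everything, which matches what you describe. outputs.conf documents `dropEventsOnQueueFull` (and, for cloned routes, `dropClonedEventsOnQueueFull`) to trade data loss for availability; whether and how these apply to a syslog output group is something to verify in outputs.conf.spec for your version. A sketch under that assumption:

```
# Sketch only; verify these settings against outputs.conf.spec for your
# Splunk version before relying on them.
[syslog:xxxxx_indexers]
server = xxx.xxx.xxx.xxx:9997
type = tcp
timestampformat = %Y-%m-%dT%T.%S
# Drop events bound for this group when its queue stays full, instead
# of back-pressuring the indexing pipeline:
dropEventsOnQueueFull = 30
```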

Splunk architecture across Amazon Web Services (AWS) accounts and VPCs: multisite or single-site deployment?

We are deploying hosting to various organizations in our "company". Each organization may consist of numerous apps (100+) and 5,000+ employees. Our intention is to provide these organizations with an AWS account, which would be consumed into our AWS deployment infrastructure. Each VPC/AWS account will hold various apps and types of data.

My question is: should I treat each of these accounts as a separate Splunk site (multisite deployment), with searches local to that VPC? Or should I instead route log traffic to a separate "master" VPC as one larger clustered deployment? The quantity of apps/users is a sliding scale as our project grows: today it's 1 app only; next year it could be 100 per organization.

I had initially intended to route logs securely to a single Splunk Enterprise cluster made up of, say, 1 search head and 2-3 indexers, and grow out as demand grows. But on reading about multisite, there seem to be quite a lot of benefits. However, I suspect whatever is saved on VPC traffic costs will be lost to running many nodes/indexers/search heads per AWS account. Or would it be better to view multisite as a longer-term deployment strategy as the project grows, and migrate the deployment at a later date? Thoughts welcome.

How do I prevent duplicate data being indexed from CSV files that are forwarded using a universal forwarder (UF)?

I have multiple applications that place login information (Logon Date/Time, Logoff Date/Time, userid, etc.) into existing CSV files (one per application). I am monitoring these files, but when they are indexed, the old data is reindexed, so I have multiple events per logon. This is causing errors in reporting (I shouldn't have to do a `dedup`) and is ballooning the size of each index (wasting disk space).

My understanding is that when a file is being monitored, a beginning and end CRC is generated to fingerprint the file, along with a Seek Address. **Documentation states:** "A matching record for the CRC from the file beginning in the database, the content at the Seek Address location matches the stored CRC for that location in the file, and the size of the file is larger than the Seek Address that Splunk Enterprise stored. While Splunk Enterprise has seen the file before, data has been added since it was last read. Splunk Enterprise opens the file, seeks to Seek Address--the end of the file when Splunk Enterprise last finished with it--and starts reading the new from that point." I take this to mean that existing events are not added and only new events are indexed. This isn't happening in my case.

I have read the questions concerning "duplicate data" and two settings keep appearing. One is `followTail`; reading the doc for this, I see "WARNING: Use of followTail should be considered an advanced administrative action." and "DO NOT leave followTail enabled in an ongoing fashion." This doesn't look to be a good fit for my problem. The second is `crcSalt`. The question I have on that setting is: if I do set it, does that ignore the Seek Address, causing the entire file to be indexed, which is where I am now?

Thank you in advance for any help that can be provided. Scott
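Two things worth checking here. If the applications rewrite the CSVs in place instead of strictly appending, the leading bytes change, the stored CRC no longer matches, and Splunk treats each rewrite as a brand-new file and indexes it from the top; no monitor setting fully fixes that. If the files are append-only, files sharing their first 256 bytes (for example, an identical header row) can confuse the fingerprinting, and raising `initCrcLength` helps. On the `crcSalt` question: it does not ignore the Seek Address; `crcSalt = <SOURCE>` just mixes the file path into the fingerprint, so appends are still read incrementally, though renamed or rotated files get re-indexed in full. A sketch with a placeholder path and sourcetype:

```
# inputs.conf sketch; monitor path and sourcetype are placeholders
[monitor://E:\logs\app1\logins.csv]
sourcetype = app_logins
# CSVs from the same application often begin with the same header row;
# hash more than the default 256 bytes so their CRCs differ:
initCrcLength = 1024
```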

In Splunk Enterprise, how do I call an external Python 3 interpreter via a scripted input?

Splunk still uses Python 2.7 internally, but has the ability to call external scripts to generate data via [Scripted Inputs](https://docs.splunk.com/Documentation/Splunk/7.1.3/AdvancedDev/ScriptedInputsIntro). I would like to pull data using an external API which has a Python 3 library. I have installed Python 3 to a separate place on the file system and written a Windows batch script, which calls this and invokes the Python 3 API. When run from the OS, this generates the data I would like, but if I try to add this batch script as an input to Splunk, I get an error along the lines of the following:

```
ERROR ExecProcessor - message from "e:\splunk\bin\scripts\test.bat" Fatal Python error: Py_Initialize: unable to load the file system codec
ERROR ExecProcessor - message from "e:\splunk\bin\scripts\test.bat"   File "E:\splunk\Python-2.7\Lib\encodings\__init__.py", line 123
ERROR ExecProcessor - message from "e:\splunk\bin\scripts\test.bat"     raise CodecRegistryError,\
ERROR ExecProcessor - message from "e:\splunk\bin\scripts\test.bat"     ^
ERROR ExecProcessor - message from "e:\splunk\bin\scripts\test.bat" SyntaxError: invalid syntax
```

I get a similar error even if the batch script contains only a minimal statement:

```
\python.exe -c 'print("Hello")'
```

This implies the problem is something from Python 2 being passed to the Python 3 environment, since the Splunk script call is ultimately from that. I asked Splunk support about this and was told it "isn't supported" and was directed to contact professional services, which seems like overkill for what is likely just an environment issue.

**Is there a way to wrap the python commands such that this still works?**

As of now, my workaround is to have the batch file called as a scheduled task in Windows and write the results to a file, which is then monitored by Splunk. Additionally, at some point Splunk will make the jump internally to Python 3, and likely people will have the reverse problem with older Python 2.x libraries.
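The traceback pointing at Splunk's bundled `Python-2.7\Lib\encodings` suggests splunkd's environment variables (notably PYTHONPATH and PYTHONHOME, which point at the bundled 2.7) are leaking into the external interpreter. Clearing them at the top of the batch script is a common workaround; a sketch, where the Python 3 path and script name are placeholders:

```
@echo off
REM Splunk exports PYTHONPATH/PYTHONHOME for its bundled Python 2.7;
REM clear them so the external Python 3 starts with a clean environment.
set PYTHONPATH=
set PYTHONHOME=
C:\Python37\python.exe C:\scripts\pull_api_data.py
```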

How do we load a Splunk universal forwarder (UF) on a nonpersistent Citrix image?

We have a farm of Citrix servers that are built from a Gold image. The systems act as desktops for users. Each night, the system is rebooted and comes up like the day the Gold image was built. All of the Windows logs have been redirected to an E: drive that is persistent. I can't load Splunk on the E: drive because it is not accessible when the Gold image is built. I can't load Splunk on the Gold image because it starts every day like a brand-new install: the GUID changes each day, plus we get all the logs back to the date the Gold image was built.

I think we have 3 choices, but don't know how to do any of them:

1) Load Splunk on the E: drive and figure out how to tell the Gold image to start Splunk from E: at boot.
2) Load Splunk on the C: drive and tell Splunk to only forward events from boot time forward.
3) Load Splunk on C:, but tell Splunk to put all the temporary files, fish bucket, or anything else that needs to change, on E:.

Anyone have any suggestions? Even a list of all the Splunk UF file locations and registry entries would be a big help.
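For option 3, most of the mutable state (the fishbucket and other run-time files) lives under SPLUNK_DB, which splunk-launch.conf can point at the persistent drive. A sketch; the E: path is a placeholder:

```
# %SPLUNK_HOME%\etc\splunk-launch.conf on the Gold image:
# relocate mutable state (fishbucket, run files) to the persistent drive
SPLUNK_DB = E:\SplunkUF\var\lib\splunk
```

For the changing GUID, a commonly used step when cloning forwarders is to delete %SPLUNK_HOME%\etc\instance.cfg before sealing the image, so each clone generates a fresh GUID on first start.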

How do I set a filter between a min and a max value in dashboard XML?

Hi, I need your help to set a filter between a min and a max value. Example: I want to print values in a range (value > -2 and value < 5). I have created two dropdown boxes and two filters:

```
FILTER score > "$minwbrs_score$"
FILTER score < "$maxwbrs_score$"
```

The issue: XML parsing fails on the operator < (less than). This is the error:

```
XML Syntax Error: StartTag: invalid element name
```
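Inside dashboard XML, a literal `<` starts a new tag, which is why the parser throws "StartTag: invalid element name". Escape `<` as `&lt;` (and `>` as `&gt;`) inside the query, or wrap the whole query in a CDATA section. A sketch; the search surrounding the comparison is illustrative:

```xml
<query>
  | inputlookup scores.csv
  | where score &gt; $minwbrs_score$ AND score &lt; $maxwbrs_score$
</query>
```

The CDATA form avoids escaping entirely: `<query><![CDATA[ ... | where score > $minwbrs_score$ AND score < $maxwbrs_score$ ]]></query>`.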

How do I use the tstats command to count field pairs?

Hello everybody, I want to count how often a specific pair of src-dest appears... something like:

```
src          dest         count
10.10.10.10  11.11.11.11  3
10.10.10.10  11.11.11.12  1
10.10.10.10  11.11.11.13  12
```

I use the following search:

```
| tstats summariesonly=true prestats=true count as boo
    from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.x_src_zone="smth" All_Traffic.x_dest_zone="smth"
    by All_Traffic.x_src_zone All_Traffic.x_dest_zone
| table All_Traffic.x_src_zone All_Traffic.x_dest_zone boo
```

Unfortunately, the whole boo column is always empty.
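`prestats=true` emits intermediate results meant to be consumed by a following `stats`/`chart` command, so the renamed `boo` field never materializes for `table`. Dropping `prestats=true`, and grouping by the address fields rather than only the zones, should give one row per pair. A sketch; `x_src_zone`/`x_dest_zone` are your custom fields, while `All_Traffic.src`/`All_Traffic.dest` are the standard Network_Traffic datamodel fields assumed here:

```
| tstats summariesonly=true count AS boo
    from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.x_src_zone="smth" All_Traffic.x_dest_zone="smth"
    by All_Traffic.src All_Traffic.dest
| rename All_Traffic.src AS src, All_Traffic.dest AS dest
| table src dest boo
```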