Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

IP Reputation app directory structure error.

I recently tried to install the IP Reputation app on Splunk Enterprise. I downloaded the .tgz file from Splunkbase and tried to install it by uploading the file, but received the following error: **"There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and ipreputation"**. I extracted the archive and found another folder called ***PaxHeader*** as well as a file ***._ipreputation*** next to the main ipreputation directory. When I moved this file and folder into the ipreputation folder, the app seemed to install, but the threat score was not displayed even though I had entered the key in the .py file. I re-downloaded the archive and verified the MD5 checksum, which was fine, so the app's directory structure itself seems to have an error. Please advise.

How to create a lookup matching non-exact words?

I have the following type of event, and I want to add a Category field to it using a lookup:

    time: 6/01/2018   Transaction: 40.22   Business name: ABC foods 6697 VALE TAP AND PAY 0000

So I created the following lookup file, test.csv:

    Business name,Category
    ABC foods,Dine out
    DEF utilities,Utilities
    TARGET suburb name,Shopping
    supermarket suburb TAP and PAY 0000,Groceries

Below is my search query:

    index="finance" sourcetype="csv_finance" | lookup test.csv "Business name" OUTPUT Category | table "Business name" Category

but it is not displaying the results. How can I create a successful lookup that will display the **categories** along with the **business name** in the search results?
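By default a CSV lookup matches only exact values, so "ABC foods 6697 VALE TAP AND PAY 0000" will never equal "ABC foods". One common approach is a wildcard lookup: define the lookup in transforms.conf with `match_type = WILDCARD(...)` and put `*` around the keywords in the CSV. A sketch, with the lookup stanza name assumed and the field renamed to avoid the space (the original names come from the question):

```
# transforms.conf -- wildcard matching on the lookup key
[test_lookup]
filename = test.csv
match_type = WILDCARD(business_name)

# test.csv -- keys now contain wildcards
# business_name,Category
# *ABC foods*,Dine out
# *TAP AND PAY*,Groceries
```

The search then references the lookup by its transforms.conf stanza name:

    index="finance" sourcetype="csv_finance" | lookup test_lookup business_name AS "Business name" OUTPUT Category | table "Business name" Category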

How can I automatically rotate Splunk local passwords?

All, I've been asked to automatically rotate the local passwords on Splunk every week. The scheme can be predictable, e.g. HelloP@ssword1June1st becomes HelloP@ssword1June8th; it just needs to rotate to meet the auditor's requirement. Any idea how I would tackle that?
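If scripting this, the Splunk CLI can change a local user's password non-interactively, so a weekly scheduled job could derive the new value from the date. A minimal sketch (user name and passwords are placeholders taken from the question):

```
# Run weekly from cron or Task Scheduler.
# Changes the 'admin' password; -auth supplies the current credentials.
splunk edit user admin -password 'HelloP@ssword1June8th' -auth admin:'HelloP@ssword1June1st'
```

The same change can also be made over the REST management port (the `services/authentication/users/<name>` endpoint accepts a `password` parameter), which avoids needing shell access on the Splunk host.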

Append eval'd streamstats to stats in table

I am trying to append an eval'd field from streamstats to other fields from a stats command within a table. The following produces results in each field except new_loss (the eval'd field from streamstats). Is this possible? My current search, which doesn't work:

    index=vdi sourcetype="vmware_pcoip" host=*
    | sort _time
    | convert ctime(_time) as "Latest Time Stamp"
    | stats last("Latest Time Stamp") as "Latest Time Stamp" last(loss_percentage) as loss_percentage last(round_trip_time_ms) as roundtrip last(rto) as rto last(quality) as quality last(avg_rx) as avgRX last(avg_tx) as avgTX by host
    | streamstats current=f window=1 global=f last(bw_limit) as old_bw_limit by host
    | eval new_loss=if(bw_limit
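One thing worth checking: after `stats ... by host` there is only one result row per host, so a following `streamstats current=f window=1 ... by host` has no earlier row in the same host group to read, and its output (and anything eval'd from it) comes back null. A sketch that computes the previous value before aggregating instead (field names are taken from the question; the final eval is only illustrative, since the original search was truncated):

```
index=vdi sourcetype="vmware_pcoip" host=*
| sort 0 _time
| streamstats current=f window=1 last(bw_limit) as old_bw_limit by host
| stats last(loss_percentage) as loss_percentage
        last(bw_limit)       as bw_limit
        last(old_bw_limit)   as old_bw_limit
        by host
| eval new_loss=if(bw_limit=old_bw_limit, loss_percentage, 0)
```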

Multiple Channel posts with Slack Notification Alert app

I am trying to configure the slack notification alert app for multiple slack channels. There seems to be only one option for a webhook URL but we need to be able to configure multiple webhook URLs to be able to post to multiple channels in the same organization/workspace... Anyone have a solution to this?

How to display "0" instead of "No Results Found"

Hi guys! I have the query below for a single-value dashboard panel. It counts the total daily error duration of the system. My problem is that when there is no error, it displays "No Results Found" instead of "00:00:00" or "0". How can I fix this?

    | stats sum(DURATION) AS "DURATION"
    | eval secs=DURATION%60, mins=floor((DURATION/60)%60), hrs=floor((DURATION/3600)%60)
    | eval HOURS=if(len(hrs)=1,"0".tostring(hrs), tostring(hrs)), MINUTES=if(len(mins)=1,"0".tostring(mins), tostring(mins)), SECONDS=if(len(secs)=1,"0".tostring(secs), tostring(secs))
    | eval Time=HOURS.":".MINUTES.":".SECONDS
    | fields + Time
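When the base search matches no events, `stats sum(DURATION)` emits no rows, so everything downstream has nothing to work on. One common pattern is `appendpipe`, which can append a default row only when the result set is empty. A sketch built on the question's query (only the final lines shown):

```
... | eval Time=HOURS.":".MINUTES.":".SECONDS
| fields + Time
| appendpipe [ stats count | eval Time="00:00:00" | where count=0 | fields Time ]
```

The subpipeline inside `appendpipe` runs over the current results: `stats count` always returns a row, the `where` keeps that row only in the zero-result case, and the single-value panel then shows 00:00:00.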

Invalid search or missing required fields for thresholding

Hi, I'm using an ad hoc search for a glass table. When I run the search by itself, I get the value that I want, but in the glass table an error appears: "invalid search or missing required fields for thresholding". Any idea or opinion on where I can start to look?

How to find field data that does not match expected output

I am collecting data from a field that should contain a 9-digit number. I am finding instances where this field is blank or contains alphanumeric characters. In order to quantify the issue (and identify this other content), could anyone advise what search query I can use to identify the events where the field does NOT contain a 9-digit number, please?
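A sketch using `match()` with an anchored regex (the field name `my_field` and the index are placeholders for the real ones):

```
index=my_index
| where isnull(my_field) OR NOT match(my_field, "^\d{9}$")
| stats count by my_field
```

The explicit `isnull()` test matters because `match()` against a missing field evaluates to null rather than false, so blank events would otherwise be silently dropped by the `where`. The `regex` command (`| regex my_field!="^\d{9}$"`) is a shorter alternative, but it may treat events where the field is entirely absent differently, so it is worth testing both against the data.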

Getting an error message while accessing the Splunk Add-on for AWS via the Splunk console

Hi All, I have deployed the Splunk Add-on for AWS on one of our heavy forwarder instances, but when I try to open the add-on it throws an error. Kindly guide me in fixing this issue. Error details:

    An error occurred while reading the page template. See web_service.log for more details
    View more information about your request (request ID = 5a55ceffbf7f26b0399490) in Search

Source and sourcetype filtering no longer working after upgrade

After upgrading from Splunk 6.2 to 6.6.3, with large existing indexes, any search by either source or sourcetype no longer works, i.e. "No results found. Try expanding the time range." Both fields are indeed present in all events, as can be seen when not filtering in the search line. Even statistics work: if I run `* | stats count by source`, I get a perfect list of all sources with their event counts. But still, clicking on a source and choosing "Add to search" adds it to the search line and returns an empty result. Any idea where it goes wrong? I do find some errors in the logs, such as:

    WordPositionData - couldn't find tab delim

and warnings:

    reason='couldn't parse hash code

Can this be a reason? Thanks
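The `WordPositionData` errors suggest damaged index (.tsidx) files, which would fit the symptom here: raw-event scans and `stats` still work, while term-based filters such as `source=...` return nothing. Splunk ships an `fsck` CLI mode that can scan and repair buckets; a sketch (exact flags vary somewhat by version, so check `splunk fsck --help` on the affected instance first):

```
# Scan all buckets in all indexes for tsidx problems
splunk fsck scan --all-buckets-all-indexes

# Repair what the scan flagged
splunk fsck repair --all-buckets-all-indexes
```

Repairing large indexes can take a long time, so it is worth scanning first and repairing only the indexes that report damage.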

How do I send email to Splunk?

How to configure inputs.conf to send data from 1 directory to 2 different clusters with different index/sourcetype

We have a scenario where we need to forward data from one directory to two different indexer clusters. While this is achievable through TCP routing in inputs.conf, I believe that solution only works if everything else remains the same in the monitoring stanza. We need to send the data to the two clusters with different index/sourcetype configurations. Is this possible using the same inputs.conf file? We have observed that setting up two different stanzas for the same monitored directory results in only one of the stanzas being respected. Below is a description of the configuration:

    [monitor:///A/B/C]
    index = index1
    sourcetype = st1
    _TCP_ROUTING = cluster1

    [monitor:///A/B/C]
    index = index2
    sourcetype = st2
    _TCP_ROUTING = cluster2

The above configuration resulted in the data flowing only to cluster2. We tried differentiating the two stanzas by putting an asterisk at the end of the directory name, but it didn't make a difference.
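Duplicate `[monitor://...]` stanzas collapse into one, so a common alternative is to clone the events at parse time and route each copy separately. A sketch using `CLONED_SOURCETYPE` plus index and routing overrides (stanza names are hypothetical; index-time transforms like these run where parsing happens, e.g. on a heavy forwarder):

```
# props.conf
[st1]
TRANSFORMS-clone_and_route = clone_to_st2, route_st1

[st2]
TRANSFORMS-route = route_st2, set_index2

# transforms.conf
[clone_to_st2]
REGEX = .
CLONED_SOURCETYPE = st2

[route_st1]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cluster1

[route_st2]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cluster2

[set_index2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index2
```

inputs.conf then keeps a single stanza (`index = index1`, `sourcetype = st1`), and `cluster1`/`cluster2` must exist as `tcpout` groups in outputs.conf.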

Grouping and adding the group values

I have these fields: Server, LUNs, Application, Used in GB, Available in GB. How can I group by the server column and then add up the total and used columns for each group? Every server has multiple LUNs and I want to add up the totals by server name, but I only get one server back, with the sum of all used GB, when I try this line:

    | stats sum(used_gb) by server
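`stats sum(...) by server` should already return one row per distinct server value, so a single collapsed row usually means the `server` field is not extracted the way you expect (e.g. the field name differs in case). A sketch summing both columns per server (the index and field names are assumptions based on the question):

```
index=storage sourcetype=luns
| stats sum(used_gb)      AS total_used_gb
        sum(available_gb) AS total_available_gb
        values(LUNs)      AS luns
        by server
```

If that still collapses to one row, run `| stats count by server` first to confirm the field really carries one value per server.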

How to restrict a user to a single dashboard?

I want to make a role such that a user can only view a single dashboard. They should not be able to access any other page, including any settings pages or the search app. Is there a way to achieve this?
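One common approach combines a locked-down role with app permissions (role, app, and index names below are hypothetical): give the role no imported roles so it inherits almost nothing, grant only the capabilities and indexes the dashboard's searches need, and remove the role's read access from every other app, including the Search app.

```
# authorize.conf -- a minimal role; grants only what the
# dashboard's underlying searches require
[role_dashboard_viewer]
importRoles =
search = enabled
srchIndexesAllowed = main
```

With the dashboard's app set as the user's default app and all other apps unreadable by this role, the single dashboard becomes the only navigable page. Settings pages are already gated by capabilities the role does not have.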

Custom fields of a query are not showing up in the Splunk Machine Learning Toolkit.

In the search module, I have extracted 2 custom fields for a query and they show up after some time in that module itself. However, custom fields are not present when I put the same query in the Splunk Machine Learning Toolkit. What can be done to see those 2 custom fields in the toolkit?

Splunk AWS integration unable to fetch instance details

Hi Team, we are using the latest Splunk Enterprise version, 7.1, integrated with our AWS environment. On the overview page I can see the configuration details, but other than that no information is available. We also noticed the splunk_aws_inspector logs reporting the error below. Please help in fixing this issue.

    2018-01-10 09:09:27,065 level=ERROR pid=4812 tid=MainThread logger=splunk_ta_aws.modinputs.inspector pos=util.py:__call__:163 | | message="Failed to execute function=run, error=Traceback (most recent call last):
      File "E:\Splunk\etc\apps\Splunk_TA_aws\bin\3rdparty\splunktalib\common\util.py", line 160, in __call__
        return func(*args, **kwargs)
      File "E:\Splunk\etc\apps\Splunk_TA_aws\bin\splunk_ta_aws\modinputs\inspector\__init__.py", line 53, in run
        _do_run()
      File "E:\Splunk\etc\apps\Splunk_TA_aws\bin\splunk_ta_aws\modinputs\inspector\__init__.py", line 30, in _do_run
        aiconf.AWSInspectorConf, "aws_inspector", logger)
      File "E:\Splunk\etc\apps\Splunk_TA_aws\bin\splunk_ta_aws\common\ta_aws_common.py", line 136, in get_configs
        tasks = conf.get_tasks()
      File "E:\Splunk\etc\apps\Splunk_TA_aws\bin\splunk_ta_aws\modinputs\inspector\aws_inspector_conf.py", line 60, in get_tasks
        _cleanup_checkpoints(tasks, config)
      File "E:\Splunk\etc\apps\Splunk_TA_aws\bin\splunk_ta_aws\modinputs\inspector\aws_inspector_conf.py", line 119, in _cleanup_checkpoints
        internals = store.get_state("internals")
      File "E:\Splunk\etc\apps\Splunk_TA_aws\bin\3rdparty\splunktalib\state_store.py", line 155, in get_state
        state = json.load(jsonfile)
      File "E:\Splunk\Python-2.7\Lib\json\__init__.py", line 291, in load
        **kw)
      File "E:\Splunk\Python-2.7\Lib\json\__init__.py", line 339, in loads
        return _default_decoder.decode(s)
      File "E:\Splunk\Python-2.7\Lib\json\decoder.py", line 364, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "E:\Splunk\Python-2.7\Lib\json\decoder.py", line 382, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded

Regards, Shweta

How to change table column headings?

The search command I have used is:

    | chart list(field1) as A list(field2) as B by name month

The result I am getting looks like this:

    Name   A : JAN   A : FEB   A : MAR   B : JAN   B : FEB   B : MAR
    abc
    xyz

Desired result:

    NAME   JAN : A   JAN : B   FEB : A   FEB : B   MAR : A   MAR : B
    abc
    xyz

I know why I am getting this result, but is there any way to change the column names to the desired form? Thanks in advance.
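Both `rename` and `table` accept wildcards, so the split columns can be renamed and then reordered without listing every month/field pair. A sketch building on the question's chart (field names are taken from the question):

```
| chart list(field1) as A list(field2) as B by name month
| rename "A : *" AS "* : A", "B : *" AS "* : B"
| table name "JAN : *" "FEB : *" "MAR : *"
```

The `rename` turns each header such as "A : JAN" into "JAN : A", and the wildcarded `table` then groups the columns month-first.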

Can I use a wildcard for the index name in indexes.conf?

I am trying to use a wildcard for the index name, is that possible? For example, I have indexes whose names follow a pattern, with the same hot and cold configuration but different max data sizes:

    [index_name_1]
    maxdatasize = 1000
    homePath.maxDataSizeMB = 0
    coldPath.maxDataSizeMB = 0

    [index_name_2]
    maxdatasize = 2000
    homePath.maxDataSizeMB = 0
    coldPath.maxDataSizeMB = 0

    [index_name_3]
    maxdatasize = 3000
    homePath.maxDataSizeMB = 0
    coldPath.maxDataSizeMB = 0

Is it possible to use a wildcard in the index name so the same hot and cold configuration applies to all of them? Like this:

    [index_name*]
    homePath.maxDataSizeMB = 0
    coldPath.maxDataSizeMB = 0
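As far as I know, wildcards are not supported in indexes.conf stanza names, but .conf files do support a `[default]` stanza whose settings apply to every stanza in the file, which covers this case. A sketch (index names as in the question; note the documented spelling of the size setting is `maxDataSize`):

```
# indexes.conf
[default]
homePath.maxDataSizeMB = 0
coldPath.maxDataSizeMB = 0

[index_name_1]
maxDataSize = 1000

[index_name_2]
maxDataSize = 2000

[index_name_3]
maxDataSize = 3000
```

Each index stanza then carries only the setting that actually differs.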

Unable to set up an SSL self-signed cert on a Windows forwarder and Windows indexer, both running Windows 2k12

1. I have followed the Splunk instructions; on my Windows indexer server I created a CAroot.pem file.
2. I also created a myNewServerCertificate.pem file per the instructions, combining the three files below:

       type myServerCertificate.pem myServerPrivateKey.key myCACertificate.pem > myNewServerCertificate.pem

3. I also created a myNewForwarderCertificate.pem file per the instructions, combining the three files below:

       type myForwarderCertificate.pem myForwarderPrivateKey.key myCACertificate.pem > myNewForwarderCertificate.pem

4. On my indexer I pointed inputs.conf to the new cert, but when I look in the logs it is not using the new cert; instead it falls back to the default cert, server.pem. My inputs.conf:

       [splunktcp-ssl:9997]
       disabled = 0

       [SSL]
       serverCert = $SPLUNK_HOME/etc/auth/mycerts/myNewServerCertificate.pem
       sslPassword = password123
       requireClientCert = false

5. On my forwarder, here are my outputs.conf and server.conf.

   outputs.conf:

       [tcpout]
       defaultGroup = splunkssl

       [tcpout:splunkssl]
       server = 192.168.43.140:9997

       [tcpout-server://192.168.43.140:9997]
       clientCert = $SPLUNK_HOME/etc/auth/mycerts/myNewForwarderCertificate.pem
       sslPassword = $1$F9PZO6wn/g==
       sslVerifyServerCert = false

   server.conf:

       [sslConfig]
       serverCert = $SPLUNK_HOME\etc\auth\mycerts\myCACertificate.pem
       password = $1$F9PZO6wn/g==
       caCertFile = myCACertificate.pem
       caPath = $SPLUNK_HOME\etc\auth\mycerts
       sslPassword = $1$F9PZO6wn/g==

I can't seem to get SSL going with the self-signed cert; can anybody shed some light for me? Thanks.

Cloud-based Splunk Enterprise Security

What is the minimum GB/day of ES I can purchase for a cloud-based deployment? I have a 20 GB/day Splunk Enterprise licence and I want to add the ES module. Thanks

