Channel: Questions in topic: "splunk-enterprise"

deployment from SH

Hi All, is there a way to make deployments from the search head (SH) without going through the cluster master (CM)? How can we do it, and what settings do we have to change or configure for deploying? Thanks, M&A
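
For a search head cluster, apps are normally pushed from the deployer rather than the cluster master. A minimal sketch of that route, assuming a deployer is already configured; the target URI and credentials are placeholders:

    # Stage the app on the deployer, then push the bundle to the SHC members
    cp -R myapp $SPLUNK_HOME/etc/shcluster/apps/
    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

Note that indexer-cluster app deployment still goes through the cluster master's master-apps; the deployer only covers search head cluster members.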

resizing index instance volumes

Hi, we recently added new volumes and new indexes on the indexer instances. Now I need to increase the size of the new volume and reduce the size of the main index, which sits on the old volume. I just started working with Splunk and inherited this cluster. Is there any document that can point me to administering cluster volumes? Thanks, NP
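
Volume and index sizes are controlled in indexes.conf, which on an indexer cluster is normally edited under the cluster master's master-apps and pushed to the peers. A sketch of the two relevant settings; stanza names, paths, and sizes are placeholder assumptions:

    # indexes.conf (deployed via the cluster master)
    [volume:new_volume]
    path = /data/new_volume
    # raise the cap on the new volume (MB)
    maxVolumeDataSizeMB = 800000

    [main]
    # reduce the footprint of the main index on the old volume (MB)
    maxTotalDataSizeMB = 200000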

Wrong timestamp for Splunk search events

Please see the events below. Comparing _time with the timestamp in _raw, it looks as though the minutes of the _raw timestamp are being assigned as hours, and the seconds as minutes, in _time:

    _time                          _raw
    2017-10-10T16:09:00.000-0400   [10/10/2017 9:16:09] insert into #temp_ord_version values ( *****, ******, 169, 169 )
    2017-10-10T16:09:00.000-0400   [10/10/2017 9:16:09] insert into #temp_ord_version values ( *****, ****, 18, 18 )
    2017-10-10T16:09:00.000-0400   [10/10/2017 9:16:09] insert into #temp_ord_version values ( *****, *****, 20, 20 )

The _raw timestamp is [10/10/2017 9:16:09], yet _time becomes 2017-10-10T16:09:00.000-0400: the 16 (minutes) ends up as the hour and the 09 (seconds) as the minutes.
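
This pattern usually means automatic timestamp recognition is latching onto the wrong digits; pinning extraction down in props.conf with an explicit prefix and format typically fixes it. A sketch, with the sourcetype name as an assumption:

    # props.conf on the indexers (or heavy forwarder)
    [my_sql_log]
    TIME_PREFIX = ^\[
    TIME_FORMAT = %m/%d/%Y %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19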

transit times?

I am trying to monitor in Splunk the progress of certain IDs, which come from two different sources but live in the same index. From source one there is a DB query which is executed once a day. It generates something like this:

    ID  Date_1      Date_2      Status
    1   2-1-2017    23-9-2016   Y
    2   23-3-2017   16-1-2017   x
    3   16-6-2017   4-3-2017    y
    4   12-12-2016  01-10-2017  y

The next day it may generate this:

    ID  Date_1      Date_2      Status
    1   1-1-2017    23-9-2016   X
    2   23-3-2017   16-1-2017   x
    3   16-6-2017   4-3-2017    y

In total, after two runs the index contains:

    ID  Date_1      Date_2      Status
    1   1-1-2017    23-9-2016   X
    1   2-1-2017    23-9-2016   Y
    2   23-3-2017   16-1-2017   x
    2   23-3-2017   16-1-2017   x
    3   16-6-2017   4-3-2017    y
    4   12-12-2016  01-10-2017  y

As you can see, ID 2 has no changes but is of course still inserted twice, and ID 4 has disappeared the next day because it has moved to another process. ID 4 is now in the next process and will show up in the other query, from the second source. The output for that will be something like:

    ID  Date_3      Code
    4   10-10-2017  A4

I want to show the transit time of each unique ID over time, counting from today (now()).
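
As a starting point, one could take the most recent record per ID across both sources and compute its age from now(); a sketch, assuming the dates parse as day-month-year and the index name is a placeholder:

    index=my_index
    | eval event_date=coalesce(strptime(Date_1, "%d-%m-%Y"), strptime(Date_3, "%d-%m-%Y"))
    | stats max(event_date) as last_seen by ID
    | eval transit_days=round((now() - last_seen) / 86400, 1)
    | table ID transit_days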

Splunk Enterprise not recognizing Cisco ESA add-on App

Hi All, I'm trying to install the Cisco ESA Add-on (https://splunkbase.splunk.com/app/1761/). However, when setting this up in Cisco Security Suite, it doesn't recognize the app after I've uploaded it - please see the screenshot. It does, however, recognize it when configuring a data input. Could you please advise? Thanks!

![alt text][1]

[1]: /storage/temp/217837-splunk-esa-security-suite-setup-fail.jpg

How to show two decimal places without changing values, so that an integer like 52 displays as 52.00?

Hi, given a=0.54689556898, b=1.25698, c=0.5 and d=51, I want output like a=0.54, b=1.25, c=0.50, d=51.00. How should the query be written? I tried this query, but I am not getting the right result:

    | makeresults
    | eval Total=0.8
    | rex field=Total "(?<FieldB>.*)\.(?<FieldC>.*)"
    | eval EMP2=substr(FieldC, 0, len(FieldC) - 1)
    | eval Result=FieldB.".".EMP2
    | eval Result1=round(Result, 3)
    | eval EMP5=substr(Result1, 0, len(Result1) - 1)
    | fields - FieldC, EMP2, FieldB, Result, Result1, _time
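
Truncating (rather than rounding) to two decimals can be done arithmetically, and printf, available as an eval function from Splunk 7.0, keeps the trailing zeros; a sketch under those assumptions:

    | makeresults
    | eval a=0.54689556898, b=1.25698, c=0.5, d=51
    | foreach a b c d
        [ eval <<FIELD>> = printf("%.2f", floor('<<FIELD>>' * 100) / 100) ]

This yields a=0.54, b=1.25, c=0.50, d=51.00.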

Anonymising IPs but keeping a unique value for each IP

Hi, we want to anonymise IPs. So far we can get it to replace all IPs with x.x.x.x, BUT we want to replace each IP with a unique value, so that we can count unique visitors and look up what they were doing without seeing any customer information. Ideally, we want to do something like running the matched IP through sha256sum:

    REGEX = (?m)^(.*)clientip=\d+\.\d+\.\d+\.\d+ (.*)$
    FORMAT = $1 ***sha256sum of the IP*** $2
    DEST_KEY = _raw
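
An index-time transform cannot call a hash function in FORMAT, so this exact approach won't work at ingest. At search time, though, recent Splunk versions provide a sha256() eval function; a sketch of a search-time alternative, with the field extraction as an assumption about the event layout:

    ... | rex field=_raw "clientip=(?<clientip>\d+\.\d+\.\d+\.\d+)"
    | eval client_token=sha256(clientip)
    | fields - clientip
    | stats dc(client_token) as unique_visitors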

search running low on memory

My operations folks contacted me with a memory alert on my search head. Do I need to get more memory added? This is a Linux VM.

    $ free -m
                 total    used    free  shared  buffers  cached
    Mem:         11908   10992     915       1      109    4511
    -/+ buffers/cache:    6371    5536
    Swap:         4095    3551     544

    Swap: 4194300k total, 3636256k used, 558044k free, 4201620k cached

      PID USER   PR NI  VIRT   RES  SHR S  %CPU %MEM     TIME+ COMMAND
     7835 splunk 20  0 6922m  629m  14m S 122.4  5.3   4443:12 splunkd
     3825 splunk 20  0 1190m   60m 7024 S  46.0  0.5 140:45.40 python
    10090 splunk 20  0  686m  207m 3056 S   6.6  1.7 207:40.26 splunkd
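
To see how much of that memory Splunk itself consumes over time, the _introspection index is one option; a sketch, assuming the default resource-usage collection is running and that the field names below match the splunk_resource_usage data on your version:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess
    | timechart span=10m max(data.mem_used) by data.process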

What is the best way to determine if a universal forwarder (UF) is running without CLI access?

Hi, I'm looking for options to validate that a universal forwarder (UF) is running on servers without actually logging into them (we are losing SSH access to all servers). Any recommendations?
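
A running forwarder continuously phones home to the indexers, so its internal logs are a reasonable proxy for "is it up" without any SSH access. A sketch that flags hosts gone quiet, where the 15-minute threshold is an arbitrary assumption:

    index=_internal sourcetype=splunkd
    | stats latest(_time) as last_seen by host
    | eval minutes_since=round((now() - last_seen) / 60, 0)
    | where minutes_since > 15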

Has anyone integrated Splunk Enterprise with TEMIP (Telecommunications Management Information Platform)?

Hi all, I am trying to integrate Splunk with TEMIP (Telecommunications Management Information Platform), the ticketing tool used in our Network Operation Centre (NOC). I want to integrate the two so that all Splunk server-related alerts arrive in TEMIP, which can then proactively raise a Trouble Ticket (TT) to be worked and resolved.

What's the best way of getting data from our Splunk servers?

Hi guys, just a few quick questions about getting Splunk server data into Splunk! Our Splunk environment collects a large amount of security data from thousands of sources, yet we don't collect any security data from the Splunk servers themselves (they run on Red Hat Linux). I was thinking of adding all of our servers (cluster master, license master, deployer, etc.) as clients of our deployment server and creating a server class with the *nix TA to ingest the relevant host data we want, as sketched below. Is this the best solution, or does anyone have better ideas? Also, can the deployment server be a client of itself? If not, how do we get data from it to our indexer cluster? And is the indexer cluster okay with forwarding data to itself? Any help would be appreciated. Cheers!
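
A minimal sketch of the server-class idea, assuming the *nix TA (Splunk_TA_nix) is staged under deployment-apps and with example.com host names standing in for the real ones:

    # serverclass.conf on the deployment server
    [serverClass:splunk_infrastructure]
    whitelist.0 = cm.example.com
    whitelist.1 = lm.example.com
    whitelist.2 = deployer.example.com

    [serverClass:splunk_infrastructure:app:Splunk_TA_nix]
    restartSplunkd = true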

How to not evaluate something during a certain time period?

So, I have a search query that calculates a field, but I want to check whether an event falls within a certain time period and, if so, calculate that field differently. I have a start time and an end time, for example 10/13/2017 12:10:00 and 10/20/2017 14:20:00. I want to change the eval so that if the event time falls inside that window it uses a different calculation from the current one. Basically: eval field=if("in time frame", new calculation, old calculation). Thanks
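
Converting the window boundaries with strptime() and comparing against _time is one way to express that if(); a sketch, where new_calculation and old_calculation stand in for the real expressions:

    | eval window_start=strptime("10/13/2017 12:10:00", "%m/%d/%Y %H:%M:%S"),
           window_end=strptime("10/20/2017 14:20:00", "%m/%d/%Y %H:%M:%S")
    | eval field=if(_time >= window_start AND _time <= window_end, new_calculation, old_calculation)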

Is it possible to use a single rex command to deal with multiple scenarios?

Hello All, I am trying to write a single rex command that will handle a number of different field entries. Basically I have an effort being stored (painfully) in hours and minutes, but the values for the field can vary. Here is an example of the possibilities:

    Case | Effort
    1    | 30 minutes
    2    | 1 hour
    3    | 1 hour 30 minutes
    4    | 2 hours
    5    | 2 hours 30 minutes

What I'd like to do is write a single rex that extracts the hour and minute values when they are available. So far I've written one that handles cases 3 and 5:

    rex field=Effort "(?<hours>\d+)\s\w+\s(?<minutes>\d+)\s\w+"

What I can't get are cases 1, 2 and 4. I mean, I can rewrite the rex to only match those cases, but I don't know how to combine them... Do you have any pointers? Thank you and best regards, Andrew
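
Making both capture groups optional lets a single pattern cover all five cases; a sketch, with hours/minutes as assumed group names:

    | rex field=Effort "(?:(?<hours>\d+)\s+hours?)?\s*(?:(?<minutes>\d+)\s+minutes?)?"
    | eval effort_minutes = tonumber(coalesce(hours, "0")) * 60 + tonumber(coalesce(minutes, "0"))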

Search payload sent with POST requests to a particular endpoint in the past

I have the following query, but I am not sure how to get at the payload that was sent to the request_url:

    index=fastly sourcetype=fastly_syslog_json fastly_service_name=www.mysite.com request_type=POST request_url="/api/v1/myPostEndpoint"
    | fields {what to put here?}

I am hoping there is a way to inspect the payloads that have been POSTed to that endpoint over a range of time, in order to build a report on a particular field within those payloads.
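
Whether this is answerable from the index depends on the log source: access logs generally record the request line and headers, not POST bodies, so the payload has to be present in the event already. If it is, and assuming a JSON field named request_body (a placeholder), spath could pull fields out of it:

    index=fastly sourcetype=fastly_syslog_json fastly_service_name=www.mysite.com request_type=POST request_url="/api/v1/myPostEndpoint"
    | spath input=request_body
    | stats count by some_payload_field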

No Dome9 input options after installing Dome9

I installed the Splunk AWS app, the add-on, and the Dome9 app. Going through the configuration, I am unable to select a Dome9 type under data inputs.

Is it possible to alias a command to another one?

All, we're slowly moving off of index=java to index=applicationlogs for a few reasons. Is there a way to alias index=java to index=applicationlogs for users?
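
There is no true index alias, but a search macro can paper over the transition; a sketch, with the macro name as an assumption:

    # macros.conf
    [applogs]
    definition = (index=java OR index=applicationlogs)

Users then search `applogs` instead of naming either index directly.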

How to calculate appropriate levels for maxThreads and maxSockets, in the httpServer stanza of server.conf, for a HEC collector instance?

Is there a formula to make a stab at appropriate levels for maxThreads and maxSockets, in the httpServer stanza of ~/etc/system/default/server.conf, for a HEC collector instance? Our current setting is automatic, which seems to set the limits too low:

    # Automatically tune these limits:
    maxThreads = 0
    maxSockets = 0
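
One note: etc/system/default should never be edited directly; overrides belong in etc/system/local (or an app). I'm not aware of a published formula, so the values below are illustrative assumptions to be load-tested rather than recommendations:

    # $SPLUNK_HOME/etc/system/local/server.conf
    [httpServer]
    # explicit caps instead of the automatic (0/0) tuning
    maxThreads = 1000
    maxSockets = 2048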

compare response time from yesterday to today

Trying to compare response time from yesterday to today. This search seems to be working, but it is very, very slow. Any suggestions on how to improve it?

    sourcetype=prd_banking_server Bank_Code=108 earliest=@d latest=now
    | eval Duration_Sec=duration/1000
    | multikv
    | eval ReportKey="Today"
    | append
        [ search sourcetype=prd_banking_server Bank_Code=108 earliest=-1d@d latest=@d
        | eval Duration_Sec=duration/1000
        | multikv
        | eval ReportKey="Yesterday"
        | eval _time=_time+(60*60*24) ]
    | timechart span=60m avg(Duration_Sec) by ReportKey
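
The append subsearch scans the data twice; a single pass over both days with a computed ReportKey avoids that. A sketch, assuming the same fields as above:

    sourcetype=prd_banking_server Bank_Code=108 earliest=-1d@d latest=now
    | multikv
    | eval Duration_Sec=duration/1000
    | eval ReportKey=if(_time >= relative_time(now(), "@d"), "Today", "Yesterday")
    | eval _time=if(ReportKey="Yesterday", _time + 86400, _time)
    | timechart span=60m avg(Duration_Sec) by ReportKey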

eventtype based panel

In a search, eventtype=* | stats count by eventtype works. However, the query below doesn't work in a dashboard panel. Any suggestions, please?

    index=$111111$ $22222$ eventtype=* | stats count by eventtype
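
The usual first check is that both tokens are set and resolve to valid SPL, quoting the index token; a sketch keeping the token names from the question:

    index="$111111$" $22222$ eventtype=* | stats count by eventtype

If $22222$ can be empty, giving its form input a default value (e.g. *) avoids generating a broken search string.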

Splunk and AIDE -- How do I ignore the first line of an AIDE log file?

Right now AIDE runs a check every 5 minutes and comes back with the same results each time for files Added, Removed, or Changed. The issue is that the timestamp changes, so the same results are being indexed over and over even though nothing has changed. I would like to prevent indexing the same log file, but Splunk sees the log as a different file because the timestamp on the first line changes. Is there a way to stop Splunk from indexing the AIDE logs except when there is a change in the rest of the log below the timestamp? Example AIDE log:

    Start timestamp: 2016-06-11 01:53:00

    Summary:
      Total number of files:  1116
      Added files:            0
      Removed files:          1
      Changed files:          3

    ---------------------------------------------------
    Removed files:
    ---------------------------------------------------
    removed: /var/log/aide/aideCIM.log

    ---------------------------------------------------
    Changed files:
    ---------------------------------------------------
    changed: /var/log/aide
    changed: /var/log/aide/aide.log
    changed: /var/log/aide/aide_files.log
    ---------------------------------------------------
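
Splunk tracks files by a CRC of their first bytes, so a changing first line makes every run look like a brand-new file. The cleaner fix is to strip the timestamp line before Splunk reads the file; as a partial index-time mitigation, the timestamp line can at least be routed to the nullQueue, assuming line breaking puts it in its own event. A sketch, with the sourcetype name as an assumption (note this does not stop Splunk from re-reading the file):

    # props.conf; sourcetype name is a placeholder
    [aide]
    TRANSFORMS-drop_ts = drop_aide_timestamp

    # transforms.conf
    [drop_aide_timestamp]
    REGEX = ^Start timestamp:
    DEST_KEY = queue
    FORMAT = nullQueue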

