Channel: Questions in topic: "splunk-enterprise"

Indexer down: On "Pause the Monitoring", will forwarders pick up where they left off once the indexer is online?

We have a small Splunk infrastructure: one indexer, one search head, and 300 machines with forwarders installed. Our indexer has gone down with hardware issues. Our log traffic is less than 10GB a day over the weekends and up to 60GB during the weekdays. The indexer went down on Saturday night. From what I've read, "if the indexer refuses data (full or down) then the forwarders fill their memory queue up to 2MB (default), then pause the monitoring". My question is: on "pause the monitoring", will the forwarders pick up where they left off once the indexer is back online?
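
For monitored files this generally works out, because the forwarder records its read offset for each tracked file (the fishbucket) and resumes tailing from that position once its output unblocks; data written in the meantime is read on catch-up as long as the files are not rotated away first. For reference, the forwarder's in-memory output queue is sized with maxQueueSize in outputs.conf. A minimal sketch, with a hypothetical indexer host and an illustrative queue size (not a recommendation):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Hypothetical indexer address
server = splunk-idx01.example.com:9997
# In-memory output queue; when the indexer is unreachable this fills up,
# after which file monitoring pauses until the indexer accepts data again
maxQueueSize = 10MB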

How do I change a password in the Command Line Interface (CLI) without typing it in cleartext?

Hello, I was wondering how you change a password using the CLI without typing it into the command in cleartext? This is primarily because PowerShell commands are logged in our environment, so any password change would be written to the log. It is not feasible to disable PowerShell logging just to change the password, so this will have to be done in the CLI. Does anyone know of a method to accomplish this? Thank you in advance!

Is it possible to pass event id to Http Event Collector (HEC) to avoid indexing if same event is sent twice?

My app notifies Splunk with a call to HEC on data changes. As the data is actually stored as a series of events, it is quite straightforward to use Splunk for analysis. But, due to some internal reasons, it is possible that the same event will be delivered to HEC twice, and it is crucial to have only one copy of the event stored in Splunk in this case. The most obvious way to achieve this is to post some unique id with the event and have Splunk ignore the event if its id matches any previously indexed event. But I failed to find anything like this in the documentation.
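
As far as I know there is no built-in HEC option to drop an event whose id has already been indexed, so a common workaround is to include your own id field in the payload and suppress duplicates at search time instead. A minimal sketch, assuming a hypothetical index/sourcetype and an `event_id` field sent with each HEC event:

index=myapp sourcetype=app:change_events
| dedup event_id
| ...

The duplicate copies still consume license and disk, but every search that starts this way sees each event only once.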

How do you find the number of users logged in at any specific hour?

Here are two sample events:

Event 1 - 2018-09-10 11:17:57,982 INFO [http-nio-127.0.0.1-8085-exec-130] [BreakssFogFilter] BF27462 GET https://rambo.ixngames.com/start.action 7485905kb

Event 2 - 2018-09-10 11:10:55,644 INFO [http-nio-127.0.0.1-8085-exec-51] [BreakssFogFilter] ZD07220 POST https://rambo.ixngames.com/userLogout.action 1615031kb

Event 1 indicates that a user just logged in. Event 2 indicates a user logged out. Around 30 similar events, with slightly different formats, get created when a user logs in or logs out, and they also contain the user name. We are trying to figure out how many distinct users are logged in to the server at any specific hour by analyzing events in the formats above.
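
One way to approximate this is to turn logins into +1 and logouts into -1, keep a running balance, and report the high-water mark per hour. A minimal sketch, assuming the user id is the token after [BreakssFogFilter], that start.action / userLogout.action reliably mark login and logout in the two sample formats (the other ~30 variants would need their own extraction), and hypothetical index/sourcetype names:

index=app sourcetype=app:access ("start.action" OR "userLogout.action")
| rex "\[BreakssFogFilter\]\s+(?<user>\S+)\s+\S+\s+(?<url>\S+)"
| eval delta=if(like(url, "%userLogout.action%"), -1, 1)
| sort 0 _time
| streamstats sum(delta) as logged_in
| timechart span=1h max(logged_in) as concurrent_users

Note this counts open sessions rather than strictly distinct users; if the same user can hold several sessions at once, you would need to dedup per user before the running sum.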

How do you display response time results sorted in descending order?

I am trying to display response times in a chart for my services. But how do I display the response time results in the chart in descending order (highest number first)?

| eval Date=strftime(_time, "%Y-%m-%d")
| chart avg(response_time) over services by Date
| rename * as avg_*
| rename avg_services as services
| foreach avg_* [eval "<<FIELD>>" = round('<<FIELD>>',2)]
| rename avg_* as *
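
Assuming "descending" means the service rows should be ordered by their overall average, one option is to total the numeric columns per row, sort on that total, and then drop it. A sketch continuing the search above (row_total is just a throwaway field name):

| addtotals fieldname=row_total
| sort 0 -row_total
| fields - row_total

addtotals only sums the numeric per-date columns, so the services column is unaffected, and sort 0 keeps all rows rather than the default limit.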

Is there a document for best practices in decommissioning an index (IDX) cluster?

Is there a document for best practices in decommissioning an IDX cluster? We are going back to standalone indexers. I found this one, but it doesn't seem to fit their request: http://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Decommissionasite

What is the best way to monitor a random directory?

With a clustered index environment, we have typically used the deployment server as the push mechanism to the universal forwarders, etc. Now, on random servers, we want to monitor for specific actions in directories not covered by a previous add-on, for example the Linux add-on. I want to monitor a random directory; what is the best way to accomplish this? Is using the add-monitor command individually on those servers the best way to handle this? Thanks in advance!
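
Since a deployment server is already in place, one option is to push a small app containing just an inputs.conf monitor stanza to those specific hosts via a targeted server class, instead of running the add monitor CLI by hand on each box. A minimal sketch with a hypothetical path, index, and sourcetype:

[monitor:///var/log/custom_app]
index = main
sourcetype = custom_app:log
recursive = true
disabled = false

The CLI approach works too, but the deployment-server route keeps the configuration versioned in one place and survives forwarder reinstalls.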

How do I get the difference in results from two searches over different timelines?

I want to get the top 20 errors of the day and the top 20 errors of the week, and then get the difference between the two results, i.e. new errors seen in the last 24 hours that were not seen earlier. I tried this, but it throws an error:

| multisearch
    [search ERROR earliest=-1d | top limit=20 error_field | eval type="search1"]
    [search ERROR earliest=-8d latest=-2d | top limit=20 error_field | eval type="search2"]
| eval difference = search1-search2

Error thrown: Multisearch subsearches may only contain purely streaming operations (subsearch 1 contains a non-streaming command.)
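
The error occurs because top is a transforming (non-streaming) command, which multisearch does not allow inside its subsearches. One alternative is to compute the weekly top 20 in a subsearch and exclude those values from the daily top 20 with NOT. A sketch, keeping the error_field name from the post:

ERROR earliest=-1d
| top limit=20 error_field
| search NOT
    [ search ERROR earliest=-8d latest=-2d
      | top limit=20 error_field
      | fields error_field ]

The subsearch expands to an OR of the weekly error_field values, so the outer search keeps only the daily top errors that did not appear in the prior week.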

How do you search an inputlookup for the results of your query?

I'm a little stumped by what I am trying to achieve: looking up values from a CSV based on the search results I get when performing a search. The CSV is defined as a lookup and contains field1,field2. When I search, I get a value returned that is in the format of field1 in the CSV, and I would like to display the corresponding field2 in my search results. For example: username,displayname. I've looked at the inputlookup and lookup documentation, but I am unsure how to pass results or filter a subsearch's results for the value.
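
This sounds like plain lookup enrichment rather than a subsearch: once the CSV is available as a lookup table file, the lookup command can add field2 to every result that has a field1 value. A minimal sketch assuming the file is called users.csv, the columns are username and displayname, and the search already produces a username field:

index=app sourcetype=app:auth
| lookup users.csv username OUTPUT displayname
| table _time username displayname

If you instead want to restrict the search to only the usernames present in the CSV, an inputlookup subsearch works: index=app sourcetype=app:auth [ | inputlookup users.csv | fields username ].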

Is it safe to remove excess buckets?

Hi, I see some `excess buckets` in the Bucket Status tab on the indexer clustering page. Is it safe to click the `remove` button? Does data become unavailable during the operation? Should the cluster be kept in `maintenance mode` prior to the operation? Thanks
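
For what it's worth, excess copies are extra replicated/searchable bucket copies beyond the configured replication and search factors, so removing them should not by itself make data unsearchable. If I remember correctly, the same cleanup can also be run from the cluster master CLI; treat the exact command names below as an assumption and verify them against the docs for your version (my_index is a placeholder):

splunk list excess-buckets my_index
splunk remove excess-buckets my_index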

How do you modify rows and columns in a Splunk table?

Hello, I have written a Splunk search which produces the following table:

from  to  parameter  value
A     C   bla_1      111
B     D   bla_2      222

I want to modify that table into the following:

from   to     value
A      bla_1  111
B      bla_2  222
bla_1  C      111
bla_2  D      222

Would you have any ideas on how to achieve this? Thank you.
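
One way is to fan each row out into two rows (from/parameter and parameter/to) with a multivalue field and mvexpand. A sketch assuming the field names are exactly from, to, parameter, and value, and that "|" never occurs inside those values:

| eval pair=mvappend(from . "|" . parameter, parameter . "|" . to)
| mvexpand pair
| eval from=mvindex(split(pair, "|"), 0), to=mvindex(split(pair, "|"), 1)
| table from to value

Each original row becomes two rows sharing the same value, which matches the target layout above (add a sort if the row order matters).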

VirusTotal API scan in workflow (http request)

Hello All, I am working on a solution that requires a "workflow action" to give a drop-down when searching against a "url" field, for when a search has been initiated for a user's URL/web history. We are filtering results from a security appliance for web traffic / firewall filtering. We use VirusTotal for the bulk of our URL scans for remediation. I would like to click on "Event Action (Verbose Mode)" and then click on the custom VirusTotal workflow I have created. We have a functioning WHOIS workflow action and it is working beautifully. But VirusTotal has certain restrictions on how data is fed to them via their website. I would love to have this work like the "WHOIS" search and pop the results via the VirusTotal website. I have researched all I can so far, and I do have a public API for searching if needed. Does anyone have any information on what to do next? I have listed below some examples of what VirusTotal provides:

https://www.virustotal.com/vtapi/v2/file/scan/upload_url?apikey=
https://www.virustotal.com/vtapi/v2/url/scan

Thanks Everyone!
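
If the goal is simply "click the event action and land on the VirusTotal page for that URL", a link-type workflow action like the WHOIS one is usually enough. A sketch of a workflow_actions.conf stanza, assuming the field is named url and that VirusTotal's public web search page is acceptable for your restrictions (the vtapi/v2 endpoints above expect an API request with your key, which a plain browser link is not well suited to):

[virustotal_url_search]
label = Look up $url$ on VirusTotal
fields = url
display_location = both
type = link
link.method = get
link.target = blank
link.uri = https://www.virustotal.com/gui/search/$url$

If you do need to POST to the API instead, workflow actions also support link.method = post with link.postargs.<n>.<key>/<value> pairs, but be aware that this puts your API key in every user's browser request.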

auth0 app: Can you help me configure this app with a new Heavy Forwarder?

How often do the logs rotate for the Auth0 app, if at all? We configured the Auth0 app to pull data. Is there a config to only pull data from a specific time period? We are currently ingesting logs using the Auth0 app available on Splunkbase, as part of a migration to Amazon Web Services. We built a new heavy forwarder where we have installed this app and are trying to pull the logs from it. However, we do not see any logs on the new Splunk servers. The only thing I see in the Splunk logs is:

09-10-2018 19:00:15.113 +0000 INFO ExecProcessor - message from "/opt/splunk/etc/apps/splunk-auth0/bin/auth0.sh" Modular input auth0://ProdAuth0 Indexed an Auth0 log with _id: 5b70ad69007cf15a6684a70b
09-10-2018 19:00:15.113 +0000 INFO ExecProcessor - message from "/opt/splunk/etc/apps/splunk-auth0/bin/auth0.sh" Modular input auth0://ProdAuth0 Indexed an Auth0 log with _id: 5b70ad69597a32123753217d

If I disable the data input from the old app and enable the same setting on the new server, is there a way to index only the new data?

How do you manipulate a token before passing it to a drilldown?

How do you manipulate a token before passing it to a drilldown? For example, the following dashboard has a statistics table with a Country field whose value is "United States of America (USA)", and I just want to pass "USA" to the drilldown. But the token ("country") is not changed to "USA" by the eval function when passed to the deep link. Any clues? Thanks.

The panel search (time range -15m to now) is:

| makeresults
| eval Country="United States of America (USA)"
| table Country

and the eval expression on the drilldown token is:

replace(replace(mvindex(split($click.value$," "),-1,-1),"\(",""),"\)","")
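
In Simple XML, the usual pattern is to rewrite the token with an <eval> inside the <drilldown> element before the link uses it. A sketch, with a hypothetical token name and target dashboard, reusing the expression from the post:

<drilldown>
  <eval token="country_short">replace(replace(mvindex(split($click.value$, " "), -1, -1), "\(", ""), "\)", "")</eval>
  <link target="_blank">/app/search/target_dashboard?form.country=$country_short$</link>
</drilldown>

The <eval> runs at click time, so $country_short$ already holds "USA" when the link is built.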

Parsing JSON with the spath command is not returning expected results.

I have tried to get after.merchantId a million different ways, but it always comes back blank. I believe I'm missing the obvious.

Search 1:
sourcetype="json" auditId=RECIPIENT_ADDED
| spath
| table _time, after.merchantId
...only _time has values, nothing else.

Search 2:
sourcetype="json" auditId=RECIPIENT_ADDED
| spath
| rename after.merchantId as merchantId, after.leadDays as leadDays
| eval x=mvzip(merchantId,leadDays)
| table _time,merchantId,leadDays,x
...only _time has values, nothing else.

Sample JSON:
{ @timestamp: 2018-09-09T19:05:50.077Z @version: 1 actingProfileType: ALL after: {"phoneNumber":"8005551212","recipientNumber":"************1111","merchantId":"111111112","paperPaymentEnabled":"true","leadDays":"5","Nickname":"Bob Evans","addressOnFile":"false","recipientName":"Bobby Evans","transferMethod":"PAYMENT","merchantZipRequired":"false","providerStatus":"ACTIVE","merchantName":"Bobby Evans"} application: BACKOFFICE auditId: RECIPIENT_ADDED browserName: Chrome browserVersion: 68.0.3440.106 clientIp: 192.0.0.1 companyId: 11113 component: PAYMENTS instanceId: 1abc2345-67de userId: 11111114 userSourceId: 2fgh3456-89ij }
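
Judging from the sample, the after field is itself a JSON-encoded string inside the outer event, so a single spath only extracts after as one blob of text and after.merchantId never exists as a path. A sketch that runs spath a second time over that extracted string (field names taken from the post):

sourcetype="json" auditId=RECIPIENT_ADDED
| spath output=after_json path=after
| spath input=after_json
| table _time merchantId leadDays

The second spath parses the after_json string as JSON in its own right, so merchantId and leadDays come out as top-level fields.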

Archiving indexer data to AWS S3: Is there anyone who has experience with setting up this configuration?

Hello, We have a fairly new Splunk Enterprise implementation and are trying to figure out a way to archive our cold data buckets into frozen buckets, as we are running out of space on our main indexer. We have read through the Splunk documentation and believe that we will want to use coldToFrozenDir to set up the path. We are thinking about archiving the data to either AWS S3 buckets or Glacier. Is there anyone who has experience with setting up this configuration or who has some guidance? Will we need to use Hadoop in order to use AWS S3 buckets for archiving our Splunk data, or can we go without? How do we set up the path to push the Splunk data to AWS?
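
To my knowledge, coldToFrozenDir can only point at a filesystem path (local or a mounted share), not directly at an S3 URL; pushing frozen buckets to S3 or Glacier is normally done with the alternative setting, coldToFrozenScript, which hands each bucket to a script of your own (for example one that calls the AWS CLI), so Hadoop is not required just to archive. A sketch of the two alternative indexes.conf options (use one or the other), with hypothetical paths and a hypothetical script name:

[my_index]
# Option A: copy frozen buckets to a directory (could be an NFS/EFS mount)
coldToFrozenDir = /mnt/splunk_frozen/my_index

# Option B (use instead of Option A): run your own per-bucket archive script,
# e.g. one that does "aws s3 cp ..." to your archive bucket
# coldToFrozenScript = "$SPLUNK_HOME/etc/apps/my_archive_app/bin/frozen_to_s3.sh"

Either way, frozen data is no longer searchable until it is thawed back, so plan the restore path as well.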

Can you help me with my Splunk query to set up severity?

sourcetype=xreGuide XRE-07*** IS_VISIBLE=true
| bucket _time span=10m
| stats dc(receiverId) as receiverIds by _time
| eval psev=case(receiverIds<=499, "4", receiverIds<=9999, "2", receiverIds>10000, "1")
| eventstats count as VIOLATIONS by psev
| eval severity=if(VIOLATIONS>1 AND psev=3, 3, 4)
| eventstats min(severity) as overallSeverity
| fields _time receiverIds overallSeverity
| rename overallSeverity as severity

What I am trying to do is get the severity based on the error count.
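
Two things stand out in the posted search: the case() never assigns psev=3 (so the later psev=3 test can never be true), and a count of exactly 10000 matches no branch at all. If the goal is simply "severity from the distinct receiver count per 10-minute bucket", a more direct sketch, keeping the thresholds from the post and closing the gap with a catch-all branch:

sourcetype=xreGuide XRE-07*** IS_VISIBLE=true
| bucket _time span=10m
| stats dc(receiverId) as receiverIds by _time
| eval severity=case(receiverIds<=499, 4, receiverIds<=9999, 2, true(), 1)
| table _time receiverIds severity

Numeric severities (without quotes) also avoid string/number comparison surprises if you later filter on severity.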