Channel: Questions in topic: "splunk-enterprise"

Intermittency in time to get authentication? DMC

![alt text][1] Good morning. I'm reviewing Splunk from the DMC, and these gaps in the graphs caught my attention. Is this behavior normal? Currently we have communication problems between the peers because the bundle is almost 7 GB. I would appreciate your experience and recommendations. [1]: /storage/temp/216587-ss1.jpg

Will the logs get re-indexed when you change the indexing port?

Hi all, I used to send data to Platform 1 via port 9997. Then I had to stop sending data to Platform 1 and send it to another platform using port 9994. All of my old data on the server was re-indexed again, even though it had already been indexed on Platform 1 via 9997. Any idea why it got re-indexed, or is it a bug? Regards, Thippesh
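One way to confirm whether old files really were re-read (rather than the new platform simply receiving current data for the first time) is to compare each event's index time with its event time on the new platform. A minimal sketch, where the index name is a placeholder for wherever the data now lands:

    index=<your_index>
    | eval index_lag_hours=round((_indextime - _time) / 3600, 1)
    | stats count BY source, index_lag_hours
    | sort - index_lag_hours

Old events that were re-read will show a large positive lag, since they were indexed long after they were originally written.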

Splunk data input?

I've been trying to find a way for Splunk to input real-time data. Can Splunk do an HTTP GET request to a site every 15 minutes or so to get data from an HTML page? For testing purposes, we could use google.com as the page I want to grab the information from, and I want to set up Splunk to grab whatever is in that HTML page and index it as data. Is this possible? If yes, any ideas how I could do this? Thank you!
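Splunk can run a scripted input on an interval and index whatever the script prints to stdout, so one option is a small script plus an inputs.conf stanza. A rough sketch only; the app directory, the file name fetch_page.py, and the target URL are illustrative placeholders:

    #!/usr/bin/env python
    # Hypothetical scripted input: fetch a page and write it to stdout for Splunk to index.
    # Save under $SPLUNK_HOME/etc/apps/<your_app>/bin/fetch_page.py (name is illustrative).
    import sys
    import time
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2 (older bundled interpreters)

    URL = "https://www.google.com"            # placeholder page to poll

    def main():
        body = urlopen(URL, timeout=30).read()
        # One event per poll, prefixed with a timestamp Splunk can parse.
        sys.stdout.write(time.strftime("%Y-%m-%d %H:%M:%S ") + body.decode("utf-8", "replace") + "\n")

    if __name__ == "__main__":
        main()

In inputs.conf for the same app, a stanza such as [script://./bin/fetch_page.py] with interval = 900 and a sourcetype of your choosing runs it every 15 minutes; alternatively, an external cron job could fetch the page into a file that Splunk monitors.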

Sideview Utils DateTime Module - Start with Today, Not All Time

Working with the DateTime module, I'd like to have a default earliest value of today. I've tried this: ![Sideview Utils][1] I've set the default time in the first ValueSetter module to pass down to the DateTime module. Setting earliest in the first ValueSetter module doesn't seem to flow down to the DateTime module, then down to the second ValueSetter, then to the Search module. The first ValueSetter doesn't seem to have any impact. If I set earliest in the second ValueSetter module, that works in replacing searching across all time, but when I select another earliest date from the DateTime module, it doesn't overwrite the earliest value in the second ValueSetter before pushing that value down to the Search module. Is there any way to set a starting earliest value so the search doesn't execute across all time? Mark [1]: /storage/temp/217612-su.png

Connecting to SQL Server using DB Connect 3.1.1

Hi all, I have DB Connect 2.1.3 on Linux that connects correctly to SQL Server 2008/R2. Now I'm trying to connect a new Splunk server using DB Connect 3.1.1 on Linux. I used the same configuration in both installations (settings, drivers, and credentials), and obviously I checked the connection with telnet. DB Connect 3.1.1 receives the following message from SQL Server: 09/21/2017 17:13:33,Logon,Unknown,Login failed for user 'XXXXX'. Reason: Password did not match that for the login provided. [CLIENT: xx.xxx.xxx.xxx] I used the generic Microsoft JDBC driver in both installations. Is there a known issue in DB Connect 3.1.1? Thank you in advance. Bye. Giuseppe

Combine the two queries and calculate count

Hello experts. I tried to execute the query as described here: https://answers.splunk.com/answers/106906/how-to-perform-math-on-single-values.html In my case, too, there are two searches.
1st search: index=ns SUBMIT_SM REQUEST host="notif*" | rex field=_raw "CID\:(?<CID>.*)\ actor-id" | dedup CID | stats count as part
2nd search: index=ns SUBMIT_SM REQUEST host="notif*" | stats count as uniq
I tried to combine these searches into one to calculate the ratio:
| multisearch [ search index=ns SUBMIT_SM REQUEST host="notif*" | rex field=_raw "CID\:(?<CID>.*)\ actor-id" | dedup CID | eval marker="s" ] [ search index=ns SUBMIT_SM REQUEST host="notif*" | eval marker="o" ] | stats count(eval(marker=="s")) as part count(eval(marker=="o")) as uniq | eval velocity=(part/uniq)*100
I receive an error: Error in 'multisearch' command: Multisearch subsearches may only contain purely streaming operations (subsearch 1 contains a non-streaming command.) The search job has failed due to an error. You may be able to see the job in the Job Inspector.
I tried it differently:
index=ns SUBMIT_SM REQUEST host="notif*" | stats count as part | append [ search index=ns SUBMIT_SM REQUEST host="notif*" | rex field=_raw "CID\:(?<CID>.*)\ actor-id" | dedup CID | stats count as uniq] | eval velocity=part/uniq
But velocity was not calculated. Help!
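For what it's worth, the ratio can usually be computed in a single pass, since "dedup CID | stats count" is equivalent to a distinct count of CID. A minimal sketch of that idea (not necessarily the only fix):

    index=ns SUBMIT_SM REQUEST host="notif*"
    | rex field=_raw "CID\:(?<CID>.*)\ actor-id"
    | stats dc(CID) AS part, count AS uniq
    | eval velocity=round((part/uniq)*100, 2)

dc(CID) gives the number of distinct CIDs and count gives the total number of matching events, so both numbers land on the same row and the eval can divide them; the append version fails to calculate velocity because part and uniq end up on different rows unless they are combined with something like appendcols or a final stats.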

Removing n whitespaces from an event at index time

Hi all, I want to remove the whitespace from only the account value, and not the whole event, at index time. Is this possible? The events look like this:
{"account": "Account 1", "justification": "TEST 1", "value": "50"}
{"account": "Account 1 2", "justification": "TEST 2", "value": "50"}
{"account": "Account 1 2 3", "justification": "TEST 3", "value": "50"}
{"account": "Account 1 .. n", "justification": "TEST 4", "value": "50"}
I want them to look like this:
{"account": "Account1", "justification": "TEST 1", "value": "50"}
{"account": "Account12", "justification": "TEST 2", "value": "50"}
{"account": "Account123", "justification": "TEST 3", "value": "50"}
{"account": "Account1..n", "justification": "TEST 4", "value": "50"}
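If the events always keep this key order ("justification" follows "account") and the account value never contains double quotes, one index-time option is a SEDCMD in props.conf on the parsing tier. This is only a sketch under those assumptions; the stanza name is a placeholder, and it relies on SEDCMD accepting a PCRE lookahead so that only spaces inside the account value are touched:

    # props.conf on the indexer or heavy forwarder that parses this data
    [your_sourcetype]
    # Remove any space that is followed by the rest of the account value and the "justification" key
    SEDCMD-strip_account_spaces = s/ (?=[^"]*", "justification")//g

If index-time rewriting turns out to be more trouble than it is worth, a search-time eval such as | eval account=replace(account, " ", "") produces the same cleaned field value without altering the stored raw event.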

Match day and get the sum by day, also get the percentage

My data looks like this; I've grouped it by a common field, and the rows marked (outlier) were the outliers (shown in italics originally):
commonField / field1 / field2 / date_mday / count
abc / f222 / efg / 20 / 10
abc / f333 / ccc / 20 / 20
abc / f222 / efg / 20 / 30
abc / f334 / ccc / 20 / 40
-- sum of count for the same date_mday: 10 + 20 + 30 + 40 = 100
abc / f114 / ddd / 19 / 10 (outlier)
abc / f113 / ccd / 19 / 9 (outlier)
-- sum of count for the outliers on the same date_mday: 10 + 9 = 19
def / f222 / efg / 22 / 10
def / f333 / ccc / 22 / 25
def / f111 / bbb / 22 / 5
-- sum of count for the same date_mday: 10 + 25 + 5 = 40
def / f111 / bbb / 20 / 15 (outlier)
I want to match on date_mday and get the sum of the events (count) for that day, and then get the percentage of the outlier sum versus the total sum. I'm using the stats command for grouping the data, running over a 30-day range, like this: search string here | stats list(field1), list(field2), list(date_mday), list(count) by commonField
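If the end goal is the per-day sum and each day's share of the total for a commonField, it may be easier to aggregate with sum() rather than list() and then compute the percentage with eventstats. A rough sketch, assuming count is a real field on the events; deciding which day counts as the outlier still needs your own rule (for example, the day with the smaller total):

    search string here
    | stats sum(count) AS day_total BY commonField, date_mday
    | eventstats sum(day_total) AS group_total BY commonField
    | eval pct_of_total=round((day_total / group_total) * 100, 2)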

Capturing AD Authenticated Applications

Scenario: We're doing an Active Directory upgrade that will affect applications which currently point to a specific domain controller for authentication. We have a lot of applications in use right now, and some date back to before most of us were employed here. I'm wondering if there's any way to build something in Splunk that would track applications that are authenticating via Active Directory. Would it just give the servers the apps are housed on, or is there a way to get specific information relating to the application itself? I believe my first step is to get Splunk onto the domain controllers; after that I'm unsure, and I just wanted to see if this is something anyone here has dealt with or has experience with. As always, thanks to anyone who takes the time to read this, and even more thanks if anyone has suggestions!
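Once something like the Splunk Add-on for Microsoft Windows is collecting the Security event log from the domain controllers, a starting point is to group successful logons (EventCode 4624) by source host and account; the index, sourcetype, and field names below follow that add-on's conventions and may differ in your environment, and Kerberos ticket events (4768/4769) can be summarized the same way:

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624
    | stats count, latest(_time) AS last_seen BY host, Workstation_Name, Source_Network_Address, Account_Name
    | convert ctime(last_seen)

This shows which servers and accounts authenticate against each domain controller; mapping a server or service account back to the application it hosts generally still needs a CMDB or inventory lookup, since the logon event itself does not carry the application name.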

KVstore update

I have the following "Frankenstein" query that creates a lookup table, and it works quite well. It replaces several inadequacies of the Monitoring Console for tracking forwarders. This is only the setup for the question (but you may have suggestions for it as well): index=_* host=sm008 OR host=sm007 OR host=sm004 "/services/broker/phonehome/connection" | rename host AS manager | dedup clientip | stats max(_time) AS phonehome_time, values(manager) AS manager, BY clientip | join type=inner index clientip [ search index=_internal sourcetype=* source="*metrics.log" sourceHost=* date_zone=* group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=* version=* arch=* hostname=* sourceIp=* | rename sourceIp AS clientip, hostname AS SrcHostname | dedup clientip | stats max(_time) AS connect_time, values(SrcHostname) AS SrcHostname, values(date_zone) AS TZ-Offset(reported), values(fwdType) AS fwdtype, latest(version) AS Version, values(arch) AS Arch, values(os) AS OS, avg(tcp_eps) AS tcp_eps, BY clientip] | eval "Avg EPS"=round(tcp_eps,2) | eval FwdType=upper(fwdType) | eval status_p = if( (phonehome_time < (relative_time(now(),"-600"))), "Missing", "Active") | eval status_c = if( (connect_time < (relative_time(now(),"-600"))), "Missing", "Active") | eval "LastPhoneHome"=strftime(phonehome_time,"%d-%b-%y - %H:%M:%S") | eval connect=strftime(connect_time,"%d-%b-%y - %H:%M:%S") | search manager=* | eval combined_status=status_p."/".status_c | rename clientip AS SrcIP, connect AS "LastConnected", combined_status AS "status", Version AS version, | table SrcIP, SrcHostname, LastPhoneHome, "LastConnected", status, manager, fwdtype, version, OS We now want to keep a history of forwarders that have ever connected (we have more than 10,000 currently). I created a KV store with the same fields as above, plus one new one: host_record-date. This field will track the date of the last time a forwarder's IP appeared. Now I want to update the KV store at intervals with information from the query above. Not all IPs in the KV store will be updated, since some "randomly" disappear, so I only want to update those KV store rows that have changed. Shouldn't the following query do this for me?
| inputlookup admin_panel-KV-phonehome_indexing-status | join SrcIP type=outer [search index=_* host=sm008 OR host=sm007 OR host=sm004 "/services/broker/phonehome/connection" | rename host AS manager | dedup clientip | stats max(_time) AS phonehome_time, values(manager) AS manager, BY clientip | join type=inner index clientip [ search index=_internal sourcetype=* source="*metrics.log" sourceHost=* date_zone=* group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) fwdType=* guid=* version=* arch=* hostname=* sourceIp=* | rename sourceIp AS clientip, hostname AS SrcHostname | dedup clientip | stats max(_time) AS connect_time, values(SrcHostname) AS SrcHostname, values(date_zone) AS TZ-Offset(reported), values(fwdType) AS fwdtype, latest(version) AS Version, values(arch) AS Arch, values(os) AS OS,avg(tcp_eps) AS tcp_eps, BY clientip] | eval "Avg EPS"=round(tcp_eps,2) | eval FwdType=upper(fwdType) | eval status_p = if( (phonehome_time < (relative_time(now(),"-600"))), "Missing", "Active") | eval status_c = if( (connect_time < (relative_time(now(),"-600"))), "Missing", "Active") | eval "LastPhoneHome"=strftime(phonehome_time,"%d-%b-%y - %H:%M:%S") | eval connect=strftime(connect_time,"%d-%b-%y - %H:%M:%S") | search manager=* | eval combined_status=status_p."/".status_c | rename clientip AS SrcIP, connect AS "LastConnected", combined_status AS "status", Version AS version, | table SrcIP, SrcHostname, LastPhoneHome, "LastConnected", status, manager, fwdtype, version, OS] | table SrcIP, host_record-date, SrcHostname, LastPhoneHome, "LastConnected", status, manager, fwdtype, version, OS | outputlooklup admin_panel-KV-phonehome_indexing-status
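If the intent is to update only the forwarders the live search saw and carry every other row forward unchanged, a common pattern is to run the live search first, append the stored collection behind it, and keep the first row per key. A rough sketch, assuming SrcIP uniquely identifies a forwarder and host_record-date is the new tracking field (note the quotes needed around a field name containing a hyphen):

    <your combined live search producing SrcIP, SrcHostname, LastPhoneHome, LastConnected, status, manager, fwdtype, version, OS>
    | eval "host_record-date"=strftime(now(), "%d-%b-%y")
    | inputlookup append=true admin_panel-KV-phonehome_indexing-status
    | dedup SrcIP
    | outputlookup admin_panel-KV-phonehome_indexing-status

dedup keeps the live row for forwarders seen in the current interval and falls back to the stored row for the ones that did not phone home, so "missing" forwarders keep their last recorded date. Also note that the query above spells the final command as outputlooklup rather than outputlookup, which Splunk will reject as an unknown command before anything is written.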

Getting this error while applying the distribution bundle

I have some apps that I deleted from the slave-apps directory on our indexers, but the master-apps directory on the cluster master still has these files. I want to push the distribution bundle, but it gives this error: In handler 'clustermastercontrol': No new bundle will be applied. The master and peers already have this bundle with bundle id = 9BF1726DFB2075A5E9149D2D00E8AE98

How can I search based on PCI requirements without using the Splunk App for PCI Compliance?

If downloading the PCI App is not an option, what would be the best/fastest way to create an index, or to generate searches based on the PCI requirements?
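Without the PCI app, the usual approach is to write the searches yourself against data that is already normalized (the Common Information Model add-on helps a lot here). As one illustrative example only, a sketch in the spirit of Requirement 10 (tracking access and failed logons), assuming the authentication sources are CIM-tagged:

    tag=authentication action=failure
    | stats count AS failures, values(dest) AS dest BY user, src
    | where failures > 5

Similar standalone searches can cover the other requirement families (account changes, firewall and IDS activity, file integrity, and so on), and routing the relevant sourcetypes into a dedicated index only takes an index setting on the inputs or a transforms-based reroute; the PCI app mostly packages such searches and dashboards for you.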

Can I forward local text log files from my laptop to Splunk (for testing purposes)? Splunk and the forwarder would be on the same laptop in this hypothetical.

How can I use the Splunk forwarder on my personal laptop, for testing purposes, to forward data to Splunk from a monitored local text log file kept in a directory? Please note that I have Splunk and the Splunk forwarder on the same laptop. If this is possible, please guide me. I have used the Files and Directories option in Splunk to get the data into the indexer and search it, and that works as expected. This is just for visualizing the Splunk forwarder forwarding data to Splunk, nothing else.
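Yes, this works for testing. A rough sequence of CLI commands, with paths and the sourcetype name as placeholders (on Windows the forwarder typically lives under C:\Program Files\SplunkUniversalForwarder); since Splunk Enterprise and the universal forwarder both default to management port 8089, the forwarder's port usually has to be moved when they share one laptop:

    # On the Splunk Enterprise instance: listen for forwarder traffic on TCP 9997
    $SPLUNK_HOME/bin/splunk enable listen 9997

    # On the universal forwarder (separate installation directory on the same laptop)
    /opt/splunkforwarder/bin/splunk set splunkd-port 8090
    /opt/splunkforwarder/bin/splunk add forward-server 127.0.0.1:9997
    /opt/splunkforwarder/bin/splunk add monitor /path/to/your/test.log -sourcetype my_test_log
    /opt/splunkforwarder/bin/splunk restart

After the restart, new lines appended to the monitored file should arrive via the forwarder rather than through the direct Files and Directories input, which is the behavior being demonstrated.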

Eval formula to display dates until December 31st where the start day is determined by a formula

Hey everyone, I am trying to write an eval so that when a user enters a year, it returns a date. The formula works fine in Excel: DATE(F6,11,29)-WEEKDAY(DATE(F6,11,24)), where F6 is the user input for a year. The idea is to display the days from Thanksgiving to December 31st for any year I enter.
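In SPL the same idea can be expressed with strptime/strftime and mvrange: Thanksgiving is the fourth Thursday of November, so compute the first Thursday from the weekday of November 1st, add 21 days, and then expand one row per day through December 31st. A sketch using makeresults for testing; in a dashboard the hard-coded year would come from the user's input token:

    | makeresults
    | eval year=2017
    | eval nov1=strptime(year."-11-01", "%Y-%m-%d")
    | eval dow=tonumber(strftime(nov1, "%w"))
    | eval first_thu=1 + ((4 - dow + 7) % 7)
    | eval thanksgiving=strptime(year."-11-".(first_thu + 21), "%Y-%m-%d")
    | eval dec31=strptime(year."-12-31", "%Y-%m-%d")
    | eval day=mvrange(thanksgiving, dec31 + 86400, 86400)
    | mvexpand day
    | eval date=strftime(day, "%Y-%m-%d")
    | table date

%w returns 0 for Sunday through 6 for Saturday, so 1 + ((4 - dow + 7) % 7) is the day of the month of the first Thursday; for 2017 this yields November 23rd, matching the Excel formula.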

Transpose with a group by

My data is currently set up as follows:
Group / Flag / Count
G1 / No / 5
G1 / Yes / 10
G1 / Total / 15
G2 / No / 7
G2 / Yes / 19
G2 / Total / 26
...
I am trying to "transpose" the data to this:
Group / Yes / No / Total
G1 / 10 / 5 / 15
G2 / 19 / 7 / 26
...
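Given that the counts are already computed, xyseries (or equivalently chart) can pivot the Flag values into columns. A minimal sketch on top of whatever search produces the three columns:

    your base search
    | xyseries Group Flag Count
    | table Group, Yes, No, Total

If the Total rows were not pre-computed, | chart sum(Count) over Group by Flag | addtotals fieldname=Total would produce the same shape.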

If we increase max_memtable_bytes in limits.conf, does this change affect search performance?

The ES app is creating a large lookup file, nearly 600 MB in size. So, as the workaround suggested in the Splunk docs, we increased the max_memtable_bytes value to 700 MB in limits.conf on all the indexers. After the change, the search heads are working very slowly and searches are also slow. Does this change have any impact on the search heads?

Cluster has only 0 peers (waiting for 2 peers to join the cluster)

We are receiving this error after we had to redistribute the configuration to the peers and restart. I think both peers restarted while the cluster master was down, and now we are getting this error: Cluster has only 0 peers (waiting for 2 peers to join the cluster). Please help.

How to sort strings based off a dictionary of values?

Hi, and thanks in advance for reading. I have a table as follows:
email    event
----------------------------------------------
I-got-delivered@example.com    deferred
I-got-delivered@example.com    delivered
I-got-delivered@example.com    processed
I-bounced@example.com    deferred
I-bounced@example.com    processed
I-bounced@example.com    bounced
Im-processing@example.com    deferred
Im-processing@example.com    processed
where the events are ordered as follows: { 1: 'deferred', 2: 'processed', 3: 'bounced', 4: 'delivered' }. I want to group by the email, compare the events, and return only the max value for event (i.e. deferred < processed < bounced < delivered). The table should look like this:
I-got-delivered@example.com    delivered
I-bounced@example.com    bounced
Im-processing@example.com    processed
I was thinking I could do it with lots of nested if statements, but I was wondering if there's a more elegant way to do it. How would you achieve this? Thanks, fre
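One way without nested ifs is to map each event name to its numeric rank, take the maximum rank per email, and map back. A sketch on top of whatever search builds the email/event table:

    your base search
    | eval rank=case(event=="deferred", 1, event=="processed", 2, event=="bounced", 3, event=="delivered", 4)
    | stats max(rank) AS rank BY email
    | eval event=case(rank==1, "deferred", rank==2, "processed", rank==3, "bounced", rank==4, "delivered")
    | table email, event

A small lookup file mapping event to rank (and back) would avoid hard-coding the case() expressions if the dictionary grows.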

How can I search for the contents of a table inside of another table?

Hi, and thanks for reading in advance. I have two tables: 1. events with status=50* on a /submissions URL endpoint (let's call these the errors), and 2. events with status=200 on a /submissions URL endpoint (let's call these the successes). I want to find out which events occur in the errors table but not in the successes table. The particular context is that we had an outage, and I'm trying to discover which events were successfully submitted at a later date and, mainly, which events haven't been submitted since the outage. How would you go about this? Thanks, fre
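If each submission carries an identifier, one pass with stats can collect every status seen per submission and keep only the ones that never reached 200. A sketch where uri_path and submission_id stand in for whatever your field names actually are:

    uri_path="/submissions" (status=50* OR status=200)
    | stats values(status) AS statuses, latest(_time) AS last_attempt BY submission_id
    | where isnull(mvfind(statuses, "^200$"))
    | convert ctime(last_attempt)

mvfind returns null when no value in the multivalue field matches, so the where clause keeps only submissions whose attempts never returned a 200; running it from the start of the outage to now lists the events that still have not been submitted successfully.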

Why does it say there are events but then it says "No results found"?

After running a search, I have the following result: 112,471 events (9/20/17 2:00:00.000 PM to 9/21/17 2:10:07.000 PM). But when I click on the Events tab, I see `No results found.` even though the search is in verbose mode. How can I review those 112,471 events?