Channel: Questions in topic: "splunk-enterprise"


How to alert if a syslog device does not send data in a rolling 24-hour period?

Splunkers, to meet a regulatory requirement, I need to alert when a syslog device does NOT send data to the indexers within a 24-hour period. For example: if host splunk1 sends data, no alert should be generated; if host splunk2 does NOT send data, an alert must be generated. The alert needs to include the hostname. We are leveraging Nexpose to send a synthetic transaction to devices such as Cisco ACS switches. Search example:

index=network message_text="Login failed for user SynTran01 - sshd" | stats count by host

This search returns a count of 16, and it will always be 16 for this specific device type. Any advice would be greatly appreciated.
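
One common pattern is to keep a lookup of every host that must report and compare it against what was actually seen. A minimal sketch, where expected_syslog_hosts.csv is a hypothetical lookup containing a single host column:

| tstats latest(_time) as lastTime where index=network by host
| append [ | inputlookup expected_syslog_hosts.csv | eval lastTime=0 ]
| stats max(lastTime) as lastTime by host
| where now() - lastTime > 86400

Run over a rolling 24-hour window as a scheduled alert, this fires (with hostnames) whenever the result count is greater than zero: hosts seen in the window keep their recent lastTime, while expected hosts that never appeared are left with lastTime=0.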

How to troubleshoot event forwarding from forwarder to indexer

I somehow lost the custom stanzas on my forwarder for sending syslog data to my indexer. I noticed the forwarder app on the deployment server was missing them, so I added them back, and I now see my custom input stanzas on the forwarder. I see events coming in to the forwarder, but they are not being sent to the indexer, or at least are not searchable on my search head. How do I debug this? Thanks!
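
A few standard checks, sketched below; <your_forwarder> is a placeholder for the forwarder's host name. On the forwarder itself:

$SPLUNK_HOME/bin/splunk list forward-server
$SPLUNK_HOME/bin/splunk btool outputs list --debug

On the search head, the forwarder's own internal logs usually say whether the output connection is healthy:

index=_internal host=<your_forwarder> source=*splunkd.log* TcpOutputProc
index=_internal host=<your_forwarder> source=*metrics.log* group=tcpout_connections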

Trouble getting syslog-ng to work on a standalone Splunk instance

I've installed syslog-ng on a standalone Splunk instance but cannot get it working. I followed this guide: https://www.splunk.com/blog/2016/03/11/using-syslog-ng-with-splunk.html Using a syslog generator I can send a message directly to Splunk as a direct input, but when I disable that and configure syslog-ng instead, the service starts and is listening, yet nothing is written to a file:

[root@centos-6-1 syslog-ng]# netstat -anp | grep 514
udp 0 0 0.0.0.0:514 0.0.0.0:* 13833/syslog-ng

Sending a facility 7 syslog message from the command line:

SyslogGen.exe -t:x.x.x.x -f:7 -s:7 -h:myhost -m:"Too many bytes.\x0D\x0A"

My syslog-ng.conf:

@version:3.2
# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.

options {
    flush_lines (0);
    time_reopen (10);
    log_fifo_size (1000);
    long_hostnames (off);
    use_dns (no);
    use_fqdn (no);
    create_dirs (no);
    keep_hostname (yes);
};

source s_sys { udp(port(514)); };

#destination d_cons { file("/dev/console"); };
destination d_mesg { file("/opt/syslog-ng/$HOST/$YEAR-$MONTH-$DAY-test.log"); };
#destination d_auth { file("/var/log/secure"); };
#destination d_mail { file("/var/log/maillog" flush_lines(10)); };
#destination d_spol { file("/var/log/spooler"); };
destination d_boot { file("/opt/syslog-ng/$HOST/$YEAR-$MONTH-$DAY-test1.log"); };
#destination d_cron { file("/var/log/cron"); };
#destination d_kern { file("/var/log/kern"); };
#destination d_mlal { usertty("*"); };

#filter f_kernel { facility(kern); };
filter f_default { level(info..emerg) and not (facility(mail) or facility(authpriv) or facility(cron)); };
#filter f_auth { facility(authpriv); };
#filter f_mail { facility(mail); };
#filter f_emergency { level(emerg); };
filter f_boot { facility(local7); };
#filter f_cron { facility(cron); };

#log { source(s_sys); filter(f_kernel); destination(d_cons); };
#log { source(s_sys); filter(f_kernel); destination(d_kern); };
log { source(s_sys); filter(f_default); destination(d_mesg); };
#log { source(s_sys); filter(f_auth); destination(d_auth); };
#log { source(s_sys); filter(f_mail); destination(d_mail); };
#log { source(s_sys); filter(f_emergency); destination(d_mlal); };
#log { source(s_sys); filter(f_news); destination(d_spol); };
log { source(s_sys); filter(f_boot); destination(d_boot); };
#log { source(s_sys); filter(f_cron); destination(d_cron); };

# vim:ft=syslog-ng:ai:si:ts=4:sw=4:et:

Thanks!
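
One thing worth checking before anything Splunk-side: the config sets create_dirs (no), but both active destinations write into a per-host directory (/opt/syslog-ng/$HOST/...). Unless those directories already exist, syslog-ng cannot create them and the writes fail, so create_dirs (yes) may be all that is missing. Once files do appear, a minimal inputs.conf sketch for picking them up (the sourcetype is an assumption; the paths match the destinations above):

# inputs.conf on the Splunk instance
[monitor:///opt/syslog-ng/*/*.log]
sourcetype = syslog
# third path segment (/opt/syslog-ng/<HOST>/...) is the originating host
host_segment = 3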

Where does props.conf need to exist in a distributed deployment?

I think I need to push this from the deployment server to each instance, or at least to the forwarder and the search head. I have 5 servers making up my Splunk Enterprise deployment: 1 search head, 1 forwarder, 1 deployment server, and 2 indexers. My props.conf on the forwarder has this configuration for one data source:

FIELDALIAS-severity_as_id = severity as severity_id
FIELDALIAS-dst_as_dest = dst as dest
EVAL-app = netwitness
EXTRACT-subject = CEF\:\s+\d(?:\|[^\|]+){3}\|(?<subject>[^\|]+)

When I search I am not seeing 'subject'. Does this need to be pushed to the search head? How about the other instances? I am trying to understand this. Thanks!
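
FIELDALIAS, EVAL, and EXTRACT are all search-time settings, so they only take effect on the instance that runs the search: the search head. Index-time settings (line breaking, timestamping, and so on) are the ones that belong with the parsing tier. A minimal sketch of where this stanza would live, where your_sourcetype and your_app are placeholders for the real names:

# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf on the search head
[your_sourcetype]
FIELDALIAS-severity_as_id = severity as severity_id
FIELDALIAS-dst_as_dest = dst as dest
EVAL-app = netwitness
EXTRACT-subject = CEF\:\s+\d(?:\|[^\|]+){3}\|(?<subject>[^\|]+)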

Why am I seeing "WARN DateParserVerbose - Failed to parse timestamp"?

I have an event like:

{"app":"EventHub Service","caller":"kafka.go:110","fn":"gi.build.com/predix-data-services/event-hub-service/brokers.(*SaramaLogger).Println","lvl":"eror","msg":"Error closing broker -1 : write tcp 10.72.139.124:53006-\u003e10.7.18.82:9092: i/o timeout\n","t":"2017-09-13T15:26:56.762571201Z"}

I am seeing WARN messages in splunkd.log like:

09-13-2017 15:11:50.289 +0000 WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event (Wed Sep 13 15:10:59 2017). Context: source::/var/log/event_hub.log|host::pr-dataservices-eh-event-hub-16|eventhub:service|217289

The date is correct in each of the events, so my questions are: should I set DATE_FORMAT for this sourcetype to clean this up? Clearly Splunk is grumpy about it. Should I force Splunk to take the field t as the timestamp? This sourcetype's attributes are the system defaults; I am not setting anything locally. Any other thoughts are much appreciated!
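
For what it's worth, the timestamp attributes in props.conf are TIME_PREFIX and TIME_FORMAT rather than DATE_FORMAT. A minimal sketch, assuming the sourcetype name eventhub:service taken from the WARN message's context string:

# props.conf on the instance that parses this data (indexer or heavy forwarder)
[eventhub:service]
# anchor timestamp recognition at the "t" key in the JSON
TIME_PREFIX = "t":"
# e.g. 2017-09-13T15:26:56.762571201Z; %9N consumes the nine subsecond digits
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9NZ
MAX_TIMESTAMP_LOOKAHEAD = 40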

How do I resolve a warning about incomplete metadata results (after 100000+ entries)?

How do I resolve the warning "Metadata results may be incomplete: 100000 entries have been received from all peers, and this search will not return metadata information for any more entries."? I have a query as follows.

**PART 1** (just a CSV lookup listing a bunch of hosts):

| inputlookup ABCD
| search Forward="Yes"
| table Region, IPHost, ip_address
| rename Region AS my_region, IPHost AS my_hostname, ip_address AS my_ip

**PART 2** (checks whether each of those hosts actually reported in the last 24 hours):

| join type=left my_hostname [ | metadata type=hosts index=* | rename host AS my_hostname ]
| eval lastTime=coalesce(lastTime,0)
| eval timeDiff=now()-lastTime
| eval last_seen_in_24_hours=if(timeDiff>86400,"NO","YES")
| eval lastReported=if(lastTime=0,"never",strftime(lastTime,"%F %T"))
| table my_region, my_hostname, last_seen_in_24_hours, lastReported

The dashboard shows all 3 hosts as "NO", meaning they did not report in the last 24 hours, but all 3 hosts are actually reporting. To investigate further, I checked whether PART 2 works for a single host that is reporting but shows as not reporting:

| metadata type=hosts index=* | search host="abcd"

Result: no results found, with the warning "Metadata results may be incomplete: 100000 entries have been received from all peers (see parameter maxcount under the [metadata] stanza in limits.conf), and this search will not return metadata information for any more entries."

I think this warning is why all the hosts display as "NO" instead of "YES" even though they are reporting. Other than modifying limits.conf, is there a way to filter or modify my search to look only at the hosts from the CSV file instead of walking all the entries? Any suggestions would be really helpful.
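
One way to sidestep the metadata limit entirely is tstats, which accepts filters and is not subject to the [metadata] maxcount. A minimal sketch using the same lookup and field names as above:

| inputlookup ABCD where Forward="Yes"
| rename Region AS my_region, IPHost AS my_hostname, ip_address AS my_ip
| join type=left my_hostname [ | tstats max(_time) as lastTime where index=* by host | rename host AS my_hostname ]
| eval lastTime=coalesce(lastTime,0)
| eval last_seen_in_24_hours=if(now()-lastTime>86400,"NO","YES")
| eval lastReported=if(lastTime=0,"never",strftime(lastTime,"%F %T"))
| table my_region, my_hostname, last_seen_in_24_hours, lastReported

Note that tstats respects the search time range, so run it over at least the window you want to evaluate.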

Amazon Kinesis Modular Input - Data not displaying in dashboards

We have a large number of separate AWS accounts that we are collecting VPC flow log data from. Each of these accounts pushes to a centralized account that has Kinesis streams deployed in all of our active regions, and we have an input created for each region's stream. The data appears to be getting indexed properly within Splunk, but the Splunk App's VPC flow log dashboards don't display any data. Looking at the searches behind the dashboards, they are set to look for source="dest_port" or source="src_ip", while our ingested data looks more like source="us-east-1:VPCFlowLogs:eni-12345678-all" for this part of the event. Is this input type incompatible with Kinesis streams that receive cross-account input? We would really like to avoid having to pay for and manage streams in every account separately, as well as configure new inputs every time we add a new account.

Key-value pair extraction -- regex help

We have some SNMP data and want to extract it as key-value pairs. Sample:

var.12345.5.5 = INTEGER: 10
myTag::var.12345.5.9 = STRING: "abc"
myTag::var.12345.5.3 = STRING: "admin"
myTag::var.12345.5.4 = STRING: "developer"
var.12345.5.5 = INTEGER: 10
myTag::var.12345.5.9 = STRING: "xyz"
myTag::var.12345.5.3 = STRING: "user1"
myTag::var.12345.5.4 = STRING: "support"

Output required:

var_12345_5_9,var_12345_5_3,var_12345_5_4
abc,admin,developer
xyz,user1,support

I tried a basic transform:

REGEX = var\.([^ ]+)\s=\s(\S+)
FORMAT = $1::$2
CLEAN_KEYS = true

Can you please help me improve it?
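
A tighter version might anchor on the OID and the type label, and let CLEAN_KEYS turn the dots in the key into underscores. A minimal sketch, assuming values never contain embedded whitespace (the stanza name snmp_kv is a placeholder, referenced from props.conf via REPORT-snmp_kv = snmp_kv):

[snmp_kv]
# capture the OID suffix (e.g. 12345.5.9) and the value after STRING:/INTEGER:, quoted or not
REGEX = var\.([\d.]+)\s*=\s*\w+:\s*"?([^"\s]+)"?
FORMAT = var_$1::$2
# CLEAN_KEYS rewrites the dots in the captured key to underscores: var_12345_5_9
CLEAN_KEYS = true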

How do I dynamically set earliest from subsearch?

Hi folks, I've been all over this site and Google without finding a working solution. I'm trying to perform a search that uses a subsearch to populate earliest=:

| tstats min(_indextime) as firstTime, max(_indextime) as lastTime where earliest=[ | rest /services/admin/indexes splunk_server=localhost | search title=syslog | eval dy = (frozenTimePeriodInSecs/86400) | eval earli="-" . dy . "d@d" | fields earli ] index=syslog by index
| eval delta = (lastTime - firstTime)
| eval yr = floor(delta/86400/365)
| eval dy = (delta/86400) % 365
| eval actual_ret = yr . " years, " . dy . " days"
| eval lastTime=strftime(lastTime,"%Y-%m-%d %H:%M:%S"), firstTime=strftime(firstTime,"%Y-%m-%d %H:%M:%S")
| fields index, firstTime, lastTime, delta, actual_ret
| join index [ | rest /services/admin/indexes splunk_server=localhost | eval yr = floor(frozenTimePeriodInSecs/86400/365) | eval dy = (frozenTimePeriodInSecs/86400) % 365 | eval ret = yr . " years, " . dy . " days" | eval index=title | stats avg(currentDBSizeMB) as currentDBSizeMB, avg(maxTotalDataSizeMB) as maxTotalDataSizeMB, max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs, max(ret) by index | eval pct_data=(currentDBSizeMB/maxTotalDataSizeMB) * 100 ]
| eval pct_ret = (delta/frozenTimePeriodInSecs)*100

and get the error:

> Invalid value "(" for time term 'earliest'

I also tried the subsearch

earliest=[ | rest /services/admin/indexes splunk_server=localhost | search title=syslog | eval dy = (frozenTimePeriodInSecs/86400) | fields dy ]

with the same result. Both subsearches by themselves return correct results:

| rest /services/admin/indexes splunk_server=localhost | search title=syslog | eval dy = (frozenTimePeriodInSecs/86400) | eval earli="-" . dy . "d@d" | fields earli

returns -365d@d, and

| rest /services/admin/indexes splunk_server=localhost | search title=syslog | eval dy = (frozenTimePeriodInSecs/86400) | fields dy

returns 365. How can I get the subsearch value to be used by "earliest="?
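
For reference, the stray "(" is the subsearch's default output format: a subsearch ending in | fields earli renders as ( earli="-365d@d" ), which is not a valid time term. The return command emits a bare value instead. A minimal sketch of the tstats head, a common pattern but worth verifying on your version:

| tstats min(_indextime) as firstTime, max(_indextime) as lastTime
    where index=syslog
    earliest=[ | rest /services/admin/indexes splunk_server=localhost
        | search title=syslog
        | eval earli = "-" . (frozenTimePeriodInSecs/86400) . "d@d"
        | return $earli ]
    by index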

Search to see which forwarders reported in the previous 24 hours or if there is a delay in logs?

Hi there, is there a query to find out which forwarders have reported in the last day, or whether there is a delay in the logs? Thanks
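
Two common starting points. Forwarders announce themselves in the indexers' internal metrics, so a minimal sketch for "who reported in the last 24 hours" (run over a 24-hour time range):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as lastSeen by hostname
| eval minutesSinceLastSeen = round((now() - lastSeen) / 60)
| eval lastSeen = strftime(lastSeen, "%F %T")

And for spotting indexing delay, compare event time to index time (index=* can be expensive; narrow it to the indexes you care about):

index=* | eval lag_seconds = _indextime - _time | stats avg(lag_seconds), max(lag_seconds) by host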


Port issue with splunkd SSL

Hello, we want to enable splunkd SSL, so we set enableSplunkdSSL = true in server.conf. We generated a certificate using the FQDN as the CN. Then, in our add-on, we use splunk.getLocalServerInfo() to get the url:port. The problem is that splunk.getLocalServerInfo() always returns https://127.0.0.1:8089, even if we change mgmtHostPort in web.conf. As a result, we always get an error: SSLError: hostname '127.0.0.1' doesn't match xxxxxx, where xxxxxx is the CN we set for the certificate. So how should this work? Should we use 127.0.0.1 as the CN to create a cert? Or should we not call getLocalServerInfo()? Thanks!
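
If the certificate's CN is the FQDN, one workaround is to build the management URI from the machine's FQDN rather than relying on getLocalServerInfo(). A minimal sketch in the add-on's Python, assuming the default management port 8089:

import socket

# construct the splunkd management URI so TLS hostname verification
# matches a certificate whose CN is this machine's FQDN
mgmt_uri = "https://%s:8089" % socket.getfqdn()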

Multiple Kinesis inputs - GetShardIterator errors

We have created Kinesis streams in multiple regions within the same account. Each stream has the same name, though a different ARN because the service is region-specific (e.g. arn:aws:kinesis:us-west-2:123456789012:stream/), so they are distinct and distinguishable. The logs show the error below for all but one of the configured streams:

"An error occurred (InvalidArgumentException) when calling the GetShardIterator operation: StartingSequenceNumber 49575723201104542708550335494573616233923858177688862722 used in GetShardIterator on shard shardId-000000000000 in stream under account 870296345612 is invalid because it did not come from this stream."

It looks like this might be a bug in the way Kinesis data is getting loaded, but perhaps there's a setting in the conf that needs to be added?

How can I use a CSV of email addresses to search indexed data?

Hi everyone, I'm having a little trouble querying with a CSV and wondered if you could provide assistance. I have a CSV with a lot of email addresses. Layout of Emails.csv:

Emails
Email1@address.com
Email2@address.com
Email3@address.com

and so forth. The query I'm using is:

index=index1 sourcetype=MessageTracking
| search [ | inputlookup Emails.csv | rename Emails as address | fields address ]
| table address, directionality

directionality is a field in the sourcetype MessageTracking. Unfortunately I am getting no results from the query, although there are addresses both in that sourcetype and in the CSV that I've queried individually and do get results for. Any help would be appreciated. Thanks, Steve
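
The subsearch expands to address="Email1@address.com" OR address="Email2@address.com" OR ..., so it only matches if the events actually contain a field named address with those exact values. A minimal sketch that renames to the events' real field instead, where recipient_address is a hypothetical example; substitute whatever field in MessageTracking actually holds the address:

index=index1 sourcetype=MessageTracking
    [ | inputlookup Emails.csv
      | rename Emails as recipient_address
      | fields recipient_address ]
| table recipient_address, directionality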

Implementing a script-based Splunk search

What would be the best way to create a search where I can get my results and use them in my JavaScript file? I have a custom map that I'm using for my app, but I need to show the map markers when the user searches for a location.
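
Within a Splunk app, the usual route is SplunkJS: create a SearchManager in the JavaScript file and subscribe to its results model. A minimal sketch, where the search string and the lat/lon/label fields are placeholders for whatever your data actually provides:

require([
    "splunkjs/mvc",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function (mvc, SearchManager) {
    // run a search that tables out one row per map marker
    var mgr = new SearchManager({
        id: "markerSearch",
        search: "index=geo | table lat, lon, label",   // placeholder search
        earliest_time: "-24h@h",
        latest_time: "now"
    });

    var results = mgr.data("results");
    results.on("data", function () {
        results.data().rows.forEach(function (row) {
            // row is [lat, lon, label]; add a marker to the custom map here
        });
    });
});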

FREE Splunk that permits up to 500MB volume per day. How do I obtain this?

Hello! How do I obtain the free version of Splunk that permits up to 500MB of indexing volume per day? Is this something I need to contact Splunk sales for (to obtain a license key)? I have downloaded the Enterprise version and it cuts you off after 30 days. I just need a version/license I can keep indefinitely as I practice my Splunk-fu at home in non-work time. At work we have a licensed copy, but we never bothered to get a free version for testing purposes. Thanks in advance!

Routinely (24 times per day = 1 GET per hour) parse a section of an HTML page (i.e., a specific table) to output.txt. I need syntax/regex to parse the specific section, NOT all the source code.

Hello, I need to parse a specific table from a web page (I'm using PowerShell's $wc.DownloadString to download the source code) and output it to output.txt. If I pull the entire source code, I get duplicate events/data for obvious reasons, which then throws off my event counts (based on the repeats). I need to pull the exact section of the page 24 times a day (once per hour) and output it to a file. What I need: the regex syntax to search the HTML source code for a specific section/table. Should I use a named capture to identify the markup at the beginning and the end of the table, so I can output or index all the content within? Thanks in advance for your help!
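
A minimal PowerShell sketch, assuming the table can be identified by an id attribute in the markup (the URL and table id="stats" are placeholders):

# download the page and keep only the target table
$wc   = New-Object System.Net.WebClient
$html = $wc.DownloadString("https://example.com/status")    # placeholder URL

# (?s) lets '.' span newlines; the non-greedy .*? stops at the first closing tag,
# which is sufficient as long as the target table contains no nested tables
$m = [regex]::Match($html, '(?s)<table[^>]*id="stats"[^>]*>.*?</table>')
if ($m.Success) {
    $m.Value | Out-File -FilePath "C:\splunk_input\output.txt" -Encoding UTF8
}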

Append * to the end of values in a multivalue input

I have created a multivalue parser from suggestions on Splunk Answers, in the following form:

[stats count | eval src="$dashInSrc$" | makemv src delim="," | mvexpand src | fields src]

What I would like is to append an asterisk to the end of each value, to broaden the search to values that may be incomplete at input; i.e., these are hostnames being entered, and I would like to include * so that when an event logs the value as the FQDN, it will be matched as well.
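
Appending the wildcard is one extra eval after the mvexpand. A minimal sketch of the same subsearch:

[ stats count
| eval src="$dashInSrc$"
| makemv src delim=","
| mvexpand src
| eval src = src . "*"
| fields src ]

The subsearch then expands to src="host1*" OR src="host2*" OR ..., so a partial hostname still matches events carrying the FQDN.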

RemedyForce

Just want to ask how I can get incident data from RemedyForce so I can bring it into Splunk.

