Channel: Questions in topic: "splunk-enterprise"

How to troubleshoot event forwarding from forwarder to indexer

I somehow lost the custom stanzas on my forwarder for sending syslog data to my indexer. I noticed they were missing from the forwarder app on the deployment server, so I added them back, and I now see my custom input stanzas on the forwarder. I see events coming into the forwarder, but they are not being sent to the indexer, or at least they are not searchable on my search head. How do I debug this? Thanks!
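
A reasonable first round of checks (a sketch, assuming a default `$SPLUNK_HOME`; the forwarder hostname is a placeholder) is to confirm what output configuration the forwarder is actually running with, then look for connection errors in its own log:

    # On the forwarder: what outputs configuration is in effect, and is the
    # indexer configured as an active forward target?
    $SPLUNK_HOME/bin/splunk btool outputs list --debug
    $SPLUNK_HOME/bin/splunk list forward-server

    # Any TCP-output or blocked-queue errors?
    grep -iE "TcpOutputProc|blocked" $SPLUNK_HOME/var/log/splunk/splunkd.log

If `_internal` forwarding works at all, the forwarder's connection metrics are also visible from the search head:

    index=_internal host=<your_forwarder> source=*metrics.log* group=tcpout_connections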

Where does props.conf need to exist in a distributed deployment?

I think I need to push this from the deployment server to each instance, or at least to the forwarder and the search head. I have 5 servers making up my Splunk Enterprise deployment: 1 search head, 1 forwarder, 1 deployment server, and 2 indexers. My props.conf on the forwarder has this configuration for one data source:

    FIELDALIAS-severity_as_id = severity as severity_id
    FIELDALIAS-dst_as_dest = dst as dest
    EVAL-app = netwitness
    EXTRACT-subject = CEF\:\s+\d(?:\|[^\|]+){3}\|(?<subject>[^\|]+)

When I search, I am not seeing the `subject` field. Does this need to be pushed to the search head? How about the other instances? I am trying to understand this. Thanks!
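
For what it's worth, all of those settings (FIELDALIAS, EVAL, EXTRACT) are search-time settings, so they take effect on the search head; on a universal forwarder they do nothing. A sketch of the placement, with a hypothetical app name and sourcetype:

    # On the search head, e.g. $SPLUNK_HOME/etc/apps/my_app/local/props.conf
    [your_sourcetype]
    FIELDALIAS-severity_as_id = severity as severity_id
    FIELDALIAS-dst_as_dest = dst as dest
    EVAL-app = netwitness
    EXTRACT-subject = CEF\:\s+\d(?:\|[^\|]+){3}\|(?<subject>[^\|]+)

Index-time settings (LINE_BREAKER, TIME_FORMAT, TRANSFORMS-*) would instead belong on the indexers, or on a heavy forwarder if one parses the data before it reaches them.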

Issue with date parsing

I have an event like:

    {"app":"EventHub Service","caller":"kafka.go:110","fn":"gi.build.com/predix-data-services/event-hub-service/brokers.(*SaramaLogger).Println","lvl":"eror","msg":"Error closing broker -1 : write tcp 10.72.139.124:53006-\u003e10.7.18.82:9092: i/o timeout\n","t":"2017-09-13T15:26:56.762571201Z"}

I am seeing WARN messages in splunkd.log like:

    09-13-2017 15:11:50.289 +0000 WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event (Wed Sep 13 15:10:59 2017). Context: source::/var/log/event_hub.log|host::pr-dataservices-eh-event-hub-16|eventhub:service|217289

The date is correct in each of the events, so my questions are: Should I set TIME_FORMAT for this sourcetype to clean this up? Clearly Splunk is grumpy about it. Should I force Splunk to take the field `t` as the timestamp? This sourcetype's attributes are all system defaults; I am not setting anything locally. Any other thoughts are much appreciated!
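
Pointing timestamp recognition explicitly at the `t` field should quiet the warning. A minimal props.conf sketch, assuming the sourcetype is `eventhub:service` (as the log's Context line suggests), applied wherever the data is parsed (indexers or a heavy forwarder):

    [eventhub:service]
    TIME_PREFIX = "t":"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9NZ
    MAX_TIMESTAMP_LOOKAHEAD = 40

`%9N` covers the nine sub-second digits in the sample; verify the format string against a few live events before rolling it out.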

Configuring forwarders with the deployment server

All, I have successfully deployed an app based on the Splunk documentation for creating a "send_to_indexer" app. The client is checking in, but I can't figure out how to modify the client from there. Here's what I'm after: I manually installed the UF on the server and selected the Security logs, and I'm getting those with no issues. Now I want to collect the System logs as well, and I was hoping to do that by modifying the app rather than reconfiguring the UF by hand, but I can't find any documentation on doing it this way; maybe the deployment server isn't used for this? Is there a way to modify which logs you're collecting, and the index they're sent to, from the deployment server without having to manually update all servers?
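
This is exactly what deployment apps are meant for: inputs defined in the deployed app override what was selected at install time. A sketch, assuming the app lives under `$SPLUNK_HOME/etc/deployment-apps/send_to_indexer/` on the deployment server and using a placeholder index name:

    # deployment-apps/send_to_indexer/local/inputs.conf
    [WinEventLog://Security]
    disabled = 0
    index = wineventlog

    [WinEventLog://System]
    disabled = 0
    index = wineventlog

After `splunk reload deploy-server`, clients pick up the changed app on their next check-in; set `restartSplunkd = true` on the serverclass so the new input actually starts.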

Unable to load Custom Algorithm in Splunk ML Toolkit

I followed the docs (http://docs.splunk.com/Documentation/MLApp/2.4.0/API/Registeranalgorithm) to load the MLPRegressor algorithm from scikit-learn into Splunk. I added the entry `[MLPRegressor]` to algos.conf, created a new file `MLPRegressor.py` under `SPLUNK_HOME\etc\apps\Splunk_ML_Toolkit\bin\algos`, copied the algorithm code into it, and restarted Splunk. Now, when I apply the algorithm to Predict Numeric Fields, it fails with the error below.

**SEARCH**

    | inputlookup server_power.csv
    | fit MLPRegressor hidden_layer_sizes=1 activation=logistic

**ERROR**

    09-13-2017 12:42:16.516 INFO ChunkedExternProcessor - Running process: "C:\Program Files\Splunk\bin\python.exe" "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\fit.py"
    09-13-2017 12:42:17.028 INFO ChunkedExternProcessor - stderr: Running C:\Program Files\Splunk\etc\apps\Splunk_SA_Scientific_Python_windows_x86_64\bin\windows_x86_64\python.exe C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin\fit.py
    09-13-2017 12:42:18.650 ERROR ChunkedExternProcessor - Error in 'fit' command: Error while initializing algorithm "MLPRegressor": Failed to load algorithm "algos.MLPRegressor"
    09-13-2017 12:42:18.650 INFO UserManager - Unwound user context: NULL -> NULL
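
"Failed to load algorithm" generally means the module either failed to import or did not expose a class with the expected name. Per the registration docs linked above, the .py file must define a wrapper class (named the same as the file and the algos.conf stanza) that subclasses the toolkit's BaseAlgo and instantiates the scikit-learn estimator; pasting raw scikit-learn source will not load. A sketch of the registration side, assuming default paths:

    # $SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/local/algos.conf
    [MLPRegressor]

    # $SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/bin/algos/MLPRegressor.py must then
    # define:   class MLPRegressor(BaseAlgo): ...
    # wrapping sklearn.neural_network.MLPRegressor per the "Register an
    # algorithm" docs; any import error in this file produces the same
    # "Failed to load" error.

Running the file manually with the bundled interpreter shown in the log (the Splunk_SA_Scientific_Python python.exe) is a quick way to surface a hidden import error.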

Getting F5 data into the Enterprise Security data models

The F5 logs are sent to Splunk via syslog. However, the messages are probably not being parsed correctly, because many fields are populated with the value "unknown". How can we deal with this? What is the right configuration to correctly map the log data into the data models? Thank you for your reply. Regards, Laurent Ripaux
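
A common cause is the events arriving under a generic syslog sourcetype, in which case the F5 add-on's field extractions and CIM tags never apply and the data models see nothing useful. A first sanity check (the index name is a placeholder):

    index=your_f5_index | stats count by sourcetype

If the sourcetype is generic, assigning the sourcetypes expected by the Splunk Add-on for F5 BIG-IP (with the add-on installed on the search head and indexers) is usually what produces the CIM fields and tags the Enterprise Security data models are built on.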

syslog-ng for Splunk

I've installed syslog-ng on a standalone Splunk instance but cannot get it working. I've followed this guide: https://www.splunk.com/blog/2016/03/11/using-syslog-ng-with-splunk.html

Using a syslog generator, I can send a message directly to Splunk as a direct input. But when I disable that and configure syslog-ng, the service starts and is listening, yet nothing is written to a file:

    [root@centos-6-1 syslog-ng]# netstat -anp | grep 514
    udp        0      0 0.0.0.0:514       0.0.0.0:*       13833/syslog-ng

Sending a facility 7 syslog message from the command line:

    SyslogGen.exe -t:x.x.x.x -f:7 -s:7 -h:myhost -m:"Too many bytes.\x0D\x0A"

My syslog-ng configuration:

    @version:3.2
    # syslog-ng configuration file.
    #
    # This should behave pretty much like the original syslog on RedHat. But
    # it could be configured a lot smarter.
    #
    # See syslog-ng(8) and syslog-ng.conf(5) for more information.

    options {
        flush_lines (0);
        time_reopen (10);
        log_fifo_size (1000);
        long_hostnames (off);
        use_dns (no);
        use_fqdn (no);
        create_dirs (no);
        keep_hostname (yes);
    };

    source s_sys { udp(port(514)); };

    #destination d_cons { file("/dev/console"); };
    destination d_mesg { file("/opt/syslog-ng/$HOST/$YEAR-$MONTH-$DAY-test.log"); };
    #destination d_auth { file("/var/log/secure"); };
    #destination d_mail { file("/var/log/maillog" flush_lines(10)); };
    #destination d_spol { file("/var/log/spooler"); };
    destination d_boot { file("/opt/syslog-ng/$HOST/$YEAR-$MONTH-$DAY-test1.log"); };
    #destination d_cron { file("/var/log/cron"); };
    #destination d_kern { file("/var/log/kern"); };
    #destination d_mlal { usertty("*"); };

    #filter f_kernel { facility(kern); };
    filter f_default { level(info..emerg) and not (facility(mail) or facility(authpriv) or facility(cron)); };
    #filter f_auth { facility(authpriv); };
    #filter f_mail { facility(mail); };
    #filter f_emergency { level(emerg); };
    filter f_boot { facility(local7); };
    #filter f_cron { facility(cron); };

    #log { source(s_sys); filter(f_kernel); destination(d_cons); };
    #log { source(s_sys); filter(f_kernel); destination(d_kern); };
    log { source(s_sys); filter(f_default); destination(d_mesg); };
    #log { source(s_sys); filter(f_auth); destination(d_auth); };
    #log { source(s_sys); filter(f_mail); destination(d_mail); };
    #log { source(s_sys); filter(f_emergency); destination(d_mlal); };
    #log { source(s_sys); filter(f_news); destination(d_spol); };
    log { source(s_sys); filter(f_boot); destination(d_boot); };
    #log { source(s_sys); filter(f_cron); destination(d_cron); };

    # vim:ft=syslog-ng:ai:si:ts=4:sw=4:et:

Thanks!
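
One detail in the config itself would explain files never appearing: both active destinations write into per-host directories (`/opt/syslog-ng/$HOST/...`), but `create_dirs (no)` forbids syslog-ng from creating directories, so the writes fail silently unless those directories already exist. A sketch of the likely fix:

    options {
        # ...existing options...
        create_dirs (yes);  # allow /opt/syslog-ng/$HOST/ to be created
    };

On CentOS it is also worth checking whether SELinux is denying writes under the non-standard /opt/syslog-ng path (look for denials in /var/log/audit/audit.log).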

How to compare 2 fields across 2 sourcetypes and return only events unique to the second sourcetype

I have 2 sourcetypes, A and B, each with 2 important fields: SSN and number. I want to compare all of the SSN/number pairs from sourcetype A to sourcetype B, then return only the results that show up solely in sourcetype B.

Sourcetype A:

    SSN        number
    #####1111  12345   (drop: matches B)
    #####2222  12345   (drop: it is sourcetype A, even though it doesn't match)

Sourcetype B:

    SSN        number
    #####1111  12345   (drop: matches A)
    #####2222  11111   (keep: doesn't match anything in A, and it is sourcetype B)

I am really stuck on this one, not even sure where to start.
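
One standard pattern is to bring both sourcetypes into a single search, group by the SSN/number pair, and keep only the pairs seen exclusively in B (a sketch; the index and sourcetype names are placeholders):

    index=your_index (sourcetype=A OR sourcetype=B)
    | stats values(sourcetype) AS seen_in BY SSN, number
    | where mvcount(seen_in)=1 AND seen_in="B"

`values(sourcetype)` collects which sourcetypes each pair appeared in, so a pair that exists only in B survives the final `where`.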

How do I tell if we are using Splunk Web?

I am using Splunk Enterprise 6.6.1, and there is a security vulnerability that exploits Splunk Web and is resolved in 6.6.3. Looking at the services on the box, there is a "splunkweb (for legacy purposes only)" service that is not running, which suggests we don't use Splunk Web, yet I can still access Splunk from the web interface. How can I find out for sure whether I am exposed to this vulnerability?
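
Since Splunk 6.2, Splunk Web runs inside splunkd itself; the separate legacy splunkweb service is expected to be stopped, so its state says nothing about exposure, and being able to log in through the browser means Splunk Web is active. A quick confirmation (assuming a default install; use findstr in place of grep on Windows):

    $SPLUNK_HOME/bin/splunk show web-port
    $SPLUNK_HOME/bin/splunk btool web list settings | grep -i startwebserver

If `startwebserver` is 1 (the default), Splunk Web is enabled and the 6.6.3 fix applies to you.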

Kinesis Flowlogs - Data not displaying in dashboards

We have a large number of separate AWS accounts that we collect VPC Flow Log data from. Each account pushes to a centralized account that has Kinesis streams deployed in all of our active regions, and we have an input created for each region's stream. The data appears to be indexed properly in Splunk, but the Splunk App for AWS flow log dashboards don't display any data. Looking at the searches behind the dashboards, they are set to look for `source="dest_port"` or `source="src_ip"`, while our ingested data looks more like `source="us-east-1:VPCFlowLogs:eni-12345678-all"` for this part of the event. Is this input type incompatible with Kinesis streams that receive cross-account input? We would really like to avoid having to pay for and manage streams in every account separately, as well as configure new inputs every time we add a new account.
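
It is worth confirming which sourcetype the Kinesis input assigns: the app's flow-log dashboards are built around VPC Flow Log events with sourcetype `aws:cloudwatchlogs:vpcflow`, so data landing under another sourcetype (or with source values shaped differently than the dashboards' base searches expect) will index fine yet leave the panels empty. A quick check (the index name is a placeholder):

    index=your_aws_index | stats count by sourcetype, source

If the sourcetype differs, overriding it on the Kinesis input, or adjusting the dashboards' base searches, may be less painful than running per-account streams.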

Key-value pair extraction regex

We have some SNMP data and want to extract it as key-value pairs.

Sample:

    var.12345.5.5 = INTEGER: 10 myTag::var.12345.5.9 = STRING: "abc" myTag::var.12345.5.3 = STRING: "admin" myTag::var.12345.5.4 = STRING: "developer"
    var.12345.5.5 = INTEGER: 10 myTag::var.12345.5.9 = STRING: "xyz" myTag::var.12345.5.3 = STRING: "user1" myTag::var.12345.5.4 = STRING: "support"

Required output:

    var_12345_5_9,var_12345_5_3,var_12345_5_4
    abc,admin,developer
    xyz,user1,support

I tried a basic transform:

    REGEX = var\.([^ ]+)\s=\s(\S+)
    FORMAT = $1::$2
    CLEAN_KEYS = true

Can you please help me improve it?
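
Since the required output covers only the `myTag::` entries and their values are quoted strings, anchoring the regex on that prefix and capturing inside the quotes gets closer to the goal; `CLEAN_KEYS` then rewrites the dots in the key to underscores. A transforms.conf sketch (stanza and sourcetype names are placeholders):

    # transforms.conf
    [snmp_mytag_kv]
    REGEX = myTag::(var\.[\d.]+)\s*=\s*STRING:\s*"([^"]*)"
    FORMAT = $1::$2
    CLEAN_KEYS = true

    # props.conf
    [your_snmp_sourcetype]
    REPORT-snmp_kv = snmp_mytag_kv

With CLEAN_KEYS enabled, `var.12345.5.9` becomes the field `var_12345_5_9` with value `abc`, matching the required output.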

dynamically set earliest from subsearch

Hi folks, I've been all over this site and Google and haven't found a working solution. I'm trying to perform a search that uses a subsearch to populate `earliest=`:

    | tstats min(_indextime) as firstTime, max(_indextime) as lastTime
        where earliest=[ | rest /services/admin/indexes splunk_server=localhost
                         | search title=syslog
                         | eval dy = (frozenTimePeriodInSecs/86400)
                         | eval earli="-" . dy . "d@d"
                         | fields earli ] index=syslog by index
    | eval delta = (lastTime - firstTime)
    | eval yr = floor(delta/86400/365)
    | eval dy = (delta/86400) % 365
    | eval actual_ret = yr . " years, " . dy . " days"
    | eval lastTime=strftime(lastTime,"%Y-%m-%d %H:%M:%S"), firstTime=strftime(firstTime,"%Y-%m-%d %H:%M:%S")
    | fields index, firstTime, lastTime, delta, actual_ret
    | join index
        [| rest /services/admin/indexes splunk_server=localhost
         | eval yr = floor(frozenTimePeriodInSecs/86400/365)
         | eval dy = (frozenTimePeriodInSecs/86400) % 365
         | eval ret = yr . " years, " . dy . " days"
         | eval index=title
         | stats avg(currentDBSizeMB) as currentDBSizeMB, avg(maxTotalDataSizeMB) as maxTotalDataSizeMB, max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs, max(ret) by index
         | eval pct_data=(currentDBSizeMB/maxTotalDataSizeMB) * 100]
    | eval pct_ret = (delta/frozenTimePeriodInSecs)*100

This gives the error:

> Invalid value "(" for time term 'earliest'

I also tried this subsearch, with the same result:

    earliest=[ | rest /services/admin/indexes splunk_server=localhost
               | search title=syslog
               | eval dy = (frozenTimePeriodInSecs/86400)
               | fields dy ]

Both subsearches return correct results on their own:

    | rest /services/admin/indexes splunk_server=localhost
    | search title=syslog
    | eval dy = (frozenTimePeriodInSecs/86400)
    | eval earli="-" . dy . "d@d"
    | fields earli

returns `-365d@d`, and

    | rest /services/admin/indexes splunk_server=localhost
    | search title=syslog
    | eval dy = (frozenTimePeriodInSecs/86400)
    | fields dy

returns `365`. How can I get the subsearch value to be used by `earliest=`?
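
A subsearch normally hands its result back as `(field="value")`, and that leading parenthesis is exactly what the time term chokes on. The `return` command emits a bare `earliest="-365d@d"` instead, which `earliest=` can parse. A sketch of the first part of the search rewritten this way:

    | tstats min(_indextime) AS firstTime, max(_indextime) AS lastTime
        where index=syslog
        [ | rest /services/admin/indexes splunk_server=localhost
          | search title=syslog
          | eval earliest = "-" . (frozenTimePeriodInSecs/86400) . "d@d"
          | return earliest ]
        by index

Note the subsearch is no longer on the right side of `earliest=`; it expands in place inside the `where` clause as `earliest="-365d@d"`.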

How to resolve the warning "Metadata results may be incomplete: 100000 entries have been received from all peers, and this search will not return metadata information for any more entries."

I have a query as follows.

**PART 1** (just a CSV lookup listing a set of hosts):

    | inputlookup ABCD
    | search Forward="Yes"
    | table Region, IPHost, ip_address
    | rename Region AS my_region, IPHost AS my_hostname, ip_address AS my_ip

**PART 2** (checks whether each of those hosts actually reported in the last 24 hours):

    | join type=left my_hostname
        [| metadata type=hosts index=*
         | rename host AS my_hostname]
    | eval lastTime=coalesce(lastTime,0)
    | eval timeDiff=now()-lastTime
    | eval last_seen_in_24_hours=if(timeDiff>86400,"NO","YES")
    | eval lastReported=if(lastTime=0,"never",strftime(lastTime,"%F %T"))
    | table my_region, my_hostname, last_seen_in_24_hours, lastReported

![alt text][1]

  [1]: /storage/temp/213588-dashboard.png

As you can see above, all 3 hosts show as "NO" in the dashboard, meaning they have not reported in the last 24 hours, but all 3 hosts actually are reporting. To investigate further, I checked whether part 2 works for a single host that shows as not reporting even though it is:

    | metadata type=hosts index=* | search host="abcd"

Result: no results found.

**Warning:** Metadata results may be incomplete: 100000 entries have been received from all peers (see parameter maxcount under the [metadata] stanza in limits.conf), and this search will not return metadata information for any more entries.

Is there any way I can filter the search or exclude those extra results? I think the warning above is causing the issue and displaying all the hosts as "NO" instead of "YES" even though they are reporting. Other than modifying limits.conf, is there a way to change my search to look only at the CSV hosts instead of walking through all the entries? Any suggestions would be really helpful.
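
One way around the metadata cap (a sketch that keeps your lookup and field names) is to replace `| metadata` with `| tstats`, which respects the search time range and is not subject to the [metadata] maxcount limit:

    | inputlookup ABCD
    | search Forward="Yes"
    | rename Region AS my_region, IPHost AS my_hostname, ip_address AS my_ip
    | join type=left my_hostname
        [| tstats max(_time) AS lastTime where index=* by host
         | rename host AS my_hostname]
    | eval lastTime=coalesce(lastTime,0)
    | eval last_seen_in_24_hours=if(now()-lastTime>86400,"NO","YES")
    | eval lastReported=if(lastTime=0,"never",strftime(lastTime,"%F %T"))
    | table my_region, my_hostname, last_seen_in_24_hours, lastReported

Run it over a time range at least as long as the reporting window you care about (e.g. last 7 days), since tstats only sees events inside the range.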

Connecting Splunk to Tableau Issue

I'm trying to connect Splunk to Tableau so I can create Tableau visualizations from my Splunk reports. I am using Tableau version 10.3 and installed Splunk ODBC 2.1.1. I'm sure that I entered the correct server and credentials, but I'm receiving the error below.

**[Splunk][SplunkODBC] (40) Error with HTTP API, error code: Couldn't connect to server**

I'm not sure how to triage this problem. Any input would be appreciated. Thanks!
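
"Couldn't connect to server" happens before authentication, so it usually points at the host or port rather than the credentials. The Splunk ODBC driver talks to splunkd's management port (8089 by default), not the web port, so the DSN should name that port. A reachability test from the Tableau machine (host and credentials are placeholders):

    curl -k https://your_splunk_server:8089/services/server/info -u admin:changeme

If that fails too, a firewall between Tableau and port 8089 is the likely culprit.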

How to combine multiple separate fields into one for graphing purposes

I have events like:

    2017-09-12 12:31:11.817 INFO [RunMaster] stats: jif: 1, fif: 9, fim: 192, f2c: 183 paper: pc: 9129, uwr: n/a, rwr: n/a side-a: fa: 0, fmq: 0, fq: 0, fp: 96, #r: 49, frs: 0, f2f ms: 101, fb100 0.00 side-b: fa: 0, fmq: 0, fq: 9, fp: 87, #r: 49, frs: 0, f2f ms: 101, fb200 0.00

I want to pull out the values for fa, fmq, fq, and fp, but also associate them with either side-a or side-b, so I can graph these values side by side while also comparing side-a vs side-b. Currently I can pull out one side's info into separate fields using this regex:

    [.\n]*side-a:\sfa:\s(?<fa>\d+),\sfmq:\s(?<fmq>\d+),\sfq:\s(?<fq>\d+),\sfp:\s(?<fp>\d+)

I tried also setting this regex equal to side-a but got no results. Any suggestions on how I can do this? Thanks.
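
One approach is two `rex` calls with side-suffixed field names, which keeps both sides on the same event so they chart side by side (a sketch; the base search is a placeholder):

    index=your_index "RunMaster"
    | rex "side-a:\s+fa:\s(?<fa_a>\d+),\s+fmq:\s(?<fmq_a>\d+),\s+fq:\s(?<fq_a>\d+),\s+fp:\s(?<fp_a>\d+)"
    | rex "side-b:\s+fa:\s(?<fa_b>\d+),\s+fmq:\s(?<fmq_b>\d+),\s+fq:\s(?<fq_b>\d+),\s+fp:\s(?<fp_b>\d+)"
    | timechart avg(fq_a) AS "side-a fq" avg(fq_b) AS "side-b fq"

The same pattern extends to fa, fmq, and fp; swap the timechart aggregations for whichever pairs you want to compare.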

Where can I find the complete documentation of configuration options for universal forwarder?

In the Forwarder manual (http://docs.splunk.com/Documentation/Forwarder/6.6.3/Forwarder/Abouttheuniversalforwarder), there is a section "Configure the universal forwarder" that lists some example configuration tasks. What I am looking for is complete reference documentation listing all the available configuration settings for the universal forwarder. The section "Configure forwarding with outputs.conf" ends with a list of common attributes; where can I find the complete list of attributes in outputs.conf for the universal forwarder only, and likewise the complete attribute lists for the other .conf files?
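
The authoritative per-attribute references are the .conf.spec files. They are published in the Splunk Enterprise Admin Manual's configuration file reference, and, more usefully here, the universal forwarder ships its own copies covering exactly the settings it supports. To browse them locally (assuming a default install path):

    ls $SPLUNK_HOME/etc/system/README/
    less $SPLUNK_HOME/etc/system/README/outputs.conf.spec

There is no separate UF-only manual; the spec files in the forwarder's own README directory are the closest thing to a complete UF-specific reference.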

Dashboard to view the list of users belonging to an AD group in LDAP?

I am trying to build a dashboard with a dropdown listing users, which I can use to view each user's AD group, roles, and permissions. I tried the REST query `/services/authentication/users`, but I can't get the AD group from it. If anyone has a similar dashboard, can you please post the source code?
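
The users endpoint exposes each user's Splunk roles, but not the AD group itself; with LDAP authentication, groups are mapped to roles in the strategy's configuration (authentication.conf), so per-user output only ever shows the resulting roles. A starting point for the dropdown's populating search (a sketch):

    | rest /services/authentication/users splunk_server=local
    | table title realname roles

From there, the role-to-group mapping has to be read from the LDAP strategy configuration rather than from the user objects.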

How many times can I take the final exam for the Splunk Fundamentals 1 course?

1. How many attempts are allowed at the final exam for this course? 2. What is the duration of the certification course? 3. How many questions will there be?

