I have seen a few other questions similar to this one, but none match exactly, and their solutions do not work.
In my cluster master log, I am seeing the following error repeatedly:
01-08-2016 23:37:42.853 +0000 WARN DistributedPeerManagerHeartbeat - Unable to get server info from peer: http://:8089 due to: Connection reset by peer
On the indexer, I see the following:
08-02-2014 18:11:42.033 -0700 WARN HttpListener - Socket error from <cmaster ip> while idling: error:1407609C:SSL routines:SSL23_GET_CLIENT_HELLO:http request
The indexer is connecting to the master, since I can see it in the master's indexer clustering Peers and Indexes tabs.
This appears to be an SSL issue, but I cannot figure out where. The indexer says it is connecting with http, but I would expect it to connect with https. Where is this set? The indexer is connecting to the master with the following server.conf stanza:
[clustering]
master_uri = https://:8089
mode = slave
pass4SymmKey =
I verified that all passwords are correct.
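For reference, my understanding is that whether splunkd expects HTTPS on the management port is controlled in server.conf, along these lines (a sketch of the default; I have not yet confirmed the values on either host):
[sslConfig]
# enableSplunkdSSL defaults to true, meaning port 8089 expects HTTPS.
# A peer connecting with plain http:// would then trigger SSL errors
# like the SSL23_GET_CLIENT_HELLO one above.
enableSplunkdSSL = true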
↧
Why am I seeing "DistributedPeerManagerHeartbeat - Unable to get server info from peer... due to connection reset" on my cluster master log?
↧
How to extract and apply header information to every log line?
Hello Splunk Gurus,
The file below contains a header of 7 lines followed by an undetermined number of log lines. I would like the header to apply to each and every log line. For instance, I would like to be able to search on Version=6 and find all log lines associated with that version (a rough search sketch follows the sample below).
Timestamp=2016-01-08T14:29:20
SmartRecorderSN=HL3BC085
Version=6
FirmwareVersion=3.09.14
EventDurationSetpoint=30
BlackoutSetpoint=5
Iteration=322966
TRAT2,HL3BC085201601081429001212ER.SDE,2016-01-08T14:29:01,521,0.0004,0.000000,0.000000,1.0000,-1.5000,1,-0.0016,1.0000
ECR,HL3BC085201601081429001212ER.SDE,9,2016-01-08T14:29:00,00000,1,521,3326464,0,0.000000,0.000000
ECR,HL3BC085201601081429135674CDR.SDE,9,2016-01-08T14:29:13,00000,1,429,3345602,0,0.000000,0.000000
TRC,HL3BC085201601081429135674CDR.SDE,2016-01-08T14:29:13,429,0.000000,0.000000,0,0,30,1,1,0,-1,-1
TRAT2,HL3BC085201601081429291213ER.SDE,2016-01-08T14:29:27,521,0.0004,0.000000,0.000000,1.0000,-1.5000,1,-0.0016,1.0000
ECR,HL3BC085201601081429291213ER.SDE,9,2016-01-08T14:29:27,00000,1,521,3388928,1,0.000000,0.000000
ECR,HL3BC085201601081429435675CDR.SDE,9,2016-01-08T14:29:43,00000,1,429,3357073,0,0.000000,0.000000
TRC,HL3BC085201601081429435675CDR.SDE,2016-01-08T14:29:43,429,0.000000,0.000000,0,0,30,1,1,0,-1,-1
EndTimeStamp=2016-01-08T14:30:02
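For illustration, the kind of search-time approach I am imagining is something like this (an untested sketch; the sourcetype name is made up, and the events must be sorted into file order for filldown to carry the header value forward):
sourcetype=smartrecorder
| rex "^Version=(?<hdr_version>\d+)"
| sort 0 _time
| filldown hdr_version
| search hdr_version=6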
Kind Regards,
Rob
↧
↧
Why does Splunk Web sometimes not show the event data for a search unless I restart?
Splunk Web doesn't show the events at times. If I restart and log in, it will show the events, but after some time events are not displayed again. It shows the total event count, but the details are not displayed.
![alt text][1]
Also, the main page doesn't show the summary of events indexed. Usually it shows total events and indexes.
![alt text][2]
[1]: /storage/temp/78254-splunk.jpg
[2]: /storage/temp/78255-splunk1.jpg
What could be the problem? This leads to me restarting the splunkd service every time.
↧
Can I set up Splunk so that only certain forwarders use encryption?
Hi,
I have a request from a customer to encrypt their feed to Splunk. The doc looks pretty simple, but after reading it, my impression is that all forwarders would then have to be configured to use encryption. Is that correct?
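For context, what I am hoping is possible is a second, SSL-only receiving port that only this customer's forwarders point at, roughly like this (a sketch; the port, hostname, and cert paths are made up):
# inputs.conf on the indexer (the plain splunktcp port stays for everyone else)
[splunktcp-ssl:9998]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
password = mycertpassword

# outputs.conf on the customer's forwarders only
[tcpout:encrypted]
server = indexer.example.com:9998
sslCertPath = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = mycertpassword
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem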
↧
How to write a search to find who deleted or modified files on a Windows server for the last 24 hours?
I am very new to Splunk. I have installed a Splunk forwarder to monitor Windows Security logs, but I would also like to build a search for who deleted or modified files/folders in the last 24 hours. Please point me in the right direction. Also, is it possible to prompt for the server name or file name when the search is run? Thanks.
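Something along these lines is what I am trying to build (a sketch only; EventCode 4660 is object deletion and 4663 is object access in Windows Security auditing, and the user/Object_Name field names assume the Splunk Add-on for Microsoft Windows extractions):
index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4660 OR EventCode=4663) earliest=-24h
| table _time host user EventCode Object_Name
For the prompting part, my understanding is that a dashboard form input can feed a token such as $server$ into a host= clause of the search.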
↧
↧
How can I export all items from Settings>Searches, Reports, and Alerts?
I'm looking to export/retrieve all content from Settings > Searches, Reports, and Alerts, so I can build a reference document listing my alerts/reports with the underlying search. Is there a simple way to pull these from a location in the OS file structure instead of manually recording them from the UI? Tedious task, I know!
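One route I am wondering about, instead of scraping the UI, is the REST endpoint for saved searches (a sketch); on disk, the underlying entries live in savedsearches.conf under $SPLUNK_HOME/etc/apps/<app>/local and <app>/default:
| rest /servicesNS/-/-/saved/searches
| table title eai:acl.app search cron_schedule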
↧
Is using SplunkCimLogEvent logging best practice?
Hi,
I came across "**Splunk Logging best practices**" article ([http://dev.splunk.com/view/logging-best-practices/SP-CAAADP6][1]) and it seemed like using the provided **SplunkCimLogEvent** class would help in logging correctly, however, after a very brief trial I came across 2 things that seemed to contradict Splunk's own best practice advice.
1. Key=value pairs are not comma separated but space separated, i.e. key=value key=value rather then key=value, key=value
2. Both the key and the value are quoted together, i.e. "Key=Value" rather than just key="value" or "key"="value"
Anyone have experience in using the SplunkCimLogEvent class for logging from Java apps?
Has it helped or should I just stick with what we have, which is direct logging of key=value pairs?
The main thing I was interested in was SplunkCimLogEvent's support for exception/stack trace logging. Does anyone find this useful, or should I again just stick with logging exceptions the normal way, across multiple lines in the logfile?
I used the dependency com.splunk.logging:splunk-library-javalogging:1.5.0.
[1]: http://dev.splunk.com/view/logging-best-practices/SP-CAAADP6
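For concreteness, the usage pattern from my brief trial looked roughly like this (a sketch; the event name, ID, and field values are made up):
import com.splunk.logging.SplunkCimLogEvent;
import java.util.logging.Logger;

public class CimTrial {
    private static final Logger logger = Logger.getLogger("splunk.logger");

    public static void main(String[] args) {
        // "order-event" and "evt-001" are placeholder names for this sketch.
        SplunkCimLogEvent event = new SplunkCimLogEvent("order-event", "evt-001");
        event.addField("user", "jsmith");
        event.addField("action", "purchase");
        try {
            throw new IllegalStateException("demo failure");
        } catch (Exception e) {
            // The stack trace support mentioned above: the trace is serialized
            // into the event instead of spanning multiple log lines.
            event.addThrowableWithStacktrace(e);
        }
        logger.info(event.toString());
    }
}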
↧
Splunk DB Connect: I've connected to a DashDB (DB2) database, but why can't I see the DB2 source in the sources page?
I've successfully connected to a DashDB (DB2) database from Splunk. I went through the documentation and made sure that all the drivers are installed, but I still can't see the DB2 source in the sources page. Is there something else I need to do?
Thanks
I've added some screenshots.
![alt text][1]
![alt text][2]
[1]: /storage/temp/78260-33.png
[2]: /storage/temp/78262-44.png
↧
High splunkd memory usage on datamodel acceleration
I currently have the following setup.
3 x search heads (8 CPU, 16 GB memory)
2 x indexers (8 CPU, 16 GB)
Currently I'm only indexing around 10 GB per day worth of data; 80% is from the Splunk App for NetApp. I have datamodel acceleration enabled with a summary of 1 month of history on a cron of every 5 minutes.
Currently the datamodel acceleration runs for about 2-3 minutes, and during that time the memory usage of the splunkd process reaches 16 GB, causing kernel OOM errors that kill the process. This causes Splunk to crash on the indexer. I've tried the suggestion of implementing cgconfig rules that limit the splunk user to 12 GB maximum memory usage, but I find this to be a workaround at best; killing Splunk child processes shouldn't be needed.
To see how much memory it could use, I created a 3rd indexer with double the resources of the original 2 (16 CPU and 32 GB memory). In this case, when the datamodel acceleration job was running, it used 32 GB and caused OOM errors to appear in /var/log/messages.
My questions:
1. Has anyone else seen such high memory usage on indexers when datamodel acceleration runs?
2. The Splunk App for NetApp datamodel is quite large, with hundreds of fields. Does the number of fields in the datamodel equate to higher memory usage during datamodel updates?
3. Does reducing the datamodel summary range (from 1 month to, say, 7 days) have an impact on memory usage during datamodel updates?
The only thing I can think of right now is creating a custom datamodel with the fields that I need. If anyone has any solutions to try other than a new datamodel, I'm all ears.
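For reference, the knobs I am planning to experiment with are in datamodels.conf, roughly like this (a sketch; the stanza name is made up):
[My_Custom_Datamodel]
acceleration = 1
# Question 3: shrink the summary range from 1 month to 7 days
acceleration.earliest_time = -7d
acceleration.cron_schedule = */5 * * * *
# Limit how many concurrent acceleration searches run for this model
acceleration.max_concurrent = 1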
↧
↧
inputs.conf and props.conf for a new setup
Sorry newbie questions.
I have been looking at trying my hand at customizing the setup, instead of using the GUI.
These are from things I have tried and read in the docs.
The idea would be to set up the input folders in the inputs.conf file with "monitor" to grab the logs, then use the props.conf file with "rule" to set the sourcetype for the logs.
The next thing I am going to do is set up log parsing to line-break before the log events.
**A.**
I created an inputs.conf and a props.conf file.
I added them to this folder, and Splunk did not read the inputs.conf file:
D:\Splunk\etc\apps\ZINPUTS\defaults\inputs.conf
I moved it to this folder, and then Splunk read it:
D:\Splunk\etc\system\local\inputs.conf
I want to create a config, like an app, that I could copy from one server to another. Where should I put my custom conf files?
Is there a CLI command to output which conf files Splunk reads?
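On the CLI question, I believe btool prints the merged configuration and, with --debug, the file each setting came from. (I also notice I used a "defaults" folder above; Splunk reads "default" and "local", which may be why that copy was ignored.)
D:\Splunk\bin> splunk btool inputs list --debug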
**B.**
Monitoring:
I created this monitor for each folder. I added recursive=true just to remind me what the default setting is.
I have 40 folders that I will monitor.
This does not seem to work (a corrected sketch follows the folder list below).
[monitor://D:\SplunkData\7641\logform1\...\*.log]
recursive = true
I would like to read logs from the following folders:
D:\SplunkData\7641\logform1\*.log
D:\SplunkData\7641\logform1\day1\*.log
D:\SplunkData\7641\logform1\day2\*.log
D:\SplunkData\7641\logform1\day1\hour1\*.log
D:\SplunkData\7641\logform1\day1\hour2\*.log
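What I think I actually want is to monitor the directory itself and whitelist the extension, since monitor inputs recurse by default (a sketch):
[monitor://D:\SplunkData\7641\logform1]
# whitelist matches against the full path; recursion is the default
whitelist = \.log$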
**C.**
props.conf
I am thinking I would use props.conf and rules to set the sourcetype of the logs.
The name of the application appears on line 5 of each log file. Can I do this to find it and identify the log as the sourcetype:
[rule::logform1]
sourcetype=logform1
REGEX=\t\tlogform1.exe
Currently this throws an error when I start Splunk:
Invalid key in stanza [rule::logform1] in D:\Splunk\etc\system\local\props.conf, line 3: REGEX (value: \t\tlogform1.exe).
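From the props.conf spec, my understanding is that rule:: stanzas take percentage-based MORE_THAN_/LESS_THAN_ attributes rather than REGEX, so perhaps (an untested sketch):
[rule::logform1]
sourcetype = logform1
# More than 0% of sampled lines must contain the pattern
MORE_THAN_0 = \t\tlogform1\.exe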
**D.**
I am not sure what I will do here. I would like to set the break between records; there are four record types in one log file, and I would like to break when these appear.
2016-01-07 15:07:30.879 DBUG
15:10:44.072_F_F_8837002
15:10:44.072 Int
Via: SIP/2.0/ UDP
Note: There are several more, but these are some of them.
I was going to use "BREAK_ONLY_BEFORE" for each of these log events.
Any ideas here?
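As a starting point, I was thinking the patterns could be OR'd into a single BREAK_ONLY_BEFORE in props.conf (a sketch; the regexes only cover the examples above):
[logform1]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}|^\d{2}:\d{2}:\d{2}\.\d{3}|^Via: SIP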
Thanks for the assistance.
↧
Windows Advanced Audit Policy Configuration
Hello All,
I'm a new Splunker and have Splunk Enterprise 6.3.2 installed on Windows with the following:
Supporting Add-on for Active Directory v 2.1.2
Cisco Security Suite v 3.1.1
Template for Citrix XenDesktop 7 v 1.1.1
App for Windows Infrastructure v 1.2.0
Add-on for PowerShell v 1.2.1
TA_Windows v 4.8.1
We are using Advanced Audit Policy (AAP) Configuration in our environment. I am not having any luck finding documentation on which AAP settings need to be configured. It appears to be an all-or-nothing proposition, where either we get almost no information or millions of events in a very short period of time. I have searched the Splunk site fairly thoroughly but have not found any really helpful guidance on this. I did find this page:
http://docs.splunk.com/Documentation/MSApp/latest/MSInfra/ConfigureActiveDirectoryauditpolicy
This page mentions AAP but quickly loses me when suggesting I review the eventtypes.conf file. Any help or suggestions are greatly appreciated!
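For what it's worth, the per-subcategory control I was expecting looks like this with auditpol on the domain members (a sketch; subcategory names vary by Windows version):
auditpol /get /category:*
auditpol /set /subcategory:"File System" /success:enable /failure:enable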
jpc
↧
Why am I unable to access an app with a role that is given only write permissions?
I have created a role which has only write permissions and no read permissions on an app.
When I try to log in, it says:
the app is not available.
Does it need read permissions to access the app?
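For context, my understanding is that app visibility is granted through the read list in the app's metadata, along these lines (a sketch; the role name is made up):
# $SPLUNK_HOME/etc/apps/<app>/metadata/local.meta
[]
access = read : [ admin, mycustomrole ], write : [ admin ]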
↧
duplication, data inputs, syslog & transforms/props.conf
short story:
Using transforms.conf and/or props.conf, how can I set an event's index value?
Long Story:
I am using two apps, with two UDP listeners, each with the required sourcetype.
Primarily I am capturing firewall and Snort-related info. Both of these controls are running on my pfSense gateway. Both the firewall and Snort send their logs to the local syslog, which then gets sent to my Splunk install.
The same syslog is sent to the two listeners I have set up, each listener with the corresponding sourcetype. While this works, I assume it is impacting my license and resources.
Is the only way to get this to work to use a single UDP syslog stream from pfSense to Splunk, then use rules from props and transforms to individually set the sourcetype and identify the fields?
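On the short-story question, what I have read so far suggests both sourcetype and index can be overridden per event at parse time, roughly like this (a sketch; the regexes, port, and names are placeholders):
# props.conf
[source::udp:514]
TRANSFORMS-routing = set_snort_sourcetype, route_snort_index

# transforms.conf
[set_snort_sourcetype]
REGEX = snort\[\d+\]
FORMAT = sourcetype::snort
DEST_KEY = MetaData:Sourcetype

[route_snort_index]
REGEX = snort\[\d+\]
FORMAT = ids
DEST_KEY = _MetaData:Index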
many thanks
↧
↧
Is the Splunk Addon for Microsoft Azure compatible with China Azure?
A China Azure customer wants to pull data out of China Azure with the Splunk Addon for Microsoft Azure, but it always fails. There is a log entry in _internal:
…ERROR ExecProcessor - message from ""C:\Program Files\Splunk\etc\apps\SplunkAzure\bin\SplunkAzure.exe"" An error occurred while processing this request.
I am sure the Azure account and key are right, because I can query tables successfully via Azure Storage Explorer for China (http://shaunstorage.blob.core.chinacloudapi.cn/share/AzureStorageExplorerCN.zip); there is another edition for Global Azure (http://azurestorageexplorer.codeplex.com/). I guess the Azure addon might use the Global Azure code, so I would like to confirm: is this app compatible only with Global Azure, or with both?
Thanks a lot.
↧
Home Monitor 4.3.0: Why do I see no IN Bound or OUT Bound events from DD-WRT?
Love the idea of Home Monitor and really want to get it to work.
I'm running Home Monitor 4.3.0 on Splunk 6.3.2. DD-WRT v3.0-r27734 on a DIR 686L.
I set up Home Monitor initially with the dd-wrt sourcetype, which produced the problem below. I then re-ran /homemonitor/apps/local/homemonitor/setup and set the sourcetype to syslog, which produced the same problem.
There are many events, but no IN Bound or OUT Bound events. See ![alt text][1] (imgur image ID 1YTTUs8 if the link doesn't work)
I have sample output from DD-WRT; an extract is below:
2016-01-10 14:59:57 Kernel.Warning 192.168.28.1 Jan 10 06:59:57 kernel: ACCEPT IN=vlan2 OUT=br0 MAC=78:54:2e:4e:13:c9:00:17:10:85:ab:92:08:00:45:00:00:8f SRC=218.15.145.194 DST=192.168.28.57 LEN=143 TOS=0x00 PREC=0x00 TTL=43 ID=4934 PROTO=UDP SPT=14392 DPT=19598 LEN=123 MARK=0xa000
2016-01-10 14:59:57 Kernel.Warning 192.168.28.1 Jan 10 06:59:57 kernel: ACCEPT IN=br0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:1d:ba:67:d7:f2:08:00 SRC=192.168.28.11 DST=192.168.28.255 LEN=78 TOS=0x00 PREC=0x00 TTL=128 ID=23255 PROTO=UDP SPT=137 DPT=137 LEN=58 MARK=0x35400
2016-01-10 14:59:57 Kernel.Warning 192.168.28.1 Jan 10 06:59:57 kernel: ACCEPT IN=vlan2 OUT=br0 MAC=78:54:2e:4e:13:c9:00:17:10:85:ab:92:08:00:45:00:00:84 SRC=123.26.105.194 DST=192.168.28.57 LEN=132 TOS=0x00 PREC=0x00 TTL=113 ID=15843 PROTO=UDP SPT=10538 DPT=19598 LEN=112 MARK=0xa000
2016-01-10 14:59:57 Kernel.Warning 192.168.28.1 Jan 10 06:59:58 kernel: ACCEPT IN=br0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:1d:ba:67:d7:f2:08:00 SRC=192.168.28.11 DST=192.168.28.255 LEN=78 TOS=0x00 PREC=0x00 TTL=128 ID=23351 PROTO=UDP SPT=137 DPT=137 LEN=58 MARK=0x35400
Any ideas? Have I mis-configured something?
[1]: http://i.imgur.com/1YTTUs8.png
↧