I am trying to parse custom IIS and Windows Firewall fields using props and transforms.
Our Universal Forwarders first send logs to Heavy Forwarders, then to the Indexers.
Where is the proper place to put the props and transforms so that the fields are parsed correctly?
Also, will this affect data already indexed, or just new data?
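For context, here is a minimal sketch of the kind of search-time extraction stanzas I mean (the sourcetype name, delimiter, and field list are made up for illustration, not our real config):
props.conf:
[iis_custom]
REPORT-iis_fields = iis_custom_fields
transforms.conf:
[iis_custom_fields]
DELIMS = " "
FIELDS = date, time, s_ip, cs_method, cs_uri_stem, sc_status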
Thanks! This has always been confusing to me, so I appreciate the help.
↧
Where do I put props.conf and transforms.conf stanzas to parse custom IIS and firewall fields? Will this impact previously indexed data?
↧
How to configure the Website Monitoring app version 2.6 if all the fields are greyed out?
I only discovered Website Monitoring version 2.6 yesterday. I installed it using the Splunk Web interface. The next step is to set it up, but only the Save Configuration button is active; all of the input fields are greyed out.
One article I read said to create a file at $SPLUNK_HOME/etc/apps/website_monitoring/local/inputs.conf, but following that instruction did not help either. I have also read the articles previously posted here, but I have yet to find one that gives me the direction I need to complete my setup.
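For reference, this is the sort of stanza I tried in that local/inputs.conf (a sketch only; the web_ping input name and its parameters are my assumption from the article, so they may not match the app's actual modular input):
[web_ping://Example Site]
url = https://www.example.com
interval = 60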
Can someone help?
↧
planning an upgrade from 6.3.2 to V7.0
We have 6 splunk servers
1 SH
1 enterprise security
1 license + cluster master
2 Indexers
1 deployment server
I will stop Splunk services, take a snapshot of all of the VMs, and then perform the upgrade. If anything goes wrong during the upgrade, I plan to revert to the snapshots. Is this the best practice, or will reverting to snapshots break anything?
↧
Can I create indexes.conf and inputs.conf files on my search heads to send /var/log/ logs to my indexer cluster?
My SHC of 3 members is Linux. I need to create an inputs.conf to ingest /var/log/* and send the logs to my indexer cluster. _internal data from all of my servers is being indexed properly, so I believe the data flow is correct. I believe I need to do two things: 1) create an indexes.conf file on each search head and 2) create an inputs.conf file on each search head.
Step 1) On my deployer, I created /opt/splunk/etc/master-apps/_cluster/local/indexes.conf and executed splunk apply shcluster-bundle without errors. This is the contents of indexes.conf:
[linux]
coldPath = $SPLUNK_DB/linux/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/linux/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/linux/thaweddb
I cannot find the indexes.conf file on any of my search heads.
Step 2) I also created /shcluster/apps/locallinux/local/inputs.conf and executed splunk apply shcluster-bundle without errors. This is the contents of inputs.conf:
[monitor:///var/log/messages]
disabled = false
index = linux
sourcetype = syslog
[monitor:///var/log/cron]
disabled = false
index = linux
sourcetype = syslog
Same problem as above, I cannot find the inputs.conf file on any of my search heads.
In a separate, but bigger, picture of what I am trying to accomplish: on my License Server and on my Monitoring server, I created a linux index and used the web GUI to create the inputs, AND I have $SPLUNK_HOME/etc/system/local/outputs.conf as below.
[indexAndForward]
index = false
[tcpout]
defaultGroup = DSCA_Indexers
forwardedindex.filter.disable = true
indexAndForward = false
[tcpout:DSCA_Indexers]
server=10.20.38.11:9997, 10.20.38.12:9997, 10.20.38.13:9997
My linux information gets to the indexers.
The desired goal is to send ALL Enterprise Server Linux /var/log/* to the indexers.
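To make the end state concrete, this is roughly the inputs.conf I would like every search head member to end up with (a sketch only; the whitelist regex is illustrative and would need tuning):
[monitor:///var/log]
disabled = false
index = linux
sourcetype = syslog
whitelist = \.log$|messages$|cron$|secure$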
↧
How to build a cron expression in a Splunk alert to run in CST time?
hi there
What would be the cron expression to run an alert every day at 11:00 AM CST (Central Time)?
Or does Splunk already take the time zone from the operating system?
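In case it helps, this is the expression I am assuming; it fires daily at 11:00 in whatever time zone the scheduler uses, which is the part I am unsure about:
0 11 * * *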
thanks
↧
schedule search to get past hour and run for last 7 days for only that hour
Need help..
Hi,
I can run a search over 7 days and do an eval to get data for a particular hour, but that seems like a costly operation.
I am thinking of getting the past hour's value into a variable, something like abc=strftime(_time, "%H"), assigning it to date_hour (date_hour=abc), and doing a search over the past 7 days. I have not been able to find a built-in Splunk field to use for this. This is what I have been able to put together:
index=abc_core search_test=* earliest=-1h@h latest=-0h@h | stats count as TodayStats by search_test |join search_test [search index=abc_core search_test=* (earliest=-25h@h latest=-24h@h) OR (earliest=-49h@h latest=-48h@h) OR (earliest=-73h@h latest=-72h@h) OR (earliest=-97h@h latest=-96h@h) OR (earliest=-121h@h latest=-120h@h) OR (earliest=-145h@h latest=-144h@h) OR (earliest=-169h@h latest=-168h@h) | stats count(search_test) as Count by search_test | eval WeeklyAvg=round(Count/7,0) | eval WeeklyAvg75=(Count/7)*0.75| table client_app_id WeeklyAvg WeeklyAvg75]|
index=abc_core search_test=* earliest=-7d@d latest=now | eval abc=strftime(timestamp/1000, "%H") | where date_hour=abc
I need help doing this in a simpler and more efficient way.
Basic requirement: do not search all 7 days of data and then eval and filter, but instead give some query logic up front so that only that hour is searched. I need to use this as a dynamic saved search that runs every hour.
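One idea I am toying with is pushing the hour filter into the base search with a subsearch, since date_hour is a default field (a sketch only; it assumes date_hour is populated for these events and lines up with the hour I care about):
index=abc_core search_test=* earliest=-7d@h latest=@h [ | makeresults | eval date_hour=tonumber(strftime(relative_time(now(), "-1h@h"), "%H")) | return date_hour ] | stats count by search_test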
↧
Is it safe to revert to a snapshot?
We have 6 splunk servers
1 SH
1 enterprise security
1 license + cluster master
2 Indexers
1 deployment server
I will stop Splunk services, take a snapshot of all of the VMs, and then perform the upgrade. If anything goes wrong during the upgrade, I plan to revert to the snapshots. Is this the best practice, or will reverting to snapshots break anything?
↧
Why am I seeing multiple host names with duplicate client names in forwarder management?
I am seeing multiple Host Names with duplicate Client Names in Forwarder Management. Why is this happening and how do I prevent it from happening?
↧
How can I perform a scheduled search that searches for one specific hour of each day?
Need help..
Hi,
I can run a search over 7 days and do an eval to get data for a particular hour, but that seems like a costly operation.
I am thinking of getting the past hour's value into a variable, something like abc=strftime(_time, "%H"), assigning it to date_hour (date_hour=abc), and doing a search over the past 7 days. I have not been able to find a built-in Splunk field to use for this. This is what I have been able to put together:
index=abc_core search_test=* earliest=-1h@h latest=-0h@h | stats count as TodayStats by search_test |join search_test [search index=abc_core search_test=* (earliest=-25h@h latest=-24h@h) OR (earliest=-49h@h latest=-48h@h) OR (earliest=-73h@h latest=-72h@h) OR (earliest=-97h@h latest=-96h@h) OR (earliest=-121h@h latest=-120h@h) OR (earliest=-145h@h latest=-144h@h) OR (earliest=-169h@h latest=-168h@h) | stats count(search_test) as Count by search_test | eval WeeklyAvg=round(Count/7,0) | eval WeeklyAvg75=(Count/7)*0.75| table client_app_id WeeklyAvg WeeklyAvg75]|
index=abc_core search_test=* earliest=-7d@d latest=now | eval abc=strftime(timestamp/1000, "%H") | where date_hour=abc
I need help doing this in a simpler and more efficient way.
Basic requirement: do not search all 7 days of data and then eval and filter, but instead give some query logic up front so that only that hour is searched. I need to use this as a dynamic saved search that runs every hour.
↧
Can we install a universal forwarder on a 2016 Windows server with SCCM?
Is it possible to get a UF installed on a Windows Server 2016 machine with SCCM, or do we have to use a Chef recipe?
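For context, the silent install we would push through SCCM would be something like the following (the version string and deployment server host are placeholders, and the MSI flags are from my reading of the universal forwarder install docs, so please double-check them):
msiexec.exe /i splunkforwarder-7.0.0-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.example.com:8089" /quiet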
↧
Can I stop Splunk, take a VM snapshot, upgrade Splunk, then revert to the snapshot after the upgrade?
We have 6 splunk servers
1 SH
1 enterprise security
1 license + cluster master
2 Indexers
1 deployment server
I will stop Splunk services, take a snapshot of all of the VMs, and then perform the upgrade. If anything goes wrong during the upgrade, I plan to revert to the snapshots. Is this the best practice, or will reverting to snapshots break anything?
↧
Monitoring of Java Virtual Machines with JMX -- issue getting this to work on the forwarders
I have followed the steps defined in the Splunk Answers post below, but have been unable to get JMX data working for the majority of our servers. I say majority because 2 out of 20 servers are working. They are all using the same config, but different PID numbers, so it's possible there is some undocumented environment dependency missing.
https://answers.splunk.com/answers/210216/is-it-possible-to-move-the-monitoring-of-java-virt.html
**Here are the contents of my config file.**
**Inputs**
[jmx://JVM-Name]
config_file = config-JVM-Name.xml
polling_frequency = 60
sourcetype = jmx
index = custom-index_name
disabled = 0
interval = 30
_TCP_ROUTING = routinggroup
crcSalt = crcsalt2
**Then here are the errors it throws.**
**jmx.log**
2017-10-06 11:01:46,425 - com.splunk.modinput.ModularInput -5181309 [main] INFO - stanza count:3
2017-10-06 11:01:56,503 - com.splunk.modinput.ModularInput -5191387 [main] INFO - stanza count:3
2017-10-06 11:02:00,690 - org.exolab.castor.mapping.Mapping -5195574 [Thread-3] INFO - Loading mapping descriptors from jar:file:/D:/program%20files/SplunkUniversalForwarder/etc/apps/jmx_app_name/bin/lib/jmxmodinput.jar!/mapping.xml
2017-10-06 11:02:00,690 - org.exolab.castor.mapping.Mapping -5195574 [Thread-4] INFO - Loading mapping descriptors from jar:file:/D:/program%20files/SplunkUniversalForwarder/etc/apps/jmx_app_name/bin/lib/jmxmodinput.jar!/mapping.xml
2017-10-06 11:02:00,690 - org.exolab.castor.mapping.Mapping -5195574 [Thread-2] INFO - Loading mapping descriptors from jar:file:/D:/program%20files/SplunkUniversalForwarder/etc/apps/jmx_app_name/bin/lib/jmxmodinput.jar!/mapping.xml
2017-10-06 11:02:00,690 - com.splunk.modinput.ModularInput -5195574 [Thread-4] ERROR - cannot create JMXServiceURL for server with description JVM Description - no providers installed
2017-10-06 11:02:00,690 - com.splunk.modinput.ModularInput -5195574 [Thread-3] ERROR - cannot create JMXServiceURL for server with description JVM Description - no providers installed
2017-10-06 11:02:00,690 - com.splunk.modinput.ModularInput -5195574 [Thread-4] INFO - 0 servers found in stanza jmx://JVM-Name
2017-10-06 11:02:00,690 - com.splunk.modinput.ModularInput -5195574 [Thread-3] INFO - 0 servers found in stanza jmx://JVM-Name
2017-10-06 11:02:00,690 - com.splunk.modinput.ModularInput -5195574 [Thread-2] ERROR - cannot create JMXServiceURL for server with description JVM Description - no providers installed
2017-10-06 11:02:00,690 - com.splunk.modinput.ModularInput -5195574 [Thread-2] INFO - 0 servers found in stanza jmx://jmx://JVM-Name
**splunkd.log**
10-06-2017 15:42:00.019 -0500 ERROR ExecProcessor - message from "python "D:\Program Files\SplunkUniversalForwarder\etc\apps\jmx_app_name\bin\jmx.py"" Traceback (most recent call last):
10-06-2017 15:42:00.019 -0500 ERROR ExecProcessor - message from "python "D:\Program Files\SplunkUniversalForwarder\etc\apps\jmx_app_name\bin\jmx.py"" File "D:\Program Files\SplunkUniversalForwarder\etc\apps\us_ssloansvc_qa_jmx4\bin\jmx.py", line 143, in
10-06-2017 15:42:00.019 -0500 ERROR ExecProcessor - message from "python "D:\Program Files\SplunkUniversalForwarder\etc\apps\jmx_app_name\bin\jmx.py"" monitor_tasks(process, token)
10-06-2017 15:42:00.019 -0500 ERROR ExecProcessor - message from "python "D:\Program Files\SplunkUniversalForwarder\etc\apps\us_ssloansvc_qa_jmx4\bin\jmx.py"" File "D:\Program Files\SplunkUniversalForwarder\etc\apps\jmx_app_name\bin\jmx.py", line 103, in monitor_tasks
10-06-2017 15:42:00.019 -0500 ERROR ExecProcessor - message from "python "D:\Program Files\SplunkUniversalForwarder\etc\apps\jmx_app_name\bin\jmx.py"" if update_inputs:
10-06-2017 15:42:00.019 -0500 ERROR ExecProcessor - message from "python "D:\Program Files\SplunkUniversalForwarder\etc\apps\jmx_app_name\bin\jmx.py"" NameError: global name 'update_inputs' is not defined
↧
Can I use relative time for bin span?
I want to run a query with rolling time span (rolling every minute) and want to count events in last 1 hour relative to current minute.
I am trying to run this query:
Search query | bin _time span=(now(), "-1h") | stats range(_time) AS Range, latest(_time) AS Latest count BY A, B, C, date_hour
but of course span does not accept negative values.
Example:
10:04 - xxxxx
10:06 - xxxxx
10:09 - xxxxx
10:16 - xxxxx
11:07 - xxxxx
11:14 - xxxxx
So if my current time is 11:08 and I say "count events for the last 1 hour from now", it should count in the time range 10:08 - 11:08, so the count value is 3. If I simply use bin _time span=1h, it returns a count of 4 for the 10th hour and 2 for the 11th hour.
So basically I want my time span to roll every minute and then get a count for exactly the last 60 minutes (1 hour).
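The fallback I am considering is to skip bin entirely and just schedule the search every minute over a snapped 60-minute window, something like this (a sketch; index=foo stands in for my base search, and A, B, C are the same split-by fields as above):
index=foo earliest=-60m@m latest=@m | stats count, range(_time) AS Range, latest(_time) AS Latest BY A, B, C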
↧
Universal forwarder -- error message with pass4SymmKey
To a universal forwarder configuration that is managed by a deployment server and is already talking to one set of indexers, I am trying to add an app that forwards some information to another set of indexers.
To begin with, there is nothing at the etc/system/local level. When I start the forwarder, I get an error discovering the new indexers:
10-06-2017 18:33:45.629 -0400 ERROR IndexerDiscoveryHeartbeatThread - Error in Indexer Discovery communication. Verify that the pass4SymmKey set under [indexer_discovery:splunkservers] in 'outputs.conf' matches the same setting under [indexer_discovery] in 'server.conf' on the Cluster Master.
Although the error refers to a stanza that doesn't actually exist in my config, I have verified the key that I am using in the app as the correct pass4SymmKey for these indexers. However, when I start the forwarder, I see that a server.conf has been created automatically at the etc/system/local level and it contains a [general] stanza with a different pass4SymmKey as well as a [sslConfig] stanza with an sslPassword. I am wondering if this auto-created file is overriding my app configuration and how to troubleshoot this issue.
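For reference, as I understand it the shape of indexer discovery config the error message expects is something like this (hostnames and group names are placeholders, not my literal files):
outputs.conf on the forwarder:
[indexer_discovery:splunkservers]
pass4SymmKey = <same key as under [indexer_discovery] in server.conf on the cluster master>
master_uri = https://clustermaster.example.com:8089
[tcpout:new_indexers]
indexerDiscovery = splunkservers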
↧
Extract JSON fields in mixed data structure with props
I have an event with a mix of JSON and non-JSON data. I have successfully extracted a Payload field with props whose value is a JSON data structure. Then using the search `| spath input=Payload`, the value is successfully parsed into KV pairs. But how do I move this to a config file for automatic extraction? I was looking at an `EVAL-` statement with the `spath()` function, but it's not clear what the "Y" value should be if I want to extract all of the fields, not just a specific one:
`EVAL-Payload = spath(Payload, "*")`
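For contrast, the per-field form does seem to be what the spath() eval function supports, where the second argument is a specific path rather than a wildcard (the sourcetype and field names here are hypothetical):
# props.conf, per-field extraction only
[my_sourcetype]
EVAL-user_id = spath(Payload, "user.id")
EVAL-action = spath(Payload, "action")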
↧
can you directly publish data from your java application to splunk web?
Hello there,
I want to try and catch the SPL query submitted on the web interface in my Java application, process this query and get the data it wants, and then publish this data from my Java application back to the web interface.
All this should happen in the background, so the user can't know that my script got his query and that my script will return his search results. Is this possible?
And one more question: I have Splunk on Docker and I downloaded the Java SDK.
If I execute this command:
curl -u admin:xxxx -k https://localhost:8089/services/auth/login -d username=admin -d password=xxxx
I get a session token reply.
If I try to connect from the Java application, I get a handshake_failure.
Thanks in advance,
I hope someone can help me.
PS:
this is the java code
--------------------------------------------
package hellosplunk;

import com.splunk.Service;
import com.splunk.ServiceArgs;

public class helloSplunk {
    public static void main(String[] args) {
        // Connection arguments for the local Splunk management port (8089)
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("admin");
        loginArgs.setPassword("xxxx");
        loginArgs.setHost("localhost");
        loginArgs.setPort(8089);
        loginArgs.setScheme("https");
        // Log in over HTTPS and print the resulting session token
        Service splunkService = Service.connect(loginArgs);
        System.out.println(splunkService.getToken());
    }
}
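One thing I have been meaning to try, based on other answers, is forcing the SDK onto TLS before connecting, since my guess is that the handshake_failure is an SSLv3/TLS version mismatch (the class and enum names below are my understanding of the Java SDK, so treat this as a sketch):
// requires: import com.splunk.HttpService; import com.splunk.SSLSecurityProtocol;
// call this before Service.connect(loginArgs)
HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);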
↧
Button switcher
Hi,
I want to create a button switcher. Here is my code in XML format:
![alt text][1]
but it doesn't work. Can you help me figure out what my problem is?
Thank you
[1]: /storage/temp/216733-button.png
↧
The EventCode lookups in the Splunk App for Windows Infrastructure return incorrect values
The Splunk App for Windows Infrastructure has the windows_signatures.csv lookup file:
*signature_id,signature,CategoryString,action,result
512,"Windows NT is starting up",,,
...*
*1104,"The security Log is now full",,,*
And then the lookup itself:
*## Default lookup for EventCode->signature mapping ( i.e. EventCode=4625 + SubStaus=null() = "An account failed to log on" )
LOOKUP-signature_for_windows3 = windows_signature_lookup signature_id OUTPUTNEW signature,signature AS name, signature AS subject*
So here's the problem. I have an event coming from SharePoint with event code 1104:
*LogName=Application
SourceName=Microsoft-SharePoint Products-PerformancePoint Service
EventCode=1104*
And the lookup matches it - based on it being event code 1104 - to the message "The security Log is now full".
That's wrong - and pretty alarming. It looks like the lookup file is just for events from the Security log, yet the lookup is ignoring the log name, so event code 1104 becomes a full security log regardless of the log name (let alone the source name).
I'm still new with Splunk, so it's possible that I've effed something up to get this result. Has anyone else noticed this?
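For anyone who wants to reproduce it, this is roughly how I have been checking (a sketch; the index name is a placeholder, and I am assuming the lookup can be invoked manually with signature_id mapped to EventCode):
index=wineventlog LogName=Application EventCode=1104 | lookup windows_signature_lookup signature_id AS EventCode OUTPUTNEW signature | table _time LogName SourceName EventCode signature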
↧
How do i create a bar graph showing the different types of windows event log sourcetype?
Hi,
I am just trying to create a simple bar graph to show the count of the different types of Windows event log sourcetypes, however it does not seem to work.
field: sourcetype
values: WMI:WinEventLog:Application, WinEventLog:Security, WinEventLog:System
This is my current search:
host= count(sourcetype="WinEventLog:System", sourcetype="WinEventLog:Security", sourcetpye="WMI:WinEventLog:Application")
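For what it is worth, I think the shape of search I am after is something like this (a sketch; the host value is a placeholder since I left it out above):
host=myhost sourcetype="WinEventLog:System" OR sourcetype="WinEventLog:Security" OR sourcetype="WMI:WinEventLog:Application" | stats count BY sourcetype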
Thank you in advance!
↧
Cisco IPFix v10 to Stream App Proper setup - documentation help - streamfwd
I'm trying to find some documentation to help with ingesting custom IPFIX elements outside the 1-400 IDs, but I've read that there's not much documentation in this arena. Here's what I have tried.
The main goal is to ingest IPFIX data for application URL/URI, source/dest, and other NetFlow stats, but it seems I need to code this either in vocabularies or something else.
Cisco ASR 1004 --> streamfwd standalone app --> SH / indexer load
I've noticed a Template ID of 294 and an enterprise ID of 9, but I don't see it in ipfix.xml in the IETF org assignments.
connection client ipv4 address ID = 12236
connection server ipv4 address ID = 12237
**i tried setting this in streamfwd.conf**
cat streamfwd.conf
[streamfwd]
port = 8889
netflowReceiver.0.ip = 10.1.1.1
netflowReceiver.0.protocol = udp
netflowReceiver.0.port = 9991
netflowReceiver.0.decoder = netflow
netflowReceiver.0.decodingThreads = 8
netflowElement.0.enterpriseid = 9
netflowElement.0.id = 12235
netflowElement.0.termid = cisco.12235
netflowElement.1.enterpriseid = 9
netflowElement.1.id = 12236
netflowElement.1.termid = cisco.12236
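Following the same pattern, I assume the remaining enterprise-specific elements from the exporter template further down would be declared the same way, for example (a sketch; I have not confirmed these entries yet):
netflowElement.2.enterpriseid = 9
netflowElement.2.id = 12237
netflowElement.2.termid = cisco.12237
netflowElement.3.enterpriseid = 9
netflowElement.3.id = 12240
netflowElement.3.termid = cisco.12240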
**and tried setting this in vocabularies**
**vocabularies]# cat cisco.xml**
true Cisco Netflow Protocol Vocabulary blob 12235 status. blob 12236 status
There are some things I'm trying to figure out and stitch together, like how do I know how to state that a field is a uint32/64?
I tried to look at the exporter part of the router to then build the vocabularies:
size1=unsigned8
size4=unsigned32
size8=unsigned64
size32=string
size40=string
but it's not 1-to-1 on some of them, so I'm kind of lost on how I can bridge some of these inbound.
**This is the exporter information from our Cisco router, showing the Template ID of 294 along with the field IDs and Ent. IDs**
Client: Flow Monitor cisco-flow
Exporter Format: IPFIX (Version 10)
Template ID : 294
Source ID : 1280
Record Size : 95 + var
Template layout
_____________________________________________________________________________
| Field | ID | Ent.ID | Offset | Size |
-----------------------------------------------------------------------------
| connection client ipv4 address | 12236 | 9 | 0 | 4 |
| connection server ipv4 address | 12237 | 9 | 4 | 4 |
| ip dscp | 195 | | 8 | 1 |
| ip protocol | 4 | | 9 | 1 |
| connection client transport port | 12240 | 9 | 10 | 2 |
| connection server transport port | 12241 | 9 | 12 | 2 |
| routing vrf input | 234 | | 14 | 4 |
| connection initiator | 239 | | 18 | 1 |
| connection id | 12242 | 9 | 19 | 4 |
| flow observation point | 138 | | 23 | 8 |
| application id | 95 | | 31 | 4 |
| flow direction | 61 | | 35 | 1 |
| flow sampler | 48 | | 36 | 1 |
| services waas segment | 9252 | 9 | 37 | 1 |
| services waas passthrough-reason | 9253 | 9 | 38 | 1 |
| application http uri statistics | 9357 | 9 | 39 | var |
| application http host | 12235 | 9 | 41 | var |
*** I have it coming into Splunk because I edited the app's /streams/netflow config and I can see 12235, but it doesn't show like it's correct (maybe because I did it with blob). ***
Does anyone have an example custom Cisco setup? (I thought this would be an easy 1-to-1 mapping.)
Also, in my streamfwd log I have this, but I'm not sure if I built it right:
2017-10-07 11:06:29 WARN [140323433604864] (NetflowManager/NetflowDecoder.cpp:1112) stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 261 received for observation domain id 6 from device 10.0.1.1 . Dropping flow data set of size 1358
↧