I have created a custom alert action which has 7-8 parameters. I have added a few of them as below, but this does not seem to be the correct way, as only the last parameter is validated in this case.
[validation:savedsearch]
# Require parameters to be set if webhook action is enabled
action.snow_webhook = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.url' == "action.snow_webhook.param.url" OR 'action.snow_webhook.param.url' == "", "No Webhook URL specified", 1==1, null())
action.snow_webhook = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.assignment_group' == "action.snow_webhook.param.assignment_group" OR 'action.snow_webhook.param.assignment_group' == "", "Assignment Group cannot be empty", 1==1, null())
action.snow_webhook = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.service_offering' == "action.snow_webhook.param.service_offering" OR 'action.snow_webhook.param.service_offering' == "", "Service Offering cannot be empty", 1==1, null())
action.snow_webhook = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.description' == "action.snow_webhook.param.description" OR 'action.snow_webhook.param.description' == "", "Description cannot be empty", 1==1, null())
action.snow_webhook.param.url = validate(match('action.snow_webhook.param.url', "^https?://[^\s]+$"), "Webhook URL is invalid")
I tried to combine all of those into a single statement like below, but this is also not working.
action.snow_webhook = case('action.snow_webhook'!= "1", null(), 'action.snow_webhook.param.url' == "action.snow_webhook.param.url" OR 'action.snow_webhook.param.url' == "", "No Webhook URL specified", 'action.snow_webhook.param.service_offering' == "action.snow_webhook.param.service_offering" OR 'action.snow_webhook.param.service_offering' == "", "No Service Offering specified", 1==1, null())
Can anyone help me with how to achieve this?
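One possible fix, sketched below and not verified against this app: duplicate keys within a single .conf stanza collapse to the last value, which would explain why only the final rule takes effect. Keying each rule by the parameter it validates (as the existing URL regex rule already does) keeps every rule active:
[validation:savedsearch]
# Require parameters to be set if the webhook action is enabled
# (one key per parameter -- repeated keys within a stanza overwrite each other)
action.snow_webhook.param.url = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.url' == "action.snow_webhook.param.url" OR 'action.snow_webhook.param.url' == "", "No Webhook URL specified", NOT match('action.snow_webhook.param.url', "^https?://[^\s]+$"), "Webhook URL is invalid", 1==1, null())
action.snow_webhook.param.assignment_group = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.assignment_group' == "action.snow_webhook.param.assignment_group" OR 'action.snow_webhook.param.assignment_group' == "", "Assignment Group cannot be empty", 1==1, null())
action.snow_webhook.param.service_offering = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.service_offering' == "action.snow_webhook.param.service_offering" OR 'action.snow_webhook.param.service_offering' == "", "Service Offering cannot be empty", 1==1, null())
action.snow_webhook.param.description = case('action.snow_webhook' != "1", null(), 'action.snow_webhook.param.description' == "action.snow_webhook.param.description" OR 'action.snow_webhook.param.description' == "", "Description cannot be empty", 1==1, null())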
↧
How to add validation for multiple parameters in custom alert action
↧
Dashboard events alert another dashboard
Hi
I have a dashboard (called "Monitoring") with three single value panels. All start at a value of 0. When one or more panels increase in value, for example (first panel to 2, second panel to 5, third panel to 3), I would like another dashboard (called "Main") with one single value panel to increase its value to 1. Obviously, when the Monitoring dashboard has all panels at 0, the panel on the Main dashboard must be at 0 as well.
Thanks for support.
Luigi
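One rough way to drive the Main panel (an untested sketch; the searches in angle brackets stand for whatever the Monitoring panels actually run, and the `stats count as value` assumes each panel's single value is an event count): sum the three values and collapse anything above zero to 1.
| makeresults
| eval value=0
| append [ search <panel 1 search> | stats count as value ]
| append [ search <panel 2 search> | stats count as value ]
| append [ search <panel 3 search> | stats count as value ]
| stats sum(value) as total
| eval main_value=if(total > 0, 1, 0)
| table main_value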
↧
↧
How to get latest parameter from csv disregarding empty values
Hi splunk comunity!
I have a dashboard with a text input whose search executes when I change the parameter in the text box; in that query I write the parameter to my CSV file.
In another dashboard I'm trying to read the latest value of this parameter, but if I submit an empty field in the first dashboard I get an empty result in the second.
So the question is: how do I check for an empty value (like the isEmpty() method in Java), or how do I stop empty values being written to the CSV file from the first dashboard?
Or how can I display the last non-empty value?
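One way this is often handled (a sketch; params.csv, the param field, and the _time column are placeholder names, and the sort assumes a timestamp column exists in the CSV): filter out blank rows before picking the newest one.
| inputlookup params.csv
| where isnotnull(param) AND trim(param)!=""
| sort - _time
| head 1
| table param
On the writing side, adding `| where trim(param)!=""` before the outputlookup in the first dashboard would keep empty submissions out of the CSV in the first place.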
↧
pre process binary file before injecting to splunk indexer
Hello All ,
I have a file with a .dat extension populated with binary data.
I also have a script which converts the binary data into a Splunk-readable format (ASCII).
But I need to know: is there any way for all the data that the Splunk Universal Forwarder reads from the binary file to pass through my script first, so it reaches Splunk in human-readable format? In other words, can I pre-process the data before it is sent to the indexer?
Thanks in advance
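One direction that may fit (an untested sketch; the script path, interval, sourcetype and index below are all placeholders): rather than monitoring the .dat file directly, run the conversion script as a scripted input on the Universal Forwarder, so only its ASCII output is forwarded.
# inputs.conf in an app deployed to the Universal Forwarder
[script://./bin/convert_dat.sh]
interval = 300
sourcetype = converted_dat
index = main
disabled = 0
The script would read the .dat file, convert it, and print the readable events to stdout for the forwarder to ingest.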
↧
Can I set different interval of execution for different hosts receiving same scripted input app from deployment server?
Hi,
We have a requirement to deploy an app containing a script, but the script's execution interval should differ for each host receiving the app.
Is this possible to do in the same app?
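One pattern sometimes used for this (a sketch; all serverclass, app and host names are made up): since a deployed app's contents are identical for every client in a server class, the usual workaround is one server class per interval, each deploying its own copy of the app that differs only in the interval value.
# serverclass.conf on the deployment server
[serverClass:script_5min]
whitelist.0 = hostA*
[serverClass:script_5min:app:my_script_app_5min]
[serverClass:script_15min]
whitelist.0 = hostB*
[serverClass:script_15min:app:my_script_app_15min]
# my_script_app_5min/default/inputs.conf (the 15min copy differs only in the interval)
[script://./bin/collect.sh]
interval = 300
sourcetype = my_script
disabled = 0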
↧
↧
List all Windows domain members a user is logged in to
How might one obtain a list of all the Windows domain members a specific user is currently logged in to?
Our domain controllers have Splunk UF with Splunk_TA_windows and Splunk_TA_microsoft_ad add-ons, and we're running both Splunk Enterprise as well as Splunk Enterprise Security.
Thanks for any suggestions.
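A rough starting point (a sketch only; the index, sourcetype and field names depend on how Splunk_TA_windows is configured, it only sees hosts that forward their own Security log, and EventCode 4624 records logons rather than live sessions, so this is "where the account has recently logged on" rather than a true session list):
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 user="target.username"
| stats latest(_time) as last_logon by host
| convert ctime(last_logon)
If only the domain controller logs are available, the Kerberos ticket events (4768/4769) and their client address field are sometimes used as a proxy for which machines a user is active on.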
↧
Using props.conf on SplunkUniversalForwarder to denote TimeZone
TimeZone specification in props.conf on a SplunkUniversalForwarder instance does not appear to be working for me.
- SplunkUniversalForwarder instance version 6.3.2
- Splunk instance (indexer) version 7.0.0
- The application server running the forwarder is in US/Eastern system timezone (cannot change)
- The logs are generated in UTC without a timezone specifier in the string (cannot change)
As the logs are received by Splunk they are interpreted as being UTC-5, presumably because the forwarder is applying its system timezone. As the _time field is subsequently converted to UTC, we see logs with time values 5 hours in the future.
I want to configure the forwarder instance to explicitly state that the timezone of the records it's sending on is UTC. I've tried the following:
props.conf in:
- apps/appname/local
- apps/appname/default
- system/local
- system/default
I've tried several different stanzas to match the log monitors, for example:
[sourcetype]
TZ = UTC
[host::hostname*]
TZ = UTC
[source::...//logs//debug_*]
TZ = UTC
[default]
TZ = UTC
All to no avail. I am now at the point where I don't think the configuration is the problem, but it may still be. I don't see _any_ difference in the logs imported regardless of which of the above options I use, so it's as if it's being overridden at the indexer or simply not picked up.
Documentation suggests that the forwarder should be able to apply timezone information from props.conf post version 6 and that this ought to be respected when indexed, but I'm not seeing this behaviour at all. I don't want to / can't configure this at the indexer, as I have servers in multiple different timezones; each needs to be able to specify its own source timezone information.
Can anyone suggest any other avenues of exploration? Thanks in advance.
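For what it's worth, if I'm reading the behaviour right: for unstructured data coming from a Universal Forwarder, timestamp and timezone rules in props.conf are applied where the data is parsed, which is the indexer (or a heavy forwarder in the path), not the UF itself, which would explain why none of the stanzas above make any difference. Per-source timezones can still be kept by scoping stanzas on the indexer by host or sourcetype, for example (the hostname pattern is a placeholder):
# props.conf on the indexer / heavy forwarder
[host::appserver-utc-logs*]
TZ = UTC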
↧
Trouble appending columns for a second search
Here is the example in the Splunk documentation:
specific.server | stats dc(userID) as totalUsers | appendcols [ search specific.server AND "text" | addinfo | where _time >= info_min_time AND _time <=info_max_time | stats count(field) as variableA ] | eval variableB = exact(variableA/totalUsers)
My need has a difference where the (appendcols [ search) is on a different target and the target is a lookup or inputlookup...
First search is the source, destination, protocol and destinationport seen in a given time period, the second search is a lookup table that has allowed traffic rules (source, destination, protocol, allowedport(s)), if the allowedport(s) is a range it can be 580-590.
search netactivity | stats count by source, destination, protocol, destinationport | appendcols [search inputlookup allowedrules | where Source=source and Destination=destination and Protocol=protocol | eval tmpport=(Port,"-"), portcnt=mvcount(tmpport) | eval startport=mvindex(tmpport,0), endport=if(portcnt>1, mvindex(tmpport,1), mvindex(tmpport,0)) | where startport<= destinationport AND endport>=destinationport | (table/stats/fields) Source, Destination, Protocol, Port, ApprovedBy ] | table source, destination, protocol, destinationport, Source, Destination, Protocol, Port, ApprovedBy
For the second search, I am trying to return the ApprovedBy field most importantly, but for validation and testing purposes, having the information from the rule that is being matched is beneficial. So I have tried the table, stats and fields clauses, none of which has returned any values. My results are just the fields from the initial netactivity file.
I settled on trying to get the appendcols to work, as from reading the documentation I believe it is the correct option. Lookup tables don't let me do a where clause, and if all the allowed rules were a 1-1 relationship on the port (instead of ranges) maybe that would work better, but the port ranges rule out that option. Even if the allowed rules table were recreated to have a start and end port, lookup doesn't allow <= or >= in the clause. I also looked at join, but again the port value being either a single port or a range of ports makes joining on an individual field impossible.
I figured this would have been an easy search, but I didn't find an example of anyone doing this. If anyone has implemented something along these lines, I would appreciate their insight.
Thanks in advance for any assistance.
Jason
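One alternative to appendcols that may be worth trying (an untested sketch; field names come from the question, split() stands in for the `(Port,"-")` expression, and join's left/max=0 semantics may need tuning since join also truncates large subsearch results):
search netactivity
| stats count by source, destination, protocol, destinationport
| join type=left max=0 source destination protocol
    [ | inputlookup allowedrules
      | rename Source as source, Destination as destination, Protocol as protocol
      | eval tmpport=split(Port,"-")
      | eval startport=tonumber(mvindex(tmpport,0)), endport=tonumber(mvindex(tmpport, mvcount(tmpport)-1))
      | fields source destination protocol startport endport Port ApprovedBy ]
| eval destinationport=tonumber(destinationport)
| where destinationport>=startport AND destinationport<=endport
| table source destination protocol destinationport Port ApprovedBy
The where keeps only traffic that falls inside a matching rule's range, so non-matching traffic drops out; relax that depending on the goal. The key difference from appendcols is that appendcols just glues columns on row by row and its subsearch cannot see the outer fields, so the `where Source=source ...` comparisons never had anything to match against.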
↧
↧
Windows Monitoring Stanza help
Hello, I have the below locations to be monitored on a Windows machine:
D:\Tab\Tableau Server\data\tabsvc\logs\appzookeeper\xyz.log
D:\Tab\Tableau Server\data\tabsvc\logs\appzookeeper\abclog.2019-02-17
D:\Tab\Tableau Server\data\tabsvc\logs\backgrounder\xyz.log
D:\Tab\Tableau Server\data\tabsvc\logs\backgrounder\abclog.2019-02-17
This is the monitoring stanza I am assuming is correct, which should pick up the log files from all of the folders such as appzookeeper, backgrounder, terniation, etc.
[monitor://D:\Tab\Tableau Server\data\tabsvc\logs\.\*]
Thanks in advance
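A sketch of what might work instead (untested; the whitelist regex is a guess based on the two file name patterns shown): the `...` wildcard recurses through the subfolders (appzookeeper, backgrounder, and the rest), and the whitelist keeps only files ending in .log or a date suffix.
[monitor://D:\Tab\Tableau Server\data\tabsvc\logs\...]
whitelist = \.log$|\.\d{4}-\d{2}-\d{2}$
disabled = 0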
↧
What's the best datamodel to audit processes run by users? And filesystem changes?
Hello again. I'm developing a compliance app (CIM, with tstats), and now it is time to write a search to monitor processes run by users on the domain (Windows and Linux, maybe some other sources of interest).
My question is: which datamodel should I use? I'm torn between Endpoint and Change, but Endpoint does not have a user field, and I don't understand why. What would be the right approach?
For filesystem changes, I personally like Change, but the SA-CIM definition worries me on the constraint part; it literally says:
(`cim_Change_indexes`) tag=change NOT (object_category=file OR object_category=directory OR object_category=registry)
I could just not parse the events with object_category=file, but I would like to know why this is; the Endpoint datamodel does not have an object_category field, for example. Why can't I use it?
Thanks!
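For what it's worth, a hedged note: recent CIM versions do expose a user field on the Endpoint.Processes dataset, so something along these lines may cover the process-monitoring part (a sketch; summariesonly and the exact field names should be checked against the installed CIM version):
| tstats summariesonly=false count from datamodel=Endpoint.Processes by Processes.user Processes.process_name Processes.dest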
↧
Find a host that is reporting to one index, but not another.
I am attempting the following:
Find hosts that are logging to one index but not the other by the host field.
Use case, find hosts reporting via AWS API but are not logging host logs via OS UF.
I have tried a left join, but my results are not consistent, and I have spent countless hours trying to come to a solution.
Thanks in advance.
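One lightweight way to compare host coverage between two indexes without a join (a sketch; the index names are placeholders for the AWS API index and the OS/UF index):
| tstats count where index=aws_api OR index=os_logs by host index
| stats values(index) as indexes dc(index) as index_count by host
| where index_count=1 AND indexes="aws_api"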
↧
List of event codes found in one search but not in another
Hi, I'm trying to create a query to provide a list of event codes that are found in one time period but NOT found in another time period. This is what I came up with, but it looks like it's just giving me the aggregate results of both searches.
index=win* EventCode=* earliest=-2d@d latest=now NOT [search index=win* EventCode=* earliest=-60d@d latest=-58d@d | stats count by EventCode] | stats count by EventCode
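One thing that may be worth checking (a guess, not verified against this data): the subsearch returns both EventCode and count, and the extra count field ends up in the generated NOT (...) clause, which can make the filter match nothing useful. Restricting the subsearch output to the EventCode field alone behaves more like the intent:
index=win* EventCode=* earliest=-2d@d latest=now NOT [search index=win* EventCode=* earliest=-60d@d latest=-58d@d | stats count by EventCode | fields EventCode] | stats count by EventCode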
↧
↧
Splunk search does not return event data when there are multiple JSON objects in the same event
I have created a Splunk alert for when a failure occurs. I have a query as follows:
index=* source=*** |spath path=TestLog.TestFailureLog.appName output=APPNAME|spath path=TestLog.TestFailureLog.eventType output=EVENTTYPE|spath path=TestLog.TestFailureLog.payload.level output=LEVEL|spath path=TestLog.TestFailureLog.payload.failureCount output=FAILURECOUNT|spath path=TestLog.TestFailureLog.payload.errorDescription output=ERRORDESCRIPTION|where APPNAME!="" and LEVEL="ERROR"|table APPNAME,EVENTTYPE,LEVEL,FAILURECOUNT,ERRORDESCRIPTION
It works fine when I have a single JSON object per event in the log. For example:
If I have data like below:
{
"TestLog" : {
"TestFailureLog" : {
"appName" : "****",
"eventType" : "****",
"payload" : {
"level" : "ERROR",
"startTime" : "2019-02-21 17:53:47",
"failureCount" : 0,
"errorCode" : 17002,
"errorDescription" : "JSONObject not found.",
"failureIdList" : [ ],
"endTime" : "2019-02-21 17:53:47"
}
}
}
}
It works fine. But if I have both a success and a failure log in the same event, that particular event is omitted and only the remaining failure logs are returned.
Failed Case:
{
"TestLog" : {
"TestSuccessLog" : {
"appName" : "****",
"eventType" : "****",
"payload" : {
"level" : "INFO",
"startTime" : "2019-02-21 18:02:58",
"sourceCount" : 0,
"successCount" : 0,
"duplicateCount" : 0,
"publishedCount" : 0,
"endTime" : "2019-02-21 18:02:59"
}
}
}
}
{
"TestLog" : {
"TestFailureLog" : {
"appName" : "****",
"eventType" : "****",
"payload" : {
"level" : "ERROR",
"startTime" : "2019-02-21 18:02:58",
"failureCount" : 0,
"errorCode" : 17002,
"errorDescription" : "IO Error: Unknown host specified ",
"failureIdList" : [ ],
"endTime" : "2019-02-21 18:02:59"
}
}
}
}
Can anyone please suggest a solution for this?
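If the two JSON objects really are landing in one Splunk event, the usual suspect is event breaking rather than the search itself. A possible props.conf sketch (untested, and the sourcetype name is a placeholder) that starts a new event at each top-level TestLog block:
[your_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{\s*[\r\n]+\s*"TestLog")
TRUNCATE = 0
With each TestLog object as its own event, the existing spath/where filter should evaluate the success and failure payloads independently.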
↧
Am I using splunk-ansible playbook for role splunk_standalone correctly?
Hi there,
I am writing ansible playbooks that configure my local splunk universal forwarders.
To set up a mock receiver under test, I am trying to correctly use the splunk-ansible GitHub playbook / roles. I can set up a splunk_standalone OK, and it says it's ready to receive forwarded inputs on 9997, but I can't seem to connect to it correctly.
How do I run the playbook to create an unlicensed vm for a test scenario, that can accept forwarders?
There aren't any great (work out of the box) examples out there in the documentation.
----------
I am using molecule to spin up a pair of vagrant VMs; a 'splunk' centos VM (receiver) with the splunk_standalone role applied (github.com splunk splunk-ansible), and an 'ubuntu' VM with my own universal forwarder role applied. The version of splunk is the latest trial tgz from free enterprise 60 day trial.
To converge the receiver, I synced the code and added the splunk_standalone role to the test suite roles path, and ran the role directly from a molecule converge playbook. I had to make some guesses about which vars to define, starting with the example defaults.yml for linux, which was a little incomplete.
Before I run the play, I have to create the splunk dir, the splunk user and group, and afterwards, I configure inputs.conf.
The included vars I used to run the play are:
---
# https://github.com/splunk/splunk-ansible/blob/develop/docs/USING_DEFAULTS.md
hide_password: false
delay_num: 3
splunk_password:
splunk_gid: 500
splunk_uid: 500
# Splunk defaults plus remainder that allow play to run without error
retry_num: 100
splunk:
  # TASK [splunk_standalone : Enable HEC services] *********************************
  admin_user: molecule
  # TASK [splunk_common : Apply Splunk license] ************************************
  ignore_license: true
  # TASK [splunk_common : Download Splunk license] *********************************
  license_uri:
  # TASK [splunk_standalone : include_tasks] ***************************************
  apps_location:
  # TASK [splunk_common : Set as license slave] ************************************
  license_master_included: false
  role: splunk_standalone
  # TASK [splunk_common : include_tasks] *******************************************
  build_location: /splunk-7.2.4-8a94541dcfac-Linux-x86_64.tgz
  opt: /opt
  home: /opt/splunk
  user: splunk
  group: splunk
  exec: /opt/splunk/bin/splunk
  pid: /opt/splunk/var/run/splunk/splunkd.pid
  password: "{{ splunk_password | default('invalid_password') }}"
  # This will be the secret that Splunk will use to encrypt/decrypt.
  # secret:
  svc_port: 8089
  s2s_port: 9997
  # s2s_enable opens the s2s_port for splunktcp ingestion.
  s2s_enable: 0
  http_port: 8000
  # This will turn on SSL on the GUI and sets the path to the certificate to be used.
  http_enableSSL: 0
  # http_enableSSL_cert:
  # http_enableSSL_privKey:
  # http_enableSSL_privKey_password:
  hec_port: 8088
  hec_disabled: 0
  hec_enableSSL: 1
  # The hec_token here is used for INGESTION only (receiving splunk events).
  # Setting up your environment to forward events out of the cluster is another matter entirely.
  hec_token: 00000000-0000-0000-0000-000000000000
  app_paths:
    default: /opt/splunk/etc/apps
    shc: /opt/splunk/etc/shcluster/apps
    idxc: /opt/splunk/etc/master-apps
    httpinput: /opt/splunk/etc/apps/splunk_httpinput
  # Search Head Clustering
  shc:
    enable: false
    # Change these before deploying
    secret: some_secret
    replication_factor: 3
    replication_port: 9887
  # Indexer Clustering
  idxc:
    # Change before deploying
    secret: some_secret
    search_factor: 2
    replication_factor: 3
    replication_port: 9887
When the VMs are converged, logging in with `molecule login -h `, netcat says their SSH ports are visible to each other. The VMs are configured to broadcast/receive on the 10.0.0.0/24 IP range. The Splunk receiver is at 10.0.0.1 and the Splunk forwarder is at 10.0.0.2.
`nc -zv 127.0.0.1 9997` run on the receiver says port 9997 connects OK. But from the forwarder, `nc -zv 10.0.0.1 9997` returns an error. This is in line with errors seen on the forwarder in splunkd.log:
ERROR TcpOutputFd - Connection to host=10.0.0.1:9997 failed
WARN TcpOutputProc - Applying quarantine to ip=10.0.0.1 port=9997 _numberOfFailures=2
On the receiver, `splunk list inputstatus` shows:
tcp_cooked:listenerports :
9997
There are no active firewalls on the VMs; they're lightweight configurations for testing config-management code.
Currently, the receiver inputs at `./system/local/inputs.conf` (or if I use the web UI, `./apps/splunk_monitoring_console/local/inputs.conf`) are set to:
[splunktcp://9997]
listenOnIPv6 = no
disabled = 0
acceptFrom = 10.0.0.0/24
connection_host = ip
I've tried this with ip6 enabled, and/or with connection_host set to none (dns is not configured on these hosts), but without success.
The forwarder outputs at `./system/local/outputs.conf` is set to:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = 10.0.0.1:9997
[tcpout-server://10.0.0.1:9997]
My sub questions are:
1. Do the configs (vars used, inputs/outputs files) look reasonable?
2. What is the right way to apply the ansible role for splunk_standalone, unlicensed, so it can accept forwarders? (aka, 'did I run the role incorrectly', or should I have run more than one role).
3. Is the splunk_standalone role, unlicensed, able to accept forwarders?
Ideas? What other troubleshooting steps can I take?
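One detail that stands out in the vars above (only a guess based on the role's own comments, not verified): s2s_enable is set to 0, and its comment says it is what opens the s2s_port for splunktcp ingestion, so the role itself never configures 9997 as a receiving port and the hand-edited inputs.conf is doing all the work. Letting the role own it might behave differently:
splunk:
  # ...rest of the vars unchanged...
  s2s_port: 9997
  s2s_enable: 1
Beyond that, checking on the receiver that splunkd has 9997 bound to 0.0.0.0 (for example with netstat -tlnp) rather than only the loopback interface would help narrow down whether this is a Splunk configuration issue or a Vagrant/molecule networking one.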
↧
parsing specific events in logs at heavy forwarder
Hello,
I am new to Splunk and learning it. I am trying to keep the events containing specific keywords while dropping the other events from the logs at the heavy forwarder. For example, below are some sample logs.
2018-02-21T18:00:13.119575+00:00 apachefront audispd: node=abc.corp.com type=PATH msg=audit(1550772013.107:10434531): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=786685 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
2018-02-21T18:00:13.1154665+00:00 apachefront audispd: node=apachefront type=EOE msg=audit(1550772013.107:10434531):
2018-02-21T18:00:13.120488+00:00 apachefront audispd: node=apachefront type=SYSCALL msg=audit(155054653.115:103534532): arch=c000003e syscall=59 success=yes exit=0 a0=1053420 a1=10534e0 a2=1050980 a3=7ffe6956c490 items=2 ppid=39078 pid=39084 auid=708926886 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15253 comm="ps" exe="/bin/ps" key="root"
2018-02-21T18:00:13.12561541+00:00 apachefront audispd: node=apachefront type=EXECVE msg=audit(155564013.115:104455432): a0="ps" a1="-eT"
2018-02-21T18:00:13.121049+00:00 apachefront audispd: node=apachefront type=CWD msg=audit(16872013.115:1062): cwd="/"
2018-02-21T18:00:13.121241+00:00 apachefront audispd: node=apachefront type=PATH msg=audit(1550772013.115:10434532): name="/usr/bin/ps" inode=1995646 dev=fe:02 mode=0100755 ouid=0 rdev=00:00 nametype=NORMAL
2018-02-21T18:00:13.156434+00:00 apachefront audispd: node=apachefront type=PATH msg=audit(1550765463.115:10434532): item=1 name="/lproc/ld-linux-x86-32" inode=7865644 dev=fe:02 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
In these logs, I am trying to make the heavy forwarder send only the events that have *type=SYSCALL* or *type=EXECVE* while dropping the others. Below is my transforms.conf; however, the heavy forwarder is dropping all of the events. Any help would be appreciated.
[set_SYSCALL]
REGEX = \,\d{3}\s*\w+\s*\[type=SYSCALL]
DEST_KEY = queue
FORMAT = nullQueue
[set_EXECVE]
REGEX = \,\d{3}\s*\w+\s*\[type=EXECVE]
DEST_KEY = queue
FORMAT = nullQueue
Thank you
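A common pattern for "keep only these, drop everything else" (a sketch, not tested against these exact events; the props.conf sourcetype name is a placeholder): route everything to the nullQueue first, then route the wanted events back to the indexQueue, keying on the type= field rather than the patterns in the attempt above, and list the transforms in that order.
# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[setparsing]
REGEX = type=(SYSCALL|EXECVE)\s
DEST_KEY = queue
FORMAT = indexQueue
# props.conf
[audispd_syslog]
TRANSFORMS-routing = setnull, setparsing
Note that in the original attempt both stanzas send their matches to the nullQueue, so even where the regexes do match, the SYSCALL and EXECVE events would be the ones dropped.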
↧
User missing roles.
Hello
I have users who do not have all the roles they should be associated with appearing in the Access Control >> Users page. For example, user foo is in three LDAP groups (a, b, and c), which are bound to role_a, role_b, and role_c. When I search for user foo, according to Splunk this user's roles are role_a and role_b. If I look at the mapped groups for the LDAP strategies associated with role_a, role_b, and role_c, user foo is a member of each.
When I click on user foo in Access Control >> Users, the selected roles box is greyed out, and role_c is not assigned to user foo, only role_a and role_b. How do I get Splunk to assign role_c to user foo? I also have several other users who are only getting role_c assigned to them even though they are part of role_a and/or role_b.
Thanks
↧
↧
how to make a decision based on a row value
I'm trying to create a traffic-light dashboard for my applications based on their status and tier level. If any one application's status is RED, I want the tier to be shown as RED even if other applications in the same tier are GREEN or AMBER.
Can you suggest how my search query should look?
example data
SrV |App| Tier |Status
S1 |A1 |Tier1 |AMBER
S2 |A1 |Tier1 |AMBER
S3 |A2 |Tier2 |AMBER
S4 |A3 |Tier3 |GREEN
S5 |A4 |Tier2 |GREEN
S6 |A2 |Tier2 |AMBER
S7 |A4 |Tier2 |GREEN
S8 |A5 |Tier1 |RED
to Something like
Tier1 Tier2 Tier3
RED AMBER GREEN
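One way to sketch it (untested; assumes a base search that returns the SrV/App/Tier/Status rows shown above): rank the statuses, keep the worst per tier, then flip tiers into columns.
<base search returning SrV, App, Tier, Status>
| eval rank=case(Status="RED", 3, Status="AMBER", 2, Status="GREEN", 1)
| stats max(rank) as rank by Tier
| eval Status=case(rank=3, "RED", rank=2, "AMBER", rank=1, "GREEN")
| fields Tier Status
| transpose 0 header_field=Tier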
↧
Splunk SPL for matching same values and output to an additional column with a new defined value
Hi,
I have a field named OS
This field is populated with multiple values such as those below after running the following SPL:
| inputlookup Host.csv
| stats dc(host) as Count by OS
| fields - Count
Result:
WINDOWS NT
WINDOWS SERVER 2003
WINDOWS SERVER 2008
WINDOWS SERVER 2012
LINUX
LINUX 6.7
LINUX 7.0
SOLARIS 9
SOLARIS 10
I want an additional column in the results such that:
all the Windows values above display Windows,
all the Linux values above display Linux,
and so on, in an additional column like below:
![alt text][1]
How? I tried to use eval and case, but it seems I'm not getting it (or I'm having a long day).
Thanks in advance
[1]: /storage/temp/269597-spl.png
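One possible shape for that extra column (a sketch; OS_Family is a made-up field name and the regexes assume the OS values always start with the family word):
| inputlookup Host.csv
| stats dc(host) as Count by OS
| fields - Count
| eval OS_Family=case(match(upper(OS), "^WINDOWS"), "Windows", match(upper(OS), "^LINUX"), "Linux", match(upper(OS), "^SOLARIS"), "Solaris", 1==1, OS)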
↧
Migrate Splunk on premise to AWS cloud
Currently we have Splunk search heads (one of them with Enterprise Security), an indexer cluster, a deployment server, and a license master, which we need to migrate from virtual machines and physical boxes to the AWS cloud. What would be the best approach so that the data in buckets is not lost and all the configurations and apps remain intact?
↧