Hello,
I changed the following 2 parameters in the TA as per the documentation in order to lower the data volume:
fifo_interval="60" --> "300"
fifo_snapshot="1440" --> "288"
The documentation also mentions that the spanrestricted field in the nmon_span macro should be changed.
How would I set it to reflect the above parameter changes on the charts?
The field is defined as below:
| eval spanrestricted=case(
info_min_time == "0.000", 2*60,
Difference > (916*60*60),60*60,
Difference > (833*60*60),55*60,
Difference > (750*60*60),50*60,
Difference > (666*60*60),45*60,
Difference > (583*60*60),40*60,
Difference > (500*60*60),35*60,
Difference > (416*60*60),30*60,
Difference > (333*60*60),25*60,
Difference > (250*60*60),20*60,
Difference > (166*60*60),15*60,
Difference > (83*60*60),10*60,
Difference > (66*60*60),5*60,
Difference > (50*60*60),4*60,
Difference > (33*60*60),180,
Difference > (16*60*60),120,
Difference > (8*60*60),60,
Difference <= (8*60*60),60
)
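For what it's worth, a sketch of one possible adjustment (my assumption, not taken from the documentation): since fifo_interval="300" means a data point only arrives every 5 minutes, the branches that currently return spans shorter than 5*60 could be raised to 5*60, e.g.:
| eval spanrestricted=case(
info_min_time == "0.000", 5*60,
Difference > (916*60*60),60*60,
Difference > (833*60*60),55*60,
Difference > (750*60*60),50*60,
Difference > (666*60*60),45*60,
Difference > (583*60*60),40*60,
Difference > (500*60*60),35*60,
Difference > (416*60*60),30*60,
Difference > (333*60*60),25*60,
Difference > (250*60*60),20*60,
Difference > (166*60*60),15*60,
Difference > (83*60*60),10*60,
Difference <= (83*60*60),5*60
)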
Could you please advise?
Kind Regards,
Kamil
↧
App: Metricator for nmon - setting of the spanrestricted field
↧
Validating Cluster Bundle incorrectly labeling bundle as not requiring a restart
I just stood up a brand-new multi-site cluster on Splunk 7.2.4. While pushing out the very first of my configurations, I validated the bundle to see if there were any new features, and I noticed that the validation said a restart was not needed. I knew that to be untrue because I was pushing out my volume definitions.
Sure enough, when I pushed it the cluster did a rolling restart.
Why would the validation process incorrectly flag a bundle as not needing a restart? This could have been a mess if it had been a live environment and I want to make sure I know how to account for it in the future.
↧
↧
SEDCMD with winhostmon
We are trying to mask some data from winhostmon using SEDCMD.
Sample data with sourcetype=WinHostMon and source=process:
Type=Process
Name="wfcrun32.exe"
ProcessId=1
CommandLine="C:\PROGRAM FILES (X86)\Test\test.EXE" /h0 "C:\Program Files (x86)\Test2\test2.test" /username:"Test" /domain:AD /password:"test"
StartTime="20170516135737.278912+120"
Host="test-test2-test3"
Path="C:\PROGRAM FILES (X86)\Test\test.EXE"
props.conf:
[WinHostMon]
SEDCMD-anonymize=s/\/password.*$/\/password:XXXXX/g
The issue is that it is not masking the data. I have tried sourcetype, source, and host stanzas on the indexer, but it still isn't masking.
If I upload a test file with the data using the Add Data option, I am able to mask it with the SEDCMD; the same goes for a file with a static sourcetype.
My guess is that the source/sourcetype is not correct because of the way Splunk identifies the data at indexing/parsing time.
Does anyone have an idea how I can mask the data at index time?
The data is being sent from a universal forwarder to our indexers, so it is not passing through a heavy forwarder.
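For what it's worth, a guess at an alternative stanza to try on the indexers, targeting the source shown in the sample instead of the sourcetype (a sketch, not a confirmed fix):
# props.conf on the indexers
[source::process]
SEDCMD-anonymize = s/\/password.*$/\/password:XXXXX/g
If that still does nothing, checking the source and sourcetype actually recorded on an indexed event (rather than the documented ones) would confirm which stanza should match.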
↧
Weekly Values to be fetched in a filter
Hi All,
I have a reported date/time field which I am converting and displaying as a month filter, containing values such as Jan-2019, Feb-2019.
Is it possible to show a week filter that depends on the month filter?
For example: if I select a month (Apr-19) from the filter, the week filter should display something like week 1, week 2, week 3 for the month of Apr-19.
Let me know if this is possible.
↧
Week details to be displayed in a filter, e.g. week 1: 1st-7th Apr, week 2: 8th-14th Apr
Hi All,
I have a reported date/time field which I am converting and displaying as a month filter, containing values such as Jan-2019, Feb-2019.
Example: reported date/time field = 05/05/2019 16:29 (%d/%m/%Y %H:%M)
Is it possible to show a week filter that depends on the month filter?
For example: if I select a month (Apr-19) from the filter, the week filter should display something like week 1, week 2, week 3 for the month of Apr-19.
Let me know if this is possible.
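For context, a minimal sketch of a populating search for a dependent week dropdown, assuming a month token named $month_tok$ and a raw field called reported_time in %d/%m/%Y %H:%M format (both names are placeholders):
index=my_index
| eval reported=strptime(reported_time, "%d/%m/%Y %H:%M")
| eval month=strftime(reported, "%b-%Y")
| where month="$month_tok$"
| eval week="Week ".ceiling(tonumber(strftime(reported, "%d"))/7)
| dedup week
| sort week
| table week
The week values could then drive the second input's Field For Label / Field For Value settings.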
↧
↧
Java SDK Service.getJobs.create with token throws 401 Unauthorized error
I have a token instead of a username and password for connecting to Splunk. When connecting, I am able to authenticate just fine. However, when performing a query I get a 401 unauthorized. The token was set up for the HTTP Event Collector (HEC), so that may be why I can't perform the search directly on the instance.
loginArgs.setToken("xxx-xxx-xxx-xxx");
loginArgs.setHost("dev.splunk.domain.com");
loginArgs.setPort(443);
HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_1);
Service service = Service.connect(loginArgs);
System.out.println(service);
String mySearch = "search index=app_name starttime=\"05/01/2019:15:58:00\" endtime=\"05/15/2019:15:59:50\" | head 5";
Job job = service.getJobs().create(mySearch);
Below is part of the stack trace.
com.splunk.HttpException: HTTP 401 -- Unauthorized
at com.splunk.HttpException.create(HttpException.java:84)
at com.splunk.HttpService.send(HttpService.java:500)
at com.splunk.Service.send(Service.java:1295)
at com.splunk.HttpService.post(HttpService.java:348)
at com.splunk.JobCollection.create(JobCollection.java:81)
at com.splunk.JobCollection.create(JobCollection.java:62)
at com.mastercard.salt.client.http.HECConnector.execute(HECConnector.java:73)
at com.mastercard.salt.client.http.SplunkHECTest.setup(SplunkHECTest.java:17)
**Question**: Is the same token used for HEC and typical Splunk authentication?
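For comparison, a minimal sketch of creating the same search job with username/password authentication against the management port (8089 by default) instead of an HEC token; the host, credentials, and index below are placeholders:
import com.splunk.*;

public class SearchJobSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials and host; replace with real values.
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("admin");
        loginArgs.setPassword("changeme");
        loginArgs.setHost("dev.splunk.domain.com");
        loginArgs.setPort(8089); // management port, not the HEC port

        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_1);
        Service service = Service.connect(loginArgs);

        String mySearch = "search index=app_name | head 5";
        Job job = service.getJobs().create(mySearch);

        // Poll until the search job completes, then report how many results it produced.
        while (!job.isDone()) {
            Thread.sleep(500);
        }
        System.out.println("Result count: " + job.getResultCount());
    }
}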
↧
Splunk Log ingestion issues
Currently I am facing an issue: I am monitoring a directory that has over 14,000 files, and some files are being ingested into Splunk while others are not. I am currently using these two settings in my inputs.conf stanza:
disabled = false
recursive = true
Has anyone faced a similar issue?
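For reference, a sketch of a complete monitor stanza for comparison (the path, index, and sourcetype below are placeholders):
[monitor:///data/myapp/logs]
disabled = false
recursive = true
index = main
sourcetype = myapp_logs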
↧
Best way to connect to HEC with Java SDK
**Question**: What is the best way to connect to HEC with the Java SDK?
**SDK JAR Version**: 1.6.4.0
Currently, I am using the below code.
loginArgs.setToken("c0973521-5e90-4364-b551-cb7b1fcbfcf6");
loginArgs.setHost("https://hec.dev.splunk.domain.int:13510/services/collector/event");
loginArgs.setPort(13510);
HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_1);
Service service = Service.connect(loginArgs);
This returns an error saying the URI can't be null, even though the host is being set, which leads me to believe it's malformed.
java.lang.IllegalArgumentException: URI can't be null.
at sun.net.spi.DefaultProxySelector.select(DefaultProxySelector.java:148)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1150)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:259)
at com.splunk.HttpService.send(HttpService.java:445)
at com.splunk.Service.send(Service.java:1295)
at com.splunk.HttpService.post(HttpService.java:348)
at com.splunk.JobCollection.create(JobCollection.java:81)
at com.splunk.JobCollection.create(JobCollection.java:62)
at com.mastercard.salt.client.http.HECConnector.execute(HECConnector.java:73)
at com.mastercard.salt.client.http.SplunkHECTest.setup(SplunkHECTest.java:17)
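As far as I can tell, the SDK's Service class is aimed at the management/REST API rather than the collector endpoint, so one option is to post to HEC directly over HTTPS. A minimal sketch, reusing the endpoint and token from the snippet above (the payload is just an example):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HecPostSketch {
    public static void main(String[] args) throws Exception {
        // Endpoint and token taken from the snippet above; the payload is only an example.
        URL url = new URL("https://hec.dev.splunk.domain.int:13510/services/collector/event");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Splunk c0973521-5e90-4364-b551-cb7b1fcbfcf6");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String payload = "{\"event\": \"hello from the Java client\", \"sourcetype\": \"manual\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // A 200 response with {"text":"Success","code":0} means HEC accepted the event.
        System.out.println("HEC response code: " + conn.getResponseCode());
    }
}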
↧
Error when trying to add inputs to Splunk add-on for Microsoft Office 365
I've installed the Splunk Add-on for Microsoft Office 365 on an HFW, added a tenant, and am now trying to add an input. However, when I fill out the form and click Add, an error message appears: "Cannot read property 'status' of undefined". Has anyone run into this? Proxies are enabled.
I also noticed that the doc states that after clicking Add, "the Splunk Add-on for Microsoft Office 365 saves your input settings and divides up the data collection tasks included in the input evenly among all the forwarders that you have specified in the Forwarders tab on the Configuration page." Well, this app doesn't have a Configuration page!
↧
↧
How to find all Dashboards, Reports and Alerts related to a specific index?
Our Splunk instance is being overhauled and I need to update all of the content that has been built. Some of our indexes are changing names, and I am looking for a query that I can run to find all Dashboards, Reports and Alerts that are based on specific indexes.
I.E. index=main is changing to index=core.
I need to search for all Dashboards, Reports and Alerts that reference index=main so I can change them to index=core.
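For example, a sketch along these lines (assuming your role can read the REST endpoints) would list reports and alerts, which live under saved/searches, that mention index=main:
| rest /servicesNS/-/-/saved/searches
| search search="*index=main*"
| table title eai:acl.app search
and similarly for dashboards, whose XML is exposed as eai:data under data/ui/views:
| rest /servicesNS/-/-/data/ui/views
| search eai:data="*index=main*"
| table title eai:acl.app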
↧
Metrics per process on Windows and Linux host
Does any Splunk app give per-process metric information on Windows and Linux hosts? We have deployed the Splunk App for Infrastructure on our hosts. It has system-level metrics, but it doesn't give process-level metrics. Any help on this would be great. Thanks
↧
Splunk indexer node unable to join the cluster
I have an indexer cluster with 1 master and 2 peer nodes. The peer node machines got rebooted suddenly, and now one of the peer nodes is showing status DOWN. I have tried restarting the node, but it doesn't help. The replication factor for the cluster is 2. I have also tried adding the node to the cluster again, and it complains that the secret key is wrong. Is there any way I could find the secret key from the master or the working node? What is the best way to fix this issue and make the peer join the cluster?
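For reference, a sketch of where the cluster secret lives; the same value must be set on the master and every peer (the plain-text value below is a placeholder, and on a running instance server.conf will show it encrypted):
# server.conf
[clustering]
pass4SymmKey = <shared_secret>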
↧
Form error when referencing a token in the search
I have three multi-selection options for my form. All three default to *, but the main panel of the form throws an error when I reference one or all of the token values to filter the search down. I get the following error: Error in 'search' command unable to parse the search: Comparator '=' has an invalid term on the right hand side.
The search string for the panel calling the token is as follows:
| dbxquery output='csv' connection="blah" query="SELECT * FROM "blah_data"
| search filtering_field=$filter_tok$
| table field1 field2 field3
The Multi-selection input is defined as follows:
Label: Filtering Field
**Token Options**
Token: filter_tok
Default: All
Token Prefix: (
Token Suffix: )
Token Value Prefix: Filtering_Field="
Token Value Suffix: "
Delimiter: OR (spaces before and after OR)
**Static Options**
Name: All
Value: *
**Dynamic Options**
Search String:
| dbxquery output='csv' connection="blah" query="SELECT * FROM "blah_data"
| dedup filtering_field
| table filtering_field
Time Input: Last 24 Hours
Field For Label: Filtering_Field
Field of Value: Filtering_Field
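For comparison, a sketch of the panel search under the assumption that the prefix/suffix settings above already wrap each selected value as Filtering_Field="...", so the token expands to a complete clause such as (Filtering_Field="a" OR Filtering_Field="b") and should not sit on the right-hand side of another field= comparison:
| dbxquery output='csv' connection="blah" query="SELECT * FROM "blah_data"
| search $filter_tok$
| table field1 field2 field3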
↧
↧
Route to index based on source IP/Dest IP in log (Not source host)
I cannot seem to get this to work, so I assume I am doing something wrong. We are about to start a POC for Splunk, but we wanted to get a head start on some of our use cases.
We need to route specific incoming data (proxy and firewall logs) to different indexes for our clients. The actual host sending us the logs could be the same for 100 clients, so we need to do the routing based on the source or destination IP within the log.
Samples are below. Basically, we want to route that data into the index called 1000, and we would then create more transforms that use different regexes for other CIDR ranges. From what I am reading, this appears to be at least close to what I want.
Props.conf
[cisco:asa]
TRANSFORMS-1000 = 1000cisco
Transforms.conf
[1000cisco]
REGEX = :10\.1\.([0-9]|[1-9][0-9]|1([0-9][0-9])|2([0-4][0-9]|5[0-5]))\.([0-9]|[1-9][0-9]|1([0-9][0-9])|2([0-4][0-9]|5[0-5]))
DEST_KEY = _MetaData:Index
FORMAT = 1000
Sample Log
<172>May 16 10:51:17 hostip %ASA-4-106023: Deny tcp src fwinterface:10.1.1.57/64176 dst outside:172.217.7.14/443(cloud.google.com) by access-group "aclname" [0x0, 0x0]
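Building on that, a sketch of what a second routing rule for another client range might look like (the 10.2.x.x pattern and the 2000 index name are made-up placeholders, and the target index must already exist on the indexers):
# transforms.conf
[2000cisco]
REGEX = :10\.2\.([0-9]|[1-9][0-9]|1([0-9][0-9])|2([0-4][0-9]|5[0-5]))\.([0-9]|[1-9][0-9]|1([0-9][0-9])|2([0-4][0-9]|5[0-5]))
DEST_KEY = _MetaData:Index
FORMAT = 2000
# props.conf
[cisco:asa]
TRANSFORMS-routing = 1000cisco, 2000cisco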
↧
Duration between Logoff and Logon
Hi team,
Please help me to figure out the issue.
I would like to create a dashboard using my Audit logs to capture my break time.
I'm trying to use the time difference between a successful Logoff and the following Logon; that duration would be my break time.
I wrote an SPL query, but no results were obtained.
*Event 1*
**05/16/2019 03:00:05 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4624
EventType=0
Type=Information
ComputerName=IN2119801W3.ey.net
TaskCategory=Logon
OpCode=Info
RecordNumber=240116
Keywords=Audit Success
Message=An account was successfully logged on.**
*Event 2*
**05/16/2019 02:30:00 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4634
EventType=0
Type=Information
ComputerName=IN2119801W3.ey.net
TaskCategory=Logoff
OpCode=Info
RecordNumber=238613
Keywords=Audit Success
Message=An account was logged off.**
*Splunk query*
**index="mymachinelogs" Keywords="Audit Success" TaskCategory=Logoff OR TaskCategory=Logon | transaction TaskCategory startswith="Logoff" endswith="Logon" maxevents=2 | table _time TaskCategory duration**
No results found
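One thing worth noting (an observation, not a guaranteed fix): transaction TaskCategory groups events by the value of TaskCategory itself, so a Logoff event and a Logon event can never land in the same transaction. A minimal sketch that groups by ComputerName instead, assuming one user per machine:
index="mymachinelogs" Keywords="Audit Success" (TaskCategory=Logoff OR TaskCategory=Logon)
| transaction ComputerName startswith="TaskCategory=Logoff" endswith="TaskCategory=Logon" maxevents=2
| table _time ComputerName duration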
Thanks in advance!
↧
Using Active Directory group membership as sub search to user logon search
I am trying to use an ldapsearch as input to a search which will list AD user logons. Both parts of the search work independently, but I don't know how to combine them into one search (sub search and main search?). The users returned are all members of one Active Directory group or another, and I'd like to be able to use the group name to limit the search.
This search will show me exactly what I want to see, but I have to put in OR statements which doesn't scale very well.
sourcetype="xmlwineventlog" EventCode=4624 (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10) (Target_User_Name=User1 OR Target_User_Name=user2 OR etc...)
This search returns all the sAMAccountNames of the members of a group:
| ldapsearch search="(&(objectclass=group)(distinguishedName=TheDN of the Group))"|ldapgroup|table member_name
Is there a way to combine them into one search without having to go to the trouble of creating a csv lookup? Since ldapsearch finds the correct user names, it seems inefficient to have to use that method.
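For what it's worth, a sketch of combining them with a subsearch, assuming the member_name values returned by ldapgroup match the Target_User_Name values in the logon events (a subsearch's results are expanded into OR-ed field=value pairs, and the rename makes the field name line up):
sourcetype="xmlwineventlog" EventCode=4624 (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10)
    [| ldapsearch search="(&(objectclass=group)(distinguishedName=TheDN of the Group))"
     | ldapgroup
     | table member_name
     | rename member_name AS Target_User_Name]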
↧
Converting bytes to GB or MB
Hey all, I was getting confused by some of the Splunk Answers posts on conversion and couldn't figure out the eval portion of my query. Can someone shed some light on how I can convert the bytes_out field from my Palo Alto logs to MB and GB? Query below; thank you in advance!
index=pan_logs sourcetype=pan:traffic
| stats sum(bytes_out) AS bytes_out by user src_ip dest_ip
| where bytes_out>35000000
| sort - bytes_out
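For what it's worth, a minimal sketch adding the conversion with eval, using 1024-based units (swap 1024 for 1000 if decimal units are preferred):
index=pan_logs sourcetype=pan:traffic
| stats sum(bytes_out) AS bytes_out by user src_ip dest_ip
| where bytes_out>35000000
| eval bytes_out_MB=round(bytes_out/1024/1024, 2)
| eval bytes_out_GB=round(bytes_out/1024/1024/1024, 2)
| sort - bytes_out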
↧
↧
PCAP Analyzer for Splunk Not Working Properly - Windows
Hello,
Unable to convert pcap file to a csv for indexing and analysis.
I followed the instructions from Daniel; however, the pcap file is not converting to a csv. Therefore, the data is not being indexed.
I gave Full rights to my ID (and all users on my laptop) to
- Wireshark folder and subfolders (for access to tshark.exe)
- SplunkForPCAP folder and subfolders (for access to ../SplunkForPCAP/bin/ folder)
I set SPLUNK_HOME variable. I tried both as a system and as a public variable.
Here is the procedure I followed
- Drop a pcap in the folder I configured for Data Inputs (PCAPanalyzerTEST)
- A few minutes later, the file appears to have been processed and is no longer in the PCAPanalyzerTEST folder
- It is in the PCAPConverted folder
- There is also a csv file in the PCAPcsv folder. However, it is zero bytes long.
**Environment**
- Windows 8.1 Enterprise
- Splunk Enterprise 7.2.5.1 - Single instance on laptop
- Splunk Stream 7.1.3
- Splunk PCAP Analyzer 4.1.1.0
Here are the contents of the indexes.conf and inputs.conf files in the Splunk home folder under \etc\apps\SplunkForPCAP\local.
**indexes.conf**
[pcap]
coldPath = $SPLUNK_DB\pcap\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB\pcap\db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB\pcap\thaweddb
**inputs.conf**
[pcap://PCAPanalyzerTEST]
host = GCJPC
index = pcap
path = C:\Users\gcj\Desktop\PCAPanalyzerTEST
Thanks in advance for any direction or advice you can offer.
God bless,
Genesius
↧
To filter data from cloudwatch logs to splunk
Hi,
I am getting CloudWatch Logs data into Splunk. Right now I am getting all of the log data, but I want only specific data (for example, only the JSON stream that is populated in the logs once in a while).
How can I filter the data before Splunk ingests all of it from CloudWatch Logs?
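For reference, if the filtering has to happen at parsing time on an indexer or heavy forwarder, the usual pattern is to send everything to the nullQueue and route only the wanted events back; a sketch is below, where the sourcetype name and the "looks like JSON" regex are assumptions to adjust:
# props.conf (sourcetype name is a placeholder for whatever your CloudWatch input assigns)
[aws:cloudwatchlogs]
TRANSFORMS-filter = drop_all, keep_json
# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[keep_json]
REGEX = ^\s*\{
DEST_KEY = queue
FORMAT = indexQueue
Transforms run in order, so everything is dropped first and only events matching the second rule are routed back to the index queue.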
Thanks,
Niddhi
↧
creating a deployment app to push and run a bash script
Is it possible to create a Splunk deployment app that I can push out to my forwarders, which will run a bash script every minute to gather facts and write them to a log?
I have looked at some of the documentation, created an app, and placed my bash script at /opt/splunk/etc/deployment-apps/myapp/bin/script.sh.
I can see that it gets deployed to my test server, but I see in splunkd.log that I get:
"Incorrect path to script: /opt/splunk/etc/deployment-apps/myapp/bin/script.sh Script must be inside $SPLUNK_HOME/bin/scripts".
My default/inputs.conf file has:
[script://path to the script]
disabled=0
interval=60
sourcetype=splunkd
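For reference, a sketch of how the stanza might look once the app is deployed to the forwarder as "myapp" (the script path has to point at the app's own bin directory on the client, not at deployment-apps on the deployment server):
# etc/apps/myapp/default/inputs.conf on the forwarder
[script://./bin/script.sh]
disabled = 0
interval = 60
sourcetype = splunkd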
↧