Hi Everyone,
I'm new to Splunk; our data looks like this:

    id;name;Field1;Field2;Field3;Field4;field5;field6;field7
    0;Module Name 0;true;false;true;true;false;true;true
    1;Module Name 1;true;false;false;true;false;false;false
We would like to build a table that looks like this:
----------------------------------------------
FieldName | is_True | is_False |
field1 | 10 | 20 |
field2 | 10 | 20 |
field3 | 10 | 20 |
----------------------------------------------
The columns is_True and is_False count how many times field* is true and false, respectively.
How can I get something like this? Is there a special query for it?
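A sketch of one way to do this with untable and chart, assuming the events land in index=main and the field names match your sample (note the mixed casing, Field1 vs field5 — adjust to your real data):

```
index=main
| table id Field1 Field2 Field3 Field4 field5 field6 field7
| untable id FieldName value
| chart count over FieldName by value
| rename "true" AS is_True, "false" AS is_False
```

untable turns each field into its own row, and chart then counts the true/false occurrences per FieldName.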
↧
Make a table with calculated fields from field values
↧
Can I REGEX a string and assign to field?
I have Graylog forwarding Windows events, and I use this setting in my props.conf to parse them:
FIELDALIAS-winlogbeat_as_action = winlogbeat_keywords as action
This sets action to the value of winlogbeat_keywords; however, the value is `[audit success]` and I want to remove the `[]`.
I know I can use an EXTRACT with a regular expression, but I am guessing that runs against the entire message string, and I want to run it on winlogbeat_keywords and/or action.
can I do this?
Thanks!
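EXTRACT does support running against a specific source field via an `in <field>` suffix, so you can strip the brackets while creating action in one step. A sketch, with a placeholder stanza name you would replace with your actual sourcetype (and which would replace the FIELDALIAS, since it writes action directly):

```
[your_windows_sourcetype]
# Pull the bracketed value of winlogbeat_keywords into action, without the []
EXTRACT-action = ^\[(?<action>[^\]]+)\]$ in winlogbeat_keywords
```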
↧
↧
Is it possible to modify users settings using the CLI?
Hi,
I would like to modify some user options, like "search assistant" or "syntax highlighting", using the CLI; is it possible?
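There is no dedicated CLI command for these, but they are stored per user in user-prefs.conf, which can be edited on disk (or POSTed to via the REST conf endpoints). A sketch; the key names below are from memory and should be checked against user-prefs.conf.spec for your version:

```
# $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf
[general]
# "none" disables the search assistant; other values include "compact" and "full"
search_assistant = none
search_syntax_highlighting = dark
```

A restart (or a debug/refresh) is needed for Splunk Web to pick up the change.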
↧
Search on summary index is taking a long time to return results
Hello,
I am running a saved search (every 5 minutes) to populate a summary index using the collect command.
Now searches on the summary index are taking much longer to return results than they used to.
What could be the reason for this delay? Ideally, a search on a summary index should return results quickly, right?
When I searched the _internal index for errors, I saw the message `ERROR IndexScopedSearch - STMgr::distinct_apply_terms failed (rc=-33) while scanning for _indextime bounds in bucket`.
Is this error related to my issue?
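Since the error mentions scanning buckets, one place to start is bucket health: a collect that runs every 5 minutes can accumulate many small buckets over time, which slows searches. A quick sketch to inspect this (substitute your summary index name):

```
| dbinspect index=your_summary_index
| stats count AS buckets avg(sizeOnDiskMB) AS avg_mb by state
```

A very large bucket count with a small average size would support that theory.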
↧
Writing MS SQL 2016 data to the main index with Splunk Enterprise 7.1.1 and DB Connect 3.1.3
To start, some information about the environment:
- Single Instance of Splunk Enterprise in Version 7.1.1
- MS SQL 2016 Database
- JRE Version 8 (1.8.0_181)
- JDBC Driver Version 6.4
- DB Connect App 3.1.3.
The connection to the database works, so it is possible to execute the SQL query and preview the data. But the data is not written to the index.
In the splunk_app_db-connect_server log file, we found the following error:
2018-08-28 11:41:23.122 +0200 [QuartzScheduler_Worker-17] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch
java.io.IOException: HTTP Error 400: Bad Request
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:112)
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:89)
at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36)
at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
So here is what we have tried so far:
- changing DB Connect inputs to use current index time
- removing the rising column from the DB Connect input
- changing the port of the HEC in the global settings
- filling the "Host" field in the input configuration
- disabling indexer acknowledgement on the HEC token
With DB Connect 2.4.1, writing to the main index works, but there is another problem with the rising column functionality.
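To rule DB Connect out, it can help to send a test event straight to HEC with curl and see whether that also returns a 400; the port, token, and sourcetype below are placeholders for your own settings:

```shell
curl -k "https://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec smoke test", "index": "main", "sourcetype": "db_connect_test"}'
```

If this also fails with a 400, the problem is usually the token or index configuration (for example, the token not being allowed to write to main) rather than DB Connect itself.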
↧
↧
How to compare individual values between two fields having multiple values?
I have 2 fields from my search, something like this -
Errorcode, ErrorDescription
Err1, "abcd password is missing xyz"
Err1, "1111 password is missing 222"
Err1, "1233455 connection is not working 6789"
Now I have another field called ErrorCategory which has a list of values like:
Password is missing, Connection is not working, xxxx, yyyy, jjjj...
I want to compare each value of ErrorDescription with ErrorCategory, and the end result should look like this -
ErrorCode, ErrorDescription,ErrorCategory
Err1, "abcd password is missing xyz",Password is missing
Err1, "1111 password is missing 222",Password is missing
Err1, "1233455 connection is not working 6789",Connection is not working
Right now, when I try to use match to compare, it compares the entire ErrorCategory list against each value of ErrorDescription, so it never matches. How do I achieve a one-to-one comparison between fields that have multiple values?
Does anyone have any idea how to achieve this ?
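One approach is to expand the category list to one row per candidate and keep only the rows where the category text occurs in the description. A sketch, assuming ErrorCategory reaches the events as a multivalue field (for example via a lookup or appendcols) and that the category strings are safe to use as regexes:

```
... | mvexpand ErrorCategory
| where match(lower(ErrorDescription), lower(ErrorCategory))
| table ErrorCode ErrorDescription ErrorCategory
```

Note that mvexpand multiplies the row count by the number of categories before the where filters it back down, so keep the category list short or move it into a lookup.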
↧
Find Hosts which do their searches in alphabetical order
Hi there
I have many log-entries with the two fields "host_address" (an IP address) and "query" (a search query). One entry per query. I would like to figure out which "host_addresses" do their queries in alphabetical order. That's it.
To be honest: I have no idea where to start!
The only thing I found was the following article:
[https://www.splunk.com/blog/2017/06/16/detecting-brute-force-attacks-with-splunk.html][1]
but it does not really help me either. Can anyone help?
Best regards, Dominic
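A sketch of one way to flag hosts whose queries only ever move forward alphabetically, using streamstats to fetch each host's previous query (the index and sourcetype are placeholders):

```
index=your_index sourcetype=your_sourcetype
| sort 0 host_address _time
| streamstats current=f window=1 last(query) AS prev_query by host_address
| eval in_order=if(isnull(prev_query) OR query >= prev_query, 1, 0)
| stats min(in_order) AS always_alphabetical count AS query_count by host_address
| where always_alphabetical=1 AND query_count > 1
```

The string comparison `query >= prev_query` is lexicographic and case-sensitive; wrap both sides in lower() if your queries mix cases.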
↧
Difference between DMC and SSH free disk space
Hello Splunkers.
Today I saw a curious issue after receiving a Splunk alert of low disk space (lower than 10%).
In the Monitoring console, I can see why the alert was fired:
![alt text][1]
However, when I go to SSH and do a `df -h`, this is the result I get:
![alt text][2]
This would give 20% free disk space, and the alarm would not be fired.
Any thoughts why this difference between Monitoring Console and SSH is happening?
regards,
GMA
[1]: /storage/temp/254798-dmc.png
[2]: /storage/temp/254799-ssh.png
↧
How to display all accounts in the same chart at the same time
How do I display all accounts in the same chart at the same time?
There are three accounts:
account1 has $1,000,000
account2 has $200,000
account3 has $100
Because the gap between the values is very large, the three accounts cannot be displayed in the same chart at the same time; only two accounts are displayed.
![alt text][1]
[1]: /storage/temp/254800-数值偏差大图中不显示.png
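A common fix for series that differ by several orders of magnitude is a logarithmic Y axis; in a Simple XML dashboard that is a single chart option:

```
<option name="charting.axisY.scale">log</option>
```

In the chart editor UI, the same setting lives under Format > Y-Axis > Scale.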
↧
↧
How to set the connectTimeout() and readTimeout() using splunk API Java SDK?
Hi,
I am using splunk jar 1.6.0.0, and according to the Splunk GitHub repo, HttpService has methods to set readTimeout() and connectTimeout(); but in splunk jar 1.6.0.0 we can only see the following methods in HttpService:
- createSSLFactory()
- getSslSecurityProtocol()
- getSSLSocketFactory()
- setSslSecurityProtocol()
- setSSLSocketFactory()
We want to set those timeouts explicitly because our requests to Splunk hang. Can somebody explain how connectTimeout() and readTimeout() can be set using the Splunk Java SDK?
Thanks.
↧
Email results from Python script through REST API
Hello Experts,
I have created a machine learning model and am fetching data from Splunk to generate real-time predictions for my problem. I'm extracting the data from Splunk using the REST API Python library.
Question:
Once I generate the prediction from the Python script, I have to email the result to specific users. I'm not sure how to do that.
Through the REST API, is there an option to send the results using the "sendemail" command? (How do I pass the result variable through REST?)
    import sys
    from time import sleep

    import numpy as np
    import pandas as pd

    import splunklib.client as client
    import splunklib.results as results

    service = client.connect(
        host=HOST,
        port=PORT,
        username=USERNAME,
        password=PASSWORD)

    searchquery_normal = r"search criteria "
    kwargs_normalsearch = {"exec_mode": "normal"}
    job = service.jobs.create(searchquery_normal, **kwargs_normalsearch)

    # Wait for the normal-mode job to finish before reading results
    while not job.is_done():
        sleep(2)

    # Results as a dataframe
    lst = list(results.ResultsReader(job.results()))
    df = pd.DataFrame(lst)

    # Apply the ML model and print the prediction
    prediction = fittedModel.predict(np.array(df.head(1)))
    print('Prediction = ' + '{:,}'.format(int(prediction)))

    # Send the prediction to a recipient
    #### something like
    #### searchquery_normal = r"| sendemail prediction value recipientaddress "
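sendemail is a search command, so it only runs inside a search pipeline; since the prediction is already in Python, an alternative is to skip the Splunk round-trip and mail it directly with smtplib. A sketch, assuming a reachable SMTP relay and with made-up addresses:

```python
import smtplib
from email.message import EmailMessage


def build_prediction_email(prediction, recipient, sender="splunk-ml@example.com"):
    """Build an email message carrying the model's prediction."""
    msg = EmailMessage()
    msg["Subject"] = "Model prediction: {:,}".format(int(prediction))
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("The latest prediction is {:,}.".format(int(prediction)))
    return msg


def send_prediction(msg, smtp_host="localhost", smtp_port=25):
    """Send the message through an SMTP relay."""
    with smtplib.SMTP(smtp_host, smtp_port) as smtp:
        smtp.send_message(msg)
```

If the email must go through Splunk instead, another option is running a small oneshot search from the SDK, e.g. `| makeresults | eval prediction=<value> | sendemail to="user@example.com"`, which requires mail settings to be configured on the Splunk server.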
I would really appreciate your help here.
Thanks!
Best,
Harsha
↧
Assistance in showing servers logging the most when license quota is at 75% full
I am looking for help to see how I can have my current alert, which emails me when our quota is 75% full, also present in the email the top 10 logging offenders.
Is that possible?
Currently, the search I'm using to show a 30 GB quota being 75% full is:
index=_internal source="*license_usage.lo*" type=Usage pool="Linux Pool" earliest=@d| stats sum(b) as bytes | eval gb=bytes/1024/1024/1024|where gb>=22
Is there a way for the search to also show the top offenders, something like:
index=* | top limit=10 host
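The license usage events already carry the originating host in the `h` field, so the offenders can come from the same _internal data instead of an expensive `index=*` scan. A sketch against the same pool:

```
index=_internal source=*license_usage.log* type=Usage pool="Linux Pool" earliest=@d
| stats sum(b) AS bytes by h
| eval gb=round(bytes/1024/1024/1024, 2)
| sort - gb
| head 10
| rename h AS host
```

One caveat: when there are very many distinct hosts, Splunk can squash per-host detail in license_usage.log, leaving `h` empty for some events.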
Thank you!
↧
Is there a regex available to drop service account events from active directory to be used on the universal forwarder?
Our security event count is in the millions; we have more than 600 service accounts in our environment, and they contribute millions of account-logon events, so we want to drop these events for service accounts.
Is there a regex available to drop service account events from active directory to be used on the universal forwarder?
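Windows event log inputs on a universal forwarder support regex blacklists in inputs.conf, which drops the events before they are ever forwarded. A sketch; the event codes and the `svc_` naming pattern are assumptions to replace with your own:

```
[WinEventLog://Security]
# Drop logon/logoff events (4624/4634) for accounts matching the assumed
# service-account naming pattern
blacklist1 = EventCode="(4624|4634)" Account_Name="(?i)svc_.*"
```

Multiple key="regex" pairs in one blacklist line are ANDed; check the field names against your actual events, since they can differ between classic and XML renderings.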
↧
↧
Multiple Accounts with Plugin
I need to be able to deploy this against multiple AWS accounts and then aggregate the Trusted Advisor data in the dashboard. Is anyone using this against multiple accounts today?
↧