Hi
Does anyone know how to suppress notable events by looking them up in a CSV file?
I want to suppress notable events by looking them up in a CSV file which contains thousands of entries. Following is the search I tried:
`get_notable_index` source="" [|inputlookup | field ]
However, since a notable event suppression is a kind of eventtype, I can't use a subsearch or pipes in it, and therefore the suppression cannot be saved.
Is there a way to get around this? Or is there a way to convert all the lookup items to search terms? It would look like this:
`get_notable_index` source="" (myfield=123 OR myfield=456 OR myfield=789 ......)
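As a point of reference for that second idea, the `format` command can turn lookup rows into exactly that kind of OR'ed string. This is only a minimal sketch, assuming a hypothetical lookup file suppression_list.csv with a myfield column:

    | inputlookup suppression_list.csv
    | fields myfield
    | format

Run on its own, this returns a single `search` field containing ( ( myfield=123 ) OR ( myfield=456 ) ... ), which could then be pasted into the suppression eventtype as literal search terms.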
Any suggestion will be much appreciated.
Thanks.
↧
Enterprise Security - Notable event suppression by using lookup
↧
Hyper-V add-on not sending real-time data
Hello All
I have made changes to VMs. For example, one VM's status changed from 'Off' to 'Running', but the Hyper-V add-on is still sending the status as 'Off'.
Only after I restarted the Splunk forwarder on the blade did the indexer receive the status as 'Running'.
Is it required to restart the Splunk forwarder after any change to a VM under the blade?
Thank you
AB
↧
↧
What's the best way to get the list of forwarders where splunkd service has stopped running?
Hi!
I need to find the list of all servers where the **splunkd service is not running** but was running before. I have more than 9000 forwarders and three scenarios, which are listed below:
1. splunkd is not running.
2. splunkd is running and deployment client is set but indexer configurations are not done.
3. splunkd is running and indexer configurations are done but deployment client is not set.
Because of the scenarios above, I am finding it difficult to use queries based on phone-home or internal logs received in Splunk, as they show an incorrect server list.
Also, I'm not allowed to use a script to monitor the splunkd service on each host, as that requires remote login.
Currently I'm using internal logs to find the up and down forwarders, but I'm looking for a better solution.
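For context, a minimal sketch of that internal-log style of check, which flags hosts that have gone quiet in _internal (the 15-minute threshold is just an assumption to tune):

    | metadata type=hosts index=_internal
    | eval minutes_since_last_event=round((now() - recentTime) / 60)
    | where minutes_since_last_event > 15
    | table host, minutes_since_last_event
    | sort - minutes_since_last_event

This only sees forwarders that have sent internal logs at some point, so it does not distinguish the three scenarios above.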
Thank You.
↧
Splunk 7 shows Splunk version as 4
Hi,
Sometimes when I open my Splunk 7 web interface, it shows the Splunk version as 4. All the functionality and features are those of Splunk 7; only the displayed version appears as Splunk 4.
Is this a known bug? What can I do to resolve the issue?
↧
↧
Error in 'dbxquery' command: Invalid message received from external search command
Hello,
I'm getting the error below after upgrading to Splunk Enterprise v7.0. Is there anyone who can help me? :)
Thanks
Error in 'dbxquery' command: Invalid message received from external search command during setup, see search.log.
App Version 3.1.1
App Build 34
Output of the dbx2.log file:
2017-02-23T14:02:30+0300 [INFO] [mi_session.py], line 38 : session updated
2017-02-23T14:22:30+0300 [INFO] [mi_session.py], line 38 : session updated
2017-02-23T14:35:08+0300 [INFO] [rpcstart.py], line 262: action=post_processing_for_rpc_server_termination rpc_start_pid=16840 rpc_server_pid=16987
2017-02-23T14:35:43+0300 [INFO] [rpcstart.py], line 366: action=run_rpc_start rpc_start_pid=3371 args=['/opt/splunk/etc/apps/splunk_app_db_connect/bin/rpcstart.py', '--scheme']
2017-02-23T14:36:03+0300 [INFO] [mi_session.py], line 38 : session updated
2017-02-23T14:36:05+0300 [INFO] [rpcstart.py], line 366: action=run_rpc_start rpc_start_pid=3831 args=['/opt/splunk/etc/apps/splunk_app_db_connect/bin/rpcstart.py']
2017-02-23T14:36:05+0300 [INFO] [rpcstart.py], line 125: action=start_to_run_rpc_server rpc_start_pid=3831
2017-02-23T14:36:08+0300 [INFO] [rpcstart.py], line 198: action=starting_up_rpc_server_with_command command="[u'/usr/java/jdk1.8.0_73/bin/java', u'-XX:+UseConcMarkSweepGC', '-classpath', '/opt/splunk/etc/apps/splunk_app_db_connect/bin/lib/mysql-connector-java-5.1.38-bin.jar:/opt/splunk/etc/apps/splunk_app_db_connect/bin/lib/postgresql-9.4.1208.jar:/opt/splunk/etc/apps/splunk_app_db_connect/bin/lib/ojdbc6.jar:/opt/splunk/etc/apps/splunk_app_db_connect/bin/lib/sqljdbc42.jar:/opt/splunk/etc/apps/splunk_app_db_connect/bin/lib/rpcserver-all.jar', '-DSPLUNK_HOME=/opt/splunk', 'com.splunk.dbx2.rpc.RPCServer', u'127.0.0.1:9998']"
2017-02-23T14:36:08+0300 [INFO] [rpcstart.py], line 243: action=rpc_server_process_is_launched rpc_start_pid=3831 rpc_server_pid=3942
2017-02-23T14:40:32+0300 [INFO] [rpcstart.py], line 262: action=post_processing_for_rpc_server_termination rpc_start_pid=3831 rpc_server_pid=3942
↧
How to handle custom parameters in rest modular input
Hello,
I have developed a custom response handler class for the REST modular input TA and I would like to pass custom parameters to it.
I know this is possible by setting custom parameters on the input configuration page (key1=value, key2=value).
What I can't figure out is how I can access these parameters in my custom class.
    # json and print_xml_stream are assumed to be available in the response
    # handler module this class lives in
    class customHandler:
        def __init__(self, **args):
            # the key1=value, key2=value pairs from the input configuration page
            # are expected to arrive here as keyword arguments, e.g. args.get("key1")
            pass
        def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
            if response_type == "json":
                output = json.loads(raw_response_output)
                for record in output["logins"]:
                    print_xml_stream(json.dumps(record))
            else:
                print_xml_stream(raw_response_output)
Any help would be appreciated.
Thanks in advance.
↧
Nessus vulnerability solution
I am trying to find all hosts affected by a specific vulnerability, along with the solution suggested by Nessus to remediate that vulnerability. Since the solution field is present in the nessus:plugin sourcetype and every other piece of information needed is in the nessus:scan sourcetype, nothing I have come up with seems to work. The end result should look something like this:
| Vulnerability | Host-IP(s) | Solution |
| --- | --- | --- |
| XSS vulnerability | 10.10.10.10, 10.10.10.20, 10.10.10.30 | Patch it |
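Not an answer, but a rough sketch of the kind of cross-sourcetype stitch described above, assuming hypothetical field names (plugin_id, signature, dest, solution) that would need to be checked against the actual Nessus add-on extractions:

    sourcetype="nessus:scan"
    | stats values(dest) as host_ip by plugin_id, signature
    | join type=left plugin_id
        [ search sourcetype="nessus:plugin" | fields plugin_id, solution ]
    | table signature, host_ip, solution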
Thanks
↧
AWS Data Migration Service (DMS) in Splunk App for AWS
Is there any way to ingest AWS DMS performance metrics and logs in the Splunk App for AWS (via the Add-on I guess)?
↧
↧
Inconsistent field extraction behavior: works when eval'ed but not when used directly?
I have defined a field extraction that seems to properly extract fields:
`EXTRACT-KVSAxis = KV(?:Blade)*(?<KVSAxis>[XY][12]|Filter(?:Shape|Foil))`
I am able to timechart that field as well, but I am unable to use it to drill down or to use it in a search.
The following queries do work:
1. `... | table KVSAxis`
which tables the field content for every event as expected
2. `... | eval test=KVSAxis | where test="FilterShape"`
which filters correctly on the field test and its content.
But when I drop the eval and query the field directly this does ***not*** work:
... | where KVSAxis="FilterShape"
Any clue how I can get my latest search to work as expected and filter on the KVSAxis field?
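For what it's worth, a quick way to sanity-check the pattern inline, bypassing the props.conf extraction, is to apply the same regex with rex (a sketch only; KVSAxisTest is a throwaway field name so it doesn't clash with the existing extraction):

    ... | rex "KV(?:Blade)*(?<KVSAxisTest>[XY][12]|Filter(?:Shape|Foil))"
        | where KVSAxisTest="FilterShape"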
↧
ERROR LMMasterRestHandler - path=/masterlm/usage: This license does not support being a remote master. from ip =XX.XX.XX.XX
I have a Linux server where Splunk Enterprise and a Splunk heavy forwarder are installed. In the Splunk log, I am getting this error. Could you please help me resolve it?
I am using a trial license on this development Splunk Enterprise instance.
↧
How to request an accelerated report via REST?
Hi,
We have a requirement to pull data out of a report that needs to be updated in (near-enough) real time, so we've created a stats table of the data and put it into a report, which has then been accelerated. We want to be able to grab the data via REST so it can be used in a different application we are creating. How is this done?
Currently, if I run the report, I can see the most recent search id and I can see that it has been run based on a summary ID.
In Job Manager, it reports the following:
Search ID: myuser__nobody__search__RMD5a79ee73818f66aa4_at_1507109756_45011
Summary ID: 1F08A505-35F7-44C1-B50E-2D1D9BB70318_search_nobody_NSfd08606a4b07f6bc
If I run (using the search ID):
curl -k -u myuser https://localhost:8089/services/search/jobs/myuser__nobody__search__RMD5a79ee73818f66aa4_at_1507109756_45011
I get results for the most current run, but I don't know if this resultset will update as the underlying data changes
If I run (using the summary ID):
curl -k -u myuser https://localhost:8089/services/search/jobs/1F08A505-35F7-44C1-B50E-2D1D9BB70318_search_nobody_NSfd08606a4b07f6bc/results?count=0
I get a response of: Unknown sid.
Is there an easy way to always request the latest state of the accelerated report?
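Not a definitive answer, but if the report is also scheduled, one sketch from the search side is loadjob, which pulls the most recently completed artifact of a saved report (the report name here is a placeholder):

    | loadjob savedsearch="myuser:search:My Accelerated Report"

That search string could in turn be dispatched through the same /services/search/jobs REST endpoint used above.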
Thanks!
Best regards,
Alex
↧
UF can't perform a handshake with DS that's behind an Apache reverse proxy
In our current setup we have a private network with several hosts that have UFs installed, as well as separate hosts for a search head, an indexer, and a Splunk deployment server. Since all servers where Splunk UFs are installed are in the same private network, they have simply been set up to use the private IP of the deployment server in deploymentclient.conf.
We also have a separate host where a reverse proxy has been configured, using Apache. This host is the only server that has a public IP. The reverse proxy is used so that we can access the web UIs of Splunk search head and deployment server.
Until now this setup has been working well, but now I have to add another UF that is outside of the private network. To do this, I have added another configuration file to the reverse proxy that looks like this:
    Listen 8089 https
    ServerName ds.oursplunk.com
    ProxyPass "/" "https://{ds_private_ip}:8089/"
    ProxyPassReverse "/" "https://{ds_private_ip}:8089/"
    SSLEngine on
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLCertificateFile "/etc/letsencrypt/live/ds.oursplunk.com/cert.pem"
    SSLCertificateKeyFile "/etc/letsencrypt/live/ds.oursplunk.com/privkey.pem"
    SSLCertificateChainFile "/etc/letsencrypt/live/ds.oursplunk.com/fullchain.pem"
The configuration itself seems to be working fine, as I can successfully connect to https://ds.oursplunk.com:8089/ with a curl command from the server where the new UF is installed.
However, after adding ds.oursplunk.com:8089 to deploymentclient.conf of the new UF, it can't perform a handshake. The most relevant part from splunkd.log of the UF seems to be this:
10-04-2017 05:33:37.379 -0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
10-04-2017 05:33:46.755 -0400 INFO HttpPubSubConnection - SSL connection with id: connection_{rev_proxy_private_ip}_8089_{rev_proxy_internal_dsn}_{uf_hostname}_{uf_client_name}
10-04-2017 05:33:46.759 -0400 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr: 404 Not Found - The requested URL /services/broker/channel/subscribe/connection_{rev_proxy_private_ip}_8089_{rev_proxy_internal_dsn}_{uf_hostname}_{uf_client_name}/tenantService/handshake/reply/{uf_hostname}/{uf_client_name} was not found on this server.
Here {rev_proxy_private_ip} is the private IP of the reverse proxy server; {rev_proxy_internal_dsn} is the internal DNS of the reverse proxy (since we're hosting everything on AWS, it's the one that looks like ip-XX-XX-XXX-XX.aws-region.compute.internal); {uf_hostname} is the hostname of the server with the UF; and {uf_client_name} is the client name configured in deploymentclient.conf.
So it seems that when the connection ID is created, it mixes values from the UF server and the reverse proxy server. Since I'm not that experienced with web servers, I haven't been able to solve this. Has anyone encountered a problem like this? Any suggestions for solving it?
↧
↧
How to show stacked column for three fields along with single column beside the stacked fields in Column chart?
I have four fields named Baseline, a, b, and c. I want to represent them using a column chart so that a, b, and c appear as one stacked column and Baseline appears as a separate column beside the stacked one.
For that purpose I created a stacked chart, but it combines all four fields, including Baseline, into one bar.
Here is the expected graph. (The column with three colors is a, b, and c stacked, the single column is Baseline, and the numbers are just the total count.)
![alt text][1]
Any help would be much appreciated.
Thanks.
[1]: /storage/temp/217750-204675-bar-chart.png
↧
How do I get my rex search to extract a string between two strings from the samples below and concatenate it with the fixed string "751."?
Example 1
Input: 352322648-1112 : D_SSPP-HNW_SD-AVI
Output I want: "751.1112"
Example 2
Input: 335587620-43300 : DEMO
Output I want: "751.43300"
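A minimal sketch of one way to do this, assuming the raw value lives in a hypothetical field called input (adjust the field name to the real data):

    ... | rex field=input "^\d+-(?<suffix>\d+)\s*:"
        | eval result="751." . suffix

With the samples above this yields "751.1112" and "751.43300".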
Thanks
↧
tstats: Indexed Extractions vs Metadata
We're using tstats on accelerated datamodels, and it works like a charm...when using metadata fields (_time, host etc.)
*"Use the tstats command to perform statistical queries on indexed fields in tsidx files. The indexed fields can be from normal index data, tscollect data, or accelerated data models."*
*"Data model acceleration summaries are composed of multiple time-series index files [...] Each .tsidx file contains records of the indexed field::value combos in the selected dataset and all of the index locations of those field::value combos [...]*"
I assumed all I needed to do was to set INDEXED_EXTRACTIONS on a sourcetype, create a datamodel of said sourcetype, accelerate it and query/aggregate on my custom fields.
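For concreteness, this is roughly the kind of query I expected to work once the model is accelerated (the data model, dataset, and field names here are placeholders):

    | tstats count
        from datamodel=my_model where nodename=my_root_dataset
        by my_root_dataset.my_custom_field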
EDIT: I can't post links, but I realize that there's more to the process than my naive one-liner.
Is the documentation posted here the way to go? -> /Documentation/SplunkCloud/latest/Data/Configureindex-timefieldextraction
EDIT2: *"WRITE_META = true writes the extracted field name and value to _meta, which is where Splunk stores indexed fields.*"
Wait, so are custom indexed extractions actually just new metadata? (In which case the description of how tstats works seems misleading.)
Any pointers or help appreciated.
↧
Use query results from one panel as input to query on another panel on the same Dashboard
Hi,
Sorry if I am duplicating a question here, but I could not find an answer in other posts that matched my scenario.
I have a number of inputs on my dashboard and two panels; the first panel produces a multi-row table. I wish to use the values from one of its fields as an input to the second panel on the same dashboard. I am not sure if this is possible, as I have only read of cases where single values are passed this way. Is this correct?
Ideally I would like the first query to complete before the second attempts to load, and I would like the data from the query1 field I am interested in shaped like this: ( val1 OR val2 ..... ), so I can then use a token to insert it into my second query. I've pasted a cut-down version of the dashboard to help, where $results_tok_query1$ equates to ( val1 OR val2 ..... ) resulting from the first query.
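Purely as a sketch of the value-shaping part (not the token wiring itself), the first panel's search could build that OR'ed string, assuming the field of interest is called myfield (a placeholder):

    ... | stats values(myfield) as vals
        | eval results_tok_query1="(" . mvjoin(vals, " OR ") . ")"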
Thanks,
N
↧
↧
Server.conf file is automatically updating in Windows splunk forwarder
I have observed that server.conf under etc/system/local is automatically being updated with an invalid certificate, even after I changed it manually and tried disabling the deployment server on the client.
Changes observed in the splunk directories:
1) An invalid certificate is being created on its own under etc/auth
2) server.conf is getting updated with the invalid certificate
Splunk version which is being used: 6.4.2
↧
Splunk Arm64 download
In the requirements for Splunk Enterprise it says that there is a download for Arm64 but it is not supported. I can’t find the download though. Does anyone know where I can get it?
Thanks.
↧
DBConnect 3.x Rising columns not working
After migrating to DB Connect 3.11, my SQL statement won't work any more. It fails with an error in the UI.
com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1.
I created a new input in the UI. First I ran the statement in batch mode via Execute SQL:
SELECT [Entry No_] as [Entry_No]
,[Date and Time] as [Date_and_Time]
,[Time] as [Time]
,[User ID] as [User_ID]
FROM [table]
Next I selected Rising Column, chose Entry_No (bigint), and added the following line to my SQL statement:
WHERE Entry_No > ? ORDER BY Entry_No ASC
When I run Execute SQL again, the above error is displayed.
I created several inputs with DB Connect 2.x and all of them worked, but now I cannot create a single one with 3.11.
↧