Our environment
2 Indexers (which are also our syslog servers), 1 License Server, 1 Search Head, 1 server with the Enterprise Security app installed, and 1 Deployment Server.
We have the syslog folder under /opt/splunk, and I can see that it archives data because its size is in the TBs. How can I find out whether Splunk is re-ingesting already-indexed data from the syslog folder? In syslog.conf we write logs to /opt/splunk/syslogs/, and in inputs.conf we have [monitor:///opt/splunk/syslogs/cisco/asa/*/*].
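One way to check, as a sketch (assuming the default _internal index is searchable): look at per-source indexing throughput in metrics.log. A source that keeps reporting large indexed volume after its file has stopped growing is a hint that data is being re-read:

```
index=_internal source=*metrics.log* group=per_source_thruput series="*syslogs*"
| timechart span=1h sum(kb) by series
```

On the indexer itself, the CLI command `splunk list inputstatus` also reports the read position Splunk has recorded for each monitored file, which helps confirm whether files are being re-read from the beginning.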
↧
Incident priority is always Informational
All of my incidents post with a priority of **informational** regardless of how the alert is defined.
Just defined a Test alert with Impact **High** and Urgency **High**.
Using the default alert priority cross reference: etc/apps/alert_manager/lookups/alert_priority.csv.sample
When the alert is triggered, the Incident Posture shows as **informational**
2017-09-12 06:44:04 unassigned Fuse QA | Test Alert Manager Priority High new New cdf5d8cc-13ed-42cb-81f7-2b70c0917958 Fuse QA | Test Alert Manager Priority High dva-ops-support [Untagged] low low informational
Thoughts on how to debug this?
↧
Hide and Show Input Dropdown Based Upon Another Input Drop Down
I have two different dropdowns, and I want to hide or show the second dropdown (category) based on the first selection. I tried the approach below, but it is not working. If the choice "site" is selected, the second dropdown (category) should be displayed; if the choice "container" is selected, the second dropdown (category) should be hidden.
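A minimal Simple XML sketch of the usual pattern (the token names `choice` and `show_category` are placeholders): set or unset a token in a `<change>` block on the first input, and make the second input `depends` on that token:

```xml
<input type="dropdown" token="choice" searchWhenChanged="true">
  <label>Type</label>
  <choice value="site">site</choice>
  <choice value="container">container</choice>
  <change>
    <condition value="site">
      <set token="show_category">true</set>
    </condition>
    <condition value="container">
      <unset token="show_category"></unset>
    </condition>
  </change>
</input>
<input type="dropdown" token="category" depends="$show_category$">
  <label>Category</label>
  <!-- category choices go here -->
</input>
```

When `show_category` is unset, any element with `depends="$show_category$"` is hidden.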
↧
Splunk Add-on for New Relic cannot connect: getting a proxy connection error. Please help.
Hi,
I have configured the proxy for the Splunk Add-on for New Relic; the proxy settings, account ID, and API key are all correct, but the log shows it is unable to connect to the proxy:
2017-09-12 19:43:58,401 DEBUG pid=71556 tid=MainThread file=connectionpool.py:_new_conn:809 | Starting new HTTPS connection (4): api.newrelic.com
2017-09-12 19:44:08,417 ERROR pid=71556 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/new_relic_account_input.py", line 70, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/input_module_new_relic_account_input.py", line 72, in collect_events
    response = helper.send_http_request(url, "GET", headers=headers, parameters=parameters, payload=None, cookies=None, verify=True, cert=None, timeout=None, use_proxy=True)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/modinput_wrapper/base_modinput.py", line 476, in send_http_request
    proxy_uri=self._get_proxy_uri() if use_proxy else None)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunk_aoblib/rest_helper.py", line 43, in send_http_request
    return self.http_session.request(method, url, **requests_args)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/requests/sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/requests/sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/requests/adapters.py", line 485, in send
    raise ProxyError(e, request=request)
ProxyError: HTTPSConnectionPool(host='api.newrelic.com', port=443): Max retries exceeded with url: /v2/applications.json (Caused by ProxyError('Cannot connect to proxy.', timeout('timed out',)))
↧
How to query a lookup table using the REST API
Hi guys,
I have a Splunk scheduled search that produces a list of URLs to be used by another system. The other system has to access the list over http/https.
Now, what I'm looking for is:
- making the search results (csv file) available through something like https://splunkserver/list.csv
- appending the search results to a lookup table and querying the lookup table using something like https://splunkserver:8089/servicesNS/admin/search/data/lookup-table-files/list.csv
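A sketch of the second option (the hostnames, credentials, and lookup name are placeholders): append the scheduled search's results to a lookup with `outputlookup`, then have the other system pull the lookup's contents as CSV via the `search/jobs/export` REST endpoint, since the `data/lookup-table-files` endpoint returns metadata about the file rather than its contents:

```
# In the scheduled search, append results to the lookup:
... | outputlookup append=true list.csv

# From the other system, export the lookup contents as CSV:
curl -k -u admin:changeme \
    "https://splunkserver:8089/servicesNS/admin/search/search/jobs/export" \
    -d search="| inputlookup list.csv" \
    -d output_mode=csv
```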
Can someone guide me in how to achieve this?
Thanks in advance!
Andrei
↧
Splunk UniversalForwarder Restart
I will have a dashboard showing the list of servers that are not sending logs, with a button next to each server; when the user clicks the button, the respective universal forwarder should be restarted.
Is there a way to restart the running Splunk universal forwarder service on the server? I thought of the following approaches:
1. Deploying an app with restartSplunkd=true in serverclass.conf, but I am not sure how this can be triggered by a button click.
2. Using curl with admin credentials against https://Hostname:8089/services/server/control/restart, but here the admin credentials would be in cleartext.
Can you suggest some ideas for achieving this?
↧
DMC and dual purpose Splunk server
I have an indexer and universal forwarder on the same server. The reason for this is that the connection from the indexer to an upstream indexer loses connectivity due to the type of connection and, per the Splunk product team, the indexer will not only stop forwarding when the connection is lost, but also stop indexing. This has been confirmed with the product team as expected behavior per design.
The DMC is picking up the indexer and all other forwarders, but not the forwarder on the same instance as the indexer. The UF's internal logs are, of course, being ingested. Is DMC unable to see the instances individually? Is there any way to configure the UF or the DMC to see this invisible forwarder?
↧
Host Regex Help
Hello All,
I really need to get good at regex and learn to do this myself but alas there are so many other things that seem to be a priority right now. I have the following log file names.
log_SVR-IES-PAN-RAMA-01-20170806
log_SVR-ORW-PAN-RAMA-01-20170806
log_SVR-IES-PAN-RAMA-01-20170813
log_SVR-ORW-PAN-RAMA-01-20170813
log_SVR-IES-PAN-RAMA-01-20170820
log_SVR-ORW-PAN-RAMA-01-20170820
log_SVR-IES-PAN-RAMA-01-20170827
log_SVR-ORW-PAN-RAMA-01-20170827
log_SVR-IES-PAN-RAMA-01-20170903
log_SVR-ORW-PAN-RAMA-01-20170903
log_SVR-IES-PAN-RAMA-01-20170910
log_SVR-ORW-PAN-RAMA-01-20170910
log_SVR-IES-PAN-RAMA-01
log_SVR-ORW-PAN-RAMA-01
I am monitoring the log files with the following stanza:
[monitor:///var/log2/gns/palo/log_*]
index = panlog
host_regex = (?<=log_).+-01
sourcetype = pan:log
no_appending_timestamp = true
So the question is: will host_regex give the host name (SVR-IES-PAN-RAMA-01 or SVR-ORW-PAN-RAMA-01)? According to the regexr.com/v1 site it should, but I want to make sure it is correct before I implement it.
Thanks
ed
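As a quick local check of what that host_regex captures against these filenames, a Python sketch. One caveat, which is an observation and not from the original post: the inputs.conf spec says the first capture group of host_regex is used as the host, so a grouped variant like `log_(.+-01)` may be the safer form of the same pattern.

```python
import re

# The pattern from the inputs.conf stanza
pattern = re.compile(r"(?<=log_).+-01")

filenames = [
    "log_SVR-IES-PAN-RAMA-01-20170806",
    "log_SVR-ORW-PAN-RAMA-01",
]

for name in filenames:
    m = pattern.search(name)
    # Greedy .+ runs to the last "-01", which in these names is the server suffix,
    # so the trailing -YYYYMMDD date is excluded
    print(name, "->", m.group(0) if m else None)
```

Both dated and undated filenames yield the server name ending in -01.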
↧
Splunk DB Connect: timestamp issue, and we are receiving duplicate logs
09-12-2017 11:07:02 event_time="2017-09-12 14:59:41.8203496",
Here we are seeing two timestamps in the data: the index-time stamp and the event_time field. Please help.
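One common cause is the event timestamp being taken from the DB Connect query execution time rather than from event_time. A props.conf sketch pointing timestamp extraction at the event_time field (the sourcetype name is a placeholder):

```
[my_dbconnect_sourcetype]
TIME_PREFIX = event_time="
# Adjust the subsecond width (%7N here) to match your data
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%7N
MAX_TIMESTAMP_LOOKAHEAD = 40
```

For the duplicate events, the usual suspect in DB Connect is the rising column / checkpoint configuration on the input, which is worth reviewing separately.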
↧
Can you get metadata from an alert or report in search?
I'd like to run a search that gives me the metadata for all reports/alerts (creator, app, schedule, etc.) so that I can view all of this information on a single page. Is this possible? The main goal is to look at all scheduled report/alert times and see whether there are times when more reports/alerts are running. To be clear, I'm not after fired alerts; I'm after the schedule (though I suppose I could use the times on the fired alerts... hmm).
Anyhow, I'm open to suggestions and discussion. :-D
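One possible starting point, sketched with the rest command against the saved-searches endpoint (field names come from the savedsearches REST schema):

```
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| table title eai:acl.app eai:acl.owner cron_schedule next_scheduled_time
```

The cron_schedule column can then be inspected for clustering, e.g. many searches all scheduled on the same minute.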
↧
Successful and Fail Tableau Extracts
We have integrated Tableau with Splunk. I am setting up a Splunk dashboard that will show any user which extracts ran successfully or failed during the past 24 hours. I need a Splunk query that can give me that information.
↧
How to Trim string at @ - need help creating rex search
Hello,
I cannot figure out the syntax of the rex command. I have a field called email with multiple domains: katz.r@blah.com, example@blahblah.com. I need to create a new field where just katz.r and example are returned, i.e. the value is cut off at the @ sign. The split function keeps both values (katz.r and blah.com), which is not what I want. I also tried rtrim, but that isn't working for a field, only for a given string.
Thanks for the help!
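In SPL this can be done with rex and a negated character class, e.g. `... | rex field=email "^(?<user>[^@]+)@"` (the field name `user` is just an example). The same regex checked locally in Python:

```python
import re

emails = ["katz.r@blah.com", "example@blahblah.com"]

# [^@]+ grabs everything up to (but not including) the first @
users = [re.match(r"([^@]+)@", e).group(1) for e in emails]
print(users)  # ['katz.r', 'example']
```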
↧
Sophos Central app for Splunk: which Splunk logs should I check to find errors?
Hello,
I've installed, configured, and fixed the typo in sophos_events.py, but the app is not pulling data from Sophos Central/Cloud. Are there any debug settings that can be set, or which Splunk logs should I check to find errors? The API key I'm using works, I've tested it with https://github.com/sophos/Sophos-Central-SIEM-Integration.
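As a starting point, sketched (the search terms are assumptions about how this app's script logs): output and errors from scripted/modular inputs generally land in the _internal index under the ExecProcessor component of splunkd.log:

```
index=_internal sourcetype=splunkd component=ExecProcessor sophos
```

Broadening to `index=_internal source=*sophos* (ERROR OR WARN)` can also surface log files the app writes itself, if any.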
Thanks!
↧
Drilldown on a timechart dashboard
Hi,
I am having trouble getting a drilldown working on a timechart dashboard.
My source dashboard panel is generated this way:
source="SDC_GUI_DEN_ER_V" | timechart span=1d count
I want to click on a date (format 2017-06-30) and open a new dashboard filtered on that date.
I have tried the following, but it doesn't work; the time filter is present on both the source and destination dashboards:
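A sketch of the usual approach (the target dashboard name and token name are placeholders): pass the clicked time bucket to the target dashboard via $click.value$ in a drilldown link:

```xml
<drilldown>
  <link target="_blank">
    /app/search/target_dashboard?form.date_tok=$click.value$
  </link>
</drilldown>
```

On a timechart, $click.value$ is the clicked x-axis time bucket; if the target dashboard expects a 2017-06-30 style string, the target's search may need to convert it, e.g. with strftime.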
Please let me know.
Thanks,
Nello
↧
Why is the "Splunk Add-on for CyberArk" not supported in version 6.6?
Per the release notes, the "Splunk Add-on for CyberArk" is not listed as compatible with Splunk Enterprise 6.6. Why is it no longer compatible?
↧
Regex to extract from start until a specific character
I have a text field in a CSV called description:
Completed changes are not shown as complete in channels for a while Actualstart: 2017-05-15 06:40:34
I want to extract everything from the start of the string until I encounter Actualstart. I do not know how long the substring before Actualstart will be, but I need everything from the start of the string up to that point.
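A lazy quantifier anchored at the start does this; in SPL something like `... | rex field=description "^(?<summary>.*?)\s*Actualstart:"` (the field name `summary` is an example). The same regex checked in Python:

```python
import re

description = ("Completed changes are not shown as complete in channels "
               "for a while Actualstart: 2017-05-15 06:40:34")

# .*? is lazy, so it stops at the first "Actualstart:"; \s* drops the
# whitespace just before it so the capture has no trailing spaces
m = re.match(r"(.*?)\s*Actualstart:", description)
print(m.group(1))
```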
↧
Lookup File Editor in Search Head Cluster - "The requested lookup file does not exist"
Hello Splunkers,
I tried installing the latest version of the Lookup Editor app on our search head cluster.
Accessing the lookup files in the editor gives me the message "The requested lookup file does not exist".
But the same version works on my standalone dev Splunk instance.
Are there any known issues with the Lookup Editor app on a search head cluster?
Regards,
Mukund M
↧
ITSI: do I need to limit the data ingestion/indexing rate if KPI searches run on a minutes time scale?
Given that ITSI KPI searches run on a minutes time scale, do I need to limit the data ingestion/indexing rate?
ITSI KPIs are normally scheduled to search at 1-, 5-, or 15-minute intervals. When deciding how frequently to forward events for indexing (e.g., system metrics like CPU, memory, and disk activity), is it good practice (for performance reasons) to throttle the metric ingestion/indexing rate so it aligns with the KPI search interval (e.g., every 30 seconds instead of every second)? Is there any best practice governing this?
↧