Error: Unknown error reading API Key
I am trying to use the process search feature in the CB-Response app. I am receiving the following error:
error: unknown error reading api key from credential storage: http 403 forbidden -- you (user=XXXXXXX) do not have permission to perform this operation (requires capability: list_storage_passwords).
I have already checked the following:
- Account permissions - set to admin
- capability (list_storage_passwords) already added to the role
- Permissions on the app directory recursively
Not sure what I am missing.
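As a further sanity check, I have been running the two searches below from the search bar while logged in as the affected user (rough sketches; substitute your own role name) to confirm whether the capability change has actually taken effect:
| rest /services/authorization/roles splunk_server=local | search title=<your_role> | table title capabilities imported_capabilities
| rest /services/storage/passwords splunk_server=local | stats count
The first shows which capabilities the role actually carries; the second should reproduce the 403 directly if the user still cannot read credential storage.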
Any help would be appreciated.
↧
Pros and cons of Splunk vs. Solarwinds
Hi folks, I'm looking for some thoughts on Splunk vs. SolarWinds. We currently have all of our servers pointing to SolarWinds, mainly just for monitoring system health. We also have a very small deployment of Splunk (currently at 10 GB per day, bumping up to 60 GB per day in the next month or so). I am aware that SolarWinds also has a SIEM, but I haven't looked at it.
One of my server team counterparts is trying to encourage more use of SolarWinds to keep things on a single pane of glass, including file integrity monitoring, SIEM, system monitoring, application/server correlation, and keeping track of where servers are located in the data centers, basically a server inventory and tracking system.
Are these all things that we can get out of Splunk? Is it advisable to use Splunk for that kind of thing? Would the SolarWinds Splunk app make it possible to do everything from a single pane of glass in Splunk, and just have SolarWinds continue collecting the data? Any advice on this would be appreciated. Thanks.
↧
↧
Docker logs missing INFO and ERR
Hello,
We're pushing Docker logs to Splunk using the native logging driver, via HTTP event collector.
However, it seems that INFO and ERR are not showing up in Splunk.
They show up fine in AWS CloudWatch Logs, which we're trying to move away from, but we're currently blocked by this.
We're running on ECS with the latest ECS agent version, and I couldn't find any related config options that could cause this.
Any hints? Has anyone noticed this before?
Thanks
Mikko
↧
Need help with field-extractions on these events
I have the following value:
**Events**
X|0001|NAME|PHONE
X|0002|NAME|ADDRESS|INFO1|INFO2
Based on the type (0001 or 0002), I want to extract different fields. Is that possible?
Can I split the event value on a common separator (pipe)?
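Something along these lines is what I had in mind (a rough sketch; the output field names are made up for illustration):
| rex field=_raw "^[^|]+\|(?<rec_type>[^|]+)\|(?<rest>.+)$"
| eval parts=split(rest,"|")
| eval NAME=mvindex(parts,0)
| eval PHONE=if(rec_type="0001", mvindex(parts,1), null())
| eval ADDRESS=if(rec_type="0002", mvindex(parts,1), null())
| eval INFO1=if(rec_type="0002", mvindex(parts,2), null())
| eval INFO2=if(rec_type="0002", mvindex(parts,3), null())
The idea is to capture the type column once with rex, split the remainder on the pipe, and then assign the positional pieces to different field names depending on rec_type.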
↧
Timechart grouping
I am trying to analyze patterns of heap usage at the Java Virtual Machine (JVM) level, with 5 JVMs grouped under each host. I want to timechart the heap by JVM and output it per host. When I output all of the data by JVM, I get an unreadable graph.
Here is my search:
search | timechart span=10min avg(heap) by JVM
With this search I am getting all the JVMs in one graph (which is not readable), but I want a separate graph for each host, with its 4 JVMs' trend data.
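One direction I am considering (just a sketch, assuming each event carries a host field) is either one panel per host, e.g.
search host=<one_host> | timechart span=10min avg(heap) by JVM
or a single combined split-by field so that each series is labelled host:JVM:
search | eval host_jvm=host.":".JVM | timechart span=10min avg(heap) by host_jvm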
↧
↧
Palo Alto Networks Add-on for Splunk: External search command 'pancontentpack' returned error code 2
I'm trying to set up the TA on my indexer to retrieve the ContentPack Apps and Threats from a Panorama instance, but I'm running into the following error when I try to manually run | pancontentpack:
External search command 'pancontentpack' returned error code 2. Script output = "ERROR Unable to get apikey from firewall: local variable 'username' referenced before assignment "
App and TA versions are 6.0.1, running on Splunk Enterprise 6.6.1. I followed the instructions in "Add Context to Searches" (https://splunk.paloaltonetworks.com/lookups.html) to create savedsearches.conf, and in "Configure Adaptive Response" (https://splunk.paloaltonetworks.com/adaptive-response.html#configure-adaptive-response) to create an XML API role in Panorama and add the credentials in the Add-on > Configuration > Account menu.
↧
Log size calculation
Hi,
Can I please know how to calculate the log size per day for a specific source or sourcetype reporting to Splunk?
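For example, is something like this (a sketch; the index and sourcetype are placeholders) the right approach, summing raw event length per day:
index=<your_index> sourcetype=<your_sourcetype> | eval mb=len(_raw)/1024/1024 | timechart span=1d sum(mb) AS MB_per_day
or should I instead read the license usage data, e.g.
index=_internal source=*license_usage.log* type=Usage st=<your_sourcetype> | timechart span=1d sum(b) AS bytes_per_day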
↧
Splunk Add-on for Apache Web Server: If I have this app do I still need a forwarder to forward Apache logs?
I have Splunk 7.1 / RHEL 6.5 / test environment.
(New to Splunk)
I see there is a Splunk Add-on for Apache Web Server; do I still need a forwarder to forward the Apache logs?
Rgds
Dee
↧
Palo Alto Networks App for Splunk: pan_firewall datamodel issue after upgrading App to 6.0.1
I'm having issues with my datamodel-based dashboards after upgrading the app to 6.0.1, and I think I've narrowed down the cause. Just to reiterate the troubleshooting steps for "Only 'Overview' or 'Real-time Event Feed' dashboard has data":
- Acceleration is enabled
- Data model is 100% built
- Increasing the time range to All time produces no additional results
Here is an example dashboard search that is not populating results for me:
=Search=
| tstats values(log.flags) AS log.flags, count FROM datamodel=pan_firewall WHERE nodename="log.url" """" log.action="*" GROUPBY _time log.dest_name log.app:category log.app log.action log.content_type log.vendor_action | rename "log.action" as action, "log.app" as app, "log.app:category" as "app:category", "log.content_type" as content_type, "log.dest_name" as dest_name, "log.flags" as flags, "log.vendor_action" as vendor_action, "log.*" as "*"
=Error shown=
This search has completed and found 2,860,331 matching events in 19.376 seconds. However, the transforming commands in the highlighted portion of the following search:
generated no results. Possible solutions are to:
check the syntax of the commands
**verify that the fields expected by the report commands are present in the events**
When I manually run this search to look at results from the datamodel, I notice the following fields are missing:
| datamodel pan_firewall search | search *
Missing from Datamodel -- present in Datamodel
log.dest_name -- dest_name
log.app:category -- raw_category
log.content_type -- ??
log.vendor_action -- vendor_action
log.flags -- flags
When I replace all the field names on the left (missing from the datamodel) with their present versions on the right and re-run the dashboard search manually, everything starts working again.
Example "fixed" search:
| tstats values(log.flags) AS log.flags, count FROM datamodel=pan_firewall WHERE nodename="log.url" """" log.action="*" GROUPBY _time dest_name raw_category log.app log.action vendor_action | rename "log.action" as action, "log.app" as app, "raw_category" as "app:category", "dest_name" as dest_name, "log.*" as "*"
Can someone please help me understand what is going on with the datamodel?
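For what it's worth, this is the quick check I have been running to list which field names the datamodel search actually returns (a rough sketch):
| datamodel pan_firewall search | head 1000 | fieldsummary | table field count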
↧
Search Head Captain Skipping Searches
Hi Splunkers,
We have a Search Head Cluster with 3 search heads. We have 70 searches that are supposed to run every minute.
We find that 14-15% of searches are getting skipped on the SH captain. We tried changing the captain and observed the same phenomenon on the new captain too. We do not have any SH designated for ad-hoc searches.
Please find the image below; the other search heads are not experiencing any skips. Also note that the SH captain is taking on a higher number of searches.
![alt text][1]
Please let us know if there is a way to get around this.
[1]: /storage/temp/225585-sh-captain-skip-ratio.png
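For reference, this is the search we have been using to quantify the skips and see the reported reason (a sketch):
index=_internal sourcetype=scheduler status=skipped | stats count by host, reason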
↧
Palo Alto Networks Add-on for Splunk 6.0.1: app_list and threat_list empty
This isn't an issue if you have pancontentpack set up correctly, but I thought that the CSV lookups app_list and threat_list were supposed to be pre-populated in the add-on, and then later updated by the pancontentpack macro. I've noticed that both are empty when downloading a fresh copy of the add-on.
This commit seems to confirm my suspicion:
https://github.com/PaloAltoNetworks/Splunk_TA_paloalto/commit/646ff84dc69f5f38c1e754c3f60b545e29e83865
Both app_list.csv and threat_list.csv were emptied. I know I didn't have pancontentpack configured before, so perhaps I was just relying on the static app_list and threat_list lookups that came with the app, and everything was mostly working OK. After I installed the latest version of the app, I lost the default lookups, and since pancontentpack wasn't working, the dashboards were more broken.
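For anyone who wants to check their own install, this is all I ran to confirm the lookups are empty (a sketch, using the CSV file names from the commit above):
| inputlookup app_list.csv | stats count
| inputlookup threat_list.csv | stats count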
↧
How can I use regex to match only certain parts of a field string value?
I am using a CSV lookup table (MyCSVTable) which contains a list of 10 digit numbers (examples: 2345678900, 2134567891, 3126549877, etc...). The CSV can look like this for example:
MyField1,MyField2
2345678900,1
2134567891,1
3126549877,1
I am using MyCSVTable to match against my event data field, which also happens to be named MyField1 (same name as in MyCSVTable), and then perform a calculation on an associated event field called MyField3.
Part of the problem is that MyField1 in the event data does not have a standard format. For example, if I am matching MyField1=2345678900 from the CSV, the event data field MyField1 could have any one of these values:
+12345678900 OR 12345678900 OR 12345678900_A123456 OR 2345678900. All of these would be valid matches for my purposes.
Can I use rex or regex to reformat MyField1 in the event data so that I can successfully match my number against any of these occurrences: +12345678900 OR 12345678900 OR 12345678900_A123456 OR 2345678900?
I tried this but it doesn't work:
index=<...> source=<...> | rex field=MyField1 "(?i)^(?.+?)(\s+1)?$" | lookup MyCSVTable MyField1 OUTPUT MyField2 | where MyField2=1 | stats sum(MyField3) by MyField1
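Something in this direction is what I am imagining (a rough sketch; the normalized field name and the exact prefix pattern are just my guesses):
index=<...> source=<...> | rex field=MyField1 "^\+?1?(?<norm_number>\d{10})" | lookup MyCSVTable MyField1 AS norm_number OUTPUT MyField2 | where MyField2=1 | stats sum(MyField3) by norm_number
i.e., strip an optional leading + and country code 1, keep the 10-digit number, and use that normalized value for the lookup.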
Thank you in advance for your advice.
↧
↧
How to change the colors in a bar graph result and make them constant?
I have a bar graph as below
![alt text][1]
Now, how can I change my current colors so they consistently display like the chart below, i.e., every time I want to see only the colors below?
![alt text][2]
The following is my current dashboard XML. Where exactly should I specify the three color names above?
option name="charting.axisLabelsX.majorLabelStyle.overflowMode" ellipsisNone
option name="charting.axisLabelsX.majorLabelStyle.rotation" 0
option name="charting.axisTitleX.visibility">collapsed
option name="charting.axisTitleY.visibility">visible
option name="charting.axisTitleY2.visibility">visible
option name="charting.axisX.scale">linear
option name="charting.axisY.scale">linear
option name="charting.axisY2.enabled">0
option name="charting.axisY2.scale">inherit
option name="charting.chart">bar
option name="charting.chart.bubbleMaximumSize">50
option name="charting.chart.bubbleMinimumSize">10
option name="charting.chart.bubbleSizeBy">area
option name="charting.chart.nullValueMode">gaps
option name="charting.chart.showDataLabels">none
option name="charting.chart.sliceCollapsingThreshold">0.01
option name="charting.chart.stackMode">stacked
option name="charting.chart.style">shiny
option name="charting.drilldown">all
option name="charting.layout.splitSeries">0
option name="charting.layout.splitSeries.allowIndependentYRanges">0
option name="charting.legend.labelStyle.overflowMode">ellipsisEnd
option name="charting.legend.placement">top
[1]: /storage/temp/225586-test.png
[2]: /storage/temp/225587-test1.png
↧
Remedy 8.1 integration issues with Splunk 6.6: Remedy Web Service has not been setup.
Hi All,
As per the documentation, Remedy 9.1 can be integrated with Splunk using the Remedy Add-on. But our Remedy is still at 8.1 and would need 6 months for an upgrade, so I am trying my luck at integrating the existing version.
I have done the setup as per the documentation, but I am getting the following errors:
On running the command to create a new incident: "Remedy Web Service has not been setup."
And in the _internal logs: "Unable to get to the create incident WSDL file."
Kindly advise.
↧
Remedy Logs: Incident ID correlation with Request ID
Hi All,
We have recently started ingesting Remedy logs into Splunk. I am trying to correlate the various events created for each incident as it moves from opened to 'in progress' to 'closed'.
When a new incident is created, an incident ID is generated and logged. But when the incident is further updated and closed, a request ID is logged rather than the incident ID. I am unable to find the log file that correlates the incident ID with the request ID, or some other key through which I can map incident ID to request ID.
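What I have been trying, to spot a log file that carries both identifiers, is along these lines (a sketch; the index, sourcetype, and field names are just my guesses, since I don't yet know what Remedy calls them):
index=<remedy_index> sourcetype=<remedy_sourcetype> | stats values(incident_id) AS incident_ids, values(request_id) AS request_ids, count by source
Any source that shows non-empty values for both would be a candidate for the mapping.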
Kindly advise.
↧
Merge two search results in one row
I have the below events and I want to merge the search results:
20171222.103330 Fr I - 0 Fn=makeRequest Endpoint=https://mydomain.api..net/v1/person/personid tid=e95126db-6184-4405-8c74-2ed978beb320 HttpStatusCode=200 ElapsedTime=55
I want to get the following result:
ErrorRate | tp90
I have the two separate queries below. How can I merge them?
index=abc "Fn=makeRequest" HttpStatusCode > 201 AND HttpStatusCode !=404 |timechart bins=1000 count as ErrorRate
index=abc "Fn=makeRequest" |timechart bins=1000 cont=FALSE perc90(ElapsedTime) as perc90
↧