Why is there no \local folder under splunk_app_db_connect when trying to configure trace and audit log collection?
I cannot find a \local folder under %SPLUNK_HOME%\etc\apps\splunk_app_db_connect\ after installing the DB Connect add-on. I have restarted the SQL services and the Splunk service. We are running Windows Server 2012 R2. There is a \locale folder, but it contains no inputs.conf.
I am following this documentation to configure SQL audit log collection into Splunk:
http://docs.splunk.com/Documentation/AddOns/released/MSSQLServer/ConfigureDBConnectv1inputs
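In case it helps anyone answering: my working assumption is that the add-on only ships a \default folder, so \local has to be created by hand before the inputs.conf from the docs can be added. A sketch of what I plan to try from a command prompt (paths assume the default install location):
mkdir "%SPLUNK_HOME%\etc\apps\splunk_app_db_connect\local"
type nul > "%SPLUNK_HOME%\etc\apps\splunk_app_db_connect\local\inputs.conf"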
↧
How to get the time duration between two scenarios?
Hey all,
I wanted to see if someone can help me out with this. Basically, I'm trying to get the duration of the time between two scenarios: how long it takes each user to get from scenario_1 to scenario_2, by service. This is what I have so far, and it seems to work when I run it against an individual service:
index=index_name (scenariotype="scenario_1" OR scenariotype="scenario_2") user_ID="*" service_name="*service_1*" | transaction user_ID | stats mean(duration) AS "Mean Duration(In Seconds)" by service_name
Stats table shows:
service_name | Mean Duration(In Seconds)
service_1 | 7.25
It returns a low number, and when I manually checked the mean time by user_ID, it was correct.
However, when I try to get the mean duration for all services, I get a much higher number, especially for service_1 above. Keep in mind I have 9 services I'm trying to get numbers from. When I run the following over exactly the same period of time, without specifying a service_name (or with more than one service name included), I get much higher mean durations per service (note that service_1 below is the same service as in the result above, but now returns a much higher number):
index=index_name (scenariotype="scenario_1" OR scenariotype="scenario_2") user_ID="*" | transaction user_ID | stats mean(duration) AS "Mean Duration(In Seconds)" by service_name
Stats table shows:
service_name | Mean Duration(In Seconds)
service_1 | 189.57
service_2 | 5.75
service_3 | 5.75
service_4 | 1.35
service_5 | 6.25
service_6 | 10.40
service_7 | 4.53
service_8 | 8.78
service_9 | 6.72
I've also experimented with looking further back in time, and the mean duration keeps going up the further back I go, as long as I don't restrict the search to a single service.
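For reference, a variant I am considering (an untested sketch; it assumes the scenario_1 and scenario_2 events for a given user carry the same service_name value, so the transaction can be scoped per user per service):
index=index_name (scenariotype="scenario_1" OR scenariotype="scenario_2") user_ID="*" | transaction user_ID service_name startswith=eval(scenariotype=="scenario_1") endswith=eval(scenariotype=="scenario_2") | stats mean(duration) AS "Mean Duration(In Seconds)" by service_name
My thinking is that without service_name in the transaction, one user's events from different services get stitched into a single long transaction, which would inflate the mean.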
Hopefully that makes sense, and someone can tell me what I am doing wrong.
Thanks!!
↧
Why is my JSON format log getting truncated?
I have a log that contains a JSON-formatted section in the middle. Splunk is extracting the log, but it truncates the JSON part at 26 lines. How do I get the full log without Splunk truncating the JSON lines?
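In case it matters, the knobs I plan to experiment with are the line-merging and truncation limits in props.conf on the indexer or heavy forwarder (a sketch; the sourcetype name is a placeholder and the values are only examples):
[my_json_sourcetype]
TRUNCATE = 0
MAX_EVENTS = 1000
As I understand it, MAX_EVENTS caps how many lines get merged into one event and TRUNCATE caps the line length in bytes (0 means no limit), and either could cut a long JSON block short.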
↧
Alert on search results that are higher than a specific value in a column
I am trying to trigger an alert based on a value that is in a column. Below is the search I am running:
|`node_details(SERVER NAME)` | search Node_ID="Node3" (stats.key="node.cpu.sys.max" OR stats.key="node.cpu.user.max") | eval usage_by = case('stats.key'="node.cpu.user.max", "User", 'stats.key'="node.cpu.sys.max", "System") | eval stats.value = round(('stats.value'/10),1)| timechart span=5m avg(stats.value) by usage_by
Basically, I want to alert any time the System value is greater than X.
I have tried using a custom alert condition with where System > 4 added,
but that has not helped. Can someone recommend a solution please?
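For reference, one direction I am considering (an untested sketch; it assumes the rounded system value is what I want to threshold on) is to have the alert search itself return rows only when the threshold is breached, and trigger when the number of results is greater than zero:
|`node_details(SERVER NAME)` | search Node_ID="Node3" stats.key="node.cpu.sys.max" | eval sys_usage=round('stats.value'/10,1) | where sys_usage > 4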
Thanks
![alt text][1]
[1]: /storage/temp/254918-usage.jpg
↧
Can I generate a report to a shared network location?
Hello,
Looking for suggestions on how to deliver a Splunk report to a network drive.
Instead of sending an email with an attachment, is there a way to have a report placed on a network share?
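One idea I am exploring (a sketch, not verified in my environment): end the scheduled search with outputcsv, which writes the file under $SPLUNK_HOME/var/run/splunk/csv/ on the search head, then copy it to the share with a scheduled OS task. The report name and share path below are placeholders:
... my report search ... | outputcsv daily_report
copy "%SPLUNK_HOME%\var\run\splunk\csv\daily_report.csv" \\fileserver\reports\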
Thank you!
↧
Splunk Add-on for ServiceNow - The ServiceNow Update set is outdated!
Hi,
It looks like the latest version of this integration is not available as an update set:
http://docs.splunk.com/Documentation/AddOns/released/ServiceNow/ConfigureServiceNowtointegratewithSplunkEnterprise#Apply_the_integration_application
The latest version available there says: Jakarta.
However, there is a version on the ServiceNow store that says it's compatible with Kingston and is at version 1.1.6.
Can someone provide an export of this update set or provide a download link to the files?
↧
How do you create a table with each row being a log and every column being a recognized "Interesting Field"?
I was wondering if there is an easy way to create a table that contains every single recognized interesting field instead of doing the usual `| table field1, field2...` method.
To be clear, I want each row in the table to be a separate instance/log, not a summary of counts. In other words, I would like a substitute for `| table` that captures every single interesting field that is recognized. Thanks!
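For what it's worth, the closest I have found so far (a sketch; the index and sourcetype are placeholders) is to table everything with a wildcard, naming _time explicitly since the wildcard does not match fields that start with an underscore:
index=my_index sourcetype=my_sourcetype | table _time, *
Note this should return every extracted field, not only the ones the fields sidebar promotes to Interesting Fields (those appearing in at least 20% of events).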
↧
DMC not displaying accurate disk usage values
Hi,
I see that the DMC is not reporting the correct volume usage for one particular partition on our servers. It shows the wrong value for that partition name on every instance.
Is there a specific permission that needs to be checked, or how can this be fixed?
| rest splunk_server_group=dmc_group_* /services/server/status/partitions-space
The above query returns values that differ from the output of `df -h` on the server.
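For reference, this is how I am reading the endpoint (a sketch; I am assuming the capacity and free fields are reported in MB, which is how the DMC's own searches appear to treat them):
| rest splunk_server_group=dmc_group_* /services/server/status/partitions-space | eval pct_used=round((capacity-free)/capacity*100,1) | table splunk_server mount_point fs_type capacity free pct_used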
Thanks
↧
Why are _internal logs from a heavy forwarder not reaching the indexers after a splunkd restart, while _audit logs are?
All of a sudden, _internal logs from the HF stopped arriving at the indexers after a splunkd restart, yet I see _audit logs making it to the indexers. splunkd.log on the HF is growing. There was no change to inputs.conf or outputs.conf before the restart. What could be the reason?
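One check I plan to run on the HF (a sketch, assuming CLI access), to confirm that no forwardedindex whitelist/blacklist in outputs.conf is filtering the _internal index out of the tcpout group:
$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug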
↧
Splunk DB Connect network connection error
Hi All,
I installed DB Connect on my heavy forwarder. There is no firewall running on my Splunk instance, and the Oracle database we are trying to connect to requires a firewall port to be opened. The database's firewall port has been opened (bidirectional), but I am still getting the following error. All answers are welcome.
com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: IO Error: The Network Adapter could not establish the connection.
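For what it's worth, the basic reachability check I am running from the heavy forwarder (the host is a placeholder, and I am assuming the default Oracle listener port 1521; substitute whatever is in the JDBC URL):
telnet <db_host> 1521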
↧
xyseries custom sorting
I want the columns in the results of the following query to be sorted in the order I declare.
For some reason it does not work, so I might have missed something:
my_query | eval _time = time| bucket _time span=1d | stats count by _time, app_risk | eval risk_order=case(app_risk=="Unknown",0, app_risk=="Very Low",1, app_risk=="Low",2, app_risk=="Medium",3, app_risk=="High",4, app_risk=="Critical",5) | sort -risk_order | xyseries _time,risk_order,count | rename "0" as "Unknown" "1" as "Very Low" "2" as "Low" "3" as "Medium" "4" as "High" "5" as "Critical"
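For reference, a workaround I am considering (an untested sketch): my understanding is that a sort before xyseries does not control the output column order, so I would pin the columns explicitly with table after the rename, e.g. descending by risk:
my_query | eval _time = time | bucket _time span=1d | stats count by _time, app_risk | eval risk_order=case(app_risk=="Unknown",0, app_risk=="Very Low",1, app_risk=="Low",2, app_risk=="Medium",3, app_risk=="High",4, app_risk=="Critical",5) | xyseries _time,risk_order,count | rename "0" as "Unknown" "1" as "Very Low" "2" as "Low" "3" as "Medium" "4" as "High" "5" as "Critical" | table _time, "Critical", "High", "Medium", "Low", "Very Low", "Unknown"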
Anyone?
Thanks!
↧
eval - Why am I being stupid?
I am attempting to write a search that uses eval to show the difference between two sets of assignment groups. A number of assignment groups all begin with ABC, and I want to group these as 'IDS'. I then want to show the tickets allocated to IDS stacked against the OTHER assignment groups (those that do not start with ABC), in a timechart stacked week by week.
This is what I have:
index="myindex" sourcetype="csv" "Assignment group"="wildcard*" | eval IDS=if(like("Assignment group","ABC*"),"IDS","OTHER") |timechart span=1w count by "Assignment group".
Can anyone advise what I am doing wrong here? The timechart shows the individual ABC-**** groups rather than grouped IDS results against OTHER.
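For reference, a variant I am about to try (a sketch): as far as I know, like() uses % as its wildcard rather than *, field names containing spaces are referenced with single quotes inside eval, and the timechart needs to split by the new IDS field instead of the original group:
index="myindex" sourcetype="csv" "Assignment group"="wildcard*" | eval IDS=if(like('Assignment group',"ABC%"),"IDS","OTHER") | timechart span=1w count by IDS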
Thanks in advance for any help.
Rob.
↧
How do I extend the number of results that an external script returns to more than 100000 lines?
Hello,
I have an external script that performs calculations. The problem is that its results are limited to 100000 rows. By default the limit is 50000, but I managed to extend it to 100000 by adding the following stanzas to `limits.conf` under the app's local folder:
[searchresults]
maxresultrows = 100000
[stats]
maxresultrows = 100000
[top]
maxresultrows = 100000
Now I'd like to extend that limit to 500000, but updating the `maxresultrows` values does not make any difference. For reference, my `limits.conf` file now looks like this:
[default]
max_mem_usage_mb = 0
[searchresults]
maxresultrows = 500000
[stats]
maxresultrows = 500000
[top]
maxresultrows = 500000
[set]
maxresultrows = 500000
[anomalousvalue]
maxresultrows = 500000
What am I missing?
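One thing I plan to verify (a sketch, assuming CLI access on the search head) is which limits.conf copy actually wins for these stanzas, since a setting in etc/system/local takes precedence over an app's local folder:
$SPLUNK_HOME/bin/splunk btool limits list searchresults --debug
$SPLUNK_HOME/bin/splunk btool limits list stats --debug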
Thank you and best regards,
Andrew
↧
Sourcetype configuration - Duplicate fields
Hello Splunkers,
I am trying to configure a sourcetype in the Advanced section.
For example, I create a field alias by adding this key/value pair:
![alt text][1]
[1]: /storage/temp/254920-1.jpg
When I search the data, I see both MD5 and md5 fields extracted, containing the same values.
However, I want to see only md5 in the Interesting Fields.
Why do I see both fields?
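For context, my understanding is that the UI stores this as a FIELDALIAS entry in props.conf, something like the sketch below (the stanza and class names are placeholders), and that an alias adds the new field name alongside the original rather than replacing it, which would explain seeing both:
[my_sourcetype]
FIELDALIAS-md5 = MD5 AS md5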
Thank you in advance!
Afroditi
↧
How do I highlight a table cell based on a field of the search result?
I am trying to highlight cells in my results table. I have seen multiple examples showing how to highlight a cell based on the value shown in the actual results table.
What I need is for the cell to be highlighted based on another value from the search result. My search result looks like this:
Client | System | Timestamp | OrderCount | Color
Client1 | WebShop | 2018-09-12T13:00:00.000Z | 200 | red
Client1 | WebShop | 2018-09-12T14:00:00.000Z | 100 | yellow
Client1 | BizTalk | 2018-09-12T13:00:00.000Z | 50 | green
Client1 | BizTalk | 2018-09-12T14:00:00.000Z | 90 | yellow
...
My query looks like this:
base search | chart values(OrderCount) over Timestamp by System
Which will result in the following table:
Timestamp | WebShop | BizTalk
2018-09-12T13:00:00.000Z | 200 | 50
2018-09-12T14:00:00.000Z | 100 | 90
...
I want to highlight the OrderCount values (200, 100, 50, 90) based on the corresponding value of the "Color" field from my search result.
So the cell containing 200 would be red.
Is there any way to accomplish this?
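For reference, the direction I am experimenting with (an untested sketch): carry the color along inside the cell value, then split it back apart in a custom table cell renderer (JavaScript) that applies the highlight:
base search | eval cell=OrderCount."|".Color | chart values(cell) over Timestamp by System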
↧
Managing sourcetype
Hello Splunkers,
Is it possible to edit a sourcetype after its creation?
Thank you in advance!
Afroditi
↧
Searching a lookup CSV
Dears,
I'm trying to use a lookup so that Splunk reads a file and tells me whether I'm collecting logs from the hosts listed in that file.
What I need: to check whether I'm getting logs from the hosts that are in a CSV.
I am using the following query:
index=main OR index=client* | stats count by host | lookup client_sys hostname AS host
I also tried using the inputlookup command, but it did not work:
index=main OR index=client* NOT [| inputlookup client_sys.csv | fields host]
Is there any other way to do this?
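For reference, one pattern I am considering (a sketch; it assumes the CSV's host column is named hostname, as in my lookup definition): append the lookup's hosts with a zero count, then keep only the hosts that never report:
index=main OR index=client* | stats count by host | append [| inputlookup client_sys.csv | rename hostname AS host | eval count=0] | stats sum(count) AS events by host | where events=0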
Thanks a lot.
↧