Hello All,
I am running a report that uses multiple stats commands to build the final output. In this report I have two fields that depend on the number of machines I have. One is what we call runtime, which uses all the data for a machine and gives the time spent on a code level; it is currently correct. The other is current installs for a code level, which should only use the most recent file.
The runtime for each machine should span multiple code levels, since a machine can move from one level to another and we want to see the amount of time spent on each level.
The install, however, should only be counted for the current level: if a machine was on code level A and then B, then B (being the most recent) should have a single install and A should have 0.
I was attempting the following after my main stats command, but it always returns a blank. Are there any suggestions? Thank you!
Main Search..... | appendpipe [ dedup Machine | stats count(Machine) as Real_Count by Code | fillnull value=0]
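One direction that might help (a rough, untested sketch; it assumes the events still carry Machine, Code, and _time at the point where it runs, i.e. before the final stats rather than inside an appendpipe after it, and is_current is just a hypothetical helper field): flag each machine's most recent event with eventstats and count only the flagged events per code level:
Main Search.....
| eventstats max(_time) as latest_time by Machine
| eval is_current=if(_time=latest_time, 1, 0)
| stats sum(is_current) as Real_Count by Code
Code levels a machine has since left would still appear with Real_Count=0, as long as they have older events inside the search window.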
↧
Count a field value for one field, but not for another in Stats
↧
Field extraction showing up for different sourcetype
Hello. I used the Splunk field extractor to get a field from **sourcetype=sourcetype_a**.
For some reason, when I search **sourcetype=sourcetype_b**, the field I extracted for **sourcetype_a** is showing up. The data in that field is not relevant, since the logs are entirely different. Why is this happening, and how can I prevent it?
↧
Cisco Security Suite - cannot configure
I've just installed Cisco Security Suite (v 3.1.2) on Splunk (v 7.0.1).
When I launch the app, it takes me to the 'App configuration' screen. I click on the 'Continue to app setup page' button and I get this:
500 Internal Server Error
View more information about your request (request ID = 5a54fe1e947fcd584de350) in Search
When I click on the 'View.....' link above, the search has no results.
Any ideas?
↧
Count items satisfying a condition
I have an event created each time a user does an action in my system (e.g. login, open_page, close_page).
I need to compute statistics based on user regularity: a regular user logs in 5 or more times over the period; the others are occasional.
I use the query:
... event=login| stats count by user
which returns the following:
User A: 10
User B: 7
User C: 3
User D: 5
I am trying to obtain the following:
1. Number of regular users (login >=5 times)
2. Number of open_page events done by regular users
3. Proportion of close_page per user type (regular vs occasional)
Thanks for the help!!
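A rough sketch of one way to get those three numbers (untested; it assumes the raw events carry the user and event fields shown above, and user_type is a hypothetical helper field using the 5-login threshold):
... (event=login OR event=open_page OR event=close_page)
| eval is_login=if(event="login", 1, 0)
| eventstats sum(is_login) as login_count by user
| eval user_type=if(login_count>=5, "regular", "occasional")
| stats dc(user) as users count(eval(event="open_page")) as open_pages count(eval(event="close_page")) as close_pages by user_type
| eventstats sum(close_pages) as total_close_pages
| eval close_page_pct=round(close_pages/total_close_pages*100, 1)
The "regular" row would then give the number of regular users and their open_page count, while close_page_pct gives each user type's share of close_page events.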
↧
Multiple/Nested IF statement
My logic for my field "Action" is below, but because there are different else conditions I cannot figure out how to write an eval to achieve it.
**if** (Location="Varonis" **AND** like(Path,"%Hosting%"))
**then** Status="Action Required"
**else if** (Location="Varonis" **AND** (MonitoringStatus!="Monitored" **OR** MonitoringStatus=null))
**then** Status="Action Required"
**else if** (Location="Varonis" **AND** (DayBackUpStatus!="Backed Up" **OR** DayBackUpStatus=null))
**then** Status="Action Required"
**else if** (Location="Varonis" **AND** (DayBackUpStatus!="Backed Up" **OR** DayBackUpStatus=null))
**then** Status="Action Required"
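For what it's worth, one way to collapse this into a single eval (a sketch only; it assumes the field names above and that every branch really does assign the same "Action Required" value) is case(), with isnull() covering the null checks, since a != comparison against a missing field does not evaluate to true on its own. The final true() branch and its "No Action Required" value are placeholders for whatever the remaining else should be:
| eval Status=case(
    Location="Varonis" AND like(Path,"%Hosting%"), "Action Required",
    Location="Varonis" AND (MonitoringStatus!="Monitored" OR isnull(MonitoringStatus)), "Action Required",
    Location="Varonis" AND (DayBackUpStatus!="Backed Up" OR isnull(DayBackUpStatus)), "Action Required",
    true(), "No Action Required")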
↧
Custom Python Script Not Executing
I have created a python script for reading log data from a custom application. The script is copied to the folder below:
$SPLUNK_HOME\etc\apps\search\bin\splunk_script.py
The configuration is done using the Data Inputs -> Local Inputs -> Scripts menu to execute the script every 5 seconds.
The script reads the .log file and returns text in CSV format.
However, the script is not executing.
Please share the steps for configuring python script execution using the Scripts option in data inputs. The results returned by the python script should be visible in the Search & Reporting app in Splunk.
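As a first troubleshooting step (assuming the internal index is searchable), scripted-input failures are usually logged by splunkd under the ExecProcessor component, so a search along these lines may show why the script never runs:
index=_internal sourcetype=splunkd component=ExecProcessor "splunk_script.py"
Permission problems (the script not being executable by the user running Splunk) and unhandled Python exceptions typically show up there.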
↧
Splunk Server Login Error
I am getting the below error when logging in to the CLI of the Splunk server (shown in the screenshot).
Any suggestions on how to remediate it?
Thanks for your help.
![alt text][1]
[1]: /storage/temp/226668-splunk.jpg
↧
Why is my JSON data extracted twice?
My inputs.conf is:
[monitor:///var/log/grains.log]
sourcetype = grains_log
disabled = 0
index = os
My props.conf is as follows:
[grains_log]
INDEXED_EXTRACTIONS = json
KV_MODE = none
But I keep seeing double values.
Does anyone have an idea what I am missing here?
↧
When will a new update of the "Splunk App for Microsoft Exchange" be available to support Splunk version 7.0?
Hello,
We are using Splunk version 7.0 in our work environment.
As mentioned in the Splunkbase documentation, "Splunk App for Microsoft Exchange" version 3.4.2 is compatible up to Splunk version 6.6.
Any idea what the ETA for a new release will be?
↧
Time Input to Form Not Working
Maybe I've been overthinking this, but for the life of me I cannot get my Time Input to my form working! I'm using this documentation: http://docs.splunk.com/Documentation/Splunk/6.1.1/Viz/FormEditor#Add_a_time_input_to_a_for and this is my search string from my report:
index=main sourcetype=audit_main source=AUDIT_LOGS OS_USERNAME=%username%
I didn't see anything in the documentation that says I need to edit this search string. Even more importantly, however, I do not see a "Search Icon" when I go to edit a panel, let alone an option to "Edit Search String" or use a Shared Time Picker.
That said, I was able to get this partially working by playing around with the time range a bit. My query works for ranges like last 15 minutes, last 24 hours, last 7 days, and so on, everything BUT "All time". If I select "All time", I get an error saying the search could not be parsed because of a comparison operator (Error in 'search' command: Unable to parse the search: Comparator '=' is missing a term on the right hand side.).
My source code is as follows:
What is going on? What am I doing wrong? Would greatly appreciate any help!
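One small thing that stands out (it may or may not be related to the All-time failure): Simple XML form tokens are normally referenced with dollar signs rather than percent signs, so if the form defines a text input whose token is named username, the panel search would usually look like this (the token name is an assumption):
index=main sourcetype=audit_main source=AUDIT_LOGS OS_USERNAME=$username$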
↧
Port sweep: 1 source to multiple destinations on more than 4 dest_ports
This is my current port-sweep query (1 source -> dest_ips > 800 -> 1 dest_port):
| tstats `summariesonly` dc(All_Traffic.dest) AS count from datamodel=Network_Traffic by All_Traffic.src,All_Traffic.transport,All_Traffic.dest_port
| lookup application_protocol_lookup dest_port AS All_Traffic.dest_port transport AS All_Traffic.transport OUTPUT app
| `drop_dm_object_name("All_Traffic")` | search app=* | search src!="192.168.176.81" | where count>800
What if I want to reframe my query for more than 4 dest_ports (1 source -> dest_ips > 800 -> dest_ports > 4)?
Can you please help me with this?
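A sketch of one way to reframe it (untested): drop dest_port from the split-by fields so distinct ports can be counted per source, then threshold on both counts. Note the application_protocol_lookup step from the original query is omitted here, since there is no longer a single dest_port per row to look up:
| tstats `summariesonly` dc(All_Traffic.dest) AS dest_count dc(All_Traffic.dest_port) AS port_count from datamodel=Network_Traffic by All_Traffic.src,All_Traffic.transport
| `drop_dm_object_name("All_Traffic")`
| search src!="192.168.176.81"
| where dest_count>800 AND port_count>4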
↧
Suppress "Y" axis scale
I am using a stacked bar chart to display average responses to survey questions. Each block displays the average for that question. The charts have four to five questions. I would like to be able to suppress the scale on the "Y" axis as it shows the total of the blocks in the bar which is confusing. ![alt text][1]
[1]: /storage/temp/226671-splunk-suspress-y-values.jpg
↧
Drill Down on Stacked Bar chart
The chart shows the number of incidents by vendor during a time period. I would like to be able to drill down on each bar for specific information about that vendor. I monitor 41 vendors, which may or may not show up in the chart depending on their performance for that time period. It seems as though the standard drilldown function takes you to one location (i.e. search, report, chart, etc.). Is it possible, via XML, to design the drilldown to go to each vendor listed? I'm guessing (hopefully wrong!) that if it can be done, I would need reports for incidents (or the other 19 KPIs) for each vendor (800 reports?) ![alt text][1]
[1]: /storage/temp/226674-drill-down-question.jpg
↧
How to get the forwarder IP addresses reporting to Splunk
Hello,
Can I please know how to get the IP addresses of all forwarders reporting to Splunk without using the internal index? Some of our users don't have access to the internal data, so searches built on index=_internal will not work for them. Is there any way to create such a search without it, to get all the forwarder IPs?
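If the forwarders check in to a deployment server, one possibility (untested, and it still requires permission to run the rest command against the deployment server) is to list its clients, which report their IP addresses; the field names here are to the best of my knowledge:
| rest /services/deployment/server/clients
| table hostname ip lastPhoneHomeTime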
↧
This Error.... No data collection for VNX is found in the inputs.conf. Do nothing and Quit the TA.
We keep getting “No data collection for VNX is found in the inputs.conf. Do nothing and Quit the TA.” We have an inputs.conf in /splunk/etc/apps/Splunk_TA_emc-vnx/local, as documented. Not sure what else to try. Has anyone else run into this issue before?
Also, just an FYI: we have also set up the RSA key and NaviCLI as documented.
Sample inputs.conf stanzas:
[vnx_data_loader://NAME1_file]
network_addr = 10.10.10.10
network_addr2 = 10.10.10.11
username = Splunkuser
password = Splunkerpassword
platform = VNX File
site = site1
loglevel = DEBUG
scope = 0
index = vnx_test
[vnx_data_loader://NAME2_block]
network_addr = 10.10.10.14
network_addr2 = 10.10.10.15
username = Splunkuser
password = Splunkerpassword
platform = VNX File
site = site1
loglevel = DEBUG
scope = 0
index = vnx_test
↧
CPU and memory usage consumed by a Splunk dashboard?
My splunk infrastructure is in Linux.
Suddenly, one of my Splunk dashboards takes almost 20 minutes to load; earlier it used to take around 2 minutes. I haven't increased the time span recently, and I am sure I haven't modified the dashboard queries.
Is there any way I can check the CPU and memory usage for that particular dashboard? If so, do I check it on the Linux server, or can I check it on the Splunk search head itself?
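One place to look (assuming the _introspection and _audit indexes are searchable, which usually requires an admin-like role; the field names are taken from the resource-usage data the Monitoring Console uses and may vary by version) is per-search-process resource usage on the search head:
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats max(data.pct_cpu) AS peak_cpu_pct max(data.mem_used) AS peak_mem_mb by data.search_props.sid data.search_props.app data.search_props.user
Search run times are also recorded in index=_audit (action=search info=completed, total_run_time), which can help confirm whether it is the dashboard's own searches that slowed down.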
↧
How do I collect SharePoint audit data using DBConnect
Hi There,
I am looking for a way to get SharePoint audit data into Splunk via DBConnect. Does anyone have a working script that I can use?
↧
I have a tstats search that works for me (admin) but not for other users (who inherit from the 'user' role). Why is that?
I have a user who is asking how to show earliest logs indexed by the indexer for a particular host. I tried this simple search using tstats, but when he runs it he gets no results back. Here is the search:
| tstats min(_time) as earliest_log max(_time) as latest_log WHERE index=winevent_dc_index host=somehost.uci.edu by host
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(earliest_log) ctime(latest_log)
Is there some special capability that needs to be added to his role for this to work? He gets back no results at all.
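One common cause is the role simply not being allowed to search that index, in which case tstats quietly returns nothing. A quick way to compare roles (assuming permission to run the rest command) is something like:
| rest /services/authorization/roles
| table title srchIndexesAllowed srchIndexesDefault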
↧
KV store issue for collection SavedSearchHistory
From time to time, I am getting the below warning:
WARN SavedSearchHistory - Can't persist saved-search history due to the KV-Store either being disabled or failing
It doesn't appear all the time, just randomly. What does this message mean, and how do I fix it? We have the KV store enabled. Thanks.
↧
Setting the timestamp when using the collect command
I am searching yesterday's data and trying to insert it into an index for reporting purposes. I need to take multiple indexed events with various date/time fields and override them with the current date/time for the summary index table. The following search is a very simplified version that illustrates the issue.
index=blah
| eval _time=now()
| collect index=test
When I do the search, it inserts yesterday's date/time into the summary index _time field. Is there any way to reassign this?
Splunk 6.6.3.
↧