Articles on this Page
- 11/11/19--02:34: double fields If Statements
- 11/11/19--03:14: count users who acc...
- 11/11/19--04:48: Problem Importing Sep Cloud Events into Splunk (Windows)
- 11/11/19--05:27: SPL for displaying overview panel in splunk app for infrastructure.
- 11/11/19--06:04: Can you visualize itsi with splunk cloud gateway?
- 11/11/19--06:20: splunk custom app using virtual environment - deploying the app
- 11/11/19--06:25: Splunk Platform Upg...
- 11/11/19--05:57: running a search fo...
- 11/11/19--06:01: How to search for e...
- 11/11/19--06:33: CSV file index time...
- 11/11/19--07:32: How to verify if ignoreOlderThan is working or not ?
- 11/11/19--07:41: strptime conversion difficulty from a string
- 11/11/19--07:42: Splunk as a tool for capacity and performance management
- 11/11/19--06:40: How to find the top...
- 11/11/19--09:32: Splunk forwarder Skipping Log files occasionally
- 11/11/19--09:52: Minimum requirements for the Splunk Fundamentals 1 course
- 11/11/19--10:20: Storing spl in lookup
- 11/11/19--10:29: Frozen Index Management
- 11/11/19--11:05: Arista splunk forwa...
- 11/11/19--11:59: Black listed software alerts
I am trying to generate different fields using if.
I would like to write a query that does the following:
if sender==x then eval field_a=time_a and field_b=time_b
else if sender==y then eval field_x=time_x and field_y=time_y
The general scenario: I want to calculate the duration of the processing of log files. The log files are sent from server_a to server_b, where they are processed and sent back to server_a.
I want to write a query that calculates how long a file takes from server_a to server_b, how long from server_b back to server_a, and the total duration; that is, server_a -> server_b -> server_a.
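A sketch of one way to compute the legs and the total (the index name, the file_id correlation field, and the sender values are assumptions, not taken from the original logs): tag each event with the leg it represents, then aggregate per file with stats.

```spl
index=transfer_logs
| eval leg_start  = if(sender=="server_a", _time, null()),
       leg_arrive = if(sender=="server_b", _time, null())
| stats min(leg_start) as t_sent, min(leg_arrive) as t_received, max(_time) as t_returned by file_id
| eval a_to_b = t_received - t_sent,
       b_to_a = t_returned - t_received,
       total  = t_returned - t_sent
```

The key idea is that eval's if() lets you populate different fields per sender, and stats then collapses the per-leg timestamps into durations in one pass.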
I have user names in the field ContextUsername in index=otcs sourcetype=OtcsSummarytimings. To analyze how users are working with the system, I need the following two counts:
1. light users = who access the system only **once per week or only 52 times per year**
2. normal users = who access the system **more than once a week or more than 52 times per year**
I know I can use timechart, span=1w, and a dc(ContextUsername), but I don't know how to realize the "only/more than once a week or 52 times a year" part.
Any help would be much appreciated.
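A sketch using the yearly threshold (the one-year time window is an assumption; the 52-times cutoff is taken from the question): count accesses per user over the year, classify each user, then count users per class.

```spl
index=otcs sourcetype=OtcsSummarytimings earliest=-1y@y latest=@y
| stats count as accesses by ContextUsername
| eval user_type = if(accesses <= 52, "light", "normal")
| stats dc(ContextUsername) as user_count by user_type
```

The weekly variant would replace the first stats with a per-week count (e.g. bin _time span=1w, then stats count by ContextUsername, _time) and classify on the weekly average instead.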
I am trying to import Sep Cloud Events into Splunk. I am reading the documentation, but I can't install this on Windows. The script (wrapped.sh) can't run on a Windows system. Here is the manual that Symantec provides: https://support.symantec.com/us/en/article.HOWTO128173.html but it doesn't explain how to configure it on a Windows system.
Can anyone help me?
I'm hoping someone could help.
I would like to create a dashboard for one of our hosts similar to the Splunk app for infrastructure overview page (as in screenshot below).
We are indexing as metrics (index=em_metrics) and would like to create the panels shown below.
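A possible starting point for a metrics-backed panel (the metric name cpu.system and the 5-minute span are assumptions; the actual metric names in em_metrics can be listed first with `| mcatalog values(metric_name) WHERE index=em_metrics`):

```spl
| mstats avg(_value) WHERE index=em_metrics AND metric_name="cpu.system" span=5m BY host
```

Each overview-style panel would be one such mstats search over a different metric_name, rendered as a single-value or area chart.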
We are working on ITSI, and we need to be able to view the dashboards on mobile. For this, the Splunk Cloud Gateway app has been installed, but apparently this is not possible with that app. Does anyone have suggestions, or know of another way the dashboards could be viewed on mobile devices?
I am having a hard time understanding how to deploy an app that needs a virtual environment in a distributed environment.
Currently I have installed the app on the Heavy Forwarder manually, but the HF is a deployment client of the DS, so this will not work: I think if the DS redeploys, the app will disappear from the HF. So how can I place the app on the DS and then deploy it to the HF, when the app requires a virtual environment that is currently installed in the app's bin folder?
Thank you in advance
We recently upgraded from Splunk 7.1.x to Splunk 8.0. I had installed the Splunk Platform Upgrade Readiness App version 1 before this upgrade, and it loaded, although for obvious reasons it skipped all the tests, since the Splunk version was 7.1. Hence the upgrade to the latest available version, Splunk 8. Now, after this, the Splunk Platform Upgrade Readiness App is not loading the test-run panel, so I am unable to test the instance's compatibility with the upcoming Python 3 switch. I then upgraded to the new version (version 2) of the readiness app. That didn't help either.
Any pointers on how I should proceed to get this app loading?
I am trying to create a drilldown for my timechart; the idea is to drill down to the events that happened 30 minutes before and after the clicked time (click.value in my case). I referred to an answer
which helped me pass the earliest time, but when I use the same concept for the latest time I do not get the desired result. Below is my current query:
index="tutorial" categoryId=strategy earliest=[|gentimes start=-1|eval new = relative_time($click.value$,"-1800")| return $$new] latest=[|gentimes start=-1|eval latest1 =($click.value$ + 1800)| return $$latest1] | top clientip
After passing through the drilldown pipeline, the query changes to:
index="tutorial" categoryId=strategy earliest=[|gentimes start=-1|eval new = relative_time(1572728400.000,"-1800")| return $new] **latest=[|gentimes start=-1|eval latest1 =(1572728400.000 1800)| return $latest1]** | top clientip
If you look closely, the + sign has vanished. I tried many ways to find a workaround for that but wasn't able to. Please take a look and try running the query yourself.
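One workaround sketch (an assumption, not a confirmed fix: the bare + appears to be lost during token substitution, so the offset is kept inside a quoted relative_time() string for both bounds instead; in the dashboard XML the return tokens would still need the $$ escaping shown in the question):

```spl
index="tutorial" categoryId=strategy
    earliest=[| gentimes start=-1 | eval new = relative_time($click.value$, "-30m") | return $new]
    latest=[| gentimes start=-1 | eval latest1 = relative_time($click.value$, "+30m") | return $latest1]
| top clientip
```

Because "+30m" is a quoted string argument rather than an arithmetic expression, there is no bare + for the token pipeline to strip.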
@Raghav2384 @RVDowning @somesoni2 @woodcock @gcusello @mayurr98 please help guys!!
Hello wonderful people of the internet,
I'm still quite new when it comes to using splunk, so could use a bit of advice with this one. I have 2 CSV files, both containing a list of IP addresses. One of these is called IOC1.csv, and is a file of known malicious addresses.
The second CSV, called ignore.csv, contains all of the IP addresses I wish to exclude from the results (Basically, stuff we want to discount/tune out).
I'd like a search which could check all of the FW logs for any hits which have an IP from IOC1.csv located in there, but discount the event if an IP from ignore.csv is also present.
Could somebody advise on how this could be done?
Thank you all so much.
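A sketch of one common approach (the firewall index name and the field names ip and src_ip are assumptions about the data): use the IOC list as an inclusion subsearch and the ignore list as an exclusion subsearch.

```spl
index=firewall
    [| inputlookup IOC1.csv | rename ip AS src_ip | fields src_ip ]
    NOT [| inputlookup ignore.csv | rename ip AS src_ip | fields src_ip ]
```

Each subsearch expands into an OR of src_ip=... terms, so events matching an IOC address are kept unless the address also appears in ignore.csv.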
I have the following little csv file:
You can see it contains a header and two rows of data.
I want to perform index time extraction of the fields. I also want to use timestamp from the time column.
This is my props.conf configuration:
INDEXED_EXTRACTIONS = csv
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = time
TIME_FORMAT = %Y-%m-%d
category = Custom
pulldown_type = 1
HEADER_FIELD_LINE_NUMBER = 1
disabled = false
No matter what I do, Splunk always indexes the header as well. I don't want that. I have tried the following settings:
1. PREAMBLE_REGEX - this ignores the header, but then index-time field extractions are not performed, probably because the header is ignored (a chicken-and-egg situation). I can work around this by listing the comma-separated field names manually, but I want schema-on-write support, which Splunk doesn't seem to provide here.
2. HEADER_FIELD_LINE_NUMBER = 1 - tried this setting, which made no difference.
Does anyone know if it is possible to index csv file fields without the header and without defining column names manually in props.conf?
Hi, I have implemented ignoreOlderThan for 7 days, and I want to verify whether it is working. Is there any query, or any place in the DMC, where I can validate that it is working?
I am having difficulty getting the strptime function to properly convert my date string into a usable and accurate time stamp. Here is an example of the string and the strptime function I have tried. Can you help with the proper conversion please?
string=**05-NOV-19 10.53.49.287000 AM AMERICA/CHICAGO**
This did not work: **| eval first_res_time = strptime(previous_resolution_time, "%d/%B/%y %H/%M/%S/%N")**
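For comparison, a format string that matches the example's structure (this sketch assumes the trailing zone name must be stripped first, since strptime does not parse Region/City time-zone names; %I with %p handles the 12-hour clock and %6N the microseconds):

```spl
| eval ts = replace(previous_resolution_time, "\s+\S+/\S+$", "")
| eval first_res_time = strptime(ts, "%d-%b-%y %I.%M.%S.%6N %p")
```

Note the separators must mirror the string exactly: dashes in the date, dots in the time, and %b rather than %B for the abbreviated month.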
I am wondering if anyone has experience or suggestions for using Splunk as a tool for capacity and performance management (in addition to using it as an IT ops and security tool).
Ultimately I would like to be able to report capacity and performance stats for different domains such as VMs, network, telephony, storage, etc.
The way I see it right now, I'll have three types of data sources:
1. Systems that Splunk has apps for and logs to monitor (vSphere, Cisco, etc.) - this one should be straightforward.
2. Systems that can be scripted to produce daily, weekly, or monthly reports (storage systems, etc.) - I think I should be able to monitor the report directory and index data sources such as CSV files?
3. Systems that don't log or have the ability to report capacity/performance-related stats - someone will collect a couple of KPIs once a month. What is the best place to store these "manual" data inputs? A CSV file that gets ingested into Splunk?
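For case 2, a minimal sketch of watching a report drop directory (the path, sourcetype, and index names are assumptions for illustration):

```ini
# inputs.conf - monitor a directory where scripted reports land as CSV
[monitor:///opt/reports/storage]
sourcetype = storage_capacity_csv
index = capacity
```

Pairing the sourcetype with INDEXED_EXTRACTIONS = csv in props.conf would then extract the report columns as fields at index time.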
Hi, I am using this search to find out which Bluecoat filter categories cause the most bandwidth utilization:
index=bluecoat mysearch | fields sc_filter_category sc_bytes | eventstats sum(sc_bytes) as allbytes | stats sum(sc_bytes) as "totalbytes" by sc_filter_category,allbytes | eval "Bandwidth(MB)"= round(totalbytes/(1024*1024),2) | eval Percentage=(totalbytes/allbytes)*100 | sort 10 -"Bandwidth(MB)"
This seems to work fine.
My result is, as an example, a table like this:
sc_filter_category, allbytes, Bandwidth(MB), Percentage
category1, 100, 20, 20
What I would then like to do, in a second search, is list the top two URLs that cause the most bandwidth for each category.
The output would look like this
category1 www.abc.com, www.def.com, www.ghi.com
category2 www.abc1.com, www.def1.com, www.ghi1.com
I am not able to find out how to search dynamically using the result of the first search... any help appreciated.
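A sketch for the second part (cs_host is an assumed name for the URL/host field in the Bluecoat data): rank hosts by bytes within each category, keep the top two, then flatten into one row per category.

```spl
index=bluecoat mysearch
| stats sum(sc_bytes) as bytes by sc_filter_category, cs_host
| sort sc_filter_category, -bytes
| streamstats count as rank by sc_filter_category
| where rank <= 2
| stats list(cs_host) as top_urls by sc_filter_category
```

streamstats numbers the hosts within each category in descending byte order, so the where clause keeps exactly the top two per category without needing one subsearch per row of the first result.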
I am having an issue indexing a log file that gets rotated every hour. The log file error.log is rotated at the top of every hour, and a new file is created with the same name (error.log). The old file is renamed and zipped to error.log._timestamp.gz.
Sometimes Splunk does not index the file for an hour and resumes indexing once the file is rotated again, so a complete hour of logs gets skipped. Before Splunk resumes indexing, the following error message is logged:
WatchedFile - Checksum for seekptr didn't match, will re-read entire file
Every file has different content, because each event has a timestamp, so the first 256 characters should not match an entry in the fishbucket.
_TCP_ROUTING = indexers123
index = indexname
sourcetype = sourcetypename
initCrcLength = 1024
disabled = 0
Splunk version is 7.0.3
What are the minimum requirements for a Splunk VM for the Splunk Fundamentals 1 course? I can see the reference hardware at https://docs.splunk.com/Documentation/Splunk/6.5.0/Capacity/Referencehardware but that's a bit much just for a VM for the course.
Is it possible to store a search string in a lookup column, retrieve the content and run it as a search?
| lookup test.csv lookup_key_field as event_code OUTPUT spl_field as search_string
| ... some command to actually run the search_string ...
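One sketch using map to execute the retrieved string (the event-code value "4625" is hypothetical, and this assumes a single matching row; map substitutes field values from each input row into the templated search):

```spl
| inputlookup test.csv where lookup_key_field="4625"
| fields spl_field
| map search="search $spl_field$" maxsearches=1
```

The caveat is that map runs one sub-search per input row, so the lookup should be filtered down to the single row whose stored SPL you want to run.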
Is there an app/script/mechanism out there that would let you list your available frozen indexes by human-readable dates/timestamps, move them back to a thawed state, and rebuild them?
With a configuration like the one below, it appears that the Splunk forwarder is essentially load balancing telemetry logs, i.e., randomly sending telemetry entries to either Splunk server, but not both. (The servers are independent.) Is it possible to have a configuration like the one below send telemetry to both servers? If not, what would be the best way to end up with a full set of telemetry on both servers?
splunk-server 18.104.22.168 port 9997
splunk-server 22.214.171.124 port 9997
http-commands protocol socket
I work in an enterprise environment. I'm trying to figure out a way to create a list of blacklisted software and have Splunk send an alert to the scorecard whenever any blacklisted software is installed on a user's host. Any ideas on how I can get this done?
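One common pattern, sketched with assumed names throughout (a lookup file blacklisted_software.csv with a product column, and an inventory sourcetype that records installs with a software_name field):

```spl
index=endpoint sourcetype=software_inventory action=installed
| lookup blacklisted_software.csv product AS software_name OUTPUT product AS blacklisted
| where isnotnull(blacklisted)
| table _time, host, user, software_name
```

Saved as an alert, this fires whenever an install event's software name matches a row in the blacklist lookup; maintaining the list is then just editing the CSV.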