How to convert seconds to [h]:mm:ss?
Hi Guys!
I have an error duration in seconds, how can I convert it to [h]:mm:ss?
I used the query below, but if the total is 25 hours, it shows as 1d+1h instead:
| stats sum(DURATION) AS "DURATION"
| eval HOURS=tostring(DURATION,"duration")
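The only workaround I can think of is building the string manually, something like this (an untested sketch; it assumes DURATION is a whole number of seconds):
| stats sum(DURATION) AS DURATION
| eval h = floor(DURATION/3600)
| eval m = floor((DURATION%3600)/60)
| eval s = floor(DURATION%60)
| eval HOURS = h . ":" . substr("0".m, -2) . ":" . substr("0".s, -2)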
Thank you!
↧
Changing time format
Currently I'm using a stats command to populate a few fields along with time.
The command is as follows,
stats values(session_id) as Session **values(_time) as Time** values(action) as Action_Performed values(success) as Rate by usage
Here I get **Time** in a strange format, like 1515424081.
Is there any way to change the format to something readable ?
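I'm assuming Time is coming back as Unix epoch seconds, so something like this might work (untested sketch):
stats values(session_id) as Session values(_time) as Time values(action) as Action_Performed values(success) as Rate by usage
| convert ctime(Time)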
↧
Splunk stops indexing once an indexed extractions field is added
This is my monitor under the `inputs.conf` file:
[monitor:///var/lib/docker/containers/.../*.log]
disabled = false
sourcetype = containers
index = my_container
This doesn't work that well, because the logs are not split correctly, and hence they are really hard to read.
I thought then to use the below config under `props.conf`:
[containers]
INDEXED_EXTRACTIONS = json
But once I add this configuration, it stops reporting to Splunk. I don't see any logs.
Any suggestions?
Note that these configurations are set on a machine which runs the Splunk Universal Forwarder.
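For reference, the fuller stanza I'm considering, based on the props.conf docs (everything beyond INDEXED_EXTRACTIONS is a guess on my part, not verified; TIMESTAMP_FIELDS assumes the Docker JSON "time" key):
[containers]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = time
My understanding is that INDEXED_EXTRACTIONS parsing happens on the forwarder itself, which is why I put the stanza there.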
↧
Using self-signed certs for a Splunk 6.6.3 indexer (Windows) and Splunk forwarder (Windows). Both my indexer and forwarder run on Win2k12.
I followed the steps in the docs below:
http://docs.splunk.com/Documentation/Splunk/6.6.3/Security/Howtoself-signcertificates
http://docs.splunk.com/Documentation/Splunk/6.6.3/Security/HowtoprepareyoursignedcertificatesforSplunk
http://docs.splunk.com/Documentation/Splunk/6.6.3/Security/ConfigureSplunkforwardingtousesignedcertificates
I followed the steps below on my indexer server. Do I have to create a self-signed cert for my forwarder too, or can I just copy the cert files I created on the indexer over to the forwarder?
Steps I followed to create the certs on the indexer:
- Created myServerCertificate.pem
- Created myServerPrivateKey.key
- Created myCACertificate.pem
- Combined these certs into one NewCert.pem file
- Updated inputs.conf on the indexer to point to the new NewCert.pem
- Copied NewCert.pem to the forwarder and updated outputs.conf to point to NewCert.pem
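For reference, my understanding of the minimal stanzas (a sketch; hostname and password are placeholders, and the key names are my reading of the 6.6 docs):
inputs.conf on the indexer:
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME\etc\auth\mycerts\NewCert.pem
sslPassword = <private key password>
requireClientCert = false
outputs.conf on the forwarder:
[tcpout:ssl_group]
server = <indexer-host>:9997
sslCertPath = $SPLUNK_HOME\etc\auth\mycerts\NewCert.pem
sslPassword = <private key password>
sslRootCAPath = $SPLUNK_HOME\etc\auth\mycerts\myCACertificate.pem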
Am I missing something? I can't seem to get it working.
The error I get in the logs:
ERROR tcpInputProc error encountered for connection from source error 140760FC:SSL routine :SSL23
thank you
↧
Splunk *nix app: not getting the processes in a specific interval
The *nix app retrieves the details of processes (sshd, httpd, etc.) running on our Unix/Linux servers. However, a few processes (on a few servers) have not been picked up for quite a long time; the app is not retrieving those events. Is this an issue with output past a 256-line count getting omitted? Would it help if I changed the ulimit values? Please help with this.
↧
The "level" field is being automatically added by splunk, how to we ask splunk to extract log level from my json message ?
The "level" field is being automatically added by splunk, how to we ask splunk to extract log level from my json message ?
![alt text][1]
[1]: /storage/temp/226660-splunk-log.png
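One idea that might work is pulling it out explicitly at search time with spath (a sketch; it assumes the JSON is the raw event and the key is literally named level):
... | spath path=level output=json_level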
↧
I want to access the logs of a Linux machine on the same network from my Windows machine which has Splunk Enterprise; do I have to download the UF on both Windows and Linux?
I want to access the logs of a Linux machine on the same network from my Windows machine, and I know that for that I have to install the UF, but I don't know how to configure inputs.conf and outputs.conf to receive the data from the Linux machine.
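My rough understanding of the minimal setup so far, in case someone can confirm (angle brackets are placeholders):
outputs.conf on the Linux UF:
[tcpout]
defaultGroup = windows_indexer

[tcpout:windows_indexer]
server = <windows-host>:9997
inputs.conf on the Linux UF:
[monitor:///var/log/messages]
index = main
sourcetype = syslog
And on the Windows Splunk Enterprise side, receiving has to be enabled on port 9997 (Settings > Forwarding and receiving > Configure receiving).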
↧
Splunk indexing issues for logs: WatchedFile - Checksum for seekptr didn't match
Hello Everyone,
I have a question regarding ingesting log files which don't have a timestamp in the file name.
I am receiving the following messages in the splunkd.log file:
**01-08-2018 02:30:21.007 -0600 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='/abcpad/gatst01/outbound/sys_data.log'.
01-08-2018 02:30:21.007 -0600 INFO WatchedFile - Will begin reading at offset=0 for file='/abcpad/gatst01/outbound/sys_data.log'.**
FYI, the log file is generated by a script, with the same filename, every 15 minutes; the log file is rolled over with the new changes.
Please help me figure out where I'm going wrong here. Here is a sample of the log file:
01-08-2018 00:24:57.487 Used Space: 30055416
Free Space: 67914024
Usage Percent: 31%
File System: /u02
Total Size: 309637120
Used Space: 32651888
Free Space: 261259152
Usage Percent: 12%
File System: /u03
Total Size: 877304620
Used Space: 559123000
Free Space: 273617140
Usage Percent: 68%
File System: /u04
Total Size: 1032123136
Used Space: 779034500
Free Space: 200659836
Usage Percent: 80%
File System: /u05
Total Size: 103212320
Used Space: 67048924
Free Space: 30920516
Usage Percent: 69%
File System: /u06
Total Size: 659131600
Used Space: 285883800
Free Space: 339770612
Usage Percent: 46%
File System: /u07
Total Size: 294155264
Used Space: 64517568
Free Space: 214696256
Usage Percent: 24%
File System: /u08
Total Size: 294155264
Used Space: 180619292
Free Space: 98594532
Usage Percent: 65%
File System: /u09
Total Size: 294155264
Used Space: 174681436
Free Space: 104532388
Usage Percent: 63%
01-08-2018 00:24:57.500 MemTotal: 51629136 kB MemFree: 483604 kB Cached: 41778468 kB SwapCached: 10080 kB SwapTotal: 10751992 kB SwapFree: 10056880 kB
Thanks,
Ramu Chittiprolu
↧
Do we have to edit inputs.conf or outputs.conf under C:\Program Files\Splunk\etc\system\local on the Windows machine as well, if we want to access the logs of a Linux machine which already has the UF installed?
I have Splunk Enterprise installed on Windows and I want to access the logs of a Linux machine which has the UF installed, but its inputs.conf and outputs.conf have not been touched. To access the logs, do we have to edit the inputs or outputs file on Windows?
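From what I've read, the Windows (receiving) side only needs a receiving port enabled, either through the UI or with a stanza like this under etc\system\local\inputs.conf (a sketch):
[splunktcp://9997]
disabled = 0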
↧
Is there a way to check whether makeresults stored the events in the index or not?
I am searching like this in Splunk:
| makeresults count=3 | eval _raw="demo event" | collect index=main sourcetype="sample"
It generates the events and also stores these 3 events in the index.
Then I searched this:
| makeresults count=3 | eval _raw="demo event" | collect index="randomindex" sourcetype="sample"
Here, "randomIndex" is not available in the Splunk instance, so it will not store the events, but it will show all 3 events anyway when the above search is fired. but still not stored anywhere.
I am trying same to do using REST API to search "makeresults" multiple times and store in the given index but there is no way to check whether the events has been stored successfully or not.
Is there a way to check whether the makeresults stored the event successfully or not.? using job properties or anything.?
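The only check I can think of is a follow-up search against the target index right after the collect runs (a sketch):
index=main sourcetype="sample" "demo event" | stats count
Comparing the count before and after the collect would at least confirm whether the events landed, but I'm hoping there is something in the job properties instead.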
↧
Do REST APIs support multiple instances on the same host?
I need to fetch some configuration files through REST APIs. In case there are multiple Splunk instances on the same host, can the server-specific configuration files still be accessed through REST APIs?
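My assumption is that each instance listens on its own management port, so the same endpoint can be hit once per instance (a sketch; ports and credentials are examples):
curl -k -u admin:changeme https://localhost:8089/services/configs/conf-inputs
curl -k -u admin:changeme https://localhost:18089/services/configs/conf-inputs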
↧
Add new indexers, keeping old for historical
I have an indexer challenge that I was hoping to get help with. We have 4 indexers with a significant amount of historical data. We are adding 4 new indexers with significantly more resources to overcome performance problems. Is it possible to do the following, and if so, what would be the best way to address it?
- Write all new events to the 4 new indexers
- Keep the 4 old indexers online and searchable, but do not write new events to these indexers
- Search is possible against all 8 indexers
- NO replication between the 4 old, and 4 new indexers. Only replication within their group.
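For the forwarding side, the sketch I have in mind is an outputs.conf on the forwarders that points only at the new boxes (hostnames are placeholders), with all 8 indexers still configured as search peers on the search head:
[tcpout:new_indexers]
server = <newidx1>:9997,<newidx2>:9997,<newidx3>:9997,<newidx4>:9997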
Thanks in advance for the help
↧
TA-Mcafee 2.1.3 does not support the latest version of McAfee ePO
We have been sending McAfee logs from the ePO DB using the documentation provided with the TA and DB Connect (for a year now):
http://docs.splunk.com/Documentation/AddOns/released/McAfeeEPO/ConfigureDBConnectv2inputs
But we notice that only the old McAfee client (VirusScan) has correct logs in Splunk. For the current client (Endpoint Security), the data in Splunk is not useful because crucial fields like file_name (the file name of the threat) are empty.
From the ePO console we can see that it is now Target Path + Target Name, but these are not present.
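As a search-time stopgap, could the empty field be backfilled with an eval? Something like this (target_name is only my guess at how the ePO column comes through DB Connect):
... | eval file_name = coalesce(file_name, target_name)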
↧
Splunk upgrade to 7.0. List of supported apps
Hi,
Is there a handy way to find which apps/add-ons are supported on 7.0? We will be upgrading our Splunk environments from 6 to 7, and we have many apps.
↧
EVAL for ELSE IF condition
My logic for my "Action" field is below, but because there are different else-if conditions, I cannot write an eval to achieve it.
if (Location="Varonis" AND (like(Path,"%Hosting%")
then Status=Action Required
else if(Location="Varonis" AND ( MonitoringStatus!="Monitored" OR MonitoringStatus=null )
then Status=Action Required
else if(Location="Varonis" AND ( DayBackUpStatus!="Backed Up" OR DayBackUpStatus=null )
then Status=Action Required
else if(Location="Varonis" AND ( DayBackUpStatus!="Backed Up" OR DayBackUpStatus=null )
then Status=Action Required
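Would a single eval with case() express this? A sketch using the field names above (case() returns the first matching branch, which should give the else-if behavior; the "No Action" default is my own assumption):
| eval Status = case(
    Location="Varonis" AND like(Path,"%Hosting%"), "Action Required",
    Location="Varonis" AND (MonitoringStatus!="Monitored" OR isnull(MonitoringStatus)), "Action Required",
    Location="Varonis" AND (DayBackUpStatus!="Backed Up" OR isnull(DayBackUpStatus)), "Action Required",
    true(), "No Action")
The isnull() checks are there because a != comparison against a null field evaluates to null rather than true.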
↧
I have an alert for when there is a license violation, and a search for the top 10 consumers of license. How do I combine both, so that if there is a license violation it sends me an alert with the top consumers? Is it possible?
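For the top-consumers half, the sketch I have is based on the internal license usage log (field names as I understand them from _internal; b is bytes and idx is the index):
index=_internal source=*license_usage.log* type="Usage"
| stats sum(b) AS bytes BY idx
| sort - bytes
| head 10
The open question is how to make the violation alert attach this table to its results.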
↧
How can you change the width of a column in a table (HTML), or add a line break to a field?
Hello! I am running into a problem where my table visualization looks weird because one of my columns is too long. The column that is too long comes from a CSV lookup file. I tried adding a line break in the CSV file, but the lookup in Splunk doesn't preserve it. So I was wondering: is there a way to set that column's width on the table so the field auto-wraps, or a way to add the line break to the field itself, or a way to do it in the CSV lookup file? Thanks.
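One trick I've seen: if the long value contains a delimiter, splitting it into a multivalue field makes the table render each value on its own line. A sketch (the semicolon delimiter and the field name long_field are assumptions):
... | makemv delim=";" long_field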
↧
↧
Count of API calls over X time_taken, only if average time_taken is over a threshold
Hi,
I currently have a query that returns a chart of the APIs whose calls average over a specific time limit (unique per API). I would then like to display the count of calls with time_taken over X seconds, ONLY if that API had an average time_taken over X seconds.
Would I be correct in thinking that I should make my first search a subsearch and then search on that to find the counts of the timed-out APIs?
Here is my current search for the APIs with average time_taken over a limit:
index=mykplan_main cs_uri_stem="AAA" OR cs_uri_stem="BBB"
| eval URI=cs_uri_stem
| eval URI = lower(URI)
| stats avg(eval(time_taken*.001)) as avg_duration by URI
| eval avg_duration=round(avg_duration,2)
| eval alert=if((avg_duration > 3 AND URI="AAA") OR (avg_duration > 1 AND URI="BBB") ,"alert", "ignore")
| where alert="alert"
| fields - alert
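Alternatively, could both numbers come out of one pass with count(eval()) instead of a subsearch? A sketch (the per-API thresholds are folded into a field, and the lowercase literals assume my real URI values):
index=mykplan_main cs_uri_stem="AAA" OR cs_uri_stem="BBB"
| eval URI = lower(cs_uri_stem)
| eval duration = time_taken * 0.001
| eval threshold = case(URI="aaa", 3, URI="bbb", 1)
| stats avg(duration) AS avg_duration count(eval(duration > threshold)) AS slow_count max(threshold) AS threshold BY URI
| eval avg_duration = round(avg_duration, 2)
| where avg_duration > threshold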
↧
Rename the existing Correlation search?
Hi Splunkers,
What's the best way to rename an existing correlation search?
![alt text][1]
[1]: /storage/temp/225685-correlation-search-name.jpg
↧
How to configure custom management port for Addon setup.xml
I am trying to build a custom add-on with a setup.xml.
In the Splunk deployment that I am targeting, the management port has been changed from 8089 to 18089 (to avoid port conflicts with existing services).
After I install my app, I do see a 'Set up' action in the Apps list for my app.
But when I try to go to the Set up page by clicking on the link, I get a 404.
I've read about other add-ons with setup pages indicating that the problem may be that the management port is not 8089.
How do I configure my app to use the correct management port?
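The one related setting I've found is in web.conf; my reading (not verified on a custom port) is that Splunk Web needs to be told about the non-default management port:
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
mgmtHostPort = 127.0.0.1:18089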
↧