Hello,
I don't understand why `| table host Wear_Rate` returns nothing, even though `index="x" sourcetype="wmi:BatteryFull" OR sourcetype="wmi:BatteryStatic"` by itself returns results.
Could you help me, please?
index="x" sourcetype="wmi:BatteryFull" OR sourcetype="wmi:BatteryStatic"
| dedup host
| eval Wear_Rate = 100-(FullChargedCapacity *100/DesignedCapacity)
| table host Wear_Rate
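For what it's worth, if `FullChargedCapacity` and `DesignedCapacity` come from different sourcetypes, the single event kept by `dedup host` may be missing one of them, so the `eval` produces null and the table looks empty. A sketch that combines the fields per host first (assuming one value of each per host):

```
index="x" sourcetype="wmi:BatteryFull" OR sourcetype="wmi:BatteryStatic"
| stats latest(FullChargedCapacity) as FullChargedCapacity latest(DesignedCapacity) as DesignedCapacity by host
| eval Wear_Rate = 100 - (FullChargedCapacity * 100 / DesignedCapacity)
| table host Wear_Rate
```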
↧
Help on a basic eval that doesn't return results
↧
Timechart data from summary index
Hi, I have a summary index with events like this:
3/06/2019 00:00:00 +0000, search_name=ABCD , search_now=1551916800.000, info_min_time=1551830400.000, info_max_time=1551916800.000, info_search_time=1551916803.490, SourceType="up2date-too_small", date="2019-03-06", info_max_time="1551916800.000", info_min_time="1551830400.000", info_search_time="1551916803.490", info_sid="scheduler_612483161search_RMD5b9c004924d61345e_at_1551916800_2619_CB367A2F-91DF-4379-90F8-63AC41173EAB", sum(b)=960, ABCD, search_name
Now I want to use the date from this event and plot a timechart by SourceType. Can anyone help, please?
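A sketch of what such a timechart could look like, assuming the fields are extracted as shown in the sample event (`index=summary` is a placeholder for wherever the summary events live):

```
index=summary search_name=ABCD
| rename "sum(b)" as b
| eval _time = strptime(date, "%Y-%m-%d")
| timechart span=1d sum(b) by SourceType
```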
↧
Best way to keep learning Splunk even if you are not working on it day in day out
Recently my project changed to something totally different from what I have been doing (Splunking). But since I love Splunk so much, I don't want to lose it as a skill set, considering that it is one of the hottest technologies out there. My question to the experts is: what do you recommend as the best ways to keep in touch with Splunk?
What I have been doing till now:
1) Installed Splunk N box on my MacBook, and keep learning clustering stuff every other day.
2) Following the forum (answers.splunk.com).
3) Reading a book that I recently bought.
↧
Schema Accelerated Event Search performance
I am super stoked about the potential of Schema Accelerated Event Searches. It might be one of the best improvements I've seen, if I could actually get it to work, but it doesn't. :-(
Don't focus on the fact that I'm only returning the count of events; performance doesn't differ if I return the raw events (which is ultimately what I want to do). I'm just doing the count so I can make an apples-to-apples comparison.
So consider the following two searches over 15 minutes of data:
**SEARCH # 1**
|tstats summariesonly=true count from datamodel="Web" where Web.user="dmerritt"
The value returned was 25. The search itself took 2.676 seconds
**SEARCH # 2**
|from datamodel Web|search user=dmerritt|stats count
The value returned was 106. The search itself took 2 minutes, 14 seconds.
**QUESTIONS:**
1) Why the HUGE difference in performance?
2) Why is the result count different?
NOTE: I am running Splunk 7.1.5.
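On question 2, one hedged guess: with `summariesonly=true`, `tstats` reads only data already covered by the data model's acceleration summary, so events that have not been summarized yet are skipped, which can also explain much of the speed gap. A variant that scans unsummarized data too (slower, but closer to an apples-to-apples result count with search #2):

```
| tstats summariesonly=false count from datamodel="Web" where Web.user="dmerritt"
```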
↧
Real-time job keeps being killed
Hi
I have a real-time search over the past 5 minutes; however, it works for 30 seconds and then it dies.
Any ideas?
I have this search at the top of my HOME page so users log in and see data flowing into the system from their hosts.
![alt text][1]
![alt text][2]
Thanks in advance
Robert :)
[1]: /storage/temp/270738-2019-03-07-16-36-04-home-splunk-716.png
[2]: /storage/temp/270737-2019-03-07-16-32-26-search-splunk-716.png
↧
[Bug] Edit Macro UI on "all configuration"
When I try to edit a macro in Settings > All configurations, "edit macro" gives a 404.
![alt text][1]
[1]: /storage/temp/269715-capture.png
It seems the generated URL uses `../data/..` when it should be `../admin/..`.
Is there any way to fix the rendered URL to use `/admin/` instead of `/data/`?
↧
Palo Alto app configuration page shows "Page not found"
The Palo Alto app was previously working and all dashboards displayed data; now it's not working. I'm working through the troubleshooting steps here https://splunk.paloaltonetworks.com/troubleshoot.html and I'm pretty sure it's a parsing issue. I'm working with the networking team (who set up the PA) to check the syslog settings. But in the meantime, when I go to the Configuration page, I get Buttercup and a "Page not found" error. I'm assuming this is a separate issue, and I have no idea what to do. Can anyone help me?
We have only 1 Splunk server, running 7.2.4.2, on-prem. Both the Palo Alto app (6.1.1) and TA (6.1.1) are installed.
↧
Memory leaks in Windows Server
Hi,
We are trying to find memory leaks in Windows servers by tracking the value of the Virtual_Bytes counter. How can we track the value, check whether it keeps growing, and determine for how long it kept increasing?
> 03/07/2019 11:27:38.497 -0500 collection=Process object=Process counter="Virtual_Bytes" instance=winlogon Value=2199081287680
> host = host1   index = perfmon   source = Perfmon:Process   sourcetype = Perfmon:Process
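A sketch of one way to spot sustained growth, assuming the field names shown in the sample event (adjust the index/sourcetype to your environment):

```
index=perfmon sourcetype="Perfmon:Process" counter="Virtual_Bytes"
| sort 0 host instance _time
| streamstats current=f last(Value) as prev_value by host, instance
| eval delta = Value - prev_value
| where delta > 0
| stats count as growth_samples min(_time) as growth_start max(_time) as growth_end by host, instance
| eval growth_duration = tostring(growth_end - growth_start, "duration")
```

This counts the samples where the counter grew and reports the window over which growth was observed; it does not prove the growth was uninterrupted.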
Thanks,
Om
↧
Splunk Port 9997 SSL
Attempting to set up a new Splunk 7.2.4.2 server on Red Hat 7 using our own cert. Splunk Web works fine with HTTPS using our cert. I configured inputs.conf and server.conf to allow SSL for receiving from forwarders, but I get the following ERROR in splunkd.log:
TcpInputConfig - SSL context not found. Will not open splunk to splunk (SSL) IPv4 port 9997
inputs.conf and server.conf are as follows:
**inputs.conf**
[default]
host = myserver.com
[splunktcp-ssl:9997]
disabled = 0
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycert.pem
sslPassword = mypassword
requireClientCert = false
**server.conf**
[general]
serverName = myserver.com
pass4SymmKey = symmkey
[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/rootcert.pem
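Not a confirmed diagnosis, but "SSL context not found" is commonly logged when splunkd cannot load the server certificate, e.g. a wrong `sslPassword`, an unreadable file, or an incomplete PEM. Splunk expects `serverCert` to point at a single PEM containing the certificate, its private key, and the CA chain concatenated, roughly:

```
# expected layout of mycert.pem (sketch; certificate contents elided)
-----BEGIN CERTIFICATE-----
... server certificate ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... matching private key (sslPassword must decrypt it if encrypted) ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... intermediate/root CA certificate(s) ...
-----END CERTIFICATE-----
```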
Also, perhaps a related issue:
ERROR IntrospectionGenerator:resource_usage - KVStoreConfigurationProvider - Unable to read an X509 cert from '' file
Thanks!
↧
Can this report be improved, and can the month be shown on the x-axis?
At this time, if I run this over 4 months, the x-axis shows month and day; all I want to see is the month.
And can this search be improved?
Grazie
index=response source=responsetimes
| table _time, ACTION_TIME
| sort - ACTION_TIME
| convert rmcomma(ACTION_TIME)
| eval ACTION_TIME = (ACTION_TIME/1000)
| timechart avg(ACTION_TIME) as "Average" span=1d
| append [search index=response source=responsetimes
| table _time, ACTION_TIME
| sort - ACTION_TIME
| convert rmcomma(ACTION_TIME)
| eval ACTION_TIME = (ACTION_TIME/1000)
| eventstats p5(ACTION_TIME) as top5perc
| where ACTION_TIME < top5perc
| timechart avg(ACTION_TIME) as "Top 5%" span=1d]
| append [search index=response source=responsetimes
| table _time, ACTION_TIME
| sort - ACTION_TIME
| convert rmcomma(ACTION_TIME)
| eval ACTION_TIME = (ACTION_TIME/1000)
| eventstats p95(ACTION_TIME) as bottom5perc
| where ACTION_TIME > bottom5perc
| timechart avg(ACTION_TIME) as "Bottom 5%" span=1d]
| timechart first(*) as *
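For what it's worth, a sketch of a single-pass variant that drops the two `append` subsearches; `span=1mon` also answers the x-axis question by bucketing per month (untested against your data; same field names and percentile logic as above):

```
index=response source=responsetimes
| convert rmcomma(ACTION_TIME)
| eval ACTION_TIME = ACTION_TIME / 1000
| eventstats p5(ACTION_TIME) as p5 p95(ACTION_TIME) as p95
| timechart span=1mon avg(ACTION_TIME) as "Average" avg(eval(if(ACTION_TIME < p5, ACTION_TIME, null()))) as "Top 5%" avg(eval(if(ACTION_TIME > p95, ACTION_TIME, null()))) as "Bottom 5%"
```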
↧
Can we limit the number of choices in a multiselect input?
Can we limit the number of choices in a multiselect input? I want a multiselect input built from a search, and to limit the number of selections, e.g. to 2.
↧
JSON milliseconds not taken from timestamp
Hi!
I have a json log and dedicated sourcetype for it. Sourcetype looks like this:
[json]
disabled=false
KV_MODE=json
pulldown_type=true
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD=200
TIMESTAMP_FIELDS=@timestamp
TRUNCATE=50000
SHOULD_LINEMERGE=false
But when it comes to viewing the event in Splunk, it looks like it does not resolve the full timestamp. I.e., the beginning of the raw event looks like this: `{"@timestamp":"2019-02-20T07:51:09.003+00:00"`
While in Splunk I see the time of the event as: `19-02-20 08:51:09,000`
It does not take the milliseconds. Do you see any mistake in the sourcetype configuration? Why are the milliseconds skipped?
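One possibility (an assumption, not a confirmed diagnosis): `TIMESTAMP_FIELDS` only takes effect together with `INDEXED_EXTRACTIONS`; for a plain `KV_MODE=json` sourcetype, `TIME_PREFIX` is the usual way to anchor timestamp parsing. A sketch of that variant:

```
[json]
disabled = false
KV_MODE = json
pulldown_type = true
# anchor on the literal @timestamp key instead of TIMESTAMP_FIELDS
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
TRUNCATE = 50000
SHOULD_LINEMERGE = false
```

Note these settings apply at index time, so they must live on the indexer (or heavy forwarder) that first parses the data, and they only affect newly indexed events.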
Best Regards,
Przemek
↧
Configuration to achieve better performance of Splunk
I want to carry out performance monitoring of Splunk. I came across this benchmark while browsing https://docs.splunk.com/Documentation/Splunk/6.1.14/Installation/Referencehardware. What was the configuration in config files like limits.conf and indexes.conf while carrying out these measurements? Were some parameters like *memPoolMB* or *max_mem_usage_mb* tuned to achieve better performance, or were they set to their default values?
↧
I have the Splunk Add-on for Amazon Kinesis, but it is not picking up the latest stream
Hi Experts ,
I have the Splunk Add-on for Amazon Kinesis, and I have set the Initial Stream Position to Latest.
Now when I disable the stream, let's say for 2 days, and switch it on today, it takes ages to show me the latest logs (stream), because Splunk is busy collecting the older logs first (the last 2 days), which I have cross-checked with a query.
What am I missing here? I have no idea; can someone please help me?
Regards
VG
↧
UF install error?
Hello,
When trying to install the Universal Forwarder 7.2.3 (Windows 64-bit) on Windows Server 2012, we receive error codes 2502 and 2503.
Your help is much appreciated.
Regards
↧
I need help with line breaking - File Monitoring - Roxio SecureBurn Log file - .txt
Every time a CD is burned with Roxio SecureBurn, a .txt log file of the CD is created. The format of the .txt log file is:
Date: Thu Mar 7 13:47:00 2019
Computer Name: ComputerName01
User Name: domain.accountname
Project includes 1 folder(s) and 2 file(s)
============================================================================================
C:\Users\accountname\Desktop\TransferFolder\file1.txt e69f78a887b(rest of file hash)a35 3764543bytes 2019/3/7 08:13:15
C:\Users\accountname\Desktop\TransferFolder\file2.txt e69f78a887b(rest of file hash)a35 7764543bytes 2019/3/7 08:13:18
END OF FILE
My props.conf for the sourcetype I created includes:
SHOULD_LINEMERGE = false
Line breaking is occurring inconsistently. Some of my events show up with each line as its own event; others include every bit of data in the .txt file as a single event. Any ideas, perhaps a more bulletproof way to force a break? Do Windows .txt files normally have inconsistencies with carriage returns or line breaking?
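In case it helps, a sketch of a props.conf stanza that forces an explicit break on every run of CR/LF characters (the sourcetype name is a placeholder; this tolerates mixed Windows/Unix line endings):

```
[roxio:secureburn]
SHOULD_LINEMERGE = false
# break events at every run of \r and/or \n characters
LINE_BREAKER = ([\r\n]+)
```

If you would rather index each whole .txt file as one event, the usual alternative is `SHOULD_LINEMERGE = true` with `BREAK_ONLY_BEFORE = ^Date:`.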
↧