index=db_apps_digital host=hst1* OR host=hst2* NOT host=hst5 NOT host=hst6 sourcetype="API.CMC-too_small"
| stats count latest(Timestamp) as latestTime by Properties.Message, Level
| eval latestTime=strftime(latestTime,"%Y-%m-%d")
| sort Level, -count
| head 10
I have gotten my search to return the expected results, giving me the count of the events with the latest date/time shown.
However, using the above eval causes the latestTime column to return blank values.
I now need to format the date/time of the search result
from 2018-09-19T21:47:31.0043487+02:00
to 2018-09-19 21:47:31.
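A possible fix, sketched under the assumption that Timestamp is a string field in ISO 8601 format: latest(Timestamp) then returns a string, and strftime expects an epoch value, which would explain the blank column. One option is to reshape the string directly with replace instead:

```
| eval latestTime=replace(latestTime, "^(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}).*", "\1 \2")
```

This keeps the wall-clock time from the original string (2018-09-19T21:47:31.0043487+02:00 becomes 2018-09-19 21:47:31) without any timezone conversion.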
↧
DateTime Format for search result
↧
How do you set up a time range from 7 PM to 2 PM for a scheduled hourly report?
We had set up a report which triggers on an hourly basis from 8 PM to 2 PM (earliest = -1d@d+20h & latest = @d+14h), but we are only getting correct reports starting from 12:00 AM; before that, it takes the last 24 hours (the 9 PM, 10 PM, and 11 PM reports).
Thanks,
Shaik Hussain
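One pattern that is sometimes used for this kind of window (a sketch only; the stanza name is hypothetical and this is not tested against your report): restrict the cron schedule to the desired hours, and make each run's time range relative to its own start rather than anchoring both ends to day boundaries.

```
[my_hourly_report]
# run at the top of each hour from 8 PM through 2 PM the next day
cron_schedule = 0 20-23,0-14 * * *
# each run covers the hour that just ended
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
```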
↧
↧
Adding new indexer to indexer cluster
If a new indexer is added to the cluster, do we need to manually push the cluster bundle from the master to the indexers, or will the cluster bundle be pushed automatically to the new indexer once it communicates with the master?
↧
Lookup file and create a pie chart by the match count in logs
I created a .csv file with error_code and Description, and I am trying to compare error_code with the logs and create a pie chart that shows all the error descriptions. I tried
Index=my_index | [|inputlookup error.csv | fields error_code | rename error_code as query]
It seems to find the right logs, but it's not giving a stats count by error_code.
Thanks.,
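One alternative that is often suggested (a sketch; it assumes error_code is an extracted field on the events and that error.csv is available as a lookup): use the lookup command to pull in the Description, then count by it.

```
index=my_index
| lookup error.csv error_code OUTPUT Description
| stats count by Description
```

Rendered as a pie chart, this gives one slice per error description. If error_code is not extracted from the events, it would need to be extracted (for example with rex) before the lookup.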
↧
I want to calculate daily license usage against a fixed quota like 800 GB
I want to calculate daily license usage against a fixed quota like 800 GB.
Thanks,
Kanth
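One common way to do this (a sketch; the 800 GB quota is taken from the question, and the rest is a standard pattern against the internal license_usage.log, which must be searched on or forwarded from the license master):

```
index=_internal source=*license_usage.log type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as daily_usage_GB
| eval pct_of_quota=round(daily_usage_GB/800*100, 2)
```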
↧
↧
How do I display all fields from a lookup file via inputlookup , but match only one in the search?
I have a lookup which has 6-7 fields. One of them is src_ip, which I'm trying to use in a search as follows:
index=myindex "searchterm" [| inputlookup "mylookup.csv" | fields src_ip] | stats values(field1) values(field2) by src_ip
Here it matches src_ip in "myindex" and brings out 3 fields, i.e., src_ip, field1, and field2. However, I want all the fields from the lookup in the results while comparing only src_ip against fields in "myindex".
Is this possible?
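One approach that is often suggested (a sketch; it assumes mylookup.csv is usable by the lookup command, which may require a lookup definition): keep the subsearch to filter events on src_ip, then run lookup afterwards to enrich each result with the remaining lookup fields.

```
index=myindex "searchterm" [| inputlookup "mylookup.csv" | fields src_ip]
| lookup mylookup.csv src_ip
| stats values(*) as * by src_ip
```

Without an OUTPUT clause, lookup returns all fields from the matching lookup row, so they survive into the stats output.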
↧
↧
Why is my PowerShell scripted input failing intermittently?
I have a PowerShell scripted input which is set to run at the start of the service. Since the servers reboot daily, this input runs once daily after the reboot. For the past few days, this input has been failing from a set of servers, but I couldn't find a related error in the splunkd or splunk-powershell logs. Also if we restart the service manually, it works, so it is just an intermittent failure.
I need help to troubleshoot this issue.
↧
What should I do about the following Office 365 AD audit log error: "TypeError: 'int' object is not iterable"?
I am having an issue with the Splunk add-on for Office 365. It had been working somewhat fine for a couple of months, and then yesterday I started getting these errors.
2018-09-19 15:32:15,292 level=ERROR pid=28491 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:67 | start_time=1537396334 datainput="mgmt_ad_audit" | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 65, in wrapper
return func(*args, **kwargs)
File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 88, in run
with app.open_checkpoint(self.name) as checkpoint:
File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/collector.py", line 258, in open_checkpoint
checkpoint = LocalKVStore.open_always(fullname)
File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/checkpoint.py", line 167, in open_always
indexes = cls.build_indexes(fp)
File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/checkpoint.py", line 174, in build_indexes
for flag, key, pos in cls._replay(fp):
File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/checkpoint.py", line 103, in _replay
flag, key, _ = umsgpack.unpack(fp)
TypeError: 'int' object is not iterable
I am still receiving the general_audit logs without issue.
Any ideas?
↧
How to run this Windows Security Operations Center app without Adobe Flash Player ?
The Windows Security Operations Center app requires Adobe Flash. Even after updating Flash Player to the latest version, the app page still says, `Splunk requires a newer version of Flash. (Minimum version: 9.0.124) Download Flash Player.`
I noticed Adobe Flash is not supported anymore in Chrome. Firefox says it supports Flash; however, the app still gives the above error.
Can someone please advise a workaround for this issue?
↧
Change cell color based on another field value
Hi,
Is it possible to change the cell color not based on the cell value but based on another cell/field value?
Thanks
↧
↧
Can we specify files in an archive to be collected?
I couldn't find a clear guideline for doing this.
Simply put, can we specify a monitor path deep inside an archive?
e.g.
[monitor:///tmp/*.gz:/archive/*.xml]
I want the UF to get the files directly instead of passing the entire archive file to the indexer and doing nullQueue routing.
↧
Parsing - Chrome Log
Hi Community,
We have an issue with one of our cloud products, and we need to collect our Chrome browser log.
So we have a log file like this:
[2368:3448:0911/104129.306:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104129.331:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104129.353:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104129.366:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104140.068:INFO:CONSOLE(9013)] "STASH-LOGGER: sendLogTraces", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9013)
[2368:3448:0911/104150.489:INFO:CONSOLE(372)] "Other topic:
{
"topicName": "channel.metadata",
"eventBody": {
"message": "WebSocket Heartbeat"
}
}", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (372)
[2368:3448:0911/104150.499:INFO:CONSOLE(312)] "Sending ping WS healthcheck", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (312)
[2368:3448:0911/104150.519:INFO:CONSOLE(372)] "Other topic:
{
"topicName": "channel.metadata",
"eventBody": {
"message": "pong"
}
}", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (372)
[2368:3448:0911/104150.519:INFO:CONSOLE(374)] "Pong WS healthcheck received.", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (374)
I made the props.conf below, but it doesn't parse correctly in my search.
My props.conf:
[chrome:log]
LINE_BREAKER=^\[\d+.\d+:\d+\/\d+.
CHARSET=latin-1
SHOULD_LINEMERGE=true
TIME_FORMAT=%m%d/%H%M%S.%3N
category=Miscellaneous
description=A common log format with a predefined timestamp. Customize timestamp in "Timestamp" options
disabled=false
pulldown_type=true
TIME_PREFIX=\[\d+.\d+
But in my search I still get 2 or more lines per event.
Can you help me :)
Many Thanks
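A sketch of a revised stanza, based only on the sample events above and not tested against the real data: LINE_BREAKER needs a capturing group around the text to break on, and once line breaking works, line merging is usually disabled.

```
[chrome:log]
CHARSET=latin-1
# break before each "[pid:tid:MMDD/HHMMSS..." header; the capture group is required
LINE_BREAKER=([\r\n]+)(?=\[\d+:\d+:\d{4}/\d{6}\.)
SHOULD_LINEMERGE=false
# timestamp like 0911/104129.306 follows "[2368:3448:"
TIME_PREFIX=^\[\d+:\d+:
TIME_FORMAT=%m%d/%H%M%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=20
```

With SHOULD_LINEMERGE=false, the multi-line JSON payloads from the extension messages stay attached to the event they belong to, because the break only happens in front of a new [pid:tid:...] header.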
↧
What is Search Head? Can it be seen in the UI?
In the Splunk architecture, it is known that Splunk has 3 major components:
1. Forwarder - an instance installed on the log collection devices
2. Indexer - a virtual instance installed with Splunk for parsing the collected logs
3. Search Head - I'm not very clear about where we have it. Can somebody explain, in non-technical words, what a Search Head is and where we can find it?
↧
How to use subsearch to search across two indexes with no common field
I have one ID in a particular index and using that I want to find events in another index.
My search looks like this -
index=abc_test [ search index=xyz_test 12345 | stats latest(xyzID) as xyzID | fields xyzID ] | table _time, _raw
Basically, in my index abc_test, I have the value of xyzID, but with a different field name. So here I just want to see all events that contain the value of xyzID. But this search gives me "No results found". When I run these two searches individually, I get results. E.g., index=xyz_test 12345 | stats latest(xyzID) as xyzID | fields xyzID gives me xyzID=56789,
and when I search index=abc_test 56789 I get events. But in the subsearch format it is not working.
Can someone please suggest what is going wrong here?
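One common fix (a sketch): a subsearch normally returns field=value pairs, so this one expands to xyzID=56789, which only matches events that actually have a field named xyzID. Renaming the field to search makes the subsearch return the bare value instead, so it behaves like typing 56789 into the outer search:

```
index=abc_test [ search index=xyz_test 12345 | stats latest(xyzID) as search | fields search ]
| table _time, _raw
```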
↧
↧
Chart ordering with span
I am generating a basic chart with the following command:
index=test | eval latency = (_indextime - _time) | chart count by latency span=10
The order of the columns in the chart seems to be lexical, so I get:
0-10,10-20,100-110,110-120, ...
How do I adjust the command so that the columns are ordered numerically?
0-10,10-20,20-30,30-40, ...
NB: there are too many bins for a solution using "rename" to be practical.
Many thanks
Graham
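One workaround that is sometimes used (a sketch): derive a numeric sort key from each bin label, sort on it, then drop it, so no per-bin renaming is needed.

```
index=test
| eval latency=(_indextime - _time)
| chart count by latency span=10
| eval sort_key=tonumber(mvindex(split(latency, "-"), 0))
| sort 0 sort_key
| fields - sort_key
```

sort 0 removes the default row limit so all bins stay in numeric order.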
↧
Splunk stopped collecting Syslog logs
I installed Splunk last week, and I'm only collecting data (syslog) from one source.
Data stopped being collected this morning. I used Wireshark on the source server and on the Splunk server, and I can see that the syslog traffic goes out and comes in, but I don't see the logs in Splunk. The latest event is from 3 hours ago.
License: Trial license group
License expiration Nov 17, 2018 4:04:30 PM
Licensed daily volume 500 MB
Volume used today 121 MB (24.135% of quota)
OS Windows 10 (Microsoft Windows [Version 10.0.16299.15])
SPLUNK Version:7.1.3 Build:51d9cac7b837
↧
Is there a way to calculate bandwidth requirements for Splunk index replication in a indexer cluster?
Basically, the situation is this:
A customer asked what their bandwidth requirements would be for the replication between indexers.
Say the license size per day is 200 GB; with compression of roughly 50%, the indexed data stored should be about 100 GB.
Now they have 2 indexers in the cluster with a replication factor of 2 and a search factor of 2.
So my calculation is below (not sure if it is correct).
Based on the Splunk docs, the 50% consists of the following:
15% for the rawdata file.
35% for associated index files.
Total rawdata = (100*0.15)* 2 (this is the rep factor) = 30 GB
Total index files = (100*0.35)* 2 (this is the search factor) = 70 GB
So a total of 100GB of data will be replicated.
For the bandwidth calculation of 100 GB per day:
(100/86400)*1024*1024 = 1213.63 KB/s
This is what I have come up with so far. Any advice would be appreciated.
Also, what happens if it is a multisite cluster?
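The arithmetic above can be sanity-checked with a short script that follows the poster's own numbers (100 GB of replicated data per day, spread evenly over 86,400 seconds):

```python
# Estimate average replication bandwidth from daily replicated volume,
# reproducing the calculation in the question.

def replication_bandwidth_kb_s(daily_gb: float) -> float:
    """Average KB/s needed to move `daily_gb` gigabytes over one day."""
    seconds_per_day = 86400
    kb = daily_gb * 1024 * 1024  # GB -> KB
    return kb / seconds_per_day

rawdata_gb = 100 * 0.15 * 2      # 15% rawdata x replication factor 2 = 30 GB
index_gb = 100 * 0.35 * 2        # 35% index files x search factor 2 = 70 GB
total_gb = rawdata_gb + index_gb # 100 GB replicated per day

print(round(replication_bandwidth_kb_s(total_gb), 2))  # ~1213.63 KB/s
```

This confirms the ~1213.63 KB/s figure; whether the 30 GB + 70 GB volume model itself is right (e.g. whether the original copy should be excluded from replication traffic) is a separate question.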
↧