Hi All,
I have to monitor a folder containing very large files with automatically generated file names.
Is there a way (instead of writing a custom UNIX script that moves only the small files to another folder, which the forwarder would then monitor) to blacklist files larger than, say, 10 MB?
Any other suggestion with Splunk stanza attributes is appreciated.
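For reference, the blacklist attribute in inputs.conf matches a regular expression against the full file path, not the file size, so it only helps if the large files are recognizable by name. A minimal sketch (the monitored path and pattern are placeholders):

    [monitor:///var/log/myapp]
    # blacklist is a regex tested against the full file path; there is no
    # size-based attribute, so this only works if the huge files follow a
    # recognizable naming pattern
    blacklist = \.(gz|zip|dump)$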
Thanks a lot,
Edoardo
↧
Blacklist files greater than a certain size from inputs.conf
↧
What is the difference between crcSalt and CHECK_METHOD=modtime?
I know both settings can make Splunk index the whole file again.
What is the difference between the two?
Is there something one can do that the other cannot?
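For context, the two settings live in different files: crcSalt is an inputs.conf attribute that changes how the initial CRC of a monitored file is computed (with <SOURCE>, the full path is mixed in, so a renamed or copied file is treated as new), while CHECK_METHOD is a props.conf setting on a source that replaces content checksumming altogether. A sketch of where each one goes (the paths are placeholders):

    # inputs.conf
    [monitor:///var/log/myapp/app.log]
    crcSalt = <SOURCE>

    # props.conf
    [source::/var/log/myapp/app.log]
    CHECK_METHOD = modtime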
↧
↧
"Parameter name: Path is not readable" - Splunk Add Monitor Command Error
Hello Team Splunk,
I am trying to add a monitor for a log file. When I do this as either the 'splunk' user or the 'root' user, I receive the following error: "**Parameter name: Path is not readable.**" I noticed that as the 'splunk' user I cannot read the file with the *vi* program; however, I can read it as the root user. So why would I receive this error if the 'root' user can read the file and I am running the ./splunk program as 'root'? I also noticed that the log files I am trying to forward are on a network file system mounted on the operating system (OS). I am not sure whether this mount makes a difference.
I can also add the entire directory, but not the specific file I want to forward to the indexer. And when I monitor the entire directory, the indexer only picks up some other out-of-date log file, not the log file I am after. I noticed that the files in this directory are executable, except for the specific log file I am trying to monitor.
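A hedged way to narrow this down from the shell (the mount point and file name below are placeholders): splunkd may be running as a non-root user even when the CLI is invoked as root (see SPLUNK_OS_USER in splunk-launch.conf), and NFS mounts exported with root_squash map root to nobody, so root's local privileges do not guarantee read access on the mount:

    # Which user is splunkd actually running as?
    ps -ef | grep splunkd
    # Can that user traverse the directory and read the file?
    sudo -u splunk ls -l /mnt/netlogs/
    sudo -u splunk head -n 1 /mnt/netlogs/app.log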
Regards,
rogue_carrot
↧
Translating strings in search strings
In my search strings I often rename columns using "AS". Is there a way I can expose those as parameters or something so that when I generate a message.pot file they are included?
Thanks
↧
Combining three (x,y) coord series into one graph
Hi,
I have 3 simple graphs generated by these three queries, respectively:
index="app_event" | eval starttime = strftime('payload.beginVal', "%F %T.%9Q") | chart count(starttime) as BeginVal by starttime
index="app_event" | eval endtime = strftime('payload.endVal', "%F %T.%9Q") | chart count(endtime) as EndVal by endtime
index="app_event" | eval othertime = strftime('payload.anotherVal', "%F %T.%9Q") | chart count(othertime) as OtherVal by othertime
The count values are always 1, so the coordinates can be assumed to look like:
1. (1,1) , (3,1) (7,1)
2. (2,1), (5,1) (11,1)
3. (4,1), (8,1)
I want to merge these three charts into one chart, keyed by x-axis value, so that the resulting chart looks like
(1,1) (2,1) (3,1) (4,1) (5,1) (7,1) and so on. But when I hover over a bar, I want to know the source of that column, i.e. whether it is BeginVal, EndVal, or OtherVal.
Could someone please help me with the query.
Thanks!
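A possible starting point (an untested sketch reusing the three evals above): tag each event with its series name, append the three searches, and chart by time and series so the hover tooltip identifies the source:

    index="app_event" | eval time=strftime('payload.beginVal', "%F %T.%9Q"), series="BeginVal"
    | append [search index="app_event" | eval time=strftime('payload.endVal', "%F %T.%9Q"), series="EndVal"]
    | append [search index="app_event" | eval time=strftime('payload.anotherVal', "%F %T.%9Q"), series="OtherVal"]
    | chart count by time, series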
↧
↧
Cannot see all Splunk servers using REST
Trying to get a list of all servers. I have a three-tiered setup: SH, IDX, HF.
| rest splunk_server=* /services/server/status/resource-usage/hostwide
This only shows the SH and the IDX.
If I run the command locally on the HF, I get the expected output.
Why is this? Is it possible to configure the SH to know about the HF?
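The `| rest splunk_server=*` pattern only fans out to the search peers the search head knows about, and a heavy forwarder is normally not one of them. If it suits your environment, a hedged option is to add the HF as a distributed search peer of the SH (host and credentials below are placeholders):

    splunk add search-server https://hf.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword hfpassword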
Thanks
↧
Drop field name from lookup table similar to return function
Hi All,
To give some context: the return command in Splunk, when used in a subsearch, lets you drop the field name by prefixing the field with "$". For example, the subsearch [search index=A | fields test | return $test] returns just "B" and "C" rather than test=B or test=C.
If I create a search like index=A [inputlookup lookup.csv | return $test] (which doesn't work), is there any way to return only the value from the inputlookup, "B", and not test=B? Or is there another way to do this?
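One hedged workaround (untested sketch, assuming the lookup has a test column): inputlookup needs a leading pipe inside a subsearch, and renaming the field to search makes the subsearch emit bare values instead of field=value pairs:

    index=A [| inputlookup lookup.csv | fields test | rename test as search | format]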
Thanks
↧
Multivalue field extraction
Hello,
I cannot get multivalue field extraction to work. I have the following event. The last four lines (timestamp and message) should be extracted as separate multivalue fields, together with the value following the From: section on the first line. I used props.conf and transforms.conf (MV_ADD), but with no success.
From: "Rnvr"
Subject: Control Center System Event
Date: Fri, 15 Jun 2018 18:14:07 +0400
Message-ID:
Return-Path: r@cou.ge
Received: from mail.cou.ge (LHLO mail.cou.ge) (192.168.222.10) by
mail.cou.ge with LMTP; Fri, 15 Jun 2018 18:13:58 +0400 (GET)
Received: from localhost (localhost [])
by mail.court.ge (Postfix) with ESMTP id 75C1519E007B
for ; Fri, 15 Jun 2018 18:13:58 +0400 (+04)
[2018-Jun-15 06:04:42 PM (GET)] Hardware event occurred (The controller write policy has been changed to Write Back.) on server
[2018-Jun-15 06:04:43 PM (GET)] Hardware event occurred (The virtual disk cache policy has changed.) on server
[2018-Jun-15 06:04:44 PM (GET)] Hardware event occurred (The virtual disk cache policy has changed.) on server
[2018-Jun-15 06:13:16 PM (GET)] Digital input 'Digital Input 1' deactivated.
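For reference, a minimal sketch of a props/transforms pair that should pull the bracketed lines out as multivalue fields (the sourcetype name and regexes below are assumptions, untested against your data):

    # transforms.conf
    [cc_event_lines]
    REGEX = \[(\d{4}-\w{3}-\d{2} [^\]]+)\]\s+(.+)
    FORMAT = event_time::$1 event_message::$2
    MV_ADD = true

    [cc_from]
    REGEX = ^From:\s+"([^"]+)"
    FORMAT = from::$1

    # props.conf
    [your_mail_sourcetype]
    REPORT-cc = cc_from, cc_event_lines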
↧
"ttl" in alert_actions.conf is ignored.
I configured the following in `etc/system/local/alert_actions.conf`:
[email]
ttl = 1209600
I thought a scheduled alert job whose action sends email would expire after 14 days.
But my scheduled alert ignored this limit: the search activity page showed the alert expiring after 18 days.
Did I make a mistake?
If someone can tell me more about this, I would appreciate it.
Additional Information:
Splunk ver : 6.6.6
Alert schedule : 0 8 * * 1
earliest : -6d@w1
latest : @w1
I didn't configure any ttl settings in savedsearches.conf
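One thing that may be worth ruling out (a hedged sketch; the stanza name is a placeholder): settings from alert_actions.conf can be overridden per search in savedsearches.conf, and the job's dispatch.ttl is separate from the email action's ttl, so pinning both explicitly for the alert would show whether the [email] ttl is really being ignored:

    # savedsearches.conf
    [My scheduled alert]
    # override the email action's artifact lifetime for this search only
    action.email.ttl = 1209600
    # lifetime of the search job artifact itself
    dispatch.ttl = 1209600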
↧
↧
Splunk Add On for Google Cloud Platform - message="Not enough time to send data for indexing."
Hi,
Splunk Version - Splunk 7.0.2 (build 03bbabbd5c0f) - Role: Heavy Forwarder
Splunk_TA_google-cloudplatform version = 1.2.0
I have configured pub/sub inputs to collect logs from Google Cloud Platform. Following the recommendations in the Splunk documentation below, I created 5 cloned pub/sub inputs for throughput and performance.
https://docs.splunk.com/Documentation/AddOns/released/GoogleCloud/Troubleshoot
Large pub/sub subscriptions
For large pub/sub subscriptions, we recommend cloning existing inputs that are ingesting the same subscriptions to increase data throughput and performance. These identical inputs can be in the same instance or in different instances.
To manage a large number of subscriptions to one Splunk instance, aggregate subscriptions belonging to the same Google Cloud Service account into one input to save resources.
I see that data is intermittently not being indexed.
Checking the pub/sub logs I found this error:
xxxx-xx-xx xx:xx:xx,xxx level=ERROR pid=2383 tid=MainThread logger=splunk_ta_gcp.modinputs.pubsub pos=pubsub.py:_try_send_data:201 | datainput="gcp_qa_pubsub_all_2" start_time=xxxxxxxxxx| message="Not enough time to send data for indexing." lapse=8.34614610672 ttl=10
What is the reason for this error? And how do we fix it?
↧
AWS logs via Kinesis Firehose Splunk destination and HTTP Event Collector are getting indexed but not displaying in the Splunk apps
I have AWS CloudTrail, VPC Flow Logs, and CloudWatch Logs being indexed and searchable in Splunk via Kinesis Firehose -> Splunk destination -> HTTP Event Collector -> index, but the Splunk App for AWS does not display any data.
How do you configure the Splunk App for AWS to use this Splunk-recommended input architecture and display the AWS log data?
The same question applies to Splunk Enterprise and Splunk Enterprise Security: they show no data, yet a search such as index="aws_vpc_flow_logs" shows all the logs.
https://www.splunk.com/blog/2018/01/12/power-data-ingestion-into-splunk-using-amazon-kinesis-data-firehose.html
↧
Hi All, I would like to know how to hook a callback into the Splunk lightweight forwarder
While pumping logs from a device to Splunk through the lightweight Splunk forwarder (LWF), if the device loses connectivity to the Splunk machine due to some issue, the LWF has to notify us by calling a registered callback.
Does the existing LWF framework have a provision to hook in a callback?
If not, is there any other asynchronous mechanism to handle the above use case?
↧
Dashboard panel is empty, but running the search shows results
A panel in my dashboard shows nothing: completely blank, no error, nothing. However, when I open the panel's search and run it in the Search app, the query returns the proper results.
Any idea what is happening? Leads would be helpful.
TIA.
↧
↧
Prevent tstats from truncating large fields
I have an accelerated data model with a field with large strings in it.
When I use the SPL
| datamodel dm_name ds_name search | table *
I can see the whole fields.
When I use tstats:
| tstats latest(_time) as _time latest(ds_name.data) as data from datamodel=dm_name.ds_name
  where nodename=ds_name
  groupby ds_name.id prestats=true
the data fields are truncated.
I tried to change [stats] maxvaluesize in limits.conf without success. There seems to be no such config for tstats.
How can I prevent tstats from truncating large fields?
↧
Is there any way to resolve multiple similar alerts at once in Alert Manager?
Hi,
We got 100 alerts for a similar issue and need to resolve them in one go.
When the alerts trigger, we assign them to ourselves; but when we filter on the title description, we find 100 such alerts. Right now we are resolving them one by one.
↧
Does the ttl configured for a report start counting from the report's execution time, or does it restart when I check the report's results in Splunk Web?
I made the following settings in `alert_actions.conf`.
[email]
#14days
ttl=1209600
I thought that the expiration date of a report executed at `6/11 8:00 AM` would be `6/25 8:00 AM`.
However, when I checked the search activity, the expiration date was `6/29 16:56`.
I then checked the dispatch directory and found that only the timestamp of the file `generate_preview` was `6/15 16:56`. (`6/29 16:56` is exactly 14 days after `6/15 16:56`.)
Based on the following material, I believe this file is updated when the report results are viewed from the GUI:
https://www.splunk.com/blog/2012/09/10/a-quick-tour-of-a-dispatch-directory.html
In other words, if I view the report in Splunk Web, is it the intended behavior that the ttl countdown restarts from that time?
If someone knows about this, please tell me.
↧
Enhance search results with a subsearch on different sourcetypes? (DNS src IP & timestamp with DHCP IP & timestamp)
Hello Splunkers!
For some time I have been trying to figure out how to correlate the results of a DNS blacklist check with DHCP logs, matching the time of each event in the DNS log with its counterpart in the DHCP log.
Let's say I run the following query to get results of my DNS Blacklist hits:
index="msad" sourcetype="msad:nt6:dns" questionname="BLACKLISTED_DOMAINS" source_ip!="8.8.8.8"
| table _time source_ip
| dedup source_ip
This gives me a nice table showing the hosts (by IP) attempting access to blacklisted domains and the most recent time it happened.
Now I want to use the resulting table as input to a search (DHCP, or any other log that can correlate IP to hostname over time) that resolves the resulting IPs to hostnames at the time of each event.
I can't figure this out. I've tried running a subsearch, but to my understanding it accepts only single values as input (so I can feed it the IPs, but I lose the time, and in a dynamic DHCP environment the results might indicate a different host for past events).
Is this possible? How? :)
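A hedged sketch of one approach (untested; the DHCP index, sourcetype, and the field names dest_ip and dest_nt_host are assumptions): the map command runs a secondary search per result row and can carry both the IP and the event time through token substitution:

    index="msad" sourcetype="msad:nt6:dns" questionname="BLACKLISTED_DOMAINS" source_ip!="8.8.8.8"
    | dedup source_ip
    | eval earliest_t=floor(_time-86400), latest_t=ceiling(_time)
    | map maxsearches=100 search="search index=dhcp sourcetype=dhcp dest_ip=$source_ip$ earliest=$earliest_t$ latest=$latest_t$ | head 1 | eval blacklisted_ip=\"$source_ip$\", dns_time=$latest_t$"
    | table dns_time blacklisted_ip dest_nt_host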
↧
↧
How to create the below alert?
I have the two events below for a host, where EventCode=6005 means PC ON and EventCode=6006 means PC OFF. I want to create an alert that fires when a host or computer has been off for more than two hours. Basically, it should take the latest event per host, check whether its EventCode is 6006 (off), subtract that event's time from now, and alert if the difference is greater than 2 hours. How can I do that? (See the sketch after the sample events below.)
6/25/18
6:09:23.000 AM
06/25/2018 05:09:23 AM
LogName=System
SourceName=EventLog
EventCode=6005
EventType=4
Type=Information
ComputerName=USOLPWDW7361HNK.NAO.global.gmacfs.com
TaskCategory=None
OpCode=None
RecordNumber=358246
Keywords=Classic
Message=The Event log service was started.
6/25/18
6:08:14.000 AM
06/25/2018 05:08:14 AM
LogName=System
SourceName=EventLog
EventCode=6006
EventType=4
Type=Information
ComputerName=USOLPWDW7361HNK.NAO.global.gmacfs.com
TaskCategory=None
OpCode=None
RecordNumber=358233
Keywords=Classic
Message=The Event log service was stopped.
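A hedged sketch of one way to do this (untested; the index and sourcetype names are assumptions based on typical Windows event log inputs): take the latest 6005/6006 event per host, keep the hosts whose last event is 6006, and compare that event's time with now():

    index=wineventlog sourcetype="WinEventLog:System" (EventCode=6005 OR EventCode=6006)
    | stats latest(EventCode) as last_code latest(_time) as last_time by ComputerName
    | where last_code=6006 AND (now() - last_time) > 7200
    | eval off_since=strftime(last_time, "%F %T")

Scheduled, say, every 15 minutes with a trigger condition of "number of results > 0", this would alert on each host that has been off for more than two hours.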
↧
Joining four tables into one?
Hi,
I have a dashboard in which one of the panels features a table, currently made out of 4 separate searches (technically 4 tables just next to each other), like so:
![alt text][1]
The searches for each one look like this:
base search... | stats latest(AvailabilityFlex) AS Availability latest(RollOutFlex) AS RollOut latest(LeadershipFlex) AS Leadership
where for the other rows the stats command looks at the corresponding metrics, e.g.
base search ... | stats latest(AvailabilitySub) AS Availability latest(RollOutSub) AS RollOut latest(LeadershipSub) AS Leadership
Is there an easy way of combining these searches into one table with the same structure it currently has, i.e. a table with 4 columns and 4 rows, where the first column holds the metric name for each row? (See the sketch at the end of this post.)
Thanks,
Sam
EDIT: The reason for this is that when you generate the PDF, it really stretches out the tables, making them look much less professional. If anyone knows how to keep panels grouped together when doing this, that would also work!
[1]: /storage/temp/252068-table.png
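If the four searches share the same base, a hedged way to combine them (untested sketch; the Flex/Sub suffixes are taken from the two searches above, and the remaining two rows would follow the same pattern) is to label each stats result with its metric name and append them:

    base search ... | stats latest(AvailabilityFlex) AS Availability latest(RollOutFlex) AS RollOut latest(LeadershipFlex) AS Leadership
    | eval Metric="Flex"
    | append [search base search ... | stats latest(AvailabilitySub) AS Availability latest(RollOutSub) AS RollOut latest(LeadershipSub) AS Leadership | eval Metric="Sub"]
    | table Metric Availability RollOut Leadership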
↧
How to modify the timewrap legend?
Hi! I am trying to modify the legend generated by the timewrap command. I saw that it can be slightly changed with the "series" parameter, but that is not really giving me what I want.
Let's say I want the sum of the prices from this query:
index=sandbox earliest=-13d | timechart sum(prices) as "Sum of the prices" span=d | timewrap 1w series=relative
The legend will be `Sum of the prices_1week_before` and `Sum of the prices_latest_week`. I would like to have something like `Sum of the prices for the week before` and `Sum of the prices for the latest week`.
How can I get this? Thanks!
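One hedged option (untested sketch): rename the wrapped series after timewrap with a wildcarded rename, since the generated field names end in the suffixes quoted above:

    index=sandbox earliest=-13d
    | timechart sum(prices) as "Sum of the prices" span=1d
    | timewrap 1w series=relative
    | rename "*_1week_before" as "* for the week before", "*_latest_week" as "* for the latest week"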
↧