Hi folks,
I've installed a HF on a SCOM server to collect SCOM logs into Splunk. On the HF I've installed the [Splunk Add-on for Microsoft System Center Operations Manager][1], which collects logs using scheduled PowerShell scripts. The logs are indeed collected, but not at the interval I expected. One of my collection stanzas, named "Events", uses the default quartz cron setting, which is `0 0 * ? * *`. This should mean the logs are collected every hour, but they are not; they are only collected at midnight.
The add-on GUI on the HF shows `0 0 * ? * *` for the collection stanza, and so do the `schedule` setting in the `[powershell://_Splunk_TA_micosoft_scominternal_used_Events]` stanza in `inputs.conf` and the `interval` setting in the `[Events]` stanza in `microsoft_scom_task.conf`. Yet the logs are only collected at midnight.
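For reference, here is how I read the schedule in the two cron dialects; I can't confirm which one the add-on's scheduler actually applies:

    Quartz (6-field) reading of 0 0 * ? * *:
        sec  min  hour  day-of-month  month  day-of-week
         0    0    *         ?          *         *       -> second 0, minute 0 of every hour
    Standard 5-field cron reading of 0 0 * * *:
        min  hour  day-of-month  month  day-of-week
         0    0         *          *         *            -> 00:00 every day (midnight)

If the scheduler were treating `0 0` as minute/hour rather than second/minute, that would line up with the midnight-only runs, but I can't confirm that is what the add-on does.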
Anyone got an idea on why this is, or how I could go forward in troubleshooting this?
[1]: https://splunkbase.splunk.com/app/2729/
↧
Why don't my quartz scheduler cron settings in the Splunk Add-on for MS SCOM work?
↧
Native Splunk Password Expiry Alert - does it work?
I have set up the native Splunk password policy on my company's implementation, but it seems like the password expiration alert does not work as expected. Today many users are complaining that their passwords have expired without them receiving, or noticing, any warning.
I was assuming that the 15-day alert would be a highlighted bar at the top of the Splunk page (fine for daily users), but for occasional users I was expecting an email. Reading over the docs, I can only find information on how to set this up, not any detail on what it actually does.
↧
↧
splunk license: _internal vs event length
I'm trying to understand how Splunk calculates license usage. There is a particular index, "snort", which receives some JSON input, and reports show this index's usage has increased significantly. If I run this query
index=_internal source=*license_usage.log type=Usage idx=snort
| stats sum(b) as bytes
| eval MB = round(bytes/1024/1024,1)
| fields MB
it reports 9 GB for a given period. If I estimate the length of each event and sum these values like this
index=snort
| eval len_raw = len(_raw)
| stats sum(len_raw) as bytes
| eval MB = round(bytes/1024/1024,1)
| fields MB
it gives me 18 MB, i.e., there is roughly a 500x difference. I understand there may be discrepancies due to encoding (ASCII vs UTF-8), but that would make a 2x difference, not 500x. Other sources let me estimate the size and number of events, and 18 MB seems to be about the right number. Any ideas why the numbers reported in the _internal logs are so different?
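For a finer-grained comparison, here is a sketch that splits the licensed volume by the standard Usage fields in license_usage.log (st = sourcetype, s = source, h = host), assuming those fields are populated for this index:
index=_internal source=*license_usage.log type=Usage idx=snort
| stats sum(b) as bytes by st, s, h
| eval MB = round(bytes/1024/1024,1)
| sort -MB
| fields st, s, h, MB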
↧
Is it possible to create a Choropleth Map by city?
Hi,
I have a Choropleth Map for this search:
....
| iplocation Ip, City
| stats count by Country
| geom geo_countries featureIdField=Country
Is it possible to create such a map by City?
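For illustration, this is roughly what I'm picturing, assuming a custom city-level featureCollection (a hypothetical lookup named geo_my_cities) could be installed alongside the built-in geo_countries:
....
| iplocation Ip
| stats count by City
| geom geo_my_cities featureIdField=City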
Thanks in advance
↧
How can I get a license ID?
Hello All,
Could you help me get a license ID? I want to renew a license for Splunk Enterprise.
↧
↧
Subsearch time range
Hello,
I'd like to run a subsearch with a different time range than the parent search. I have to get MAC addresses, and I need a bigger time range to see results in the DHCP logs. Can you help me figure out what's wrong with this?
index=fw src_translated_ip="$subsearch_src_ip$"
| dedup src_ip
| rename src_ip as dest_ip
| join type=left max=1 dest_ip [ search index=dhcp earliest=-1h@h sourcetype=isc:dhcp dhcp_type=DHCPACK ]
| table dest_ip dest_mac
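For comparison, here is a sketch of the same join with the subsearch window widened and the subsearch trimmed to just the join fields; it assumes the DHCP events actually carry dest_ip and dest_mac:
index=fw src_translated_ip="$subsearch_src_ip$"
| dedup src_ip
| rename src_ip as dest_ip
| join type=left max=1 dest_ip
    [ search index=dhcp sourcetype=isc:dhcp dhcp_type=DHCPACK earliest=-24h@h latest=now
      | fields dest_ip dest_mac ]
| table dest_ip dest_mac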
thanks
↧
Format different dates in Splunk 7.1.1
We have a field, say XYZ, with date-time values, but the format is not the same for all values. For some values the format is "MM/DD/YYYY HH:MM:SS AM/PM", for others "YYYY/MM/DD HH:MM:SS", and so on.
We have to put all the date-time values into the same format and then calculate the number of days from each date until today.
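A rough sketch of one approach, assuming only the two formats mentioned above (more strptime patterns can be added to the coalesce as needed):
...
| eval epoch = coalesce(strptime(XYZ, "%m/%d/%Y %I:%M:%S %p"), strptime(XYZ, "%Y/%m/%d %H:%M:%S"))
| eval XYZ_normalized = strftime(epoch, "%Y-%m-%d %H:%M:%S")
| eval days_until_today = floor((now() - epoch) / 86400)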
↧
unable_to_write_batch in db connect add-on
When installing and configuring the add-on, the following problem occurred:
2018-08-21 18:10:29.047 +0300 [QuartzScheduler_Worker-6] INFO org.easybatch.core.job.BatchJob - Job 'FULL_DB' started
2018-08-21 18:10:29.301 +0300 [QuartzScheduler_Worker-6] INFO c.s.dbx.server.dbinput.recordwriter.HecEventWriter - action = write_records batch_size = 1000
2018-08-21 18:10:29.301 +0300 [QuartzScheduler_Worker-6] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action = writing_events_via_http_event_collector
2018-08-21 18:10:29.322 +0300 [QuartzScheduler_Worker-6] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action = writing_events_via_http_event_collector record_count = 1000
2018-08-21 18:10:29.559 +0300 [QuartzScheduler_Worker-6] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action = unable_to_write_batch
java.io.IOException: HTTP Error 400: Bad Request
I use Splunk DB Connect 3.1.3 with JRE 1.8.0_181 and the PostgreSQL JDBC driver on a Windows operating system.
When I test my SQL query with the DB Connect SQL Explorer, I get the correct data from my PostgreSQL database.
When I use an input in rising or batch mode, I get this HTTP error.
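One troubleshooting step, assuming DB Connect is writing through the local HTTP Event Collector (which is what the HecEventWriter lines suggest), would be to check whether splunkd's HEC handler logged a reason for the 400:
index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
| table _time, host, _raw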
↧
How to have a different color for each bar in a bar chart?
I have tried to color each bar of the bar chart differently for the following query, but haven't found an answer that satisfactorily resolves my question.
`
index=some_value summ_type=some_value
| stats dc(state) as "STATE" by IDMODEL
| sort -"STATE"
| table "STATE", IDMODEL
| rename "IDMODEL" as "MODEL ID"
| head 10
`
I have tried the following solution, found online, but didn't get the desired result.
`
`
↧
↧
Does the SplunkJS Stack have the Dashboard Editor in it?
I want to use the Dashboard Editor to edit a dashboard, but cannot find such a component.
↧
How to write a cron schedule for alerts every 5 minutes between 6 AM and 11 PM CST, every day, in Splunk?
How do I write a cron schedule for alerts that run every 5 minutes between 6 AM and 11 PM CST, every day, in Splunk?
I have written as:
*/5 6-23 * * *
Please let me know whether this is correct.
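For reference, my field-by-field reading of that expression:
*/5 6-23 * * *
 |    |  | | +-- day of week (any)
 |    |  | +---- month (any)
 |    |  +------ day of month (any)
 |    +--------- hour (06 through 23)
 +-------------- minute (every 5th minute)
As I understand it, this fires every 5 minutes from 06:00 through 23:55 in the scheduler's time zone; if the alerts should stop at exactly 23:00, the hour range may need adjusting.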
↧
Get top combination from a multi value field
Hi, I have a multivalue field with data something like the following, which has been extracted from a web service.
I am looking to find the combination which occurs the most times:
Event 1 Combo 1 -
A
B
C
D
Event 2 Combo 2 -
B
C
D
F
Event 3 Combo 3 -
G
B
Q
R
There could be many different combinations. I want to compare these combinations and get the one which occurs in the most events.
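A rough sketch of what I mean, assuming the extracted multivalue field is named combo_values (a placeholder name):
...
| eval combo = mvjoin(mvsort(combo_values), " + ")
| stats count by combo
| sort -count
| head 1
The mvsort makes the comparison order-insensitive; it can be dropped if the order of the values matters.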
↧
Dynamic input in Dashboard Panel
Hi,
We want to create a dashboard with dynamic inputs. For example, we will provide a dropdown for the sourcetype. Depending on the selected sourcetype, different text input boxes should be shown so that the user can enter field values and get search results.
In short, different input fields should dynamically appear on the dashboard depending on the sourcetype, as in the sketch below.
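A minimal Simple XML sketch of the behaviour we are after; the sourcetype values and token names here are just placeholders:
<fieldset>
  <input type="dropdown" token="stype">
    <label>Sourcetype</label>
    <choice value="access_combined">Web access</choice>
    <choice value="wineventlog">Windows events</choice>
    <change>
      <condition value="access_combined">
        <set token="show_web">true</set>
        <unset token="show_win"></unset>
      </condition>
      <condition value="wineventlog">
        <set token="show_win">true</set>
        <unset token="show_web"></unset>
      </condition>
    </change>
  </input>
  <!-- each text box only appears when its token is set -->
  <input type="text" token="clientip" depends="$show_web$">
    <label>Client IP</label>
  </input>
  <input type="text" token="eventcode" depends="$show_win$">
    <label>EventCode</label>
  </input>
</fieldset>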
Thanks,
Prashant
↧
↧
Find Time between events, including current Time.
Hello all,
I've seen examples of how to find time between events using streamstats, and also to find the time since the most recent event using stats, but how would I accomplish doing both?
Ultimately I'm trying to detect a loss of information that's reported every 10 minutes, so I'm using streamstats to search for differences of more than 10 minutes; however, this "outage" isn't detected until after the data is reported again, since only then does streamstats have two items to compare. I need all of these deltas, and also the time since the most recent event occurred.
Thanks, and here's some code I have:
search
| streamstats current=t last(_time) as last_time by field
| eval outage= last_time - _time
| eval outage=tostring(outage, "duration")
| table field _time outage
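One way I can picture getting both in a single search is sketched below; it assumes the results are in the default descending time order, and the appendpipe adds one synthetic row per field holding the gap from the newest event to now:
search
| streamstats current=f window=1 last(_time) as next_time by field
| eval outage = next_time - _time
| appendpipe
    [ stats max(_time) as _time by field
      | eval outage = now() - _time ]
| eval outage = tostring(outage, "duration")
| table field _time outage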
↧
Upload txt file - metafields source and sourcetype not searchable
Hello,
I just uploaded a txt file with some logs through the GUI (Add Data -> Upload).
Data is indexed, and I can search it by typing
index = test
I can see that all metadata fields like source and sourcetype have been assigned according to my settings, but...
when I search for
source = my_source
or
sourcetype = mylogfile.txt
I get zero results.
I know that no stanzas are generated in inputs.conf when you upload a file, but is this normal behaviour when uploading files?
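As a sanity check, this is the search I plan to run to see exactly which source and sourcetype values were actually assigned to the uploaded events:
index=test
| stats count by source, sourcetype, host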
↧
Heavy Forwarders as an intermediary Layer Using indexer discovery
Hey,
we are using multiple HFs to collect data from different groups of UFs before sending it to a multi-site indexer cluster. I want to activate indexer discovery to make it easier to size and change the indexer cluster. I only know the process from UFs and am wondering whether it is the same for HFs. Do I just change outputs.conf on the HF, similar to the changes I make on the UFs when activating indexer discovery?
I tried it in my test environment and have had problems getting it to work. Should it work that way? I just want to check whether I have the right idea or whether there is something fundamentally wrong with my understanding of indexer discovery.
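For reference, a minimal outputs.conf sketch of the UF-style configuration in question; the stanza names, hostname, and pass4SymmKey are placeholders:
# outputs.conf on the heavy forwarder
[indexer_discovery:cluster1]
pass4SymmKey = <key configured on the cluster master>
master_uri = https://cluster-master.example.com:8089

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group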
Thanks, Chris
↧
Do Accelerated Table Datasets need a root event?
The Table Datasets acceleration [documentation][1] lays out the process of accelerating a table dataset data model object. Because table datasets differ from normal data models, they have to be created through the Table Datasets add-on in the Search app.
After creating a Table Dataset, I moved the Table Dataset datamodel object to the app I wanted it to live in, then accelerated it. Currently, I can access the datamodel in a '| tstats' pipe by using the following:
| tstats summariesonly=true avg(foo) FROM datamodel:My_TableDataset
Which, according to the [Accelerate Datasets documentation][2], does not leverage the benefit of acceleration due to syntax:
"To do this, you identify the data model using FROM datamodel=:"
Events are being populated correctly in the datamodel, also viewable when using Pivot.
Whenever I try to use the correct 'datamodel=My_TableDataset' syntax, I get the following error:
"Error in 'TsidxStats': Invalid or unaccelerable root object for datamodel"
Because Table Datasets aren't created/defined the same way as normal datamodels, what does this mean? How do I troubleshoot this issue and access the accelerated table benefits from this dataset?
Info from the Data Model settings page:
MODEL
- Datasets ... 1 Search Event
- Permissions ... Shared Globally. Owned by nobody.
ACCELERATION
- Status ... 100% Completed.
Type = table
[1]: http://docs.splunk.com/Documentation/Splunk/7.1.2/Knowledge/Acceleratetables
[2]: http://docs.splunk.com/Documentation/Splunk/7.1.2/Knowledge/Acceleratedatamodels
↧
↧
How to see Events coming into the Indexer?
I am forwarding Windows events from Graylog to a load-balancing point in front of a UF using a TCP input, and then forwarding to my indexers. I can see in metrics.log on the UF that data is coming in, and I can see data coming into the indexer from the IP of my UF. But when I search, I am not seeing that sourcetype.
Where can I look to see what might be happening on the indexer?
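For what it's worth, this is the check I know of for per-sourcetype throughput on the indexer, using the standard metrics.log fields (group, series, kb):
index=_internal source=*metrics.log group=per_sourcetype_thruput
| stats sum(kb) as kb by series
| sort -kb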
Thanks!
↧
Splunk 7 upgrade - ERROR DispatchThread - Failed to read runtime settings: File :/opt/splunk/var/run/splunk/dispatch/subsearch_***/runtime.csv does not exist
Hi All,
We just upgraded to Splunk 7, and a subsearch started auto-finalizing after the 9000s timeout. Running this search by itself takes ~220s.
search.log shows a long list (900s' worth) of entries like:
`ERROR DispatchThread - Failed to read runtime settings: File :/opt/splunk/var/run/splunk/dispatch/subsearch_tmp_###/runtime.csv does not exist`
I've seen plenty of [old Splunk Answers posts][1] about this being a known issue in Splunk 6 and that it should be suppressed. I'm curious whether others are seeing this in Splunk 7, and whether there is a better explanation of what is happening and how to resolve it.
[1]: https://answers.splunk.com/answers/104690/error-dispatchthread-error-reading-runtime-settings-file-does-not-exist-splunk-6-0-upgraded.html
↧
Unable to extract all matching values in a single line; the interesting field only captures the first matching value
The string is a single line, and I am unable to extract all matching values from it.
The interesting-fields extraction in Splunk only captures name1; name2, name3, and name4, for example, are not extracted.
May I request your help?
SNMPVariable value='name1' (oid='enterprises.14179.2.2.1.1.3.0.39.227.7.142.160', oid_index='', snmp_type='OCTETSTR'),
SNMPVariable value='name2' (oid='enterprises.14179.2.2.1.1.3.0.163.142.226.49.48', oid_index='', snmp_type='OCTETSTR'),
SNMPVariable value='name3' (oid='enterprises.14179.2.2.1.1.3.160.35.159.93.36.0', oid_index='', snmp_type='OCTETSTR'),
SNMPVariable value='name4' (oid='enterprises.14179.2.2.1.1.3.160.35.159.93.55.112', oid_index='', snmp_type='OCTETSTR')
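A sketch of the kind of extraction I'm after, using an inline rex with max_match=0 so every match lands in a multivalue field (the field name snmp_name is just a placeholder):
...
| rex max_match=0 "value='(?<snmp_name>[^']+)'"
| table _time snmp_name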
↧