We will be deploying forwarders outside of our network and using SSL. These forwarders will forward the raw data to another forwarder just inside our network. Once the raw data arrives inside our network, it will be sent on to another (heavy) forwarder in our intranet. Once this heavy forwarder receives the data, it sends it on to the indexers.
Question:
Once the original raw data, sent by the forwarders outside our network, arrives on the first forwarder inside our network, is it possible to send it on to the heavy forwarder in our intranet without using SSL in outputs.conf?
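To make the question concrete, here is roughly the setup I have in mind for that first internal forwarder (hostnames, ports, and certificate paths are placeholders, not values from our environment):

# inputs.conf on the first forwarder inside our network: SSL from the outside forwarders
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <certificate password>

# outputs.conf on that same forwarder: plain TCP onward to the heavy forwarder
[tcpout:internal_heavy_forwarder]
server = heavy-fwd.example.local:9998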
Thx
↧
SSL Forwarder to Forwarder
↧
How is assessing the health and stability of disk space done in Splunk? Also, what is meant by defining gaps in Splunk?
I would like to assess the health and stability of disk space on my Linux system, which forwards data to my Splunk search head. How do I go about it in Splunk?
Additionally,
what are gaps in Splunk? How do you define gaps in Splunk?
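For example, if the host were running the Splunk Add-on for Unix and Linux with the df scripted input enabled, would something like this be a reasonable starting point for disk space health? (The index and field names below are assumptions based on that add-on's defaults.)

index=os sourcetype=df
| stats latest(UsePct) as used_pct latest(Avail) as available by host, MountedOn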
Thank you
↧
↧
Wildcards not working in sourcepath in inputs.conf
I am having trouble getting my inputs.conf to work with wildcards.
I am monitoring a single directory where 2 different files are generated, and each one should be indexed with a different sourcetype.
Path: C:\Splunkfiles\faturamentoSAP\
File 1: splunk.YYYYMMDDHHMMSS_1.CSV
File 2: splunk.YYYYMMDDHHMMSS_2.CSV
File 1 should be indexed with sourcetype 1, and file 2 with sourcetype 2.
I have configured both source paths under "Data inputs » Files & directories" as below, but it is not working:
C:\Splunkfiles\faturamentoSAP\*1.CSV
C:\Splunkfiles\faturamentoSAP\*2.CSV
If I configure the path as "C:\Splunkfiles\faturamentoSAP", it brings both files into the same sourcetype, which is not what I want.
How do I configure the source path using wildcards?
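For reference, here is roughly what I expect the equivalent inputs.conf stanzas to look like (the sourcetype names are placeholders):

[monitor://C:\Splunkfiles\faturamentoSAP\*_1.CSV]
sourcetype = sourcetype_1
disabled = 0

[monitor://C:\Splunkfiles\faturamentoSAP\*_2.CSV]
sourcetype = sourcetype_2
disabled = 0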
Many thanks and regards.
↧
combining 2 fields (values) from different sources (A OR B) with the same sourcetype (test) in a single search
I am trying to perform a ratio calculation on 2 fields (values) coming from different sources with the same sourcetype. Individually my searches work fine and return correct values, but when I combine them I get really weird results. Many thanks for your help!
(sourcetype= "test" source=A host =192.168.1.1 fieldA=* ) OR source=B
eval sourceA=round(fieldA/1000),2
eval sourceB=round(fieldB/1000),2
stats max(sourceA) as SA max(sourceB) as SB
|eval percent_ratio=round(SA/SB,2)
|stats max(percent_ratio)
Legend:
fieldA (sourceA),
fieldB (sourceB)
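For comparison, one pattern I have seen suggested for this kind of two-source ratio, shown here as a hedged sketch only (it assumes fieldA occurs only in source A and fieldB only in source B, and it applies the sourcetype filter to both sources):

sourcetype="test" ((source=A host=192.168.1.1 fieldA=*) OR (source=B fieldB=*))
| stats max(eval(round(fieldA/1000, 2))) as SA max(eval(round(fieldB/1000, 2))) as SB
| eval percent_ratio=round(SA/SB, 2)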
↧
How to make a multi-select token with dynamic options that defaults to selecting all?
I have a dashboard which will dynamically return values in a separate multiselect token based on user input to an initial token. However, I would like the multiselect token to default to having all options selected with the ability to uncheck any option and remove them from results.
The functionality I am looking for is nearly identical to the "Fields to use for predicting" input in the Machine Learning Toolkit, but I cannot figure out how to see the source code in that app. The only difference is that I would like to default to having all values selected.
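To illustrate the shape I am after, here is a Simple XML sketch. The populating search, the token names, and the trick of setting the form token from a done handler are all assumptions on my part; I have not confirmed this works across Splunk versions.

<input type="multiselect" token="field_tok">
  <label>Fields</label>
  <fieldForLabel>field</fieldForLabel>
  <fieldForValue>field</fieldForValue>
  <search>
    <query>index=main $initial_tok$ | fieldsummary | table field | eventstats values(field) as all_fields | eval all_fields=mvjoin(all_fields, ",")</query>
    <done>
      <!-- hypothetical: pre-select everything the search returned;
           all_fields is a comma-separated list read from the first result row -->
      <set token="form.field_tok">$result.all_fields$</set>
    </done>
  </search>
</input>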
↧
↧
Pie chart - Customise the labels Font size
How do I customise the font size of the pie chart labels?
Please post a working snippet.
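One approach I have seen suggested is a CSS override in a hidden HTML panel. This is a sketch only; the selector is a guess and may differ between Splunk versions, so inspect the rendered SVG to confirm it.

<panel>
  <html depends="$alwaysHideCSS$">
    <style>
      /* hypothetical selector: enlarge the label text inside the pie chart panel */
      #my_pie_chart svg text { font-size: 16px !important; }
    </style>
  </html>
  <chart id="my_pie_chart">
    <search><query>index=_internal | stats count by sourcetype</query></search>
    <option name="charting.chart">pie</option>
  </chart>
</panel>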
↧
Timestamp is wrong
Hi,
In my logs I send a time flag in epoch format.
The syntax of the log is:
time=XXX|user=YYY...
For my custom sourcetype I defined the following in props.conf:
SHOULD_LINEMERGE = 0
TIME_FORMAT = %s
TIME_PREFIX = time=
I also tried TIME_FORMAT = %s%3N
The max timestamp lookahead is 128, although the time is the first field in my logs.
Unfortunately, the time shown under the Time column is the time of my Splunk server and not the time in the log itself.
Where is my mistake?
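For comparison, here is a sketch of the full stanza as I understand it should look. The sourcetype name is a placeholder, and note that TIME_* settings only take effect on the instance that parses the data (an indexer or heavy forwarder, not a universal forwarder):

# props.conf on the parsing tier (indexer or heavy forwarder)
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^time=
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10

If the epoch value includes milliseconds, %s%3N with a lookahead of 13 would be the variant to try.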
Thank you
↧
When adding "no_appending_timestamp = true" logs stop coming in
Hi all,
I have a UDP port 1514 that I forward syslog data to (it is a homelab; I am aware a syslog server with a forwarder would be better).
My current config in /opt/splunk/etc/apps/search/local/inputs.conf is:
[udp://1514]
connection_host = ip
index = syslog
sourcetype = syslog
But now, logs are coming in as:
Sep 15 21:12:10 10.0.60.1 Sep 15 20:13:28 HOSTNAME dnsmasq[28957]: reply apps[.]splunk[.]com is 54.186.82.128
So I would like Splunk not to prepend the timestamp. Several resources indicate that I should add "no_appending_timestamp = true" to my config, which would make it the following:
[udp://1514]
connection_host = ip
index = syslog
sourcetype = syslog
no_appending_timestamp = true
But when I do this, the logs are not coming in anymore (at least, I cannot query them). Does someone know what the problem is?
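One hedged way to check whether the events are still arriving but landing with unexpected timestamps, rather than not arriving at all, is to search the index over a very wide time range and compare event time to index time:

index=syslog earliest=0
| eval lag_seconds = _indextime - _time
| stats count min(_time) as earliest_event max(_time) as latest_event max(lag_seconds) as max_lag

If events show up here with timestamps far from now, timestamp parsing (rather than ingestion) would be the likely culprit.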
↧
Installing Universal Forwarders on Linux hosts running as Search Heads, Indexers, Deployment Server, etc
Hi everyone,
I had a hard time figuring out the confusing (but an excellent effort nonetheless) documentation for the Splunk Add-on for Unix and Linux. I have gone through the docs and Answers but am still not 100% sure. My questions are:
- In a distributed environment, where I want to collect logs from my search head (cluster), indexers, cluster master, license master, deployment server, heavy forwarders, etc., must I install a Linux universal forwarder on each of them? It is clear that the universal forwarder must be installed on plain Linux hosts, but what about these Splunk instances that are themselves running on Linux? All my Splunk instances' logs are forwarded to the indexers, btw.
- Specifically for indexers, the documentation states:
"If the indexer is also a *nix host and you want to collect *nix data from it, complete the procedure at Enable the data and scripted inputs within the Splunk_TA_nix add-on on the host."
Seeing the above, do I need to install a Linux universal forwarder on the indexers?
- What about the cluster master, deployment server, etc. that are also running on Linux? Do I need to install a Linux universal forwarder on them, or do I just install the Splunk_TA_nix add-on, or both? I don't see any mention of them in the docs. (See the sketch after this list for what I understand the add-on-only option to look like.)
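For context, here is the kind of change I understand "enable the data and scripted inputs within the Splunk_TA_nix add-on" to mean on a full Splunk instance (the intervals are placeholders, and only the inputs you actually need should be enabled):

# Splunk_TA_nix/local/inputs.conf on a full Splunk instance (no separate UF)
[script://./bin/df.sh]
interval = 300
disabled = 0

[script://./bin/cpu.sh]
interval = 60
disabled = 0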
Thanks for the advice in advance.
↧
↧
Can the Share button in reports be disabled?
Hello,
Can the Share button in reports be disabled or enabled based on the role?
Is there any capability for this?
Thanks.
↧
combine multiple values of a table column based on another column's value
I have a table in my panel with columns including owner, country, position, and wbs.
Right now, separate rows are created if one owner has multiple wbs values.
I want a single row per owner that shows multiple wbs values comma-separated.
Right now I have
owner wbs
abc 100
xyz 101
abc 102
abc 103
it should show
owner wbs
abc 100,102,103
xyz 101
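Appended to the base search that produces the table, a sketch of the usual approach (it assumes country and position are constant per owner; field names are taken from the question):

| stats values(country) as country values(position) as position values(wbs) as wbs by owner
| eval wbs = mvjoin(wbs, ",")

values() also de-duplicates and sorts, which matches the desired 100,102,103 output.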
↧
How to create a timechart of the number of concurrent events
I am trying to aggregate the number of concurrent executions in data like the following:
CallID,JoinTime,LeaveTime,CallState
146792,2018-08-01 07:59:19,2018-08-01 08:22:11,COMPLETED
146794,2018-08-01 08:00:00,2018-08-01 09:59:00,COMPLETED
146795,2018-08-01 08:00:00,2018-08-01 09:01:21,COMPLETED
146796,2018-08-01 08:00:08,2018-08-01 08:22:08,COMPLETED
- JoinTime: start time
- LeaveTime: end time
The time between the start time and the end time is the execution time, and I would like to create a timechart of the events that are running concurrently.
I have written the following search, but it does not work well:
(CallState=COMPLETED host="")
| eval duration=LeaveTime-JoinTime
| concurrency duration=duration start=JoinTime
| timechart max(concurrency)
I apologise for being a Splunk beginner; I would appreciate any guidance.
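A hedged sketch of one likely fix: JoinTime and LeaveTime are strings, so subtracting them directly yields nothing; converting them to epoch time with strptime first, and letting concurrency read the start time from _time, is the usual pattern (the host filter from the original search is omitted here):

CallState=COMPLETED
| eval start_epoch = strptime(JoinTime, "%Y-%m-%d %H:%M:%S")
| eval end_epoch = strptime(LeaveTime, "%Y-%m-%d %H:%M:%S")
| eval duration = end_epoch - start_epoch
| eval _time = start_epoch
| concurrency duration=duration
| timechart span=1m max(concurrency) as concurrent_calls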
↧
Dynamic Anomaly detection
Hi,
I have Perf, i.e. performance data (OMS), where CounterName and CounterValue are present for different computers.
I am running a saved search every 15 minutes to raise an alert, and my criteria are:
1. Any computer that shows a consistent value or range for a specific counter is at its baseline, but if it deviates for a specific interval it should trigger an anomaly. E.g. computer A shows 86% for processor time, so Splunk should not report that as an anomaly because it is the baseline for that machine; but when it deviates to 96% in the next interval, it should be reported for that specific time only.
How can I achieve this?
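A sketch of a per-computer baseline check; the index, sourcetype, field names, and the two-sigma threshold are all assumptions based on the description:

index=oms sourcetype=perf CounterName="% Processor Time"
| eventstats avg(CounterValue) as baseline stdev(CounterValue) as spread by Computer
| where CounterValue > baseline + 2 * spread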
↧
↧
Is it possible to have another column header on top of a column header?
So basically it'll be like this...
![alt text][1]
[1]: /storage/temp/256013-column.png
I want to know if there is a way for Column 2 and Column 3 to have a 'mother' (grouped) header above them.
I'm really sorry if this has already been asked; I just don't know the right term to search for on this website. Please help, and thank you in advance!
↧
Passed AppInspect but still no AppInspect badge
We've submitted our app to the AppInspect API, and the report came back with no errors and no failures. Why don't we get the AppInspect badge?
↧
event count comparison with today vs yesterday vs last week vs prior week
Hi,
I have two strings, "opened" and "exited", in my events. I need to count how many opened and exited today, and compare today with yesterday, last week, and the prior week in the form of a line chart. The legend should be like: Today-opened, Today-exited, Yesterday-opened, Yesterday-exited, and likewise for last week and the prior week. Can you please help me?
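A sketch of one common pattern using timewrap; the index and the exact match strings are assumptions, and the spans can be adjusted:

index=main ("opened" OR "exited") earliest=-14d@d
| eval action = if(searchmatch("opened"), "opened", "exited")
| timechart span=1h count by action
| timewrap 1d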
↧
Alert not triggering despite seeing results
I have an alert set up in my Splunk Enterprise Security environment that is set to trigger when we receive a notable that is marked as either *high* or *critical* urgency. This search has worked in the past but did not trigger over the weekend even though a high urgency notable was created.
When I run the search manually over the time range, I can see that it returns results, so this does not appear to be an issue with the search logic. There is also no throttling configured. The search is on a cron schedule to run every 2 minutes and looks over the last 125 seconds of events.
Looking through the audit logs, I can see the following events confirming that the saved search completed:
Audit:[timestamp=09-16-2018 13:50:24.206, user=XXXX, action=search, info=completed, search_id='scheduler__admin_YWFtX2FsbF9jdXN0b20tc2VhcmNoZXM__RMD571711d067188a19b_at_1537102200_43245_2C5B5B40-2FE3-4255-A621-60A171C4E9C0', total_run_time=2.63, event_count=0, result_count=0, available_count=0, scan_count=0, drop_count=0, exec_time=1537102203, api_et=1537102075.000000000, api_lt=1537102200.000000000, search_et=1537102075.000000000, search_lt=1537102200.000000000, is_realtime=0, savedsearch_name="New Notable - Alert", search_startup_time="2235", searched_buckets=11, eliminated_buckets=0, considered_events=0, total_slices=0, decompressed_slices=0][n/a]
Audit:[timestamp=09-16-2018 13:50:34.832, user=XXXX, action=search, info=completed, search_id='rsa_scheduler__admin_YWFtX2FsbF9jdXN0b20tc2VhcmNoZXM__RMD571711d067188a19b_at_1537102200_43245_2C5B5B40-2FE3-4255-A621-60A171C4E9C0', total_run_time=2.63, event_count=0, result_count=0, available_count=0, scan_count=0, drop_count=0, exec_time=1537102203, api_et=1537102075.000000000, api_lt=1537102200.000000000, search_et=1537102075.000000000, search_lt=1537102200.000000000, is_realtime=0, savedsearch_name="New Notable - Alert", search_startup_time="2235", searched_buckets=11, eliminated_buckets=0, considered_events=0, total_slices=0, decompressed_slices=0][n/a]
The event that should have triggered an alert occurred at 13:50:31 on 09-16-18. The second event in the audit log shows that the search completed three seconds later, at 13:50:34. Is it possible that these overlapped, which prevented the alert from being triggered? The next scheduled search ran at 13:52:24, so that one should have triggered an alert as well.
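One hedged way to rule out skipped or delayed scheduler runs around that window (the saved search name is taken from the audit entries above):

index=_internal sourcetype=scheduler savedsearch_name="New Notable - Alert"
| fillnull value="-" reason
| stats count by status, reason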
↧
↧
eval last event of 3 date fields
Hello,
I have 3 date fields (A, B, C):
in the source file: `|20180830|NULL|20180223`
How can I compare these 3 dates and extract a new field, LastEvent?
The LastEvent value in this case is `20180830`.
I suppose `| fillnull value="0"` is the way to begin,
but what about the rest? :-)
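A hedged sketch using foreach; it assumes A, B, and C are already extracted and that missing values appear as the literal string "NULL":

| foreach A B C [ eval <<FIELD>> = if('<<FIELD>>'="NULL", 0, tonumber('<<FIELD>>')) ]
| eval LastEvent = max(A, B, C)

Since YYYYMMDD values sort the same numerically as chronologically, max() picks the latest date.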
Thanks for your help
↧
In a data model, is it possible to create a new dataset and then indent the existing datasets and their corresponding children?
I have an existing data model with a dataset (root event) and a child. What I want is to indent this existing dataset under a new one, making the existing dataset a child of the new dataset that I'm going to create.
Is this possible?
Thanks!
↧
Is it possible to access values within HTML form objects rather than inputs?
Hello,
I'm in the process of designing a simple data entry form using Splunk, but I am worried that too many inputs (50+) will make the interface slow and unusable. As an alternative, I'm trying to use a simple HTML form and then extract the values when a checkbox is selected. As a simple example, I have an HTML form that has two inputs: firstname and lastname.
HTML Form
What I'd like to do is generate a result set based on those values by selecting a checkbox. I want to concatenate the values from all of the form fields and then assign the result to a single input, something along the lines of: Submit: form.firstname + form.lastname.
Once the value has been concatenated, I can use the `makeresults` command to format the data before writing it to an index.
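Assuming the concatenated value could be surfaced as a dashboard token, which would likely take custom JavaScript (the token names here are hypothetical), the makeresults step might look like:

| makeresults
| eval full_name = "$firstname_tok$" . " " . "$lastname_tok$"
| table _time full_name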
Is this possible? Any pointers would be greatly appreciated!
Best regards,
Andrew
↧