Hello and good afternoon.
I ran into the following issue and was wondering if anybody has experienced the same and/or perhaps even has a solution:
Our Splunk Indexer and Forwarder are on these versions: *Splunk 7.1.2 (build a0c72a66db66)*, *Splunk Universal Forwarder 7.1.2 (build a0c72a66db66)*. The OS on both hosts is *CentOS Linux release 7.5.1804*.
In the GUI, as the admin user, we configured certain entries for the Forwarder under **Data inputs | Forwarded inputs | Files & directories**. They are written on the Forwarder into the file */opt/splunkforwarder/etc/apps/_server_app_SERVERCLASS1/local/inputs.conf*, with *SERVERCLASS1* being the Server Class.
After adding them through the GUI, entries in the Forwarder's *inputs.conf* look, for instance, like this:
    [monitor:///home/donald.duck/splunk_upload_dir/my_app1/*syslogs.log.txt]
    disabled = 0
    index = my_app1_index
    sourcetype = my_app1_sourcetype
    blacklist = \.filepart$
    host = server1

    [monitor:///home/goo.fey/splunk_upload_dir/my_app2/*applogs.log.txt]
    disabled = 0
    index = my_app2_index
    sourcetype = my_app2_sourcetype
    blacklist = \.filepart$
    host = server2
In our environment, however, the need arose to also add a ***crcSalt =*** entry to each stanza in the Forwarder's *inputs.conf* file. Otherwise the source files won't be indexed properly, or rather "won't be displayed as Sources" I should say.
So, with respect to the above examples, the file afterwards looks as follows:
    [monitor:///home/donald.duck/splunk_upload_dir/my_app1/*disney1.log.txt]
    blacklist = \.filepart$
    disabled = 0
    index = my_app1_index
    sourcetype = my_app1_sourcetype
    host = server1
    crcSalt =
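(The actual crcSalt value was trimmed in the excerpt above and is left as-is.) For context, the form documented in the inputs.conf spec uses the literal string `<SOURCE>`, which salts the file checksum with the full source path:

```
[monitor:///home/donald.duck/splunk_upload_dir/my_app1/*disney1.log.txt]
crcSalt = <SOURCE>
```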
↧
crcSalt entries getting deleted in the Forwarder's inputs.conf when changing Forwarder Data inputs through the GUI
↧
Automate report using script
Hi,
The report is scheduled to run every 10 days from the search head; the report itself is too big to be sent out through email.
Is it possible to add a script that, on completion of the report, will send it to a different folder?
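One approach (a sketch, not from the original thread; paths are hypothetical): a legacy "Run a script" alert action on the scheduled report receives the path to the gzipped results file as its eighth argument, so a small shell script can copy the finished report elsewhere.

```shell
#!/bin/sh
# Sketch of a legacy "Run a script" alert action for a scheduled report.
# When Splunk invokes an alert script, argument $8 holds the path to the
# gzipped CSV of the report's results.

move_report() {
    results_file="$1"   # in the real script this would be "$8"
    dest_dir="$2"       # hypothetical destination folder
    mkdir -p "$dest_dir"
    cp "$results_file" "$dest_dir/report_$(date +%Y%m%d).csv.gz"
}

# The real invocation inside the script would be:
# move_report "$8" /data/reports
```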
Thanks,
↧
↧
I want to combine two chart query outputs as 1
Part A:
index=web splunk_server_group=hotel sourcetype=hotellog eventname=hotel-book earliest=-3d| eval dateyearweek = strftime(_time, "%Y-%U")| stats count(eval(like(success,"false"))) as F, count(eval(like(success,"true"))) as S by sitename, dateyearweek | eval P=((S*100)/(S+F))| chart values(P) over sitename by dateyearweek
Part B:
index=web splunk_server_group=hotel sourcetype=hotellog eventname=hotel-book earliest=-3d| eval weeknumber= strftime(_time, "%Y-%U")| chart count by sitename, weeknumber
Requirement: I want to combine both outputs as one search query.
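One possibility (a sketch, assuming `success` holds the literal strings "true"/"false"): compute both the per-week count and the success percentage in a single `stats` pass instead of two separate charts:

```
index=web splunk_server_group=hotel sourcetype=hotellog eventname=hotel-book earliest=-3d
| eval dateyearweek = strftime(_time, "%Y-%U")
| stats count as Total,
        count(eval(success="false")) as F,
        count(eval(success="true")) as S
        by sitename, dateyearweek
| eval P = round((S*100)/(S+F), 2)
| table sitename, dateyearweek, Total, P
```

A trailing `chart` over sitename by dateyearweek could be appended if the pivoted layout of Part A is still wanted.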
↧
How can I connect multiple databases in Splunk DBConnect?
I have installed and configured Splunk DB_Connect in my Splunk instance, connected one database with it, and it's working successfully.
But I want to connect multiple database servers without creating multiple db connections in Splunk DB_Connect. Is this possible?
Are there any limitations on database server connections?
Please suggest the best way to connect multiple database servers without creating multiple db connections, and please attach a link as well.
↧
How to Flatten nested XML attribute data
We have data coming in XML in the following format:
Sample Event 1:
Sample Event 2:
Please note that the data is coming exclusively in XML attributes, and not in elements.
We need to flatten out the data using SPL.
We have tried multiple combinations of spath and mvexpand. However, since the data is in attribute tags, we are not able to split it into separate rows to show it in table form when it is of the form given in the second XML event.
I am not sure we can handle this using a regex since, apart from a few, the attributes are not uniform throughout.
Can someone please help?
Thanks in advance.
Regards,
Anirban.
↧
↧
How can I join two searches on a common field?
I'm trying to append two tables on a common key. I am using `|appendcols`, but the two tables are not internally joined, just placed side by side. Am I correct to use `|appendcols`?
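`appendcols` pastes columns side by side by row position, not by key, whereas `join` matches rows on a shared field. A minimal sketch with hypothetical index and field names:

```
index=orders sourcetype=order_log
| table order_id, amount
| join type=inner order_id
    [ search index=shipping sourcetype=ship_log
      | table order_id, ship_date ]
```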
↧
Splunk 7 upgrade - "ERROR DispatchThread - Failed to read runtime settings: File :/opt/splunk/var/run/splunk/dispatch/subsearch_***/runtime.csv does not exist"
Hi All,
We just upgraded to Splunk 7 and a subsearch started auto-finalizing after 9000s timeout. Running this search by itself takes ~220s.
Search.log shows a long list of (900s worth) entries of:
ERROR DispatchThread - Failed to read runtime settings: File :/opt/splunk/var/run/splunk/dispatch/subsearch_tmp_###/runtime.csv does not exist
I've seen plenty of old answers like this one ( https://answers.splunk.com/answers/104690/error-dispatchthread-error-reading-runtime-settings-file-does-not-exist-splunk-6-0-upgraded.html ) saying this was a known issue in Splunk 6 and that it should be suppressed. Curious if others are seeing this in Splunk 7, and if there is a better explanation of what is happening and how to resolve it.
↧
How do I filter results based on approximately 115 partial values of a field?
I have a large list of products. I need to search the list while filtering out some results based on partial values of the **ProdDesc** field. Examples of ProdDesc would be something like: MD5864,WINDOWS,PROC1 or MA9874,ANDROID,PROC3, etc.
I can use `ProdDesc != *5864* AND ProdDesc != *ANDROID*...`. The problem is that the list of partial values has 112 items. When I add `ProdDesc != *[partial value]*` more than 26 times, the query returns no results at all. There seems to be a limit on how many times I can use `!=*[partial value]*`.
I'm using Splunk Enterprise version 6.5.3 and I'm an end user, not an Admin.
I would appreciate any help provided. Thank you.
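One workaround sometimes used for long exclusion lists (a sketch, assuming a hypothetical lookup file `excluded_products.csv` with a single column `ProdDesc` containing wildcarded values such as `*5864*` and `*ANDROID*`): generate the exclusion clause from the lookup with a subsearch:

```
index=products NOT [ | inputlookup excluded_products.csv
                     | fields ProdDesc
                     | format ]
```

Here `format` turns the lookup rows into `( ( ProdDesc="*5864*" ) OR ( ProdDesc="*ANDROID*" ) ... )`, and wildcards inside quoted values still act as wildcards in the base search, so the whole list lives in one maintainable file.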
↧
Splunk DB Connect: Why am I getting "The value is not set for the parameter number 1" when updating the SQL query in the Edit Input panel?
As I go through the manual process of trying to migrate queries from dbConnect v1.x to dbConnect 3.1.3, I'm having issues with the Edit Input panel.
I follow the steps.
1. Choose a valid connection - check
2. Browse structure and type SQL .... - check
3. Pick a rising column and set the checkpoint value - check
4. Update SQL to accept the checkpoint value and make sure it works.
This is where I run into a problem. The second I start typing "WHERE TIME_STAMP > ? ....", the Rising Column dropdown completely empties out and the query returns:

    com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1. ( No results found )

This makes me unable to save the query and actually set a Rising Column and a Checkpoint Value.
If I execute the same query using EVENT_TIME, things work ... but both EVENT_TIME and TIME_STAMP are 'bigint' objects, and they both show up in the batch query results. So the question would be: why is TIME_STAMP an invalid field to use as a rising column in dbConnect 3 when it works perfectly in dbConnect 1?
The target database is MS SQL Server.
↧
↧
How to set up Slack alerts with Linux Hostname Environment Variable placeholder in alert_actions.conf
I'm setting up Slack alerts and would like to deploy uniformly to our heavy forwarders. To do so, I'd have to add a placeholder to their alert_actions.conf:
    [slack]
    disabled = 0
    param.from_user = XXXXXXXXXXXXXXXXXXX #$HOSTNAME or $HOST
    param.webhook_url = https://hooks.slack.com/services/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
I was wondering if something like the Linux hostname environment variable can be used within this file, as above.
↧
split _raw data into multiple table fields
I have the following data in _raw, and I need to split the data at the semicolons into multiple fields in a table.
LOG INPUT (_raw)
    2018-08-22 10:45:19,834 ;Application 1;Status Known;SEARCH_STRING;APP_STATUS
    2018-08-22 10:44:19,834 ;Application 2;Status Unknown;SEARCH_STRING;APP_STATUS
    2018-08-22 10:43:19,834 ;Application 4;Status Offline;SEARCH_STRING;APP_STATUS
    2018-08-22 10:42:19,834 ;Application 5;Status Known;SEARCH_STRING;APP_STATUS
    2018-08-22 10:41:19,834 ;Application 3;Status Known;SEARCH_STRING;APP_STATUS
    2018-08-22 10:40:19,834 ;Application 1;Status Offline;SEARCH_STRING;APP_STATUS
I want a table that looks like
    Date | Application Name | Status | Search | Ignore
    2018-08-22 10:45:19,834 | Application 1 | Status Known | SEARCH_STRING | APP_STATUS
    2018-08-22 10:44:19,834 | Application 2 | Status Unknown | SEARCH_STRING | APP_STATUS
    2018-08-22 10:43:19,834 | Application 4 | Status Offline | SEARCH_STRING | APP_STATUS
    2018-08-22 10:42:19,834 | Application 5 | Status Known | SEARCH_STRING | APP_STATUS
    2018-08-22 10:41:19,834 | Application 3 | Status Known | SEARCH_STRING | APP_STATUS
    2018-08-22 10:40:19,834 | Application 1 | Status Offline | SEARCH_STRING | APP_STATUS
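One way to do this (a sketch; the index and sourcetype names are placeholders, and the field names follow the desired table): extract the five semicolon-separated pieces with `rex`:

```
index=your_index sourcetype=your_sourcetype
| rex field=_raw "^(?<Date>[^;]+?)\s*;(?<ApplicationName>[^;]+);(?<Status>[^;]+);(?<Search>[^;]+);(?<Ignore>[^;]+)$"
| table Date, ApplicationName, Status, Search, Ignore
```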
↧
Forwarding specific data to third-party system
I am working on a POC third-party system for some of our data and need to get data from Splunk forwarded over to it.
I was looking through this link [http://docs.splunk.com/Documentation/Splunk/6.6.3/Forwarding/Forwarddatatothird-partysystemsd][1]
[1]: http://docs.splunk.com/Documentation/Splunk/6.6.3/Forwarding/Forwarddatatothird-partysystemsd
And was hoping someone might have done what I am trying to do.
We want to send all of our Windows & IIS logs from our forwarders to the third-party system as a syslog feed.
All of our forwarders currently send directly to our backend indexers (which are a set of 3 different indexer clusters).
From looking at that link, it seems like if I want to separate data (only some sourcetypes/indexes/etc) that is getting sent from the forwarders to the other location, I have to pass the data through a heavy forwarder. I want to avoid doing this because that would mean repointing all of our forwarders to go through the heavy forwarder.
Can the division of the data be done from the forwarders themselves? Or even by making a change on the indexer side to get the raw data over to the third-party through a syslog feed?
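For reference, the indexer-side variant described on that doc page works roughly like this (hostnames and the sourcetype below are placeholders): define a syslog output group in outputs.conf on the indexers and route the chosen sourcetypes to it via props/transforms, so the universal forwarders keep sending to the indexers unchanged:

```
# outputs.conf (on the indexers)
[syslog:thirdparty_poc]
server = syslog-poc.example.com:514

# props.conf
[iis]
TRANSFORMS-route_poc = send_to_syslog

# transforms.conf
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = thirdparty_poc
```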
↧
Why is Splunk cutting off data received with the collect command?
Splunk is cutting off some data that is received through `collect` on a server.
I have already reviewed the props.conf and inputs.conf files.
Has anyone seen anything about this?
Thanks.
↧
↧
When writing a report, what are the important parameters?
Please let me know the important parameters and how they should be set without a mistake.
↧
Can I use join by using multiple fields from the main search to match a single field on the subsearch?
I have a search with the following table as output:
time customer circuit_id parent_circuit device_card
8:10 zzzzzzzz aaaaaaa bbbbbbbbbbb ccccccccccc
Is it possible to use the values of the fields "circuit_id", "parent_circuit" & "device_card" with the join command (or whatever command will work) to match a single field "prineid" from another index (main) and sourcetype (tickets)? So basically the "prineid" field of `index=main sourcetype=tickets` can have the values of aaaaaaa OR bbbbbbbbbbb OR ccccccccccc. I want the output/table to include another column "ticket", which is a field from index=main sourcetype=tickets:
time customer circuit_id parent_circuit device_card ticket
8:10 zzzzzzzz aaaaaaa bbbbbbbbbbb ccccccccccc dddd
As additional info, the main search is an alert for an outage and the subsearch looks for any tickets that may have been already opened for the outage.
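One sketch of this (the base search and ticket fields here are hypothetical): fan the three circuit fields out into a single `prineid` value, join, then collapse back:

```
... base outage search ...
| eval prineid = mvappend(circuit_id, parent_circuit, device_card)
| mvexpand prineid
| join type=left prineid
    [ search index=main sourcetype=tickets
      | fields prineid, ticket ]
| stats values(ticket) as ticket by time, customer, circuit_id, parent_circuit, device_card
```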
↧
↧
Summing two numeric fields results in a concatenation of the two fields
Hello Splunk Ninjas,
First time I've seen this: I have two fields, clearly recognised as numeric fields by Splunk. They are named:
"Put Count"
"Put1 Count"
I want to sum these fields, so I do this:
eval Put_Count_Sum= "Put Count" + "Put1 Count"
But instead of `Put_Count_Sum` being the sum of both fields, Put_Count_Sum is equal to the text string: "Put CountPut1 Count"
I understand it might have something to do with my fields having spaces, but I'm not sure how to work around that.
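In `eval`, double quotes denote string literals while single quotes refer to field values, which is why the expression above concatenates the two names instead of adding the numbers. The usual fix:

```
... | eval Put_Count_Sum = 'Put Count' + 'Put1 Count'
```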
Thank you.
↧
Regex extract just ID inside of Brackets
So I have this data:

    Aug 22 09:13:46 someservername <118>1 2018-08-22T09:13:46.743+00:00 ip.address LOGSTASH - - - {"timestamp":1534929226738,"process_id":62,"source":"OpsCodi:0","event_type":"SECURITY_MGMT_REGISTRY","data2":{"srctype":"ops_console"},"user":"U654321","target":"some.server.of.ours","message":"Add User [U123456] ","log_level":"INFO"}

I don't have a way to modify the field extractions or anything, so I'm at the mercy of Splunk. No admin rights, so I've been working on some serious Splunk-fu with my search.
> index=index sourcetype=sourcetype source="source//*.log" | multikv | mvexpand _raw | search URGP_0="User [*]*" | regex URGP_0=(\[(\w+)\]) | table URGP_0
So all I want to see is just U123456 and I intend to pipe this into a table in my dashboard once I have the regex working properly.
I am no master with regex, but I've plugged it into various checkers online and they all show that it should be working. Splunk, however, just continues to show me the full field value, which looks like this:
> User [U123456] ","log_level":"INFO"}
Yes, it's a terrible field, but prior to me putting in the mvexpand there were no fields detected, so now I at least have something to work with.
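Note that `regex` only filters events; it does not extract anything. A `rex` sketch that pulls just the ID into a new (hypothetically named) field:

```
index=index sourcetype=sourcetype source="source//*.log"
| multikv | mvexpand _raw
| rex field=URGP_0 "\[(?<user_id>\w+)\]"
| table user_id
```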
Thank you for your help with this.
↧
Add LatestEvent Column to Sparkline Chart
I have a search that is currently working to give me a sparkline for different event types. The search looks like this:
eventtype=PS-*
| chart sparkline count by eventtype
Now I can take the fields from the chart and pipe them to a table, and that works fine too. What I want to do is add a "Latest" column that displays the date of the most recent event for each event type. From there, I'd also like to add a "First" column as well.
I've tried using stats and eval, but both seem to break the sparkline.
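A sketch of one way to keep the sparkline while adding first/last event times (`stats` also supports the `sparkline` function, and the time format string here is just an example):

```
eventtype=PS-*
| stats sparkline(count) as trend, count, max(_time) as Latest, min(_time) as First by eventtype
| eval Latest = strftime(Latest, "%Y-%m-%d %H:%M:%S")
| eval First  = strftime(First,  "%Y-%m-%d %H:%M:%S")
```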
↧