Installing the Java logging JAR
I want to write some Scala that writes out to the Splunk logging API, so I went [here][1] to get started. It links [here][2] to get the JAR, but the only JARs there are for the SDK and SimData. The only logging link there points to [GitHub][3]. The GitHub link includes links to source code, which is fine, but I'm new to the Java ecosystem and I don't know how to build it.
I assume it was a mistake to link to a JAR download page that has no such JAR. So first: is there some missing link to an actual logging JAR?
Failing that, I'm happy to build from source, if someone can point me to instructions for doing that. Can anyone help?
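For anyone answering, a hedged sketch of the dependency route, assuming the artifact is published under the com.splunk.logging group as the project's pom.xml suggests (the version below is a placeholder to check against the releases page); building from source should otherwise just be a git clone followed by mvn package in the checkout:

// build.sbt: pull the logging library as a managed dependency
// (groupId/artifactId assumed from the project POM; verify the version)
libraryDependencies += "com.splunk.logging" % "splunk-library-javalogging" % "1.8.0"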
[1]: https://dev.splunk.com/enterprise/docs/java/logging-java/getstartedloggingjava/installloggingjava
[2]: https://dev.splunk.com/enterprise/downloads/
[3]: https://github.com/splunk/splunk-library-javalogging/releases
↧
Excluding a source
I have a host sending log data, and I want to exclude a specific directory from being ingested and/or indexed, but no matter what I try, the data continues to appear.
I am using a heavy forwarder that acts as the config server for the agent, and the indexer is on another instance.
The source to be excluded is "/var/log/lsyncd/lsyncd-status.log", but I'm looking to exclude the whole "/var/log/lsyncd" directory.
I have tried adding the following to $SPLUNK/apps/Splunk_TA_nix/local/inputs.conf on both the forwarder and the indexer, but the data continues to flow:
[monitor:///var/log/lsyncd]
disabled = false
I have also tried adding a blacklist option using blacklist=(*\.log) but again without the desired result.
What am I missing or how should I be configuring this?
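For reference, a minimal sketch of the two stanza forms that usually achieve this, assuming the input is defined in Splunk_TA_nix; note that blacklist takes a regular expression matched against the full file path, not a shell glob like (*\.log):

# inputs.conf: either disable the whole monitor...
[monitor:///var/log/lsyncd]
disabled = true

# ...or keep monitoring /var/log and skip the lsyncd directory
# (blacklist is a regex matched against the full path)
[monitor:///var/log]
blacklist = lsyncd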
↧
Time chart for average of duration by Channel span 1h
I have the following data and I am trying to create a time chart of the average duration by channel:
"_time",duration,CH
"2020-02-13 11:30:32.367",275,BOSRetail
"2020-02-13 12:47:59.334",202,LTSBRetail
"2020-02-13 11:02:54.025",216,BOSRetail
"2020-02-13 11:26:11.459",264,BOSRetail
"2020-02-13 11:53:03.636",179,BOSRetail
"2020-02-13 11:20:53.384",269,BOSRetail
"2020-02-13 10:58:52.428",264,BOSRetail
"2020-02-13 09:41:22.445",216,LTSBRetail
"2020-02-13 09:56:09.820",233,LTSBRetail
"2020-02-13 10:58:13.035",240,LTSBRetail
"2020-02-13 11:47:48.664",325,BOSRetail
"2020-02-13 12:21:27.147",274,LTSBRetail
"2020-02-13 11:18:59.352",235,BOSRetail
"2020-02-13 11:23:25.297",257,BOSRetail
"2020-02-13 11:03:32.007",274,HalifaxRetail
"2020-02-13 11:02:15.745",181,LTSBRetail
"2020-02-13 11:47:03.084",264,BOSRetail
"2020-02-13 15:28:01.956",260,HalifaxRetail
"2020-02-13 11:54:23.306",276,BOSRetail
"2020-02-13 11:55:58.454",215,LTSBRetail
"2020-02-13 11:00:05.081",240,HalifaxRetail
"2020-02-13 11:56:38.345",236,BOSRetail
"2020-02-13 11:49:52.787",226,BOSRetail
"2020-02-13 15:24:13.651",247,HalifaxRetail
"2020-02-13 09:31:26.887",194,LTSBRetail
"2020-02-13 11:51:59.928",262,BOSRetail
"2020-02-13 11:57:18.917",227,HalifaxRetail
"2020-02-13 09:42:04.574",171,LTSBRetail
"2020-02-13 15:25:51.943",334,HalifaxRetail
For some unknown reason, the average duration values are not reflected on the timechart produced by the query below:

| timechart span=1h avg(duration) by CH
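A minimal sketch of what usually fixes this, assuming the events live in an index (the names below are placeholders): a leading | timechart is not a runnable search on its own and needs a base search in front of it, and duration must already be extracted as a numeric field:

index=my_index sourcetype=my_sourcetype
| timechart span=1h avg(duration) AS avg_duration BY CH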
↧
How do I transpose a trellis label into a code for use in a drilldown
I have a trellis view where I break down my charts by city. The labels are something like 'Charlotte, NC'. I can make a drilldown to my details page using form.city=$trellis.value$.
The problem is that I now want to improve the performance of the target page. It currently pulls data for all 100 of my cities and then filters by city name, using a lookup table to convert 'Charlotte, NC' to 'clt', which I can then apply to a hostname filter.
index=data sourcetype=searchdata "string"
| eval fields=split(host, "."), market=mvindex(fields, 1)
| lookup sitemapping sitecode as market OUTPUT region, sitecity, sitecode
| search sitecity="Charlotte, NC"
| ...
What I would like to do is use tag::host="clt" so that I can filter the records in the initial search.
One option is to extract the code somehow from the trellis; the other is to convert the label to the code in my query before the search part runs.
I tried putting an inputlookup before the search, but that ends up filtering out all the data due to the results of the inputlookup.
| inputlookup market-mapping | search sitecity="Charlotte, NC" | fields sitecode
| search index=data sourcetype=searchdata "string" tag::host=sitecode
The inputlookup by itself returns 'clt' in the example. Running the search by itself returns my data.
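A minimal sketch of the subsearch form that converts the label to the code inline, assuming the lookup file is market-mapping as in the snippet above; a subsearch that returns a field literally named search has its value spliced into the outer query as-is:

index=data sourcetype=searchdata "string"
    [| inputlookup market-mapping
     | search sitecity="Charlotte, NC"
     | eval search = "tag::host=" . sitecode
     | fields search ]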
Thanks
↧
[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:741)
I made an alert action in Add-on Builder.
(I want to receive alert results and create a Splunk user.)
I have an error that I cannot solve:

signature="Unexpected error: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:741)"

I know the response is not coming back correctly, but how should I avoid this error?
import requests

# Pull the new-user fields out of the alert result
username = event.get('email')
password = event.get('password')
roles = event.get('roles')
data = {'name': username, 'password': password, 'roles': roles}
# Create the user via the management-port REST endpoint
response = requests.post("https://mng_uri:8089/services/authentication/users",
                         data=data, verify=False, auth=("admin", "passme"))
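An aside that may help with diagnosis, not from the original post: in CPython this OpenSSL error usually means the client started a TLS handshake against an endpoint that answered in plaintext, so it is worth probing whether the management port actually serves HTTPS (host and port below are the placeholders from the question):

import socket, ssl

# Probe whether the endpoint completes a TLS handshake at all
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
    with socket.create_connection(("mng_uri", 8089), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="mng_uri") as tls:
            print("TLS OK:", tls.version())
except ssl.SSLError as err:
    print("Port is probably not speaking TLS:", err)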
Thank you for helping me.
↧
Splunk AddOn for Salesforce UserAccountId field
Hi,
We are using Splunk to query the LoginHistory object from our Salesforce org. In the login report there are two fields: UserId and UserAccountId. May I know what values these two fields refer to? Sometimes they have the same value, sometimes different values.
Per the following release note from the Splunk add-on documentation, it states: "Version 2.0.0 of the Splunk Add-on for Salesforce supports multiple accounts or custom endpoints. Therefore, there is a new field in version 2.0.0 called **UserAccountId**."
https://docs.splunk.com/Documentation/AddOns/released/Salesforce/Releasehistory
What does this UserAccountId refer to in a LoginHistory record?
Thanks,
Aryne
↧
Table cell renderer does not work on Firefox
I have JavaScript very similar to the code below in my dashboard, which applies colors to cells in the table. Since I've customized dashboard.css I cannot use the XML color palette, so I had to use a table cell renderer.
require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Point-C: only decorate these columns
            return _(['My Column Name', 'Name']).contains(cell.field);
        },
        render: function($td, cell) {
            // Point-D: branch on the cell value
            if (cell.value == "red" || cell.value == "green" || cell.value == "yellow") {
                $td.html("");
            } else if (cell.value == "NoData" || cell.value == "null") {
                $td.html("");
            } else {
                $td.html(" " + cell.value + " ");
            }
        }
    });
    // List of table IDs to attach the renderer to
    var tableIDs = ["Mytable1", "Mytable2"];
    for (var i = 0; i < tableIDs.length; i++) {
        var table = mvc.Components.get(tableIDs[i]);
        if (table) {
            table.getVisualization(function(tableView) {
                tableView.addCellRenderer(new CustomRangeRenderer());
                tableView.render();
            });
        }
    }
});
↧
Splunk 8.0.2 report acceleration problems
Prior to updating to Splunk Enterprise 8.0.2, scheduled accelerated reports ran extremely fast:
Report A
Duration: 37.166
Record count: 314
After updating to Splunk Enterprise 8.0.2, the same report runs extremely slowly:
Report A
Duration: 418.621
Record count: 300
Given the [release notes][1] for 8.0.2, I'm not seeing any changes to acceleration or summary indexing, so is it safe to assume this is a fluke?
[1]: https://docs.splunk.com/Documentation/Splunk/8.0.2/ReleaseNotes/MeetSplunk
↧
Filtering out data (from a forwarder) on Indexer?
Hi, I have several universal forwarders deployed, and I'm getting lots of events I want to filter out.
I understand from reading answers here that I need to do this on the indexer (or else install heavy forwarders on my endpoints, which I don't want to do).
This is a raw entry that I'm trying to drop/filter out on my indexer (i.e., to keep it from using up lots of my license):
02/13/2020 10:19:09.016
event_status="(0)The operation completed successfully."
pid=1216
process_image="c:\Program Files\VMware\VMware Tools\vmtoolsd.exe"
registry_type="CreateKey"
key_path="HKLM\system\controlset001\services\tcpip\parameters"
data_type="REG_NONE"
data=""
This is the entry from inputs.conf on the forwarders that is sending some of the events I want to filter out:
[WinRegMon://default]
disabled = 0
hive = .*
proc = .*
type = rename|set|delete|create
And I have added these lines on my indexer (and restarted), but I'm still seeing the events come in:
# in props.conf (located in C:\Program Files\Splunk\etc\users\admin\search\local\props.conf):
[WinRegMon://default]
TRANSFORMS-set= setnull
# in transforms.conf (located in C:\Program Files\Splunk\etc\users\admin\search\local\transforms.conf):
[setnull]
REGEX = process_image=.+vmtoolsd.exe"
DEST_KEY = queue
FORMAT = nullQueue
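For comparison, a hedged sketch of the shape these files usually need. props.conf stanzas match a sourcetype, a source::<path>, or a host::<host>; a bare [WinRegMon://default] stanza matches nothing. This sketch assumes the indexed events still carry WinRegMon://default as their source (check the actual source and sourcetype on an indexed event first), and indexer-side filtering config normally belongs under etc\system\local or an app rather than a user's search directory:

# props.conf
[source::WinRegMon://default]
TRANSFORMS-setnull = setnull

# transforms.conf
[setnull]
REGEX = process_image=.+vmtoolsd\.exe
DEST_KEY = queue
FORMAT = nullQueue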
Thanks!
(I've been referencing many answers, including this good one:
https://answers.splunk.com/answers/37423/how-to-configure-a-forwarder-to-filter-and-send-the-specific-events-i-want.html )
↧
Need help with a regular expression to extract data
I need to extract from the _raw event below only the SPLUNKXML="..." value.
_raw
2020-02-13 01:04:18.910, COUNT="863132", URL="http://122.32.10:8080/HP/Material", SAD="GET", SPLUNKXML="201 1581573606000 049726 $658262 SPlunk - Picked 634 EA 1581399738000 ", IPCODE="111", Timestamp="2020-02-13 01:00:06.75"
Output needed:
SPLUNKXML= "201 1581573606000 049726 $658262 SPlunk - Picked 634 EA 1581399738000 "
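A minimal sketch, assuming the value never contains an escaped double quote, so everything between the quotes after SPLUNKXML= can be captured with a negated character class:

... | rex field=_raw "SPLUNKXML=\"(?<SPLUNKXML>[^\"]*)\""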
↧
Need help with Configuring Splunk Add-on for Cisco ESA
Hello All,
I have been going through multiple posts but am still not able to configure my Splunk Add-on for Cisco ESA. I have some confusion and need your opinion on it.
I have a distributed environment and have installed the Splunk Add-on for Cisco ESA on both the Search Head and the Deployment Server. The questions are:
- Where should I configure the inputs (Search Head or Deployment Server)?
- Where should I push the ESA logs (Search Head or Deployment Server)?
On the Cisco ESA, the logs are currently delivered via FTP, and I was wondering whether there is a way to push/share or access these logs from Splunk, or whether I should use the SCP method instead.
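For context, a hedged sketch of the input side once the files are reachable, with placeholder path, index, and sourcetype (the add-on's documentation lists the supported cisco:esa:* sourcetypes):

# inputs.conf on the instance that can read the transferred files
[monitor:///opt/esa_logs/mail_logs/]
sourcetype = cisco:esa:textmail
index = email
disabled = 0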
I would greatly appreciate your suggestions.
Thanks in advance,
↧
HF upgrade from v6.6 to v7.3.3
Hi All,
I am planning to upgrade a heavy forwarder from v6.6.6 to v7.3.3.
What should my approach be? Can I upgrade the HF directly to v7.3.3, or do I have to upgrade it to v7.0 first and then to v7.3.3?
Please help.
Thanks.
Regards,
Abhi
↧
Dashboard multiple lookup filters
Hi there,
I am trying to create a dashboard with some filters. Roughly:
Three boxes populated and filtered by a lookup or KV store lookup:
- cat (car manufacturer): for instance, the car manufacturer (let's say I choose Mercedes)
- subcat (type): petrol/diesel/electric (I choose a petrol filter)
- result (cars associated with the above filters): it lists the car models from Mercedes that are petrol
But then maybe I want to go back and apply two type filters, so I would return to "subcat" and choose both "petrol" and "electric"; the result would then list both types.
How can I accomplish this?
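A minimal SimpleXML sketch of the multi-value piece, assuming a lookup file called cars.csv with cat and subcat fields (all names here are placeholders); a multiselect input can join the chosen values with OR and hand them to the result panel as a single token:

<input type="multiselect" token="subcat_tok" searchWhenChanged="true">
  <label>Type</label>
  <search>
    <query>| inputlookup cars.csv | search cat="$cat_tok$" | dedup subcat | fields subcat</query>
  </search>
  <fieldForLabel>subcat</fieldForLabel>
  <fieldForValue>subcat</fieldForValue>
  <valuePrefix>subcat="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>

The result panel's search can then filter with something like | inputlookup cars.csv | search cat="$cat_tok$" ($subcat_tok$), which expands to subcat="petrol" OR subcat="electric" when two types are selected.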
Thanks!
↧
Sum multiple session durations
Hi all,
I have a very strange problem that I'm trying to solve.
I have a data source with the following fields:
- user
- dest_ip
- start_time
- end_time
I have to understand how long a user used network connections in each hour.
The problem is that I can have parallel sessions, which I cannot simply sum: I could end up with more than 60 minutes of connection in one hour, and that isn't acceptable.
In addition, I could have a connection from 10:05 to 10:10 and another from 10:45 to 10:50, so I cannot just take the start of the first and the end of the second.
Can someone hint at how to approach the problem?
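A minimal sketch of the classic merge-overlapping-intervals approach, assuming start_time/end_time parse with the strptime format shown (adjust it to the real data) and leaving out the extra step of splitting sessions that cross an hour boundary:

index=my_index
| eval start=strptime(start_time, "%Y-%m-%d %H:%M:%S"), stop=strptime(end_time, "%Y-%m-%d %H:%M:%S")
| sort 0 user start
| streamstats current=f max(stop) AS covered_until BY user
| eval eff_start=if(isnotnull(covered_until) AND covered_until > start, covered_until, start)
| eval dur=max(stop - eff_start, 0)
| bin span=1h eff_start AS hour
| stats sum(dur) AS seconds_connected BY user, hour

The streamstats line carries the furthest stop time seen so far for each user, so the overlapping portion of parallel sessions is counted only once.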
Ciao.
Giuseppe
↧
Indexed data vanishes after a few hours, leaving 0 events for a 2-3 hour time frame in a day
I have a clustered environment with monitoring set up for application logs; universal forwarders push data to the indexers.
Lately I have been facing an issue where the application logs get indexed and are available,
but after a few hours, when I search the present day's logs in Splunk, there are 0 events for a 2-3 hour time frame; the indexed data vanishes.
Not sure if this is a known issue; any help would be much appreciated.
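A hedged diagnostic sketch (the index name is a placeholder): comparing _indextime with _time shows whether the "missing" events were indexed late or stamped with the wrong event time, the two usual causes of gaps like this:

index=my_app_index earliest=-24h
| eval lag_minutes = round((_indextime - _time) / 60, 1)
| timechart span=1h count, max(lag_minutes) AS worst_lag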
Thanks
↧
Splunk Forwarder connection to Cluster Master
Hi All,
I am trying to build a query through which we can track whether all the Splunk forwarders are connected to the cluster, and I want to create an alert for when a forwarder is not able to connect.
Could you please help with the query?
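A minimal sketch of the usual silent-forwarder check, run from a search head that can see the indexers' _internal data; the hostname field and the 15-minute threshold are assumptions to adjust:

index=_internal source=*metrics.log* group=tcpin_connections
| stats max(_time) AS last_seen BY hostname
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15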
↧
Restrict user access to specific lookup table
I have a lookup table that stores employee data, mapping employee numbers to departments. In the dashboard I use the following SPL, but I don't want users to be able to query the lookup table or export it separately. Is there any way to solve this problem?
index=idx_foo | rename owner.email as user_mail | join type=left user_mail [inputlookup append=t company_emp_all.csv] | fields project, user_name, user_dept
↧
What is the best way of moving data from splunk to HDFS storage for processing using Apache Spark
We are currently trying to set up a reliable solution for moving data from Splunk to an HDFS location. This is not for archiving: we would like to move the data to HDFS so that we can process it further in the HDFS cluster using the Apache Spark processing framework. We have looked at these options:
1. Forward data from Splunk HF to an Apache NiFi syslog processor to push the data to HDFS
2. Forward data from Splunk HF to an Apache NiFi TcpListener processor to push the data to HDFS (a forwarder-side sketch for options 1 and 2 follows this list)
3. Splunk Hadoop connect (After looking at Splunk documentation, it looks like this plug-in does not work with the latest versions)
4. Splunk DSP where the data will be moved directly to Kafka and from there move to HDFS
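As promised above, a hedged sketch of the forwarder side of options 1 and 2, with placeholder hosts and ports; sendCookedData = false makes the heavy forwarder emit raw events that a TCP or syslog listener can parse:

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers, nifi

[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:nifi]
server = nifi.example.com:10514
sendCookedData = false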
Thanks in advance
Manu Mukundan
↧
Splunk shows no logs (0 events) for some time frames in a day, though the monitored logs have data
I have a clustered Splunk environment and monitoring in place for quite a few application logs.
Lately I have been encountering an issue with data collection in Splunk.
For some time frame every day (2 to 5 hours), I do not see any data even though the application server has logs generated.
For the rest of the day it works just fine.
The universal forwarders and indexers are working just fine.
This is affecting dashboards and alerts, as data is being missed.
Example log:
2020-02-13T05:01:45.249-0500 INFO 801 | UNIQ_ID=2AB2130 | TRANS_ID=00000170151fda6c-171dce8 | VERSION=18.09 | TYPE=AUDIT| UTC_ENTRY=2020-02-13T10:01:45.178Z | UTC_EXIT=2020-02-13T10:01:45.230Z,"Timestamp":"2020-02-13T10:01:45.062Z","Data":{"rsCommand":"","rsStatus":"executed","pqr":"2020-02-13T09:57:13.000Z","rsStatusReason":"executed","XYZ":"2020-02-13T09:57:29.000Z","rsMinutesRemaining":"6","remoDuration":"10","internTemperature":"12","ABC":"2020-02-13T10:00:20.000Z","Sucction"}}
Can anyone give some insight if you have faced or come across this kind of issue?
I suspect Splunk is getting confused between the timestamp at the start of the event and the date/time values inside the event (the abc, pqr, and xyz timestamps in the example log above), but that doesn't tell me how to go about solving the issue.
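If the timestamp theory is right, a hedged props.conf sketch that pins extraction to the leading stamp; the sourcetype name is a placeholder and the TIME_FORMAT must be checked against the real events:

[my_app_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30

With the lookahead capped, the datetime strings further inside the event can no longer be picked up as the event time.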
↧
Get columns that have non-zero values over time (using timechart)
Hi Team,
Can anyone help me with this?
I want to get the columns that have non-zero values over time (using timechart).
_time Column1 Column2 Column3 Column4 Column5 Column N
2/14/2020 2:11 0 0 0 0 0 0
2/14/2020 2:12 0 0 0 0 0 0
2/14/2020 2:13 1 0 0 0 0 0
2/14/2020 2:14 0 0 1 0 0 0
2/14/2020 2:15 0 0 0 5 0 0
2/14/2020 2:16 0 0 0 0 0 0
2/14/2020 2:17 0 0 0 0 0 0
2/14/2020 2:18 0 0 0 0 0 0
The query I am using (but I am not able to remove the zero-value columns):
index=servers sourcetype=server_list Columns ="*"
| timechart span=1m count as Total by Columns
| where Columns > 0
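A minimal sketch of one way to drop the all-zero columns, assuming the timechart itself is correct: unpivot with untable, keep only the series whose total is non-zero, then pivot back with xyseries:

index=servers sourcetype=server_list Columns="*"
| timechart span=1m count BY Columns
| untable _time Columns Total
| eventstats sum(Total) AS col_total BY Columns
| where col_total > 0
| fields - col_total
| xyseries _time Columns Total
| fillnull value=0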
↧