Hello Guys,
I have a Splunk instance which receives data from different environments: DEV, QA, UAT and PROD. For these we have separate indexes (DEV_app, QA_app, UAT_app and PROD_app), and they all share the same sourcetype, i.e. app.
Now the issue is that I need to split the incoming events into two indexes, i.e. I need to separate out the debug logs. Since the indexes share the same sourcetype, I can't apply a filter based on it; the debug data from DEV_app needs to go to DEV_debug, from QA_app to QA_debug, and so on.
Does anyone have a solution for this?
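For reference, the direction I've been looking at is index routing in props.conf/transforms.conf on the parsing tier (indexer or heavy forwarder), keyed off something other than the shared sourcetype, e.g. per-environment host patterns. A rough sketch, where the host patterns and the literal DEBUG marker are only assumptions about our data:
# props.conf (indexer / heavy forwarder) -- sketch only
[host::dev-*]
TRANSFORMS-routedebug = route_debug_dev
[host::qa-*]
TRANSFORMS-routedebug = route_debug_qa
# transforms.conf
[route_debug_dev]
REGEX = \bDEBUG\b
DEST_KEY = _MetaData:Index
FORMAT = DEV_debug
[route_debug_qa]
REGEX = \bDEBUG\b
DEST_KEY = _MetaData:Index
FORMAT = QA_debug
The same pair of stanzas would repeat for UAT and PROD; I haven't verified whether keying on host is acceptable in our setup.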
↧
Filtering the data to different indexes
↧
Issues with Joining: Maybe there is a better way?
We have the following search that stopped working:
| tstats summariesonly=true sum(everything.rawlen) as rawBytes from datamodel=storage_billing by splunk_server,index,everything.bucketId,host
| rename everything.* as *
| eval rawMBytes=rawBytes/1024/1024
| join splunk_server, bucketId
[ dbinspect index=*
| eval rawSizeMB=rawSize/1024/1024
| fields splunk_server, bucketId, path, state, startEpoch, endEpoch, modTime, sizeOnDiskMB,rawSizeMB ]
| search state=cold
| eval compression=sizeOnDiskMB/rawSizeMB, newRawMBytes = rawMBytes * compression
| eventstats sum(rawMBytes), sum(newRawMBytes) by splunk_server, bucketId
| eval margin_of_error= round( ( sizeOnDiskMB - 'sum(newRawMBytes)' ) / sizeOnDiskMB,4)
| stats sum(newRawMBytes) as MBytes_Used, count(bucketId) as Bucket_Count by splunk_server,index,state,host
| eval GBytes_Used=round(MBytes_Used/1024,2)
| rename host as "Volume Name"
| rename MBytes_Used as Space
| eval "Copy Type"="Primary"
| eval F4="Copy"
| fields "Volume Name", Space, "Copy Type", F4
We have narrowed the issue down to `join splunk_server, bucketId`, as when we run
| tstats summariesonly=true sum(everything.rawlen) as rawBytes from datamodel=storage_billing by splunk_server,index,everything.bucketId,host
| rename everything.* as *
| eval rawMBytes=rawBytes/1024/1024
or
| dbinspect index=*
| eval rawSizeMB=rawSize/1024/1024
| fields splunk_server, bucketId, path, state, startEpoch, endEpoch, modTime, sizeOnDiskMB,rawSizeMB
separately, they work just fine. When we try to join them, that's when the search breaks. For various reasons, this search goes back 7 years. Our current theory is that it is timing out before completion. Is there a way to streamline the search? Is `join` the right way to do this? Is there a faster, better way?
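One alternative we are sketching out (not yet verified against this data model) is to drop `join` entirely and fold the dbinspect results in with `append` plus `eventstats`, so each bucket's size fields are copied onto the tstats rows and the dbinspect rows are then discarded:
| tstats summariesonly=true sum(everything.rawlen) as rawBytes from datamodel=storage_billing by splunk_server,index,everything.bucketId,host
| rename everything.* as *
| eval rawMBytes=rawBytes/1024/1024
| append
    [| dbinspect index=*
    | eval rawSizeMB=rawSize/1024/1024
    | fields splunk_server, bucketId, state, sizeOnDiskMB, rawSizeMB ]
| eventstats values(state) as state, max(sizeOnDiskMB) as sizeOnDiskMB, max(rawSizeMB) as rawSizeMB by splunk_server, bucketId
| where isnotnull(rawMBytes)
| search state=cold
| eval compression=sizeOnDiskMB/rawSizeMB, newRawMBytes=rawMBytes*compression
The dbinspect part still runs as a subsearch, so this is not a silver bullet over 7 years of buckets, but it avoids join's per-bucket matching and its subsearch row limits.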
↧
↧
Splunk Stream
I am streaming DNS traffic successfully from some Microsoft DNS servers; however, I am unable to populate any 'Estimate' statistics in the Splunk Stream application. Data is coming in, but Splunk isn't reporting on the amount of traffic.
Any ideas?
↧
How to average all columns in a chart for a group of results?
Here's what I'm trying to do:
Imagine a search result from Splunk comes back with results:
User | Field 1 | Field 2 | Field 3 | Field 4
_______________________________________________________________________
A | 1 | 0 | 1 | 2
B | 3 | 0 | 1 | 1
C | 0 | 0 | 0 | 0
Desired Result:
A chart
Field 1 | Field 2 | Field 3 | Field 4
------------------------------------------------
1.33 | 0 | .666 | 1
So the goal is to get, for each field, the average value across all users.
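To make the goal concrete, the behaviour I'm after looks roughly like the sketch below (field names taken from the example above): drop the User column and average every remaining column in one pass.
...base search producing the table above...
| fields - User
| stats avg(*) AS *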
↧
Solaris 10 Universal Forwarder Install Errors
We are having problems upgrading Splunk forwarders on Solaris SPARC 10 hosts for vulnerability remediation. We were using 6.3.x and need to update to a 6.5.x or later Splunk version. The errors are:
# ./splunk start
ld.so.1: splunkd: fatal: relocation error: file /opt/splunkforwarder/bin/splunkd: symbol pthread_condattr_setclock: referenced symbol not found
Killed
There's nothing logged other than the first_install.log which has the Splunk version and platform information: `PLATFORM=SunOS-sparcv9`
Where should we start to troubleshoot?
↧
↧
Change the color of a line representing one series based on another series
Hi, I am sorry if this has been asked previously.
In effect, I count the current day's wires and compare that count to the wires from a week ago.
I need to display current day wires (Total_Today) as a line chart and change the color of the line based on a calculation (at present it is a percentage, but this may change). These are represented by the eval'd fields greenCount, yellowCount and redCount.
Any help is truly appreciated.
----------
index= IndexA source="SourceA" TRXTYPE="Wires" earliest=-0@d latest=now
| timechart span=1h count by TRXTYPE limit=25
| accum "Wires" as accum_Total_Today
| timechart last(accum_*) as * span=1h
| appendcols
[search index=IndexA source="SourceA" TRXTYPE="Wires" earliest=-7d@d enddaysago=7
| timechart span=1h count by TRXTYPE limit=25
| accum "Wires" as accum_Total_Week_Ago
| timechart last(accum_*) as * span=1h | eval _time=_time+(604800)
] | eval greenCount = if(Total_Today >= Total_Week_Ago,Total_Today,0)
| eval yellowCount = if(Total_Today < Total_Week_Ago AND Total_Today/Total_Week_Ago*100 <=70,Total_Today,0)
| eval redCount = if(Total_Today < Total_Week_Ago AND Total_Today/Total_Week_Ago*100 >70, Total_Today, 0)
| table _time Total_Today
----------
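One direction I've been experimenting with (not sure it is the right one) is to chart the three eval'd count series instead of Total_Today and pin a static color to each via chart options; a minimal Simple XML sketch, with placeholder colors:
<chart>
  <search>
    <query>... the search above, ending with | table _time greenCount yellowCount redCount</query>
  </search>
  <option name="charting.chart">line</option>
  <option name="charting.fieldColors">{"greenCount": 0x53A051, "yellowCount": 0xF8BE34, "redCount": 0xDC4E41}</option>
</chart>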
↧
Filter out Windows events before indexing
Hi Guys!
How to create a filter to discard Windows logon events (EventID = 4624), but only when the LogonProcessName field is equal to 'NtLmSsp'?
The logs are in XML format.
I've tried several regexes, but none worked.
Does anyone have an idea?
Sample 4624 Security event from DC01.mydomain.com (the XML tags were stripped when the event was pasted); the relevant values are LogonProcessName = NtLmSsp and AuthenticationPackageName = NTLM, with workstation COMP01 and user user01.
*props.conf*
[XmlWinEventLog]
TRANSFORMS-set=setnull
*transforms.conf*
[setnull]
REGEX = (?m)(4624<\/EventID>).+(NtLmSsp\s+<\/Data>)
DEST_KEY = queue
FORMAT = nullQueue
- Other regexes used unsuccessfully:
REGEX = (?m)EventCode\s*=\s*4624.*?LogonProcessName\s*=\s*NtLmSsp\s
REGEX = (?m)LogonProcessName=(NtLmSsp)
REGEX = (?m)^EventCode=(4624).+(LogonProcessName=NtLmSsp)
Thank you very much in advance.
[]s
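The next variant I plan to try is below; it is only a sketch, and it assumes the raw event is the XML rendering (so the match has to span the whole event with dot-matches-newline rather than key=value pairs) and that the transform runs where parsing happens, i.e. on an indexer or heavy forwarder rather than a universal forwarder:
# transforms.conf -- sketch; attribute quoting in the real events may differ
[setnull]
REGEX = (?s)<EventID>4624</EventID>.*<Data Name=['"]LogonProcessName['"]>\s*NtLmSsp
DEST_KEY = queue
FORMAT = nullQueue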
↧
tcpin_cooked_pqueue blocking
I've recently made a career change, so I have a new Splunk environment where they leverage intermediary forwarders. Two of the intermediary forwarders are having their tcpin_cooked_pqueue fill, which causes blocking. I would really appreciate some help troubleshooting and coming up with a suggested fix.
1. Since the tcpin_cooked queue is very early in the pipeline, the first question is obviously whether later queues are filling and causing a backup; that's not the case, only the tcpin_cooked queue is filling. Also, parallel queues are enabled and set to 2.
2. Once the business day is over, the queue quickly empties.
3. The intermediary forwarders (where the queue filling happens) are physical systems running SUSE Linux Enterprise Server 11 with a load average around 2 during the day (1 processor, 16 cores, 32 threads), using about 5.5 GB of the available 32 GB of memory. Network-wise, each is receiving around 300 KB/s and transmitting around 3005 KB/s, and has about 400 forwarders connected to it.
4. In terms of ulimits:
virtual address space size: unlimited
data segment size: unlimited
resident memory size: unlimited
stack size: 8388608 bytes [hard maximum: unlimited]
core file size: 1024 bytes [hard maximum: unlimited]
data file size: unlimited
open files: 10240 files
user processes: 256476 processes
cpu time: unlimited
Linux transparent hugepage support, enabled="never" defrag="never"
Linux vm.overcommit setting, value="0"
The key may be that the forwarders sending to them typically come in over fairly low-bandwidth connections, so there may be a lot of network connections for a fairly low data ingestion rate.
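For reference, this is the metrics.log search I've been using to trend how full tcpin_cooked_pqueue gets during the day (ifwd* is just a placeholder host pattern for the two intermediary forwarders):
index=_internal host=ifwd* source=*metrics.log* group=queue name=tcpin_cooked_pqueue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=5m max(fill_pct) AS max_fill_pct by host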
↧
Set up deploymentclient.conf during Forwarder Install
Hello,
Is it possible to set deploymentclient.conf parameters via the command line?
I have used the DEPLOYMENT_SERVER parameter during forwarder installation via the command line. It adds the target-broker, but I am looking for a command-line option to set parameters like the ones below:
[deployment-client]
disabled = false
phoneHomeIntervalInSecs = 1800
handshakeRetryIntervalInSecs = 12
Does anybody know how to do it?
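If there is no native flag for these, the fallback I'm considering is writing the stanza from the install script and restarting; `./splunk set deploy-poll <host:port>` covers the target-broker part, but I have not found equivalents for the other settings. A sketch (the deployment server URI is a placeholder):
SPLUNK_HOME=/opt/splunkforwarder
# Append the desired deployment-client settings, then restart the forwarder
cat >> "$SPLUNK_HOME/etc/system/local/deploymentclient.conf" <<'EOF'
[deployment-client]
disabled = false
phoneHomeIntervalInSecs = 1800
handshakeRetryIntervalInSecs = 12

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
EOF
"$SPLUNK_HOME/bin/splunk" restart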
↧
↧
Issue with Column chart when scale is set to log
Hello,
In Splunk 7.1.6, a column chart restricts the Y-axis scale to 1 when using a log scale (with a linear scale it works fine).
I am not setting a max value of 1 for the Y axis, but it still restricts it to 1 even though the values are greater than 1.
See the attachments for a better picture. In Splunk 7.0.3 it works as expected.
![alt text][1]
![alt text][2]
[1]: /storage/temp/263784-703-vs-716.png
[2]: /storage/temp/263785-splunk-716-settings.png
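For reference, these are roughly the Y-axis options in play; no maximum is set, so nothing in the dashboard itself should cap the axis at 1:
<option name="charting.chart">column</option>
<option name="charting.axisY.scale">log</option>
<!-- charting.axisY.maximumNumber is deliberately left unset -->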
↧
customizing inputs for Splunk App for Web Analytics
Due to an extensive lack of foresight, I am working in an environment with a Splunk instance that is ingesting Tomcat logs (supporting a Liferay instance) that are not in the standard index/sourcetype (i.e., not access_combined) and have non-standard field extractions. Basically the field extractions line up more with IIS than with Apache access logs.
Has anyone successfully managed to implement the Splunk App for Web Analytics in a similar scenario? After digging through the .conf files, I would think it would just require adjusting all of the sourcetype and field references to use my environment's settings, but in some cases I can't entirely tell which are standard fields and which are fields created by the app.
So, has anyone had any success trying this?
↧
Xyseries to display dates in descending order? (important)
sample query:
index=foo "string of data"="age needed"age earliest=-5d
| stats dedup_splitvals=t , values(_time) AS _time by dept, "age_needed"
| sort department
| fields - _span
| eval conv_age=strptime(Date,"%Y-%m-%dT%H:%M:%S.%Q")
| eval age=((now()-conv_age)/86400)
| eval day=strftime(_time,"%Y-%m-%d")
| stats avg(age) as super_Age by dept, day
| eval super_Age=round(super_Age,2)
| xyseries Vertical day super_Age
**Current** result (needs to change)
Dept **1/29 1/28 1/27**
dept 1 value value value
dept 2 value value value
**What I need**
Dept **1/30 1/29 1/28**
dept 1 value value value
dept 2 value value value
Thank you in advance!
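The only workaround I've sketched so far (untested at any scale) is the transpose/sort/transpose trick, since xyseries decides the column order on its own; the 1000 is just an arbitrary row cap:
... the search above, ending with | xyseries Vertical day super_Age
| transpose 1000 header_field=Vertical column_name=day
| sort - day
| transpose 1000 header_field=day column_name=Vertical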
↧
Why do I keep getting warning messages of 5GB even though dispatch is less than 1GB?
Hello,
I keep getting warning messages that my dispatch directory is full (5 GB) even though the dispatch directory size is less than 1 GB. Also, my queries stop running, so I have to clean up the dispatch directory to make Splunk run again.
Kindly advise.
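For what it's worth, this is how I've been sizing the directory before cleaning it (paths assume a default install); I count the artifacts as well, since some of the dispatch warnings are about the number of jobs rather than gigabytes:
# Size on disk, then number of search artifacts in the dispatch directory
du -sh /opt/splunk/var/run/splunk/dispatch
ls /opt/splunk/var/run/splunk/dispatch | wc -l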
↧
↧
How do we fetch events after getting stats on the events, when we no longer have the events in the results?
Hi,
I'm trying to filter the logs of a Spring Boot application.
I want to calculate the time that a POST request takes.
The search query I'm trying is:
**index="xyz" correlationid="1234"| stats values(correlationid) min(_time) AS start max(_time) AS end | eval duration=end-start**
Here, I manually search for the events which are POST requests, then I get the correlation ID of that request, and then I use it in the query.
The reason why I'm not directly using the string "POST" is that there are other logs that get generated after a POST request is made, up until the POST returns a successful status, so I have to consider all those events too. Is there a way to search for the correlation ID across all the events and then use the obtained correlation ID to fetch all the events with that correlation ID?
Example of logs
10.30 2019 | 1234 | POST /data
10.31 2019 | 1234 | data verified
10.32 2019 | 1234 | successfully posted data
I need the duration 10.32-10.30=0.02
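To make the question concrete, this is the kind of two-step search I'm hoping for (a sketch reusing the field and strings from the example above): the subsearch finds the correlation IDs of the POST events, and the outer search then pulls every event carrying those IDs:
index="xyz"
    [ search index="xyz" "POST /data"
    | fields correlationid ]
| stats min(_time) AS start max(_time) AS end by correlationid
| eval duration=end-start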
↧
Splunk Stream: Why does Splunk suddenly stop indexing NetFlow data every 2 hours?
Hi community, I've configured Splunk Stream to ingest NetFlow data (the Stream collector and the Splunk indexer run on the same box), and it's actually working. But exactly every 2 hours there is a 10-minute gap in the data. Packet captures show normal traffic during that gap, so it looks like Splunk is not indexing that data.
Any idea of what could be the reason?
Thanks!
↧
How can I use tokens in a dashboard to show results for a specific time?
Hello everyone, I have 3 different dashboards: one of them shows me all the events in 24 hours, another one shows me the same events for that day but only from the last hour (from -1h to now), and the last one shows the same but from -15m to now.
The problem comes when I use the timepicker to choose a different day; then all the dashboards show me the same events for 24 hours.
What I am trying to do is to set earliest=-1h and latest=now with tokens for each dashboard.
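What I've sketched so far is hard-coding the window inside each panel's search element (the panel query is a placeholder), which gives the -1h/-15m behaviour but then the panel ignores the shared timepicker entirely, which is exactly the part I'd like to solve with tokens:
<search>
  <query>index=main sourcetype=my_events | timechart count</query>
  <earliest>-1h</earliest>
  <latest>now</latest>
</search>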
↧
↧
How to add search results to an existing lookup?
I have a lookup table that has 30 columns and some rows:
table 1
column1 column2 ---------- column30
ww xx -------------------------- aa
The expected table would look like this:
column1 column2 ---------- column30
ww xx -------------------------- aa
---------
-----
-----
etc...
So my question is: how do I add more rows to it without deleting the old lookup?
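To clarify what I mean, the behaviour I'm after is roughly this (the lookup name is a placeholder): run the search that produces the new rows and append them to the existing lookup instead of overwriting it:
... search that produces the new 30-column rows ...
| outputlookup append=true my_existing_lookup.csv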
↧