I have data in JSON format as follows:
{Run=1 , Average=2.1, Max=3, Min=1.4, Transaction=Sample1}
{Run=1 , Average=2.1, Max=3, Min=1.4,Transaction=Sample2}
{Run=2 , Average=2.3, Max=3.1, Min=1.5,Transaction=Sample1}
{Run=2 , Average=2.3, Max=3.1, Min=1.5,Transaction=Sample2}
{Run=3 , Average=2.6, Max=3.2, Min=1.6,Transaction=Sample1}
{Run=3 , Average=2.6, Max=3.2, Min=1.6,Transaction=Sample2}
I want to compare all the fields with each other for the top 2 Run values.
The query below gets the data in exactly the format I need, but I'm not able to pass the Run values dynamically:
index=pt sourcetype=_json | search Run=3 |rename Average as "Average1" | rename Maximum as "Max1"| join Transaction type=inner[ search index=pt sourcetype=_json | search Run=2 |rename Average as "Average2" | rename Maximum as "Max2"| table Transaction, Average2] |table Transaction, Average1,Average2,Max1, Max2
Result:-
Transaction Average1 Average2 Max1 Max2
Sample1 2.6 2.3 3.2 3.1
Sample2 2.6 2.3 3.2 3.1
How to capture the second max value for a field like Run and get the desired results?
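One approach I'm considering (a rough sketch, untested; the second eventstats is there because the top two Run values may not be consecutive):
index=pt sourcetype=_json
| eventstats max(Run) AS topRun
| eventstats max(eval(if(Run < topRun, Run, null()))) AS secondRun
| where Run=topRun OR Run=secondRun
| stats max(eval(if(Run=topRun, Average, null()))) AS Average1 max(eval(if(Run=secondRun, Average, null()))) AS Average2 max(eval(if(Run=topRun, Max, null()))) AS Max1 max(eval(if(Run=secondRun, Max, null()))) AS Max2 BY Transaction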
↧
How to capture 2nd max value of a field and compare with 1st max value of the same field
↧
Can you help me with Module 9, Task 6 in the Splunk Training and Certification Fundamentals 1 course?
For some reason the parentheses on avg(Duration) won't work.
I have entered the answer "index=main sourcetype=db_audit | stats avg(Duration)", but when I enter it, I get an additional set of parentheses.
↧
Connection issues: when I created a new indexer, our data is not showing up.
There are a couple of indexes in inputs.conf.
I just added a new index with a new port. All the other indexes are working fine, and servers can send data to them. The problem is only with the newly added one. When I telnet from a universal forwarder to the indexer, all the other ports establish a connection, but I can't establish a connection to the new one.
Am I missing something here? Can someone figure out where the problem is?
Thanks a lot in advance.
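For reference, the new stanza on the indexer has roughly this shape (a sketch; the port, index, and sourcetype names are placeholders):
[tcp://5515]
index = new_index
sourcetype = my_new_sourcetype
disabled = 0
If the stanza itself is right, my remaining suspects are a missing splunkd restart after the edit and a firewall not yet opened for the new port.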
↧
Cisco Network App and Search & Reporting App Time Difference
With no TZ configured, my Search & Reporting app displays the correct time (UTC-10:00, or 13:00 HST), but my Cisco Networks App displays a time 10 hours ahead (23:00 HST) of our local time.
When I edit props.conf in the TA-cisco_ios folder and enter "TZ = UTC" under the syslog stanza, the display time is correct (13:00 HST) for the Cisco Networks App, but then the Search & Reporting app displays a time 10 hours behind (03:00 HST) our local time.
I tried editing the props.conf in both the TA-cisco_ios and search app folders with no success.
All of my event logs' times are correct, so how do I get both the Cisco Networks App and the Search & Reporting app to display the correct time?
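In case it helps frame the question: since TZ is applied at index time, one idea I'm considering is scoping it to the Cisco devices with a host stanza instead of the shared syslog sourcetype, something like this (a sketch; the host pattern is a placeholder, and it would only affect newly indexed events):
[host::cisco-*]
TZ = UTC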
↧
scale color with value in %
Hi
I need to apply a color scale to a value in %, but nothing happens.
My search is:
eventtype=Charge
| stats first(FullChargedCapacity) AS FullChargedCapacity first(DesignedCapacity) AS DesignedCapacity first(_time) AS _time BY host
| eval Wear_Rate = 100-(FullChargedCapacity *100/DesignedCapacity)
| where Wear_Rate >5
| eval Wear_Rate=round(Wear_Rate, 1). " %"
| table time host FullChargedCapacity DesignedCapacity Wear_Rate
Could you help me, please?
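My guess is that appending " %" turns Wear_Rate into a string, which would break numeric color ranges. The variant I want to test keeps the field numeric and leaves the percent sign to the panel's number formatting (a sketch of just the changed lines):
| eval Wear_Rate = round(100 - (FullChargedCapacity * 100 / DesignedCapacity), 1)
| where Wear_Rate > 5
| table _time host FullChargedCapacity DesignedCapacity Wear_Rate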
↧
help on subsearch
Hi
I use the search below in order to display the model of a host, but **only** for the hosts which have a Wear_Rate>0.
But the Model field is empty.
Could you help me **to display the model for all the machines which have a Wear_Rate>0, please?**
eventtype=Charge AND (NOT host=E* AND NOT host=I*)
| stats first(FullChargedCapacity) AS FullChargedCapacity first(DesignedCapacity) AS DesignedCapacity first(_time) AS _time BY host
| eval time = strftime(_time, "%m/%d/%Y %H:%M")
| eval Wear_Rate = 100-(FullChargedCapacity *100/DesignedCapacity)
| where Wear_Rate >0
| dedup host
| join type="outer"
[ search index="x" sourcetype="x"
| rex "Model=(?.*)"
| stats values(model) as Model by host
]
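For completeness, a join-free variant I would also like to try (a sketch, untested; index/sourcetype are the same placeholders as above, and it assumes both data sets share the host field):
(eventtype=Charge AND (NOT host=E* AND NOT host=I*)) OR (index="x" sourcetype="x")
| rex "Model=(?<model>.*)"
| stats first(FullChargedCapacity) AS FullChargedCapacity first(DesignedCapacity) AS DesignedCapacity first(_time) AS _time values(model) AS Model BY host
| eval Wear_Rate = 100-(FullChargedCapacity *100/DesignedCapacity)
| where Wear_Rate > 0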
↧
Correlating 2 sourcetypes with a common field under different names
Hello guys,
I have 2 sourcetypes: sourcetype A has the fields [ IP, hostname, source_mac ], and sourcetype B has the fields [ Username, mac_addres ].
I need to correlate sourcetype A's source_mac with sourcetype B's mac_addres, because they hold the same MAC,
and return a table with the fields [ Username, mac_addres, IP, hostname ].
I'm trying this:
**index=main (sourcetype=A)
| fields IP , hostname , source_mac
| dedup IP , hostname , source_mac
| append
[ search sourcetype="B"
| dedup mac_addres
| fields mac_addres, Username
| eval Match=coalesce(source_mac, mac_addres)
| table Match,IP , hostname , Username**
But it doesn't work; it just returns the sourcetype=A and sourcetype=B rows separately, unmatched.
Any suggestion ?
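The variant I'm experimenting with normalizes the MAC into one field first and then groups on it (a sketch, untested; it assumes both sourcetypes live in index=main):
index=main (sourcetype=A OR sourcetype=B)
| eval mac = coalesce(source_mac, mac_addres)
| stats values(Username) AS Username values(IP) AS IP values(hostname) AS hostname BY mac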
↧
Same search with same time slot doesn't return the same number of events
Hi
I have something strange
when I execute the search below, I have 47 events over a one-week time slot
eventtype="AppliService" AND (NOT host=E* AND NOT host=I*) Name="MBAMAgent" State="Stopped"
| dedup host
| table _time host DisplayName Name Started State
when I execute the search below over the same time slot, I have only 4 events for MBAMAgent
**eventtype="AppliService" AND (NOT host=E* AND NOT host=I***) (Name="dot3svc" OR Name="WlanSvc" OR Name="Winmgmt" OR Name="LanWlanSwitchingService" OR Name="PCServicesWinSrv" OR Name="CcmExec" OR Name="vpnagent" OR Name="wuauserv" OR Name="RCAgentMgr" OR Name="W32Time" OR **Name="MBAMAgent"** OR Name="BDESVC" OR Name="mfevtp" OR Name="mfemms" OR Name="McAfeeFramework" ) **State="Stopped"**
| dedup host
| table _time host DisplayName Name Started State
How is it possible, since I use the same search fields?
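One thing I wonder about: `| dedup host` keeps only the first matching event per host, so in the combined search another stopped service on the same host may "win" and push the MBAMAgent event out. A sketch of the variant I want to test, deduping on both fields:
| dedup host Name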
Thanks for your help
↧
Moving file and folder inputs to a heavy forwarder
Hi Splunkers,
we use the approach of collecting logs via syslog and then pointing Splunk at the log files with Files & Directories inputs. All inputs were located on the indexer (single-node deployment).
Another node was deployed as a heavy forwarder, with the purpose of moving the inputs there.
Each folder has logs from a particular asset, where data is collected and separated by date (a deep structure).
Previously we moved about 30 inputs, and it worked nicely and quickly. Now we've moved around 700 inputs there.
To avoid a license violation (Splunk potentially re-indexing all the old logs), we've added the setting ignoreOlderThan=1d to each input.
After restarting Splunk on the HF node, it takes a long time to start forwarding events to the indexer.
As I understand it, Splunk re-reads the whole file structure to enforce this "ignoreOlderThan" policy.
Question: how can we improve the process? What can we change in the configuration to speed up processing and forwarding of data when Splunk restarts on the HF?
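For context, each of the ~700 stanzas looks roughly like this (a sketch; the path and index names are placeholders):
[monitor:///data/syslog/asset042]
index = network
ignoreOlderThan = 1d
disabled = 0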
↧
Changing formatter.html without restarting Splunk
When making a custom visualization, is there a way of getting Splunk to notice my changes to `formatter.html` without restarting Splunk?
Using `debug/refresh` or `_bump` has no effect.
Thank you
↧
Splunk stats command to get total count of existing field values in an additional new column
I have an index that has vulnerabilities that are affecting hosts.
index=vulnerabilities
Fields in the index are:
host, VulnID, VulnName
I have a lookup named Assets. It has a field named DNS. This DNS field is to be used as host in the index's query, e.g.
index=vulnerabilities
| stats .........
| lookup Assets DNS AS host .....
I need a query that produces the results in the attached image, with fast performance, because I have a lot of affected hosts with a lot of vulnerabilities. I will use this query to create a scheduled report so I can reference the report in my dashboard to create panels. My query will look a few days back, based on my scans:
![alt text][1]
[1]: /storage/temp/269608-query.png
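In case it clarifies what I mean by "total count in an additional new column", this is the shape I'm after (a sketch; the new column names are mine):
index=vulnerabilities
| lookup Assets DNS AS host OUTPUT DNS
| stats count AS occurrences BY host VulnID VulnName
| eventstats sum(occurrences) AS total_vulns_per_host BY host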
↧
SNMP Modular Input Activation Key Barebones snmp_ta
I encountered the following error while trying to save: "The following required arguments are missing: activation_key."
Where do I update the activation key?
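My assumption (unverified) is that, like other modular input arguments, it can also be set directly in the input's stanza in inputs.conf, something like:
[snmp://my_input]
activation_key = XXXX-XXXX-XXXX-XXXX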
↧
Is it possible to have multiple custom alert trigger conditions?
When I create a new alert, I can choose Custom Trigger Condition. Is it possible to write multiple trigger conditions using AND/OR operators, e.g.:
search count=0 AND category= something
Here, category comes from a lookup table.
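For example, the two shapes I have in mind (sketches; the category values come from my lookup):
search count=0 AND category="something"
search count=0 OR category="something_else"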
↧
SNMP Traps not being parsed properly
I am trying to collect the traps from a UPS device. When I installed my app on my development instance, which is a standalone environment, I was able to collect the data properly.
I then tried to ingest the data into our production environment, which is a clustered environment (multiple indexers, search heads, and heavy forwarders). I have installed the SNMP TA on one of the heavy forwarders and on all the search heads [the SNMP TA is not present on the indexers]. The UPS sends the traps to a heavy forwarder, and the heavy forwarder in turn ingests them to our indexers. Ports are opened. I am able to receive the data, but the data is not being parsed properly: I am getting some garbage symbols and values. I have placed a custom MIB for the APC UPS in the mibs directory of the app on the heavy forwarder.
I saw that a few people have said that changing trap_host in the inputs.conf stanza to the DNS name or IP resolved their issue. I tried that as well, but still no luck.
Could you please let me know if I have missed any steps here?
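For reference, my stanza on the heavy forwarder looks roughly like this (a sketch from memory; the values are placeholders, and I am not fully sure of the parameter names, especially mib_names):
[snmp://ups_traps]
snmp_mode = traps
trap_host = 10.1.2.3
trap_port = 162
mib_names = PowerNet-MIB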
↧
Splunk Python script trap issue
In my environment I am using a Python script to send traps. Please find the script below.
The script is working fine; using it I am able to generate the incident through the tool.
My concern is that I receive only one source detail in the incident.
I need all the source details available in the column to be sent to the trap server.
Please, can anyone help? (A sketch of the change I'm considering follows the script.)
import os
import csv
import gzip
import subprocess
import shlex

os.chdir(os.path.dirname(__file__))

if __name__ == "__main__":
    # Read the environment variables that Splunk has passed to us
    scriptName = os.environ['SPLUNK_ARG_0']
    numberEventsReturned = os.environ['SPLUNK_ARG_1']
    searchTerms = os.environ['SPLUNK_ARG_2']
    queryString = os.environ['SPLUNK_ARG_3']
    searchName = os.environ['SPLUNK_ARG_4']
    triggerReason = os.environ['SPLUNK_ARG_5']
    browserUrl = os.environ['SPLUNK_ARG_6']
    rawEventsFile = os.environ['SPLUNK_ARG_8']
    # File where you want to write the content
    logFile = open('D:\\Splunk\\splunk_alert_events.txt', 'a')
    # We got the file name from the environment vars
    eventFile = csv.reader(gzip.open(rawEventsFile, 'rb'))
    #logFile.write(eventFile)
    i = 0
    for row in eventFile:
        if i == 0:
            # Skip the CSV header row
            i += 1
        else:
            # These are overwritten on every iteration, so only the
            # last row's values survive the loop
            myhost = row[2]
            source = row[3]
            sourcetype = row[1]
    logFile.write(myhost + "\n")
    logFile.write(browserUrl + "\n")
    logFile.write(scriptName + "\n")
    logFile.write("queryString" + "\n")
    logFile.close()
    logFile = open('D:\\Splunk\\splunk_alert_trapsDC.txt', 'a')
    proc = subprocess.Popen(['C:\\Windows\\System32\\VivekB.exe', '-d', '10.182.73.70', '-v', '1.3.6.1.4.1.4842.200.1.0', 'STRING', myhost, '-v', '1.3.6.1.4.1.4842.200.1.1', 'STRING', source, '-v', '1.3.6.1.4.1.4842.200.1.2', 'STRING', browserUrl, '-v', '1.3.6.1.4.1.4842.200.1.3', 'STRING', sourcetype, '-v', '1.3.6.1.4.1.4842.200.1.4', 'STRING', scriptName, '-v', '1.3.6.1.4.1.4842.200.1.5', 'STRING', queryString, '-v', '1.3.6.1.4.1.4842.200.1.6', 'STRING', searchName, '-v', '1.3.6.1.4.1.4842.200.1.7', 'STRING', triggerReason], shell=False)
    logFile.write(str(proc) + "\n")  # Popen objects must be converted to str before writing
    logFile.close()
    logFile = open('D:\\Splunk\\splunk_alert_trapsDR.txt', 'a')
    prog = subprocess.Popen(['C:\\Windows\\System32\\VivekB.exe', '-d', '10.182.73.164', '-v', '1.3.6.1.4.1.4842.200.1.0', 'STRING', myhost, '-v', '1.3.6.1.4.1.4842.200.1.1', 'STRING', source, '-v', '1.3.6.1.4.1.4842.200.1.2', 'STRING', browserUrl, '-v', '1.3.6.1.4.1.4842.200.1.3', 'STRING', sourcetype, '-v', '1.3.6.1.4.1.4842.200.1.4', 'STRING', scriptName, '-v', '1.3.6.1.4.1.4842.200.1.5', 'STRING', queryString, '-v', '1.3.6.1.4.1.4842.200.1.6', 'STRING', searchName, '-v', '1.3.6.1.4.1.4842.200.1.7', 'STRING', triggerReason], shell=False)
    logFile.write(str(prog) + "\n")
    logFile.close()
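The change I'm considering (a sketch, untested): collect every row's source inside the loop and join them into one delimited string before sending the trap. A drop-in replacement for the loop above:
    sources = []
    i = 0
    for row in eventFile:
        if i == 0:
            i += 1  # skip the CSV header row
        else:
            myhost = row[2]
            sourcetype = row[1]
            sources.append(row[3])  # keep every row's source, not just the last
    source = ";".join(sources)  # e.g. "sourceA;sourceB;sourceC" for the trap varbind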
↧
Search for POST params
Hi, I am pretty new to Splunk.
Can we search for a specific (form) parameter in a POST REST call?
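If it helps, what I have in mind is something like this (a sketch; the index, sourcetype, and parameter name are placeholders, and it assumes the application actually writes the POST body to the log, since standard web access logs usually do not contain it):
index=web sourcetype=my_app_logs method=POST
| rex "user_id=(?<user_id>[^&\s]+)"
| table _time uri user_id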
↧
Splunk stops randomly
I have been having issues with my Splunk instance where the Splunk service stops randomly. Here are some logs from splunkd.log right before it went down.
The instance mostly runs Splunk with the Carbon Black add-on to generate reports.
02-22-2019 18:26:55.084 +0800 ERROR StreamGroup - unexpected rc=1 from IndexableValue->index
02-22-2019 18:26:55.084 +0800 ERROR STMgr - dir='C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\hot_v1_70' out of memory failure rc=1 warm_rc[-2,8] from st_txn_start
02-22-2019 18:26:55.084 +0800 ERROR StreamGroup - unexpected rc=1 from IndexableValue->index
02-22-2019 18:26:55.084 +0800 ERROR STMgr - dir='C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\hot_v1_70' out of memory failure rc=1 warm_rc[-2,8] from st_txn_start
02-22-2019 18:26:55.084 +0800 ERROR StreamGroup - unexpected rc=1 from IndexableValue->index
02-22-2019 18:26:55.084 +0800 ERROR STMgr - dir='C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\hot_v1_70' out of memory failure rc=1 warm_rc[-2,8] from st_txn_start
02-22-2019 18:26:55.084 +0800 ERROR StreamGroup - unexpected rc=1 from IndexableValue->index
02-22-2019 18:26:55.084 +0800 ERROR STMgr - dir='C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\hot_v1_70' out of memory failure rc=1 warm_rc[-2,8] from st_txn_start
02-22-2019 18:26:55.084 +0800 ERROR StreamGroup - unexpected rc=1 from IndexableValue->index
02-22-2019 18:26:55.084 +0800 ERROR STMgr - dir='C:\Program Files\Splunk\var\lib\splunk\_internaldb\db\hot_v1_70' out of memory failure rc=1 warm_rc[-2,8] from st_txn_start
02-22-2019 18:26:55.084 +0800 ERROR StreamGroup - unexpected rc=1 from IndexableValue->index
02-22-2019 18:26:55.615 +0800 ERROR TailReader - Ignoring path="C:\Program Files\Splunk\var\log\splunk\splunkd.log" due to: bad allocation
02-22-2019 18:26:55.646 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:26:56.787 +0800 ERROR IntrospectionGenerator:resource_usage - KVStorageProvider - Internal read failed with error code '13053' and message 'No suitable servers found: `serverSelectionTimeoutMS` expired: [socket timeout calling ismaster on '127.0.0.1:8191']'
02-22-2019 18:26:57.662 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:26:59.474 +0800 WARN IntrospectionGenerator:resource_usage - RU - Failure shapshoting all processes, skipping this collection cycle. Status code is 1455
02-22-2019 18:26:59.677 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:01.693 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:02.412 +0800 WARN SearchResultsFiles - Error while reading C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD59d4672721e98f163_at_1550830500_46\metadata.csv: Insufficient system resources exist to complete the requested service.
02-22-2019 18:27:02.412 +0800 WARN DispatchSearchMetadata - could not read metadata file: C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD59d4672721e98f163_at_1550830500_46\metadata.csv
02-22-2019 18:27:02.427 +0800 WARN SearchResultsFiles - Error while reading C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550681280_630\metadata.csv: Insufficient system resources exist to complete the requested service.
02-22-2019 18:27:02.427 +0800 WARN DispatchSearchMetadata - could not read metadata file: C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550681280_630\metadata.csv
02-22-2019 18:27:02.662 +0800 WARN SearchResultsFiles - Error while reading C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550767680_1113\metadata.csv: Insufficient system resources exist to complete the requested service.
02-22-2019 18:27:02.662 +0800 WARN DispatchSearchMetadata - could not read metadata file: C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550767680_1113\metadata.csv
02-22-2019 18:27:02.677 +0800 WARN SearchResultsFiles - Error while reading C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550811142_4\metadata.csv: Insufficient system resources exist to complete the requested service.
02-22-2019 18:27:02.677 +0800 WARN DispatchSearchMetadata - could not read metadata file: C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550811142_4\metadata.csv
02-22-2019 18:27:02.677 +0800 WARN SearchResultsFiles - Error while reading C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550824415_3\metadata.csv: Insufficient system resources exist to complete the requested service.
02-22-2019 18:27:02.677 +0800 WARN DispatchSearchMetadata - could not read metadata file: C:\Program Files\Splunk\var\run\splunk\dispatch\scheduler__nobody__sos__RMD5b76b5354b306efbb_at_1550824415_3\metadata.csv
02-22-2019 18:27:03.709 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:05.724 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:06.302 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:06.302 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:06.302 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:06.302 +0800 ERROR ExecProcessor - Couldn't create output pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:06.662 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:06.662 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:06.662 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:06.662 +0800 ERROR ExecProcessor - Couldn't create output pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:07.130 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:07.130 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:07.130 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:07.130 +0800 ERROR ExecProcessor - Couldn't create output pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:07.740 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:08.630 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:08.630 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:08.630 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:08.630 +0800 ERROR ExecProcessor - Couldn't create output pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:08.990 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:08.990 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:08.990 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:08.990 +0800 ERROR ExecProcessor - Couldn't create output pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:09.349 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:09.349 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-MonitorNoHandle.exe"": The operation completed successfully.
02-22-2019 18:27:09.349 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:09.349 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-MonitorNoHandle.exe"": The operation completed successfully.
02-22-2019 18:27:09.380 +0800 WARN IntrospectionGenerator:resource_usage - RU - Failure shapshoting all processes, skipping this collection cycle. Status code is 1455
02-22-2019 18:27:09.708 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:09.708 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:09.708 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:09.708 +0800 ERROR ExecProcessor - Couldn't create output pipe for command "python "C:\Program Files\Splunk\etc\apps\TA-Cb_Defense\bin\carbonblack_defense.py"": The operation completed successfully.
02-22-2019 18:27:09.755 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:11.771 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:13.787 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:15.802 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:17.818 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:19.443 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:19.443 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-admon.exe"": The operation completed successfully.
02-22-2019 18:27:19.443 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:19.443 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-admon.exe"": The operation completed successfully.
02-22-2019 18:27:19.802 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:19.802 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-netmon.exe"": The operation completed successfully.
02-22-2019 18:27:19.802 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:19.802 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-netmon.exe"": The operation completed successfully.
02-22-2019 18:27:19.833 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:20.161 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:20.161 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-powershell.exe"": The operation completed successfully.
02-22-2019 18:27:20.161 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:20.161 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-powershell.exe"": The operation completed successfully.
02-22-2019 18:27:20.521 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:20.521 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-regmon.exe"": The operation completed successfully.
02-22-2019 18:27:20.521 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:20.521 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-regmon.exe"": The operation completed successfully.
02-22-2019 18:27:20.880 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:20.880 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-winevtlog.exe"": The operation completed successfully.
02-22-2019 18:27:20.880 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:20.880 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-winevtlog.exe"": The operation completed successfully.
02-22-2019 18:27:21.239 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:21.239 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-winprintmon.exe"": The operation completed successfully.
02-22-2019 18:27:21.239 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:21.239 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-winprintmon.exe"": The operation completed successfully.
02-22-2019 18:27:21.599 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:21.599 +0800 WARN ExecProcessor - Couldn't create stderr pipe for command ""C:\Program Files\Splunk\bin\splunk-powershell.exe" --ps2": The operation completed successfully.
02-22-2019 18:27:21.599 +0800 WARN Thread - ExecProcessor: about to throw a ThreadException: _beginthreadex: The paging file is too small for this operation to complete.; 78 threads active
02-22-2019 18:27:21.599 +0800 ERROR ExecProcessor - Couldn't create output pipe for command ""C:\Program Files\Splunk\bin\splunk-powershell.exe" --ps2": The operation completed successfully.
02-22-2019 18:27:21.849 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:24.208 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
02-22-2019 18:27:26.224 +0800 WARN JournalSlice - Exception while compressing slice: bad allocation
The above is the last row of the logs; do let me know if other logs are required for better understanding.
↧
Is there any way to let a role create custom apps but restrict which apps it can list?
Hi Splunk professionals,
I would like to create a role which is able to create its own custom app from the barebones template.
So I made the role "test_role", which is based on the user role plus the admin_all_objects capability.
I would also like to restrict access to "myApp" for users who have test_role, so I set the permissions of "myApp" to be readable only by the admin role (which also created it).
Then I created the user "testuser", who has test_role, and logged in as "testuser".
I hoped not to see the app name "myApp" in the app management list; however, "myApp" is in the list.
Does anyone know how to create a role which can create its own apps while not displaying a specific app in the list of apps? (The role definition I used is sketched below.)
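In authorize.conf terms, the role I created looks roughly like this (a sketch; my suspicion is that admin_all_objects itself is what overrides the app's read restriction, since that capability grants access to all objects regardless of permissions):
[role_test_role]
importRoles = user
admin_all_objects = enabled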
I appreciate any opinion.
Regards,
↧
Heavy forwarder transforms.conf: splitting data into multiple indexes
Hello experts,
Need help. My requirement is to extract the 1st set of lines into the 1st index and the 2nd set into the 2nd index, and to ignore all other lines from the log file.
Below is my configuration, which is obviously failing.
I have seen other blogs' solutions and was able to separate events into two indexes without using [discardAll] in transforms.conf and with no index specified in inputs.conf, but that redirects all my ignored lines into the main index, which I don't want.
**inputs.conf**
[monitor://D:\splunk_test\target.log]
disabled = false
sourcetype = Custom_S
index = target_index_one
interval = 10
crcSalt = <SOURCE>
**props.conf**
[Custom_S]
TRANSFORMS-set = discardAll,index2one,index2two
**transforms.conf**
[discardAll]
REGEX=.
DEST_KEY=queue
FORMAT=nullQueue
[index2one]
REGEX=(First_Filter)
DEST_KEY=_MetaData:Index
FORMAT=target_index_one
[index2two]
REGEX=(Second_Variant)
DEST_KEY=_MetaData:Index
FORMAT=target_index_two
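From what I understand, transforms run in the order listed, so once [discardAll] routes an event to the nullQueue, a later transform that only rewrites _MetaData:Index does not bring it back. The pattern I've seen suggested adds a queue-restoring transform per match (a sketch, untested; the regexes are the same placeholders as above):
**transforms.conf**
[discardAll]
REGEX=.
DEST_KEY=queue
FORMAT=nullQueue
[keepFirst]
REGEX=(First_Filter)
DEST_KEY=queue
FORMAT=indexQueue
[index2one]
REGEX=(First_Filter)
DEST_KEY=_MetaData:Index
FORMAT=target_index_one
[keepSecond]
REGEX=(Second_Variant)
DEST_KEY=queue
FORMAT=indexQueue
[index2two]
REGEX=(Second_Variant)
DEST_KEY=_MetaData:Index
FORMAT=target_index_two
with props.conf listing them in order: TRANSFORMS-set = discardAll,keepFirst,index2one,keepSecond,index2two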
↧
Can I subscribe to reports to send an email?
Can I set up a subscription in Splunk so that I get an email with a CSV file of the report attached?
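I believe scheduling the report with an email action is the closest match; in savedsearches.conf terms it would look roughly like this (a sketch; the report name, schedule, and address are placeholders):
[My Daily Report]
enableSched = 1
cron_schedule = 0 8 * * *
action.email = 1
action.email.to = someone@example.com
action.email.sendcsv = 1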
↧