Howdy, I'm struggling with the following and hoping you can help. To summarize, I need a 'Value' column as the leftmost column, containing every possible value I have defined in an eval statement, whether or not it appears in the data. The remaining columns should be every possible status value that might appear in the data. As an example:
Value Status1 Status2 Status3 Status4
Value1
Value2
Value3
Value4
Value5
All values and statuses must be displayed, whether or not there is data for them in the index. For example, if I have this data:
Value Status
Value1 Status1
Value1 Status1
Value1 Status2
Value1 Status3
Value2 Status1
Value2 Status2
Value3 Status3
Value3 Status1
The chart/table result should be as follows:
Value Status1 Status2 Status3 Status4
Value1 2 1 1 0
Value2 1 1 0 0
Value3 1 0 0 0
Value4 0 0 0 0
Value5 0 0 0 0
I've danced around this for a couple of days, and looked up and tried all sorts of things, without success. Any thoughts or help y'all might offer will be greatly appreciated. Thank you.
PS: I'm really trying not to use joins in any way, to avoid the costs associated with them.
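For reference, this is roughly the direction I've been experimenting with (no joins): hard-code the possible values in a subsearch, pad the missing rows with append, and pad the missing status columns with foreach. The index and field names here are placeholders, so I'm not sure this is the right approach:
index=your_index
| chart count over Value by Status
| append
    [| makeresults
     | eval Value=split("Value1,Value2,Value3,Value4,Value5", ",")
     | mvexpand Value
     | table Value]
| stats sum(*) as * by Value
| foreach Status1 Status2 Status3 Status4
    [ eval <<FIELD>>=coalesce('<<FIELD>>', 0) ]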
↧
How to force all needed rows and columns to display in a chart, table or other, when sometimes no data is found.
↧
Displaying one column value as tooltip to another column for a table
Hi,
I am using one table in my dashboard. if possible I wanted to display one column values as tooltip to another column.
basically in below table Threshold value need to be displayed as tooltip when mouse rollover on Alert_Destription or Issues_Count.
![alt text][1]
[1]: /storage/temp/282632-2020-02-13-14-33-38-kpi-monitoring-beta-splunk-726.png
↧
↧
How to forward indexed data to RSA NetWitness?
So I will start with the details of my setup. I am running a single server instance on a network of ~300 endpoints. All of my systems are forwarding to a total of 4 indexes currently. I am using Splunk (currently 7.2.6) strictly for audit collection and review.
We have a requirement to send our audit data to our client for their collection purposes, since this system exists to support our business with them. They are using RSA's NetWitness and want the data converted to syslog format over UDP.
I have seen a few write-ups on this out there, but I feel they do not fit my situation closely enough to trust them. So how do I send the data in the 4 relevant indexes to them in syslog format from my Splunk Enterprise server? Also, how do I set a limit on how much and how fast this forwarding would take place? I don't want to kill bandwidth just so they can warehouse data I am already storing.
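For reference, this is the rough shape of the config I've been considering for the routing part, pieced together from the syslog-output docs. The host, port, and index names are placeholders, and as far as I can tell this only applies to data as it is being ingested, not to events already sitting in the indexes (and it doesn't address the rate-limiting question):
# outputs.conf
[syslog:netwitness]
server = netwitness.example.com:514
type = udp

# props.conf -- run the routing transform against everything, filter by index inside it
[default]
TRANSFORMS-routeToNetwitness = route_audit_indexes

# transforms.conf -- match the four relevant indexes and tag them for the syslog output
[route_audit_indexes]
SOURCE_KEY = _MetaData:Index
REGEX = ^(audit_idx1|audit_idx2|audit_idx3|audit_idx4)$
DEST_KEY = _SYSLOG_ROUTING
FORMAT = netwitness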
Thanks!
↧
Windows Event Logs Analysis - parsing of the logs is not what it is expecting
Is anyone having trouble with the eventid add-on working with the Splunk_TA_windows add-on?
The Windows logs are being parsed into a nice, readable format, but eventid seems to be expecting something different from what is being parsed; I'm getting results that don't match what I believe eventid is looking for.
example:
On the EventSources dashboard, the Event Sources panel returns nothing for Error - All - * in the input fields. But if I do a manual search just based on Type (`event_sources` | stats count by Type), I get the following types:
Computer
OperatingSystem
Processor
Roles
Site
SiteLink
Subnet
This clearly doesn't seem to be what eventid is looking for. Any ideas on what could be happening?
↧
Is it possible to regex a sourcetype on a per file basis
One of our 3rd-party apps has some pretty unfriendly logging. The app carries out somewhere between 20 and 30 jobs, each of which has its own log. The issue is that all logs are written to one directory, and the log files themselves are named like this:
20200213.445933.log
The only way to distinguish between job log files is by a header within each log that includes a description. A further issue is that every line in the file is prefixed with a date and time, which results in Splunk splitting every line into a separate event even when the true event may be several lines long. For example:
[2020-02-13 15:00:34] #########################################################
[2020-02-13 15:00:34] # Log File Path: /data/logs/jobs/20200213.445933.log
[2020-02-13 15:00:34] # Creation Date: Thu Feb 13 15:00:34 GMT 2020
[2020-02-13 15:00:34] # Description: DQ:Import DQ CAR Files
[2020-02-13 15:00:34] # Parameters: --terminatetime 175000 -mapping 52000 -daemon yes -rb true
[2020-02-13 15:00:34] #########################################################
[2020-02-13 15:00:34] 'INIT' actions:
[2020-02-13 15:00:34] Collect Files
[2020-02-13 15:00:34] Collect Files Action
[2020-02-13 15:00:34] Connected: ftp://***********************
[2020-02-13 15:00:34] Filter: ^BT.*\.CAR
[2020-02-13 15:00:35] Files found: 0
[2020-02-13 15:00:35] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:00:35] Found no Batch files to import
[2020-02-13 15:00:35] No 'CLSE' actions
[2020-02-13 15:01:35] 'INIT' actions:
[2020-02-13 15:01:35] Collect Files
[2020-02-13 15:01:35] Collect Files Action
[2020-02-13 15:01:35] Connected: ftp://***********************
[2020-02-13 15:01:35] Filter: ^BT.*\.CAR
[2020-02-13 15:01:35] Files found: 0
[2020-02-13 15:01:35] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:01:35] Found no Batch files to import
[2020-02-13 15:01:35] No 'CLSE' actions
[2020-02-13 15:02:45] 'INIT' actions:
[2020-02-13 15:02:45] Collect Files
[2020-02-13 15:02:46] Collect Files Action
[2020-02-13 15:02:46] Connected: ftp://***********************
[2020-02-13 15:02:46] Filter: ^BT.*\.CAR
[2020-02-13 15:02:46] Files found: 0
[2020-02-13 15:02:46] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:02:46] Found no Batch files to import
[2020-02-13 15:02:46] No 'CLSE' actions
[2020-02-13 15:03:47] 'INIT' actions:
[2020-02-13 15:03:47] Collect Files
[2020-02-13 15:03:47] Collect Files Action
[2020-02-13 15:03:47] Connected: ftp://***********************
[2020-02-13 15:03:47] Filter: ^BT.*\.CAR
[2020-02-13 15:03:47] Files found: 0
[2020-02-13 15:03:47] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:03:47] Found no Batch files to import
[2020-02-13 15:03:47] No 'CLSE' actions
One event would actually look like this:
[2020-02-13 15:00:34] 'INIT' actions:
[2020-02-13 15:00:34] Collect Files
[2020-02-13 15:00:34] Collect Files Action
[2020-02-13 15:00:34] Connected: ftp://***********************
[2020-02-13 15:00:34] Filter: ^BT.*\.CAR
[2020-02-13 15:00:35] Files found: 0
[2020-02-13 15:00:35] Retrieving batches for mapping : DQ CAR Records
[2020-02-13 15:00:35] Found no Batch files to import
[2020-02-13 15:00:35] No 'CLSE' actions
Our 3rd-party developer has advised that this cannot be changed, so the only option is to work around it in Splunk somehow.
I was wondering if it is possible to regex out the description in each log and assign it as a sourcetype. Each sourcetype could then have its own event splitting rules. Is this possible?
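For illustration, this is the sort of props/transforms I've been sketching. The sourcetype name is a placeholder, and my understanding is that the sourcetype rewrite would only apply to the event that actually contains the Description header line, which is partly why I'm asking whether there is a better way:
# props.conf -- merge the per-line timestamps back into multi-line events
[job_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = \]\s+('INIT' actions:|#{10,})
MAX_EVENTS = 1000
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TRANSFORMS-set_st = set_sourcetype_from_description

# transforms.conf -- rewrite the sourcetype from the "# Description:" header line
[set_sourcetype_from_description]
REGEX = #\s+Description:\s+(.+)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::$1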
↧
↧
Newly created LDAP group not accepting created roles
We have a few users who need access to application logs. We have our Active Directory admins create a group, and once they create that group it shows up in Splunk for us to add a role to.
The latest group to be created shows up on the "Access controls » Authentication method » LDAP strategies » LDAP Groups" page, but when I try to add a role other than "user" it doesn't show as added in the UI, even though the message at the top of the screen says the role has been added.
The users can't search any of the logs that they should have access to through the new role created for the new LDAP group. What's odd is that /opt/splunk/etc/system/local/authentication.conf has the new role added to the new LDAP group.
Looking in splunkd.log, there is this message:
02-06-2020 10:58:07.296 -0500 WARN UserManagerPro - Strategy="Splunk": the group="SPL_DIGITAL" was not found on the LDAP server. Suggest to remove it from the role map to save server loading time.
Not sure what to do, or whether this is a problem with AD or with Splunk.
↧
Filtering out data (from a forwarder) on Indexer?
Hi, I have several universal forwarders deployed, and I'm getting lots of events I want to filter out.
I understand from reading answers here that I need to do this on the indexer (or else install heavy forwarders on my endpoints, which I don't want to do).
This is a raw entry that I'm trying to drop/filter out on my indexer (i.e. to keep it from using up lots of my license):
02/13/2020 10:19:09.016
event_status="(0)The operation completed successfully."
pid=1216
process_image="c:\Program Files\VMware\VMware Tools\vmtoolsd.exe"
registry_type="CreateKey"
key_path="HKLM\system\controlset001\services\tcpip\parameters"
data_type="REG_NONE"
data=""
This is the entry from inputs.conf on the forwarders that is sending some of the events I want to filter out:
[WinRegMon://default]
disabled = 0
hive = .*
proc = .*
type = rename|set|delete|create
And I have added these lines on my indexer (and restarted), but I'm still seeing the events come in:
# in props.conf (located in: C:\Program Files\Splunk\etc\users\admin\search\local\props.conf):
[WinRegMon://default]
TRANSFORMS-set= setnull
# in transforms.conf (located in: C:\Program Files\Splunk\etc\users\admin\search\local\transforms.conf):
[setnull]
REGEX = process_image=.+vmtoolsd.exe"
DEST_KEY = queue
FORMAT = nullQueue
Thanks!
(I've been referencing many answers, including this good one):
https://answers.splunk.com/answers/37423/how-to-configure-a-forwarder-to-filter-and-send-the-specific-events-i-want.html
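Here is what I think I should be trying next, based on the docs — moving the files into $SPLUNK_HOME/etc/system/local (or an app) on the indexer rather than under etc/users, and keying the props stanza off the source rather than the input name — but I'd like confirmation before I restart the indexer again. I'm assuming the events keep WinRegMon://default as their source:
# $SPLUNK_HOME/etc/system/local/props.conf on the indexer
[source::WinRegMon://default]
TRANSFORMS-set = setnull

# $SPLUNK_HOME/etc/system/local/transforms.conf on the indexer
[setnull]
REGEX = process_image=.+vmtoolsd\.exe"
DEST_KEY = queue
FORMAT = nullQueue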
↧
CSV report not showing data correctly
Hi, I have a daily scheduled report which is delivered to an SFTP server in CSV format. I am getting complaints that the data is not coming through properly. I investigated and suspect it may be because of the multivalued fields in the table, but I am not sure. In Splunk it looks like the attached screenshot, but in the CSV delivered to the server the deviceDescription column comes out looking very weird, like this: ![alt text][1]
app,"serviceName","2020-02-12 23:34:01","2020-02-12 23:34:01",34567,ANA,C,,51228586,"HD BOX (CISCO),,,,,,,,,,,,
TIVO 500GB BOX (CISCO),,,,,,,,,,,,,,,,,,,,,,
TIVO 1TB BOX (ARRIS),,,,,,,,,,,,,,,,,,,,,,
TIVO 1TB BOX (ARRIS)",456,Agent,,,,5678997,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
The table has 23 columns in total, but in the CSV there seem to be more than 23 commas per row.
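For what it's worth, I've been wondering whether flattening the multivalue field before the report writes the CSV would help — something along these lines, with deviceDescription being the suspect field:
... existing report search ...
| eval deviceDescription=mvjoin(deviceDescription, "; ")
or alternatively `| nomv deviceDescription`, but I'm not sure that's the real cause.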
Any help is appreciated.
[1]: /storage/temp/282634-csv-report-issue.jpg
↧
↧
splunk crashing on lookup command
We have simple csv lookup like:
network,descr
192.168.0.0/24,network_name
Lookup description in transforms.conf:
[networklist_allocs_all]
filename = networklist_allocs_all.csv
max_matches = 1
min_matches = 1
default_match = OK
match_type = CIDR(network)
Any search command like:
...
| lookup networklist_allocs_all network AS src_ip
...
too often crashes Splunk on the indexers. There are about 5-10 crash logs on each indexer per day.
Part of crashlog at the end of question.
How can we resolve this situation? Splunk seems to have begun crashing after the upgrade from version 7 to version 8. We didn't make any changes to the lookup format or definition.
Unfortunately we can't open a support case for various reasons, so we're asking the community for help.
crash-xx.log:
[build 6db836e2fb9e] 2020-02-13 17:00:56
Received fatal signal 11 (Segmentation fault).
Cause:
No memory mapped at address [0x0000000000000058].
Crashing thread: BatchSearch
Registers:
RIP: [0x0000564E2F44470F] _ZN14LookupMatchMap16mergeDestructiveERS_ + 31 (splunkd + 0x223670F)
RDI: [0x00007FC9C01FCA80]
RSI: [0x0000000000000000]
RBP: [0x0000000000000000]
RSP: [0x00007FC9C01FC9A0]
RAX: [0x00007FC9CDB3AE08]
RBX: [0x0000000000000000]
RCX: [0x0000000000000001]
RDX: [0x00007FC9BFB7F608]
R8: [0x00007FC9C01FC9C0]
R9: [0x00007FC9C01FC9BF]
R10: [0x0000000000000010]
R11: [0x0000000000000080]
R12: [0x00007FC9928E33C0]
R13: [0x00007FC9C01FCA80]
R14: [0xAAAAAAAAAAAAAAAB]
R15: [0x00007FC9BFB7F600]
EFL: [0x0000000000010246]
TRAPNO: [0x000000000000000E]
ERR: [0x0000000000000004]
CSGSFS: [0x0000000000000033]
OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x0000564E2F44470F] _ZN14LookupMatchMap16mergeDestructiveERS_ + 31 (splunkd + 0x223670F)
[0x0000564E2F444EE8] _ZN14UnpackedResult8finalizeEv + 168 (splunkd + 0x2236EE8)
[0x0000564E2F445E26] _ZN18LookupDataProvider6lookupERSt6vectorIP15SearchResultMemSaIS2_EERK17SearchResultsInfoR16LookupDefinitionPK22LookupProcessorOptions + 2118 (splunkd + 0x2237E26)
[0x0000564E2F451B7F] _ZN12LookupDriver13executeLookupEP29IFieldAwareLookupDataProviderP15SearchResultMemR17SearchResultsInfoPK22LookupProcessorOptions + 367 (splunkd + 0x2243B7F)
[0x0000564E2F451C22] _ZN18SingleLookupDriver7executeER18SearchResultsFilesR17SearchResultsInfoPK22LookupProcessorOptions + 98 (splunkd + 0x2243C22)
[0x0000564E2F43BDFF] _ZN15LookupProcessor7executeER18SearchResultsFilesR17SearchResultsInfo + 79 (splunkd + 0x222DDFF)
[0x0000564E2F08FC7D] _ZN15SearchProcessor16execute_dispatchER18SearchResultsFilesR17SearchResultsInfoRK3Str + 749 (splunkd + 0x1E81C7D)
[0x0000564E2F07F528] _ZN14SearchPipeline7executeER18SearchResultsFilesR17SearchResultsInfo + 344 (splunkd + 0x1E71528)
[0x0000564E2F1931D9] _ZN16MapPhaseExecutor15executePipelineER18SearchResultsFilesb + 153 (splunkd + 0x1F851D9)
[0x0000564E2F193901] _ZN25BatchSearchExecutorThread13executeSearchEv + 385 (splunkd + 0x1F85901)
[0x0000564E2DEDCB8F] _ZN20SearchExecutorThread4mainEv + 47 (splunkd + 0xCCEB8F)
[0x0000564E2EC41AC8] _ZN6Thread8callMainEPv + 120 (splunkd + 0x1A33AC8)
[0x00007FC9CC7B76BA] ? (libpthread.so.0 + 0x76BA)
[0x00007FC9CC4EC41D] clone + 109 (libc.so.6 + 0x10741D)
Linux / c-6.index.splunk / 4.4.0-21-generic / #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 / x86_64
/etc/debian_version: stretch/sid
...
Last errno: 0
Threads running: 13
Runtime: 436566.363455s
argv: [splunkd -p 8089 start]
Process renamed: [splunkd pid=8371] splunkd -p 8089 start [process-runner]
Process renamed: [splunkd pid=8371] search --id=remote_d-0.search.splunk_scheduler__gots_eWFuZGV4LWFsZXJ0cw__RMD5d361bb2fcae608a1_at_1581602460_31361_374F9CE8-43DB-48C4-9F22-7982EC4B6AD5 --maxbuckets=0 --ttl=60 --maxout=0 --maxtime=0 --lookups=1 --streaming --s
idtype=normal --outCsv=true --acceptSrsLevel=1 --user=gots --pro --roles=admin:power:user
Regex JIT enabled
RE2 regex engine enabled
using CLOCK_MONOTONIC
Preforked process=0/59436: process_runtime_msec=54987, search=0/188869, search_runtime_msec=1240, new_user=Y, export_search=Y, args_size=356, completed_searches=3, user_changes=2, cache_rotations=3
Thread: "BatchSearch", did_join=0, ready_to_run=Y, main_thread=N
First 8 bytes of Thread token @0x7fc9c70845e8:
00000000 00 e7 1f c0 c9 7f 00 00 |........|
00000008
SearchExecutor Thread ID: 1
Search Result Work Unit Queue: 0x7fc9b77ff000
Search Result Work Unit Queue Crash Reporting
Type: NON_REDISTRIBUTE
Number of Active Pipelines: 2
Max Count of Results: 0
Current Results Count: 352
Queue Current Size: 0
Queue Max Size: 200, Queue Drain Size: 120
FoundLast=N
Terminate=N
Total Bucket Finished: 1.0012863152522, Total Bucket Count: 16
===============Search Processor Information===============
Search Processor: "lookup"
type="SP_STREAM"
search_string=" networklist_allocs_all network AS dest_ip "
normalized_search_string="networklist_allocs_all network AS dest_ip"
litsearch="networklist_allocs_all network AS dest_ip "
raw_search_string_set=1 raw_search_string=" networklist_allocs_all network AS dest_ip "
args={StringArg: {string="networklist_allocs_all" raw="networklist_allocs_all" quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="network" raw="network" quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="AS" raw="AS"
quoted=0 parsed=1 isopt=0 optname="" optval=""},StringArg: {string="dest_ip" raw="dest_ip" quoted=0 parsed=1 isopt=0 optname="" optval=""}}
input_count=885
output_count=0
directive_args=
==========================================================
↧
Splunk Platform Upgrade Readiness App does not show up under apps even after installing
Hi All ,
I am trying to install the app on a search head, but even after installing it (Manage Apps -> Install app from file -> upload) and restarting the search head, the app is not appearing. I have even checked the apps directory on the server and can't find it. Could you please suggest what might be going wrong in this case?
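For reference, this is the check I've been running from the search bar to see whether the app registered at all (I believe the local apps REST endpoint lists everything installed on that search head):
| rest /services/apps/local splunk_server=local
| fields label version disabled
| search label="*Upgrade*"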
↧
Guidance needed on how to display current waiting time by shift
I am really struggling with how to frame the question.
In essence I need to display the duration trucks spend waiting in a car park, and display the average waiting time. But this must further be broken down by shift.
So Early is 6am to 2pm, Late is 2pm to 10pm, and Nights are 10pm to 6am.
So I have used this code to determine what the current shift is, based on the hour of the day:
|eval iHour=strftime(strptime(TIMESTAMP,"%Y-%m-%d %H:%M:%S"),"%H")
|eval iDay=strftime(strptime(TIMESTAMP,"%Y-%m-%d %H:%M:%S"),"%Y-%m-%d")
|eval iDay=round(strptime(iDay,"%Y-%m-%d"),0)
|eval iDay=if(iHour>=22 AND iHour <24,iDay+86400,iDay)
|eval shift=if(iHour >= 6 AND iHour < 14,"Early",if(iHour >= 14 AND iHour < 22,"Late","Night"))
And this for working out average queue times, but over a week:
|dedup MANIFESTID
|search STATE=6 AND LOADTYPE="L"
|eval iTrkConfirmed=strptime(TIMEPARK,"%Y-%m-%d %H:%M:%S")
|eval iTrkCallForward=strptime(TIMEDPLY,"%Y-%m-%d %H:%M:%S")
|eval iTrkQueueTime = round((iTrkCallForward - iTrkConfirmed)/3600,2)
|timechart span=1d avg(iTrkQueueTime) as Avg_QueueTime
|timewrap 1w
| foreach * [eval <<FIELD>>=round('<<FIELD>>',2)]
Both are from different searches, but I just cannot for the life of me work out how to take the salient pieces from each to display the average wait time by shift.
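This is the combined shape I've been trying to get to — folding the shift calculation into the queue-time search and swapping the timechart for a stats by shift. I'm assuming the shift should be based on the park time (TIMEPARK), which may not be right:
|dedup MANIFESTID
|search STATE=6 AND LOADTYPE="L"
|eval iTrkConfirmed=strptime(TIMEPARK,"%Y-%m-%d %H:%M:%S")
|eval iTrkCallForward=strptime(TIMEDPLY,"%Y-%m-%d %H:%M:%S")
|eval iTrkQueueTime=round((iTrkCallForward - iTrkConfirmed)/3600,2)
|eval iHour=tonumber(strftime(iTrkConfirmed,"%H"))
|eval shift=if(iHour >= 6 AND iHour < 14,"Early",if(iHour >= 14 AND iHour < 22,"Late","Night"))
|stats avg(iTrkQueueTime) as Avg_QueueTime by shift
|eval Avg_QueueTime=round(Avg_QueueTime,2)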
Any help or pointers would be greatly appreciated..
Thank you
↧
Service level agreement on data loss
For Splunk as a product, what percentage does Splunk assure against data loss?
Is there anything like 99% or 99.99%?
Any document for reference would be helpful.
↧
↧
Sum multiple individual columns into flat row
I have a search, based on a lookup, that is pulling names and totals over the course of a 24-hour period or a week. How can I sum each column without having to sum every field individually?
`cdr_events` duration>0
( (callingPartyGroup="00581" OR originalCalledPartyGroup="00581" OR finalCalledPartyGroup="00581") )
| `calculate_all_internal_parties`
| lookup groups number as number output name group subgroup
| search ( group="00581" )
| timechart dc(callId) by name
I could get it by running | stats sum("Tony Freeman") as "Tony Freeman" sum("Andrea Cook") as "Andrea Cook" etc., but is there an easier way to do that?
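For reference, this is the kind of shortcut I was hoping exists — if I've understood addtotals/addcoltotals correctly, something like:
... | timechart dc(callId) by name
| addcoltotals labelfield=_time label=Total
which I believe appends a row with the sum of each name's column, or `| addtotals fieldname=Total` if what I actually want is a per-row total across all the names.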
↧
How to get data from an external source machine
What would be a way to get data from an external machine which is not part of our environment? Correct me if I am wrong: I was assuming we would install a UF on the external machine, create an HTTP token on a HF in our environment, and give out the token, URL, and port details to get the data from the external machine.
Is this the right way to get data from an external machine, through an HTTP token? The data is in a custom path and is in CSV format.
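For reference, this is the rough shape of what I had in mind for the UF route (paths, index, and hostnames are placeholders, and I realise the HEC-token idea may not apply if a UF is involved, since a UF normally sends to a receiving port such as 9997 rather than to HEC):
# inputs.conf on the external machine's UF -- watch the custom CSV path
[monitor:///some/custom/path/*.csv]
sourcetype = csv
index = external_data
disabled = 0

# outputs.conf on the external machine's UF -- send to the HF in our environment
[tcpout:primary]
server = hf.example.com:9997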
Thanks in Advance
↧
what is the best way to forward k8s cluster logs/status etc to indexers?
Indexers + SH setup on prem.
What is the best way for Splunk to monitor a k8s cluster deployed as a one-box setup / 3-node setup (HA) / 6-node setup (HA DR)?
Thanks in advance!
↧
Heavy Forwarders stopped receiving some logs
Hi,
I have a new HF that accepted logs for about a week, then stopped receiving almost all logs at the same time.
I compared this HF with the old working one and I don't see rotated logs created on the new HF.
For instance, in log1 directory, I see log1.log and several other copies like log1.log-date1.gz and log1.log-date2.gz and so on, but on the new HF I only see log1.log.
I think the rotated logs not being created on the HF could be the issue, but I'm not sure, nor do I know how to get these rotated logs created.
If anyone can help, I'd appreciate it.
Thanks,
↧
↧
Is the AMD Rome EPYC architecture a valid option now?
I've been poking around the interwebs trying to figure out if there is a benefit/downside to going with the new AMD Rome EPYC architecture for our Splunk servers.
I don't find anything specific. I "think" we would be good to go, but I was hoping for a more definitive answer.
Cheers.
Rich Hickey
↧
Not Like function !Like
I am trying to search for a server which is named differently than all the others in our network. Commonly, servers are named with a location followed by 4 digits and then some string at the end (e.g. Flra2209php_ua).
If one of the machines is not following this naming convention, how do I search for it? I was hoping there would be a "not like" function which might help with this?
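For example, assuming the server name is in the host field, I was imagining something like this, where match() takes a regex for the normal naming convention and NOT inverts it:
index=your_index
| where NOT match(host, "^[A-Za-z]{4}\d{4}.*$")
but I don't know if that's the idiomatic way to do it.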
↧
Can the Subscription-based inputs use a list of subscriptions rather than one input per subscription
Azure Security Center Alerts and Tasks,
Azure Resource Groups,
Azure Virtual Networks,
Azure Compute,
Azure Billing Consumption,
Azure Reservation Recommendation,
and others
all require a subscription ID to be specified. What I would like is the option to specify "all subscriptions", and/or a list of subscriptions defined in the configuration options. Is that something that can be achieved? If I picked "all subscriptions", the code would get the list of subscriptions first and then use the API for each one.
↧