Hello,
I've noticed that the `addcoltotals` command doesn't display decimals when a total contains a decimal. Run-anywhere code:
| makeresults
| eval decimal = 1.5
| eval whole = 1.5
| append [ | makeresults
| eval decimal = 1
| eval whole = 1.5]
| addcoltotals
I'm using Splunk 6.4.1 and my results are:
decimal,whole
1.5,1.5
1,1.5
2,3.0 (the total for the decimal field should be 2.5, but the fractional part is dropped)
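As a possible workaround (a sketch, not verified on 6.4.1): computing the totals row yourself with `appendpipe` and `stats sum` keeps the decimal precision instead of relying on addcoltotals' rendering:

```
| makeresults
| eval decimal = 1.5
| eval whole = 1.5
| append [ | makeresults
| eval decimal = 1
| eval whole = 1.5]
| appendpipe [ stats sum(decimal) as decimal, sum(whole) as whole ]
```

Here the final row is produced by `stats`, which returns 2.5 and 3.0 as ordinary numeric results.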
Is there something I'm missing?
Regards,
Andrew
↧
Why does addcoltotals not display decimal totals?
↧
BackBase monitoring
Can anyone share details on how to monitor BackBase (https://backbase.com/) with Splunk? What are you doing today and are there any limitations with what the platform gives you access to monitor?
↧
↧
Execute Splunk CLI command
Hi All,
I would like to execute a CLI command (to start/stop splunkd) using a button (onclick) in an XML/HTML Splunk dashboard.
How can I do it?
Many thanks for the support.
Best regards
Antonio
↧
Lookup File Editor: Is there a way to remove the "remove column" and "remove row" options?
Hi Everyone, @LukeMurphey,
Is there a way to remove the "remove column" and "remove row" options, or restrict them to admins only so that they are not available to regular users? Any help will be appreciated.
Thanks!
↧
Why does the data model show the name of a lookup definition in its field values?
Hello, over the past few weeks we have run into strange data model behavior; it is somehow connected to geofencing. We named our lookup definition ld_geoContEurope and use its results in a data model. But somehow the name "ld_geoContEurope" appears among the field values, so we get values like "outOfEurope", "inEurope", and "ld_geoContEurope". This "ld_geoContEurope" also appears in other fields of the data model.
But it only appears when we use `tstats` with summariesonly=t and try to show the respective fields when they are not defined in the raw events. For example, `| tstats summariesonly=t count by datamodel.speed` shows values like
`datamodel.speed count
20 3
30 5
ld_geoContEurope 2`
As we can see, 2 events do not have the speed attribute defined, since it is optional in the event.
When we use `| from datamodel | stats count by speed`, it shows only
`speed count
20 3
30 5`
as the events have only those values defined.
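One way to narrow this down (a hedged sketch; `YourModel` is a placeholder for the actual data model name, and the `datamodel.speed` field path follows the example above): run the same split-by against the raw events instead of the acceleration summaries. If the stray "ld_geoContEurope" value disappears with summariesonly=f, it is coming from the built summaries, and rebuilding the data model acceleration may be worth trying:

```
| tstats summariesonly=f count from datamodel=YourModel by datamodel.speed
```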
Splunk version 6.5.8
Can someone help?
Thanks for any advice.
↧
↧
How do we parse logs in Splunk the way we parsed them in ELK (format below)?
Hello,
We previously integrated ELK with our application (a DNS firewall) for forensics. Now we want to replace it with Splunk, but we don't know how to parse the logs into the same form we had in ELK. Alternatively, can we forward the already-parsed logs from ELK to Splunk?
We parsed the logs in the following format:
Time _source
September 26th 2018, 19:04:56.097 log_category:info dnsfw_method:QNAME @timestamp:September 26th 2018, 19:04:56.097 rpz:rpz timestamp:26-Sep-2018 19:04:55.243 path:/var/lib/bind/rpz.log clientipaddr:172.16.6.69 quried_domain:ssp.adriver.ru client:client qdomain:ssp.adriver.ru method:PASSTHRU rewritten:ssp.adriver.ru.whitelist.allow src_port:64707 @version:1 rewrite:rewrite tags:_grokparsefailure message:26-Sep-2018 19:04:55.243 rpz: info: client 172.16.6.69#64707 (ssp.adriver.ru): rpz QNAME PASSTHRU rewrite ssp.adriver.ru via ssp.adriver.ru.whitelist.allow via:via host:dnsfw01 rpz2:rpz _id:PwMWFmYB8oYdXOCf-GXG _type:doc _index:logstash-rpzlog-20
JSON
@timestamp September 26th 2018, 19:04:56.097
@version 1
_id PwMWFmYB8oYdXOCf-GXG
_index logstash-rpzlog-2018.09.26
_score -
_type doc
client client
clientipaddr 172.16.6.69
dnsfw_method QNAME
host dnsfw01
log_category info
message 26-Sep-2018 19:04:55.243 rpz: info: client 172.16.6.69#64707 (ssp.adriver.ru): rpz QNAME PASSTHRU rewrite ssp.adriver.ru via ssp.adriver.ru.whitelist.allow
method PASSTHRU
path /var/lib/bind/rpz.log
qdomain ssp.adriver.ru
quried_domain ssp.adriver.ru
rewrite rewrite
rewritten ssp.adriver.ru.whitelist.allow
rpz rpz
rpz2 rpz
src_port 64707
tags _grokparsefailure
timestamp 26-Sep-2018 19:04:55.243
via via
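On the second question (forwarding already-parsed logs from ELK to Splunk): one common approach is to have Logstash send its parsed events to a Splunk HTTP Event Collector (HEC) endpoint via the Logstash http output plugin. A minimal sketch, assuming HEC is enabled on the Splunk side; the hostname, port, and token below are placeholders:

```
output {
  http {
    # placeholder host/port; HEC listens on 8088 by default
    url => "https://splunk.example.com:8088/services/collector/event"
    http_method => "post"
    format => "json"
    # wrap the parsed message in the HEC "event" envelope
    mapping => { "event" => "%{message}" }
    # placeholder token, generated in Splunk under Data Inputs > HTTP Event Collector
    headers => { "Authorization" => "Splunk <your-hec-token>" }
  }
}
```

With this in place, the fields Logstash already extracted can be kept by mapping them into the event payload instead of only `%{message}`.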
↧
Have you ever made a Splunk-Slack integration?
I'm currently indexing events from a Slack team, pulling data from different channels... but not all channels. I checked whether the channels I want to index are private on Slack, but they are not private. I am indexing from 375 channels, but not from the one that I want. I guess this is a Slack restriction... has anyone here run into this case?
↧
↧
Search Factory: Unknown search command error
I have created a custom generating command on the search head, and I want to execute it on the search head only; I don't want the command to be sent to the indexers. That is why I have set distributed = False and local = True in commands.conf, as below.
[generatepaths]
distributed = False
chunked = true
local = True
enableheader = true
outputheader = true
requires_srinfo = true
supports_getinfo = true
supports_multivalues = true
supports_rawargs = true
filename = system_python.path
command.arg.1 = sankey.py
Sometimes you have to set the same parameters in multiple places, so I have also configured the following in my Python script to force the command to execute locally:
@Configuration(local=True)
Still no luck. I get a "Search Factory: Unknown search command 'generatepaths'" error from every indexer. What should I do to execute the custom command locally on the search head? Is there some hidden, undocumented setting I have to look for, or is this simply a bug?
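For comparison, a minimal stanza for a chunked (search command protocol v2) command is usually much shorter, since the v1-era options (enableheader, outputheader, supports_getinfo, and so on) are ignored when chunked = true, and `filename` points at the script itself rather than at a launcher with command.arg.1. A hedged sketch, assuming the script lives in the app's bin directory:

```
[generatepaths]
chunked = true
filename = sankey.py
distributed = false
```

Whether this resolves the "sent to indexers" behavior depends on the protocol version the command actually runs under, so it is worth testing which of the two option styles your stanza is being read as.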
↧
↧
After cloning and renaming a dashboard, is there a way to get the original dashboard back?
How do you change the URL of a Splunk dashboard?
I originally had a dashboard named "System". I cloned it and named the new dashboard "System." (note the period at the end). Now, on the dashboard page, both dashboards have the same URL "10.1.1.1/en-US/app/search/system". Clicking on either one brings me to the cloned dashboard.
Is there a way to get the original dashboard back?
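One way to check whether the original view still exists under a different internal name is to list all saved views over REST (a sketch; the endpoint is standard, but you need permission to read it):

```
| rest /servicesNS/-/-/data/ui/views
| search label="System*"
| table title label eai:acl.app updated
```

Here `title` is the internal view name used in the URL, while `label` is the display name, so two dashboards whose labels differ only by punctuation may have collided on the same `title`.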
↧
↧
Can you help me with the following search using a lookup?
Hi,
We are frequently required to validate that data is being received by Splunk from multiple servers. The lists of IPs/hosts can be quite long. I am trying to come up with a search that will make this easier, like putting the entries into lookup files and then running a search against the entries in the lookups. So far, I have a lookup with a hostname, IP, and potentially, a wildcard for that host (sometimes the hosts are fully qualified and sometimes they are not). The IPs are reported as hosts, not as a separate "ip" field.
By using this search, I can retrieve data for hosts:
index=* [|inputlookup testSVB2.csv|table host ]
Is there any way to expand this so it will run a search matching hosts OR IPs OR wildcards? When I table out host or IP, it seems to be running an "AND" rather than an "OR".
Finally, is there any way to limit the number of events returned per host?
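A sketch of one way to get OR semantics (assuming the lookup has columns named host, ip, and wildcard, per the description above): merge the columns into a single multivalue host field, expand it, and return only host, so each lookup row contributes one OR'd `host=` term. Wildcard values such as web* still pattern-match. For the per-host cap, `dedup` with a count keeps at most N events per host:

```
index=* [| inputlookup testSVB2.csv
          | eval host=mvappend(host, ip, wildcard)
          | mvexpand host
          | fields host
          | format ]
| dedup 5 host
```

The 5 in `dedup 5 host` is an example limit; adjust it to however many events per host you want back.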
↧
How do I fetch data from an existing field?
I have a field in my log that contains a large block of text in two different formats. I tried to capture a few parts of it in a new field but was unable to get all the data.
First type
------------------------------------
Timestamp=26/SEP/2018 16:37:38 UTC|DBA_GROUP=X2Oracle_NSS|TOWER=NSS|DB_INSTANCE_NAME=lsrprod|DB_HOST_NAME=lsrdbp1|UAID=0|TABLE_OWNER=STAGE|TABLE_NAME=NFMDAT|PARTITION_POSITION=6|PARTITION=P2018|HIGH_VALUE=TTO_DATE(' 2019-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')|PREV_HIGH_VALUE=TTO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
Second type
-------------------------------------------------
Timestamp=26/SEP/2018 16:01:06 UTC|DBA_GROUP=X2Oracle_GFS|TOWER=GFS|DB_INSTANCE_NAME=ecs02prd|DB_HOST_NAME=ecsdbp3|UAID=UDBID-15360|TABLE_OWNER=ECSREFRESH_EXCEPTION|TABLE_NAME=ECS_TRAN_AUDIT_HSTR_BKP|PARTITION_POSITION=27|PARTITION=ECSTRANAUDTHSTR_20170430|HIGH_VALUE=TIMESTAMP' 2017-05-01 00:00:00'|PREV_HIGH_VALUE=TIMESTAMP' 2017-04-01 00:00:00'
partition_check_En_Time=12:01:07 PM
End_Time: Wed Sep 26 12:01:07 EDT 2018
I used the query below to derive the new fields from the above log.
base search | eval Current_High_Value=substr(HIGH_VALUE, 11, 20) | eval Previous_High_Value=substr(PREV_HIGH_VALUE, 11, 20)
I am getting the value properly for the Current_High_Value field, but not complete data in Previous_High_Value. It's not picking up data for the second type of log.
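Since both formats embed the date as YYYY-MM-DD HH:MM:SS, a regex extraction may be more robust than fixed substr offsets, which break when the prefix length changes between TO_DATE(...) and TIMESTAMP' ...'. A sketch:

```
base search
| rex field=HIGH_VALUE "(?<Current_High_Value>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| rex field=PREV_HIGH_VALUE "(?<Previous_High_Value>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
```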
↧
What is the best way to count events and calculate the disk space these events use?
So, the first part of this is really easy.
index=active_dir
| stats count by EventCode
This will give me a list of all the event codes and the number of times they appear. What I also need to do is report on the total drive space those events are taking up. This is where I am stuck. Anyone have any ideas?
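One common approximation (hedged: len(_raw) measures the raw event text in characters, which is close to raw bytes but not the compressed, indexed on-disk footprint) is:

```
index=active_dir
| eval raw_bytes=len(_raw)
| stats count sum(raw_bytes) as total_bytes by EventCode
| eval total_MB=round(total_bytes/1024/1024, 2)
```

If you need actual on-disk usage per index rather than per EventCode, that generally comes from introspection data (e.g. `| dbinspect`) instead of the events themselves.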
↧