Channel: Questions in topic: "splunk-enterprise"

How to sort the contents of a list

I'm currently querying `source="log" | stats list by Id`, which gives me nicely grouped data. However, I would like the contents of those groups sorted by `Timestamp`. That is to say, I do not want the groups themselves sorted, but the records inside each group.
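Since `stats list()` returns values in the order the events arrive, one approach is to sort by `Timestamp` before the `stats` call. A minimal sketch, assuming `Timestamp` is an extracted field and using a hypothetical `message` field to stand in for whatever is being listed:

```
source="log"
| sort 0 Timestamp
| stats list(Timestamp) as Timestamp list(message) as message by Id
```

`sort 0` lifts the default 10,000-result sort limit, so large groups are not truncated before the `stats`.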

How to count the results of a rex that returns multiple matches as a single group of matches?

I have results from a rex statement that look something like the first set of results below. The rex returns multiple matches per row, so `namespace` is a multivalue field. I am trying to use *stats* to count each group of matches as a single unit (see ***Desired***). However, my *stats* statement currently sees each match as a separate group (see ***Not Desired***). Is there a way to return the ***Desired*** result?

***Multi-match rex results***

```
namespace
--------------------------------
System.ServiceModel.Channels
System.ServiceModel.Dispatcher
--------------------------------
System.ServiceModel.Channels
System.ServiceModel.Dispatcher
--------------------------------
```

***Statement***

```
... | stats count by namespace
```

***Desired***

```
namespace                        count
--------------------------------------
System.ServiceModel.Channels     2
System.ServiceModel.Dispatcher
--------------------------------------
```

***Not Desired***

```
namespace                        count
--------------------------------------
System.ServiceModel.Channels     1
System.ServiceModel.Dispatcher   1
--------------------------------------
```
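One common workaround is to collapse the multivalue field into a single string before counting, so that each combination of matches is treated as one group. A hedged sketch, assuming `namespace` is the multivalue field produced by the rex:

```
... | eval namespace_group=mvjoin(mvsort(namespace), " ")
| stats count by namespace_group
```

`mvsort` makes the grouping insensitive to match order, and `mvjoin` turns the set of matches into a single key; `nomv namespace` would achieve a similar collapse.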

Does Splunk have an option for reading data via an HTTP GET request?

I've been looking for a way to import the contents of an HTTP GET request into Splunk, without success. At first, I thought I could do this using the REST API input built into Splunk, but after I gave it a URL for the HTTP GET request, my search returned no events. I thought that was all I had to do to get the page's content into Splunk. The documentation for this input is very confusing and I don't know where to start. At this point, I don't know if the REST API input is the answer to my question. Does anyone know of a way to pull in content with an HTTP GET request in Splunk?
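For comparison, a scripted input will poll any URL on a schedule and index whatever the script writes to stdout. A minimal sketch, assuming a hypothetical app `myapp`, script name, and endpoint:

```
# $SPLUNK_HOME/etc/apps/myapp/local/inputs.conf
[script://./bin/poll_url.sh]
interval = 300
sourcetype = http_get_data
index = main
disabled = false
```

```
#!/bin/sh
# bin/poll_url.sh: fetch the page; Splunk indexes whatever goes to stdout
curl -s "https://example.com/data.json"
```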

Why am I getting the following error after updating from 6.6.0 to 6.6.3: Invalid key in stanza [auditTrail] in /opt/splunk/etc/system/local/audit.conf

I'm getting this error:

```
Invalid key in stanza [auditTrail] in /opt/splunk/etc/system/local/audit.conf
```

Looking at audit.conf.spec, that key is no longer mentioned; in earlier versions it was. I couldn't find anything in the release notes about this.
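To pinpoint which setting the parser is rejecting, btool (a standard Splunk CLI utility) can dump the merged audit configuration along with the file each setting comes from:

```
$SPLUNK_HOME/bin/splunk btool audit list --debug
```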

Can 1 master node be used to manage a 50-indexer cluster?

Can 1 master node be used to manage a 50-indexer cluster? The Splunk docs specify 30 indexers per master node. Would having 2 cluster master nodes imply 2 separate clusters? What is the best way to manage a 50-indexer cluster? Thanks.

Brute Force Access Behavior Detected Tuning

We're seeing lots of "Brute Force Access Behavior Detected" notable events coming from Microsoft domain controllers. The correlation search triggers when the count of successful authentications is > 0 and failures_by_src_count_1h is above medium. The sources are domain controllers, which handle authentication requests from thousands of users. Any recommendations on safely tuning this correlation search?
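One hedged option, rather than raising the global threshold: suppress or re-threshold known authentication infrastructure via a lookup. A sketch appended to the existing correlation search, assuming a hypothetical `dc_whitelist.csv` containing a `src` column of domain controller addresses:

```
... | search NOT [| inputlookup dc_whitelist.csv | fields src]
```

A per-host threshold column in the same lookup, joined in and compared against failures_by_src_count_1h, is a gentler alternative to excluding those hosts entirely.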

How to show different panels based on user input from a textbox

Hi. I have a dashboard with a textbox that lets users search for a specific host or IP; it is set to "*" by default. Due to the limit on the maximum results returned from a subsearch, I am unable to get all the results with the default value. I want to create two different panels (panel A and panel B) with different search queries: when the user inputs only "*" in the textbox, panel A is shown and panel B is hidden; otherwise, panel B is shown and panel A is hidden. Does anyone have any suggestions on how to handle this? Thanks.
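In Simple XML, one commonly suggested way is to set tokens in the input's `<change>` handler and gate each panel with `depends`. A hedged sketch (the `match` attribute on `<condition>` requires a reasonably recent Splunk version; the panel searches are placeholders):

```
<input type="text" token="host_tok">
  <default>*</default>
  <change>
    <condition match="$value$ == &quot;*&quot;">
      <set token="show_a">true</set>
      <unset token="show_b"></unset>
    </condition>
    <condition>
      <set token="show_b">true</set>
      <unset token="show_a"></unset>
    </condition>
  </change>
</input>
...
<panel depends="$show_a$"><!-- panel A: query without the subsearch limit --></panel>
<panel depends="$show_b$"><!-- panel B: host/IP-specific query --></panel>
```

The second `<condition>` has no match expression, so it acts as the fallback for any non-"*" input.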

How to compare field values from this year vs last year by date and calculate the percentage change?

Hi, I have data in 2 fields in a table: one is a date and the other is some value, for each year respectively. I want to compare date_1 from 2015 vs date_1 from 2016 and then perform some evals on the data. For example:

```
01-01-2015 1234567
02-01-2015 1234578
01-01-2016 1234563
02-01-2016 1234577
```

Now I want to compare 01-01-2015 with 01-01-2016, see whether the value is equal, greater, or less, and compute the percentage change. Please advise: I have pulled all the dates using `stats values by date` and appended, in a similar fashion, the rows for each year by adding a where clause on the date range.
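One pattern is to split the date into a day-within-year key and a year column, pivot, and compute the change with `eval`. A hedged sketch, assuming fields named `date` (formatted as above) and `value`:

```
... | eval md=substr(date, 1, 5), year=substr(date, 7, 4)
| chart latest(value) over md by year
| eval pct_change=round((('2016' - '2015') / '2015') * 100, 2)
```

The single quotes around '2016' and '2015' are required because the pivoted column names are numeric field names.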

Help with writing a join command that joins a security breach to the previous login

Here is the requirement: I need to join two events based on a common field, "User". Events with EventType "Security Breach" should be joined with EventType "Login". The condition: User1, who has a "Security Breach" at 10:55 AM, should be joined to the login at 10:54 AM, not to the login at 10:57 AM or the login at 10:49 AM. Similarly, User1's "Security Breach" event at 10:50 AM should be joined to the Login event at 10:49 AM, not the one at 10:54 AM. Hope this clarifies.

```
_time     User   EventType
10:55 AM  User1  Security Breach
10:53 AM  User2  Security Breach
10:50 AM  User1  Security Breach
10:48 AM  User1  Security Breach
```

```
_time     User   EventType
10:57 AM  User1  Login
10:55 AM  User2  Login
10:54 AM  User1  Login
10:53 AM  User2  Login
10:49 AM  User1  Login
```

Can anyone help me write a query for this? I tried using join with the earlier=true option, but that doesn't give me the right result.
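An alternative to `join` is `streamstats`, which can carry the most recent login time forward for each user. A hedged sketch, assuming both event sets are searchable together and the field names above:

```
(EventType="Security Breach" OR EventType="Login")
| eval login_time=if(EventType="Login", _time, null())
| sort 0 User _time
| streamstats last(login_time) as prev_login_time by User
| where EventType="Security Breach"
| eval prev_login=strftime(prev_login_time, "%I:%M %p")
| table _time User EventType prev_login
```

Because stats functions ignore null values, `last(login_time)` always holds the latest Login seen so far for that user, which is exactly the login immediately preceding each breach.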

Help with installing two universal forwarders on the same Windows box - service shutting down on second install

I need to install 2 separate universal forwarders on the same Windows box. I have the installs built, one via MSI and the other via a scripted process. On one install the service shuts down. I connected both services to 1 deployment server and that seemed fine; when I change the deployment client to point to the other deployment server, the service also shuts down. Here is the log, where you can see it removing the app and then splunkd restarting:

```
09-20-2017 12:29:54.412 -0400 INFO DeployedApplication - Removing app=Splunk_TA_windows at='C:\program files\splunk-PI\etc\apps\Splunk_TA_windows'
09-20-2017 12:29:54.537 -0400 WARN BundlesUtil - C:\program files\splunk-PI\etc\apps\SplunkUniversalForwarder\metadata\local.meta already exists but with different casing: C:\Program Files\splunk-PI\etc\apps\SplunkUniversalForwarder\metadata\local.meta
09-20-2017 12:29:54.537 -0400 WARN BundlesUtil - C:\program files\splunk-PI\etc\system\metadata\local.meta already exists but with different casing: C:\Program Files\splunk-PI\etc\system\metadata\local.meta
09-20-2017 12:29:54.552 -0400 WARN BundlesUtil - C:\program files\splunk-PI\etc\apps\learned\metadata\local.meta already exists but with different casing: C:\Program Files\splunk-PI\etc\apps\learned\metadata\local.meta
09-20-2017 12:29:54.552 -0400 WARN DC:DeploymentClient - Restarting Splunkd...
09-20-2017 12:29:54.552 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.210.147.150_8090_L74B00-PC0ETLVM.prod.travp.net_L74B00-PC0ETLVM_22D4D347-CE64-48BC-A1F0-352E78032799
09-20-2017 12:29:55.956 -0400 INFO PipelineComponent - Performing early shutdown tasks
09-20-2017 12:29:55.956 -0400 INFO loader - Shutdown HTTPDispatchThread
09-20-2017 12:29:55.956 -0400 INFO ShutdownHandler - Shutting down splunkd
09-20-2017 12:29:55.956 -0400 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
09-20-2017 12:29:55.956 -0400 INFO ShutdownHandler - shutting down level "ShutdownLevel_FileIntegrityChecker"
```

Help extracting information from JSON file

JSON format:

```
{
  "device": "A123",
  "data": "28745637",
  "time": "1505924687"
}
```

Within `data`, "2874" = 28.74 means temperature, and "5637" = 56.37% means humidity. How do I display something like the below?

```
if (temperature > 25 && humidity > 50) {
    display matching data;
}
```
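A hedged sketch of one way to do this in SPL, assuming the events are indexed as the JSON above so that `spath` yields `device`, `data`, and `time`; the sourcetype name is a placeholder:

```
sourcetype=my_json_data
| spath
| eval temperature=tonumber(substr(data, 1, 4)) / 100
| eval humidity=tonumber(substr(data, 5, 4)) / 100
| where temperature > 25 AND humidity > 50
| table device temperature humidity time
```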

How do I change the label of the x-axis on a chart?

(screenshot of the chart omitted)

```
index="all_eqt" Plant=15 ProcessCode=T DefectCode="*" MachineNumber<26
| stats sum(TotalSquareYards) as "Total Square Yards" by DefectCode
```

How do I change the x-axis "TA" label to display "styles" instead?
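If the goal is to retitle the axis, renaming the split-by field before charting is one option; a hedged sketch:

```
index="all_eqt" Plant=15 ProcessCode=T DefectCode="*" MachineNumber<26
| stats sum(TotalSquareYards) as "Total Square Yards" by DefectCode
| rename DefectCode as styles
```

Alternatively, the Simple XML chart option `charting.axisTitleX.text` sets the x-axis title without changing the search.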

Creating a correlation search using "guided mode" -- error -- type object 'DataModels' has no attribute 'build_id'

When attempting to create a correlation search using "guided mode", I get the following error and am unable to continue creating the search:

```
type object 'DataModels' has no attribute 'build_id'
```

Any ideas as to why?

Need help with disk alerts

Hi, I am using the following (default) query for the near-critical disk alert on the indexer nodes. The daily results show 99%, whereas actual disk usage is much lower. Can you help clarify? I will submit the actual support contract later. Thanks, Naren

```
| rest splunk_server_group=dmc_group_* /services/server/status/partitions-space
| eval free = if(isnotnull(available), available, free)
| eval usage = capacity - free
| eval pct_usage = floor(usage / capacity * 100)
| where pct_usage > 92
| stats first(fs_type) as fs_type first(capacity) AS capacity first(usage) AS usage first(pct_usage) AS pct_usage by splunk_server, mount_point
| eval usage = round(usage / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| rename splunk_server AS Instance mount_point as "Mount Point", fs_type as "File System Type", usage as "Usage (GB)", capacity as "Capacity (GB)", pct_usage as "Usage (%)"
```

Alert search results:

```
Instance                  Mount Point  File System Type  Capacity (GB)  Usage (GB)  Usage (%)
prd-sjc-splunk-indexer-1  /opt/colddb  ext4              14881.80       14239.33    95
prd-sjc-splunk-indexer-2  /opt/colddb  ext4              14881.80       14523.47    97
prd-sjc-splunk-indexer-3  /opt/colddb  ext4              14881.80       14664.28    98
prd-sjc-splunk-indexer-4  /opt/colddb  ext4              14881.80       14845.24    99
prd-sjc-splunk-indexer-5  /opt/colddb  ext4              14881.80       14612.96    98
prd-sjc-splunk-indexer-6  /opt/colddb  ext4              14881.80       14744.09    99
```

Actual disk space (df output):

```
Processing on prd-sjc-splunk-indexer-2
/dev/mapper/hot-hot        10403135808   7814205760   2064642256  80%  /opt/splunk
/dev/mapper/cold-cold      15604702004  10252296568   4565973644  70%  /opt/colddb
Processing on prd-sjc-splunk-indexer-1:
/dev/mapper/hot-hot        10403135808   7956960136   1921887880  81%  /opt/splunk
/dev/mapper/cold-cold      15604702004   9749420004   5068850208  66%  /opt/colddb
Processing on prd-sjc-splunk-indexer-5
/dev/xvdg                  10403139904   7912240516   1966611388  81%  /opt/splunk
/dev/mapper/colddb-colddb  15604697908   9752163196   5066103124  66%  /opt/colddb
Processing on prd-sjc-splunk-indexer-3:
/dev/mapper/hot-hot        10403135808   7865249624   2013598392  80%  /opt/splunk
/dev/mapper/cold-cold      15604702004   9997688028   4820582184  68%  /opt/colddb
Processing on prd-sjc-splunk-indexer-4
/dev/mapper/cold-colddb    15604697908  10681093532   4137236996  73%  /opt/colddb
/dev/mapper/hot-hotdb      10403135808   7779513904   2099334112  79%  /opt/splunk
Processing on prd-sjc-splunk-indexer-6
/dev/xvdg                  10321219904   7766226312   2030705592  80%  /opt/splunk
/dev/mapper/colddb-colddb  15604697908  10138947556   4679318764  69%  /opt/colddb
Processing on prd-sjc-splunk-indexer-7
/dev/xvdg                  10403139904   7783278992   2095572912  79%  /opt/splunk
/dev/xvdh                  17111506844   1754748380  14497765008  11%  /opt/colddb
```

Out of 3 clusters, why are 2 showing similar results while the third is missing results?

Hi, we are seeing a difference in Splunk query results via the REST API. We have a query running through the Java (JDK) REST API against 3 Splunk clusters. Two clusters return full results, whereas one cluster returns only 10 results. The configuration files look the same. Is there a parameter I need to adjust to get complete results? Thanks, NP
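One thing worth checking: when pulling results over REST, the `/results` endpoint caps how many rows come back per request via its `count` parameter, and `count=0` asks for all of them. A hedged sketch with placeholder host, credentials, and search ID:

```
curl -k -u admin:changeme \
  "https://splunk-host:8089/services/search/jobs/<sid>/results?output_mode=json&count=0"
```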

Question about pipeline parallelization

How can I achieve pipeline parallelization on a standalone Splunk indexer to optimize my CPU usage? At Splunk .conf 2016, it was mentioned to use this method if the CPU is underutilized. For this, server.conf requires the change below:

```
parallelIngestionPipelines = 2
```

Are there any other configuration changes required? Also, do we need to configure inputs.conf (or any other configuration) to bind inputs to a specific pipeline set, or does Splunk take care of this on its own?
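For reference, the setting belongs in the `[general]` stanza of server.conf; a minimal sketch:

```
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2
```

A splunkd restart is required for the change to take effect.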

How can I run a search if a field contains the "|" character?

Hello, I need to count event log lines containing AAA|Y|42, but "|" is the pipe character, so the following search gives me an error (I also tried putting double quotes around the string, but that returned no results):

```
index=transaction sourcetype=transaction_270 *AAA|Y|42* | chart count by region_id, partner_id
```

Splunk treats Y as a command and returns this error:

```
Search Factory: Unknown search command 'y'.
```

Please help me with a solution. Thank you very much.
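Quoting the value as a single phrase keeps the pipes from being parsed as command separators. A hedged sketch; note that wildcards are treated literally inside quotes, which may be why the earlier quoted attempt with `*...*` returned nothing:

```
index=transaction sourcetype=transaction_270 "AAA|Y|42"
| chart count by region_id, partner_id
```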

Speeding up a stats by command

I'm working on some statistics-related queries. I'm trying to get the security ID, date, and count of distinct hosts connected to:

```
index=wineventlog sourcetype="WinEventLog:Security" 4624
| fields host, Security_ID, _time
| bucket _time span=1d
| stats dc(host) by Security_ID, _time
```

It works perfectly until I add Security_ID. With no `by` clause, or splitting only by time, it's fast. I also tried a `dedup Security_ID, _time, host` before the `stats dc`, but it didn't help the overall speed. It takes well over 10 minutes to complete this search for a week, and I'd like to be able to run it for 30, 60, or 90 days. What do I need to do for that to be viable?
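If the search itself can't be made faster, summary indexing is the standard way to make 30/60/90-day windows viable: a scheduled daily search stores partial results with the si* commands, and the report then runs over the much smaller summary. A hedged sketch, with `summary_auth` as a hypothetical summary index; the daily scheduled search would be:

```
index=wineventlog sourcetype="WinEventLog:Security" 4624
| bucket _time span=1d
| sistats dc(host) by Security_ID, _time
```

and the long-range report, run against the summary events it writes:

```
index=summary_auth
| stats dc(host) by Security_ID, _time
```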

On a heavy forwarder, can I forward a subset of data to syslog and drop everything else?

Here is my situation: I have a Windows HF that is collecting a lot of different data, some via PowerShell scripts, some via WMI, and some via file monitoring, locally and over UNC paths. All of that data is being forwarded to two indexers. A few weeks ago I configured one of the file-monitoring inputs to send a copy of the data it collected to a syslog server. I now need to send that data (collected via file monitoring) to the syslog server and NOT to the indexers. In other words, I want all data collected by this HF to go to the indexers, EXCEPT this data, which should be sent to the syslog server ONLY. How do I do that? I've read through this, which helped me get to the current configuration: http://docs.splunk.com/Documentation/SplunkCloud/6.6.1/Forwarding/Forwarddatatothird-partysystemsd

Here are my config files.

.../etc/apps/myapp/local/props.conf:

```
[WinDNS]
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
MAX_EVENTS = 1000
EXTRACT-Domain = (?i) .*? \.(?P<Domain>[-a-zA-Z0-9@:%_\+.~#?;//=]{2,256}\.[a-z]{2,6})
EXTRACT-src = (?i) [Rcv|Snd] (?P<src>\d+\.\d+\.\d+\.\d+)
EXTRACT-Threat_ID,Context,Int_packet_ID,proto,mode,Xid,type,Opcode,Flags_Hex,char_code,ResponseCode,question_type = .+?[AM|PM]\s+(?<Threat_ID>\w+)\s+(?<Context>\w+)\s+(?<Int_packet_ID>\w+)\s+(?<proto>\w+)\s+(?<mode>\w+)\s+\d+\.\d+\.\d+\.\d+\s+(?<Xid>\w+)\s(?<type>(?:R)?)\s+(?<Opcode>\w+)\s+\[(?<Flags_Hex>\w+)\s(?<char_code>.+?)(?<ResponseCode>[A-Z]+)\]\s+(?<question_type>\w+)\s
EXTRACT-Authoritative_Answer,TrunCation,Recursion_Desired,Recursion_Available = (?m) .+?Message:\W.+\W.+\W.+\W.+\W.+AA\s+(?<Authoritative_Answer>\d)\W.+TC\s+(?<TrunCation>\d)\W.+RD\s+(?<Recursion_Desired>\d)\W.+RA\s+(?<Recursion_Available>\d)
TRANSFORMS-droplocal2 = droplocal2
TRANSFORMS-dropbach = dropbach
#TRANSFORMS-dropall = dropall
SEDCMD-win_dns = s/\(\d+\)/./g
TRANSFORMS-dns = send_to_syslog
```

.../etc/apps/myapp/local/transforms.conf:

```
[dropbach]
REGEX = \[.+?\]\s+\w+\s+.+?BACH
DEST_KEY = queue
FORMAT = nullQueue

[droplocal2]
REGEX = \[.+?\]\s+\w+\s+.+?local
DEST_KEY = queue
FORMAT = nullQueue

[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

#[dropall]
#REGEX = .
#DEST_KEY = queue
#FORMAT = nullQueue
```

.../etc/system/local/outputs.conf:

```
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 0

[tcpout-server://splunk-01:9997]

[tcpout:default-autolb-group]
disabled = false
server = splunk-01:9997,splunk-02:9997

[tcpout-server://splunk-02:9997]
# not sure why this is here....

[syslog:my_syslog_group]
server = 1.1.1.5:514
```

As you can tell, I tried to add a 'dropall', but that just dropped everything without sending a copy to the syslog server first. I then found this forum post: https://answers.splunk.com/answers/4083/can-i-route-some-data-as-syslog-output-to-multiple-destinations.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev which seems to imply that to do what I want, I need to modify outputs.conf so that defaultGroup points at nothing, then modify the props.conf and transforms.conf for all my inputs to point to the "default-autolb-group" in outputs.conf (which sends to the indexers), and for this app have the ONLY output reference point to "my_syslog_group" in outputs.conf. Is that correct, or is it something else?
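Spelled out, the approach that post describes might look like the sketch below. This is hedged, not a verified fix: it assumes that a sourcetype-specific TRANSFORMS class with the same name overrides the one in `[default]`, and it reuses the group names from the configs above.

```
# transforms.conf: explicit TCP routing to the existing indexer group
[send_to_indexers]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = default-autolb-group
```

```
# props.conf: route everything to the indexers by default, but override
# the same "routing" class for WinDNS so it carries no TCP routing at all
[default]
TRANSFORMS-routing = send_to_indexers

[WinDNS]
TRANSFORMS-routing = send_to_syslog
```

With the `defaultGroup` line removed from `[tcpout]` in outputs.conf, only events carrying `_TCP_ROUTING` would reach the indexers, while WinDNS events carry only `_SYSLOG_ROUTING` and go to the syslog server alone.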