Channel: Questions in topic: "splunk-enterprise"

DB Connect 3.1 query with JOINs, aliases, a non-* SELECT, and a rising column

Struggling a little bit with this. Essentially I'm trying to pull some specific fields from multiple tables to form "log messages", but I'm having issues as soon as my SQL statements become "advanced". What I'm looking for is:

    SELECT T1.id, T2.name
    FROM Catalog.Table1 AS T1
    LEFT OUTER JOIN Catalog.Table2 AS T2 ON T1.id = T2.id
    WHERE T1.id > ?
    ORDER BY T1.id ASC

Here's what I've tried. The most basic query I can think of works fine:

    SELECT * FROM table WHERE id > ? ORDER BY id ASC

Cool, so moving forward, let's try a join:

    SELECT *
    FROM Catalog.Table1
    LEFT OUTER JOIN Catalog.Table2 ON Catalog.Table1.id = Catalog.Table2.id
    WHERE Catalog.Table1.id > ?
    ORDER BY Catalog.Table1.id ASC

This results in "java.lang.IllegalStateException: Column name conflicted, please set shortnames option to false and retry". To alleviate this, we can either use aliases or be specific and select named columns:

    SELECT *
    FROM Catalog.Table1 AS T1
    LEFT OUTER JOIN Catalog.Table2 AS T2 ON T1.id = T2.id
    WHERE T1.id > ?
    ORDER BY T1.id ASC

    SELECT Catalog.Table1.id
    FROM Catalog.Table1
    LEFT OUTER JOIN Catalog.Table2 ON Catalog.Table1.id = Catalog.Table2.id
    WHERE Catalog.Table1.id > ?
    ORDER BY Catalog.Table1.id ASC

Both of these result in "java.sql.SQLException: Parameter #1 has not been set." I'm hoping not to create a view or a stored procedure, as the DB is not mine and I don't have access. I'd also like to avoid using Splunk search or data models to do the joins, for ease of support. Any ideas how to get this to work?
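A sketch of one possible workaround, assuming DB Connect handles the rising-column placeholder more reliably when it sits directly against a single derived table rather than inside a join (this is a guess, not confirmed DB Connect behavior; table and column names are the ones from the question):

    SELECT id, name FROM (
        SELECT T1.id AS id, T2.name AS name
        FROM Catalog.Table1 AS T1
        LEFT OUTER JOIN Catalog.Table2 AS T2 ON T1.id = T2.id
    ) AS joined
    WHERE id > ?
    ORDER BY id ASC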

Load Balancing at Universal Forwarders as intermediate layer

In the current design, we proposed two load-balanced HFs to collect the data from 200+ endpoints and pass it to the next level of heavy forwarders in the Splunk-hosted environment. However, with concerns around cooking of data at the HF (due to parsing), we are thinking of replacing the intermediate HFs with UFs, as there is no planned indexing or filtering at the intermediate layer. **While we proceed with this approach, can anyone advise if it is possible to have two dedicated machines with load-balanced UFs as an intermediate layer to receive data from 200+ UFs at the endpoints?** These UFs are to be horizontally load balanced using the Splunk LB feature (adding the IPs of both UFs to outputs.conf at the endpoints) as below:

    # outputs.conf at UF1 - UF200
    [tcpout]
    server = UF1:9997, UF2:9997
    autoLB = true
    autoLBFrequency = 30
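A minimal sketch of what the two intermediate forwarders could carry under this design, assuming they simply listen on 9997 and forward everything on to the hosted heavy forwarders (host names and the output group name are placeholders):

    # inputs.conf on intermediate UF1 and UF2
    [splunktcp://9997]
    disabled = 0

    # outputs.conf on intermediate UF1 and UF2
    [tcpout]
    defaultGroup = hosted_hfs

    [tcpout:hosted_hfs]
    server = hf1.example.com:9997, hf2.example.com:9997
    autoLB = true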

Eval with an If Statement

Hello, I am trying to use an eval and if statement to calculate a percentage, and I am not sure if I am doing something wrong or possibly using the wrong SPL or functions for this calculation. Basically I have multiple agencies, each with a total number of Splunk servers:

    AGENCY      COUNT OF SPLUNK_SERVERS
    Agency A    30
    Agency B    20
    Agency C    15
    Agency D    12

I am using a REST SPL search to get the active servers and want to divide by the absolute numbers above. The base search yields X number per agency; we will say it's called the "count" field. So I was trying something like this:

    | eval "Percentage of Available Servers"=if(Agency=Agency A, count/30)*100

As I researched, I realized I was not doing the right thing, and I know there are probably multiple ways that would be much easier, so I thought I would ask for help. **I have created a lookup** but am not quite sure how to make it work with what I want to do. Thanks in advance.
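A sketch of two possible approaches, assuming the base search produces one row per Agency with a count field. The first hard-codes the totals with case(); the second reads them from a lookup (a hypothetical lookup definition named agency_totals with columns Agency and total_servers):

    <base search>
    | eval total=case(Agency="Agency A", 30, Agency="Agency B", 20, Agency="Agency C", 15, Agency="Agency D", 12)
    | eval "Percentage of Available Servers"=round(count/total*100, 2)

    <base search>
    | lookup agency_totals Agency OUTPUT total_servers
    | eval "Percentage of Available Servers"=round(count/total_servers*100, 2)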

How can I get the total count of payments and total amount of payments?

index = elm-retail-rws source="/opt/app/jboss/current/standalone/log/PosMultipaymentProfile.log"
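A sketch of one way to total the payments on top of that search, assuming each event carries a numeric amount field (payment_amount here is a placeholder; substitute whatever field is actually extracted from PosMultipaymentProfile.log):

    index=elm-retail-rws source="/opt/app/jboss/current/standalone/log/PosMultipaymentProfile.log"
    | stats count AS total_payments sum(payment_amount) AS total_amount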

Time range picker: change the value from _time to reported_date

Hi all, thanks in advance. By default the time range picker uses _time. I want the time range picker to filter on reported_date instead of _time. Please help me out.
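A sketch of one commonly suggested workaround, assuming reported_date is an extracted field whose format is known (the strptime format string and index name below are assumptions): re-evaluate _time from reported_date and then filter against the picker's bounds via addinfo. Note the initial event retrieval still uses the indexed _time, so the base search may need a wider time range for this to behave as expected:

    index=your_index earliest=0
    | eval _time=strptime(reported_date, "%Y-%m-%d %H:%M:%S")
    | addinfo
    | where _time>=info_min_time AND _time<=info_max_time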

About debug logging for file monitoring

I want to get debug logs for file monitoring, so I configured the following settings in log-local.cfg:

    category.TailingProcessor=DEBUG
    category.WatchedFile=DEBUG
    category.FileTracker=DEBUG
    category.FileInputTracker=DEBUG

But there is one thing I'm worried about. In a previous version there was category.BatchReader, but now it is gone and there is category.TailReader instead. Is TailReader a substitute for BatchReader? Even when I look through the documentation, I cannot find a description of it. Please, someone tell me.

Sybase ASE(jConnect) Connection with additional JDBC Driver properties

Hi experts, how can we configure a Sybase ASE (jConnect) connection in Splunk DB Connect with additional JDBC driver properties like:

    ENCRYPT_PASSWORD=true
    RETRY_WITH_NO_ENCRYPTION=true
    JCE_PROVIDER_CLASS=org.bouncycastle.jce.provider.BouncyCastleProvider

Environment:

    Splunk Enterprise: 6.6.3
    Splunk DB Connect: 3.1.1, App Build: 34
    jconn4.jar
    bcprov-jdk16-143.jar
    Adaptive Server Enterprise (15.0.2)

Thanks a lot!
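A sketch of one possible approach, assuming the DB Connect connection allows the JDBC URL to be edited and that jConnect will accept these settings as URL query parameters (host, port, and database below are placeholders; verify the property syntax against the jConnect documentation):

    jdbc:sybase:Tds:dbhost.example.com:5000/mydb?ENCRYPT_PASSWORD=true&RETRY_WITH_NO_ENCRYPTION=true&JCE_PROVIDER_CLASS=org.bouncycastle.jce.provider.BouncyCastleProvider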

Result missing from same search

Hi, I use the search below to get the row with the max value per user:

    (index="indexa" OR index="indexb") sourcetype="sourceA"
    | table _time, money, user
    | eventstats max(_time) as mtime by user
    | where _time=mtime

But some users cannot be found in the result. When I add the user explicitly, as below, it does appear in the result:

    (index="indexa" OR index="indexb") sourcetype="sourceA" user="XXX"
    | table _time, money, user
    | eventstats max(_time) as mtime by user
    | where _time=mtime

How can I find out what is different between the two searches above? Thanks.

PS: the search above has about 1 million rows in the first phase, and the final result should have about 220,000 rows of output.
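A sketch of an alternative that avoids eventstats (which can run into memory and result limits at around a million rows, one possible cause of silently missing users), keeping only each user's most recent row:

    (index="indexa" OR index="indexb") sourcetype="sourceA"
    | dedup user sortby -_time
    | table _time, money, user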

Inputlookup and match only whole word in field text

I want to use a keyword list (inputlookup) to find a keyword (**whole word only!**) in the event text. Sample event text (field name is 'data'):

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam pretium urna vel auctor tempus. Integer velit libero, faucibus id ex.

I've imported a CSV file containing keywords:

    Keyword
    adipiscing
    faucibus

The inputlookup works fine:

    | inputlookup keywords.csv

Searching for just a keyword works fine:

    index=lorum adipiscing

Using inputlookup with the CSV file doesn't work (no matches):

    index=lorum [| inputlookup keywords.csv]

Any help writing my query is highly appreciated.
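A sketch of one common pattern, assuming the lookup column is literally named Keyword: renaming it to the special query field makes the subsearch return its values as raw search terms (which Splunk matches on whole indexed terms) instead of Keyword="..." field filters:

    index=lorum [| inputlookup keywords.csv | rename Keyword AS query | fields query]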

How can I fix my outputcsv to a particular IP in search head clustering?

We are generating 4 reports from a Splunk search head cluster. We want to append all the results of a search query into one particular CSV. I tried the append command in Splunk, but it is failing because the queries are too complex, and the appended query produces only a few rows. To mitigate this, we are generating 4 different files and using a Linux script to append them, but the results are getting generated on different IPs (different cluster members).
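A sketch of one possible direction, assuming the four report searches can be combined into a single scheduled search so that a single file is written by whichever member happens to run it (the search names and the CSV file name are placeholders; since append was already failing on these queries, subsearch limits may need attention):

    <search one>
    | append [search <search two>]
    | append [search <search three>]
    | append [search <search four>]
    | outputcsv combined_report.csv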

Making sense of data

Hi guys, this is more of a generic question: how do you make sense of events which are not necessarily linked by a common field? For example, one of our applications produces logs with many events/lines such as:

    [08/Sep/2017:09:20:20 +0200] Logon request from 10.10.10.3
    [08/Sep/2017:09:20:21 +0200] Object 662737354 deleted
    [08/Sep/2017:09:20:21 +0200] User X77262 trying to connect ...
    [08/Sep/2017:09:20:22 +0200] Logon Denied: Bad password

Lines 1, 3 and 4 represent a logon request, but I cannot "transact" them, as there is no common field. Or can I? In a perfect world, session IDs would be introduced into the logs, or log entries would be more complete, but changing code is a massive undertaking... How do you deal with scenarios such as this one? Thanks.
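A sketch of one heuristic when no session ID exists, assuming events from the same host and source that occur within a short pause belong to the same interaction (the index, sourcetype, grouping fields, and the 5-second maxpause are all assumptions to tune):

    index=app_logs sourcetype=app_auth
    | transaction host source maxpause=5s
    | search "Logon request" "Logon Denied"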

Option _TCP_ROUTING is not compatible with this modular input?

Hi all, I have an error message saying:

    ERROR The input stanza 'file_meta_data://Valo_indus_Import_test' is invalid: The parameter '_TCP_ROUTING' is not a valid argument

How can I force the output routing? Regards
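A sketch of an alternative, assuming the goal is to send this input's events to a specific output group: instead of setting _TCP_ROUTING on the modular input stanza (which this input rejects), route by source in props.conf/transforms.conf. The source pattern and group name are placeholders, and this routing is applied at parsing time, so it belongs on a heavy forwarder or indexer rather than a universal forwarder:

    # props.conf
    [source::file_meta_data://Valo_indus_Import_test]
    TRANSFORMS-routing = route_file_meta_data

    # transforms.conf
    [route_file_meta_data]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = my_output_group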

Garbage collection logs field extraction from log file

I would like to extract fields from the log below using regular expressions. Can someone help me?

    28820.220: [Full GC (System.gc()) 8832K->8624K(37888K), 0.0261704 secs]
    29372.500: [GC (Allocation Failure) 23984K->8816K(37888K), 0.0013546 secs]
    29932.500: [GC (Allocation Failure) 24176K->8808K(37888K), 0.0017082 secs]
    30492.500: [GC (Allocation Failure) 24168K->8960K(37888K), 0.0017122 secs]
    31047.500: [GC (Allocation Failure) 24320K->8944K(37888K), 0.0020634 secs]
    31602.500: [GC (Allocation Failure) 24304K->8992K(37888K), 0.0017542 secs]
    32157.500: [GC (Allocation Failure) 24352K->8968K(37888K), 0.0018971 secs]
    32420.247: [GC (System.gc()) 16160K->8944K(37888K), 0.0012816 secs]
    32420.248: [Full GC (System.gc()) 8944K->8624K(37888K), 0.0205035 secs]

For example, from "Full GC ... 8944K->8624K(37888K)" I would like to extract:

    Field1: 8944  -- whatever comes in that position throughout the multiple Full GC entries
    Field2: 8624  -- whatever comes in that position throughout the multiple Full GC entries
    Field3: 37888 -- whatever comes in that position throughout the multiple Full GC entries

and similarly for GC. Early help would be appreciated, as my organization is not allowing me to install the field extractor app to extract these fields easily.
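A sketch of a search-time extraction with rex, assuming one GC line per event; the field names are placeholders for Field1/Field2/Field3 from the question, and gc_type distinguishes Full GC from GC. The same regular expression could be placed in props.conf as an EXTRACT if a permanent extraction is preferred:

    index=your_index sourcetype=gc_log
    | rex "(?<gc_type>Full GC|GC)\b.*?(?<before_kb>\d+)K->(?<after_kb>\d+)K\((?<heap_kb>\d+)K\),\s*(?<gc_secs>[\d\.]+) secs"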

How to prevent injection from field in a dashboard?

I created a simple dashboard with a text input (token: field1) and a panel that shows the result of this search:

    index=main "$field1$"

If a user enters the following keyword in the field (it starts with an orphaned double quote and ends with an asterisk):

    " OR index=_internal earliest=-365d@d sourcetype="*

the dashboard displays results from the _internal index. Does anyone have an idea how to prevent SPL injection?
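A sketch of the most common mitigation in Simple XML, assuming the panel search is defined inline: the |s token filter wraps the token value in double quotes and escapes any embedded quotes, so the input stays a literal search term rather than becoming SPL:

    <query>index=main $field1|s$</query>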

How to set several requests in one input?

How can I set several requests in one input? I must first authenticate to the REST API, then pass the query, and at the end close the session. Regards

Is it expected: workflow action visible under Actions for notable events on Incident Review in Enterprise Security?

1. I had an add-on created with the prefix TA-XYZ (having an adaptive response action) and another app, say "ABC", which has a workflow action defined.
2. When I merged the TA-XYZ code into ABC, I started seeing the workflow actions under Actions for notable events on the Incident Review page as well.
3. I don't want my workflow actions to be visible on Incident Review in Enterprise Security. Is there any way to disable them on Incident Review?

Note: while merging, I renamed ABC to TA-ABC, as I was not able to see the adaptive response action in the merged code; after renaming ABC to TA-ABC I was able to see my adaptive response action.
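A sketch of one way to narrow where a workflow action shows up, assuming it is defined in workflow_actions.conf: scoping it to specific fields and event types (the stanza name, field list, and eventtype below are placeholders) keeps it off events, such as ES notables, that don't match those constraints:

    # workflow_actions.conf in TA-ABC
    [my_workflow_action]
    display_location = both
    fields = src, dest
    eventtypes = ta_abc_events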

Index-time fields extraction issue

Hello all, I'm a bit stuck with my issue. I have this Splunk infrastructure: sources ==> UF ==> indexer cluster (3 + master), plus a search head cluster. I'm trying to extract fields at index time in order to transform them in the future. My props.conf and transforms.conf are deployed to the indexers through the master. A log line looks like:

    date="2017-09-08",time="08:08:00",s-ip="8.8.8.8",time-taken="8",c-ip="9.9.9.9",c-port="45687",s-action="TCP_DENIED",cs-user="foobar"

**transforms.conf**

    [fieldtestextract]
    WRITE_META = true
    REGEX = cs-user="([^"]+)
    FORMAT = csuser::$1

**props.conf**

    [web:access:file]
    TRANSFORMS-csuser = fieldtestextract
    TZ = utc
    SEDCMD-username = s/,cs-user=\"[^\"]+\",/,cs-user="xxxx",/g

The SEDCMD works like a charm, but the transform won't work...

**fields.conf** on the search heads:

    [csuser]
    INDEXED = true
    INDEXED_VALUE = true

I don't see my field on the search heads, and obviously I'm not able to run queries against it. Could you help me figure out what's wrong with my configuration? Many thanks in advance.
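A sketch of how one might check whether the indexed field actually made it into the index, independent of any search-time configuration (the index name is a placeholder; csuser is the field from the question). Indexed fields can be queried with tstats and with the field::value term syntax even before fields.conf is in place on the search heads:

    | tstats count WHERE index=your_index BY csuser

    index=your_index csuser::foobar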

How do I debug 400 error between Search Head and Peer?

Hello, currently I have 3 VMs in the same data center, with the same RHEL version and the same splunk*.rpm installed on them: one is supposed to act as the master, one as a SH, and one as an indexer. On the SH I get this on the search peer list: ![alt text][1]

  [1]: /storage/temp/211659-screen-shot-2017-09-08-at-111847.png

The log in question has these interesting lines:

    10.74.55.14 - - [08/Sep/2017:11:03:33.150 +0100] "??? / HTTP/1.0" 400 207 - - - 0ms
    10.74.55.14 - - [08/Sep/2017:11:03:33.151 +0100] "??? / HTTP/1.0" 400 207 - - - 0ms
    10.74.55.14 - - [08/Sep/2017:11:04:33.159 +0100] "??? / HTTP/1.0" 400 207 - - - 0ms
    10.74.55.14 - - [08/Sep/2017:11:04:33.160 +0100] "??? / HTTP/1.0" 400 207 - - - 0ms

Here's my SH server.conf:

    [general]
    serverName = isearchhead
    pass4SymmKey = REDACTED

    [sslConfig]
    sslPassword = REDACTED
    enableSplunkdSSL = false
    supportSSLV3Only = false
    sslVerifyServerCert = false

    [lmpool:auto_generated_pool_download-trial]
    description = auto_generated_pool_download-trial
    quota = MAX
    slaves = *
    stack_id = download-trial

    [lmpool:auto_generated_pool_forwarder]
    description = auto_generated_pool_forwarder
    quota = MAX
    slaves = *
    stack_id = forwarder

    [lmpool:auto_generated_pool_free]
    description = auto_generated_pool_free
    quota = MAX
    slaves = *
    stack_id = free

    [lmpool:auto_generated_pool_enterprise]
    description = auto_generated_pool_enterprise
    quota = MAX
    slaves = *
    stack_id = enterprise

    [license]
    active_group = Enterprise

    [clustering]
    master_uri = clustermaster:REDACTED:8089
    mode = searchhead

    [clustermaster:REDACTED:8089]
    master_uri = http://REDACTED:8089
    multisite = false
    pass4SymmKey = REDACTED
    site = default

And the distsearch.conf:

    [distributedSearch]
    servers = https://[Search Peer]:8089/
    trySSLFirst = false
    #this was a shot in the dark for the 5 second thing

And on the indexer (iindexer1):

    [general]
    serverName = iindexer1
    pass4SymmKey = REDACTED

    [sslConfig]
    sslPassword = REDACTED
    enableSplunkdSSL = false
    supportSSLV3Only = false
    sslVerifyServerCert = false

    [lmpool:auto_generated_pool_download-trial]
    description = auto_generated_pool_download-trial
    quota = MAX
    slaves = *
    stack_id = download-trial

    [lmpool:auto_generated_pool_forwarder]
    description = auto_generated_pool_forwarder
    quota = MAX
    slaves = *
    stack_id = forwarder

    [lmpool:auto_generated_pool_free]
    description = auto_generated_pool_free
    quota = MAX
    slaves = *
    stack_id = free

    [license]
    master_uri = https://[SEARCH HEAD]:8089

    [replication_port://9887]

In etc/auth/distServerKeys/isearchhead/trusted.pem on the search peer there is the file I pulled from the SH according to the instructions. What am I missing here? Thank you very much.
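A sketch of one way to (re)add the peer so that the trust keys are exchanged automatically, assuming admin credentials are available on both sides (the host name and credentials below are placeholders), rather than editing distsearch.conf and copying trusted.pem by hand:

    splunk add search-server https://iindexer1.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword peerpassword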

Splunk won't open on localhost:8000

I'm getting a "not found" error. When trying to start Splunk in the 'bin' folder I get an error. Any help appreciated!

    C:\Program Files\Splunk\bin>splunk start
    Splunk> Like an F-18, bro.
    Checking prerequisites...
        Checking http port [8001]: not available
    ERROR: http port [8001] - port is already bound. Splunk needs to use this port.
    Would you like to change ports? [y/n]: y
    Enter a new http port: 9000
    Setting http to port: 9000
    Failed to open splunk.secret 'C:\Program Files\Splunk\etc\auth\splunk.secret' file. Some passwords will not work. errno=Access is denied.
    Unable to read 'C:\Program Files\Splunk\etc\auth\splunk.secret' file.
    Operation "ospath_fopen" failed in C:\wrangler-2.0\build-src\kimono\src\libzero\conf-mutator-locking.c:313, conf_mutator_lock(); No error
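The "Access is denied" on splunk.secret suggests the prompt lacks permission to the Splunk installation directory. A sketch of one thing to try, assuming an elevated Command Prompt (Run as administrator) and that port 8000 is actually free (both are assumptions):

    cd "C:\Program Files\Splunk\bin"
    splunk set web-port 8000
    splunk start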

kvstore, inputlookup and time-bounds

I'm trying to set up a KV store lookup where the results from inputlookup can be filtered using the regular time pickers available in the web GUI, or with the latest= and earliest= modifiers.

collections.conf:

    [testkv]
    enforceTypes = true
    field.action = string
    field.ts = time

transforms.conf:

    [testkv]
    external_type = kvstore
    fields_list = action, ts
    time_field = ts
    ;time_format = %s.%3N
    ;time_format = %s.%Q

The ts field contains a UNIX epoch with milliseconds, so 10+3 digits. Regardless of what I select ("Last 15 minutes", "Last 4 hours"), I always get the whole KV store content. First of all, is this doable in general and, if yes, any ideas on what's wrong? :)
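If the lookup-level time bounding keeps returning everything, a sketch of a manual filter, assuming ts really is epoch milliseconds (hence the division by 1000); addinfo exposes the selected time range as info_min_time and info_max_time:

    | inputlookup testkv
    | addinfo
    | where ts/1000 >= info_min_time AND ts/1000 <= info_max_time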

