Channel: Questions in topic: "splunk-enterprise"

Can you help me make a regex for URL having different types of parameters?

I have the two log sets below, which contain different activities. I want two different regexes, one for Set1 and one for Set2, to use in two separate panels.

Set1
log1: index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/96a-d3f-4fb/HELLO_WORLD|eventEndTime=2018-09-26
log2: index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/f4a-8ef-8cb/abcpayld|eventEndTime=2018-09-26

Set2
log3: index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/96a-d3f-4fb/HELLO_WORLD/fd078jkkj24342kljlce989dadc7abc56c28|eventEndTime=2018-09-26
log4: index="abc_xyz"|activity=GET->/cirrus/v2.0/payloads/f4a-8ef-8cb/abcpayld/thfd078jkkj24342kljlce989dadc7vfc56c28|eventEndTime=2018-09-26

I have tried the following, but with no luck:

index="abc_xyz" | regex "GET->\/cirrus\/v2.0\/payloads\/([[:alnum:]-]{10,40})\/([[:alpha:]_]{10,40})"

Could you please help me resolve this?
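Assuming the only difference between the two sets is the extra hash-like segment after the payload name, one approach is to anchor Set1 on the |eventEndTime delimiter immediately after the second segment and to require a third segment for Set2. A sketch (the character classes are assumptions and may need widening for your real data):

Panel for Set1 (exactly two segments after payloads/):
index="abc_xyz"
| regex _raw="GET->/cirrus/v2\.0/payloads/[\w-]+/[\w-]+\|eventEndTime"

Panel for Set2 (a third segment before |eventEndTime):
index="abc_xyz"
| regex _raw="GET->/cirrus/v2\.0/payloads/[\w-]+/[\w-]+/[\w-]+\|eventEndTime"

If you also need the pieces as fields, a rex along the lines of | rex "payloads/(?<payload_id>[\w-]+)/(?<payload_name>[\w-]+)(/(?<payload_hash>\w+))?\|eventEndTime" should extract them for both sets.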

Question about configuring the Master Node to forward OS logs

Reading OS logs from a cluster indexer node is controlled by the master node's $SPLUNK_HOME/etc/master-apps/_cluster/local/inputs.conf, but that only affects the indexer nodes, not the master node itself. If I configure outputs.conf in $SPLUNK_HOME/etc/system/local/ on the master node, will it then forward everything from the master node, or only the monitored paths specified in inputs.conf? The thing is, I only want to forward OS logs (under /var/log or any other specified file), not the internal Splunk data from the master node itself.
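For what it's worth, outputs.conf only defines where data is sent; what is read on the master still comes from its own inputs.conf. A minimal sketch for $SPLUNK_HOME/etc/system/local/ on the master node (the indexer host/port and the "os" index name are placeholders):

# inputs.conf
[monitor:///var/log]
index = os
disabled = 0

# outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997

Note that once a tcpout group exists, a full Splunk instance will by default also forward its own internal indexes (_internal, _audit, and so on). If you truly want only the /var/log data to leave the master, look at the forwardedindex.<n>.whitelist/blacklist settings described in outputs.conf.spec, which let you restrict forwarding to specific indexes.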

Can you help me troubleshoot my problem bringing data into Splunk using DB Connect with a rising column?

I'm new to DB Connect and I'm trying to create a new input to collect data from an Oracle database. I installed the 12.2 driver and DB Connect detected the installation. I created the identity "sitef_db" and the connection "sitefweb" (using the identity "sitef_db") in DB Connect; both are OK. I then created a new input called "sitefweb_test" using the connection "sitefweb" with the following settings.

In the "Choose and Preview Table" tab I selected Input Type "Advanced", Max Rows 100, and created the query:

SELECT * FROM table WHERE abc > ? ORDER BY abc asc

For the Checkpoint Column I selected "abc" and set a Checkpoint Value, with the date format of the abc column as "yyyy-mm-dd HH:MM:SS.0". When I execute the query, DB Connect returns the data with the correct checkpoint applied.

In the "Set Parameters" tab the options are:
Input Type: Advanced
Max Rows to Retrieve: 200,000
Fetch Size: 1000
Timestamp: Choose Column
Specify Timestamp Column: XYZ
Output Timestamp Format: yyyy-MM-dd HH:mm:ss
Execution Frequency: */10 * * * *

In the "Metadata" tab:
Source: sitefweb_source
Sourcetype: sitefweb_sourcetype
Index: idx_sitef
Select Resource Pool: Local

After this I saved the input to start collecting data into Splunk, but the input produces several different errors:

[ERROR] [dbxquery.py], line 235: action=dbxquery_execute_query_failure query="SELECT * FROM (SELECT * FROM "SITEFWEB"."INTERFACECMP_OUT" WHERE DAT_VENDA > ? ORDER BY DAT_VENDA ASC) t" params="None" cause="Exception(' java.sql.SQLException: Missing IN or OUT parameter at index:: 1.',)" Traceback (most recent call last): File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbxquery.py", line 228, in _execute_rpc_query self.timeout, self._handle_chunked_query_results, user) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\ws.py", line 94, in executeQuery self.ws.run_forever(timeout=self.timeout) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\websocket.py", line 841, in run_forever self._callback(self.on_error, e) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\websocket.py", line 852, in _callback callback(self, *args) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\ws.py", line 130, in on_error self.callback(None, error) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbxquery.py", line 210, in _handle_chunked_query_results raise error Exception: java.sql.SQLException: Missing IN or OUT parameter at index:: 1.

[WARNING] [health_logger.py], line 162: Failed to run query: "SELECT * FROM (SELECT * FROM "SITEFWEB"."INTERFACECMP_OUT" WHERE DAT_VENDA > ? ORDER BY DAT_VENDA ASC) t", params: "None", caused by: Exception(' java.sql.SQLException: Missing IN or OUT parameter at index:: 1.',). Traceback (most recent call last): File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\health_logger.py", line 160, in do_log return func(*args, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbxquery.py", line 236, in _execute_rpc_query raise RuntimeError('Failed to run query: "%s", params: "%s", caused by: %s.' % (query, repr(params), repr(e))) RuntimeError: Failed to run query: "SELECT * FROM (SELECT * FROM "SITEFWEB"."INTERFACECMP_OUT" WHERE DAT_VENDA > ? ORDER BY DAT_VENDA ASC) t", params: "None", caused by: Exception(' java.sql.SQLException: Missing IN or OUT parameter at index:: 1.',).

WARN HttpListener - Socket error from 127.0.0.1 while accessing /servicesNS/nobody/splunk_app_db_connect/data/inputs/mi_input/sitefweb_test/disable: Winsock error 10054

ERROR DBX2Proxy:46 - Exception encountered for entity-name = mi_input://sitefweb_test and type = input java.lang.NumberFormatException: For input string: "2018-09-25 00:00:00" at java.lang.NumberFormatException.forInputString(Unknown Source) at java.lang.Long.parseLong(Unknown Source) at java.lang.Long.valueOf(Unknown Source)

[CRITICAL] [mi_base.py], line 195: action=modular_input_exited_after_maximum_failed_retries modular_input=mi_input://sitefweb_test max_retries=6 error=ERROR: java.lang.NumberFormatException: For input string: "2018-09-25 00:00:00". Traceback (most recent call last): File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\mi_base.py", line 183, in run checkpoint_value=checkpoint_value) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\health_logger.py", line 283, in wrapper return get_mdc(MDC_LOGGER).do_log(func, *args, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\health_logger.py", line 160, in do_log return func(*args, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\mi_input.py", line 207, in run _do_advanced_mode(input_name, inputws, self.db, params, self.user_name, splunk_service, output_stream) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\mi_input.py", line 92, in _do_advanced_mode inputws.doAdvanced(db, params, user, stanza, callback=callback) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\ws.py", line 284, in doAdvanced self.doInput("dbinputAdvancedIterator", database, params, user, entityName, callback) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\ws.py", line 275, in doInput self.ws.run_forever(timeout=self.timeout) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\websocket.py", line 841, in run_forever self._callback(self.on_error, e) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\websocket.py", line 852, in _callback callback(self, *args) File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbx2\ws.py", line 328, in on_error raise Exception ("%s" % error) Exception: ERROR: java.lang.NumberFormatException: For input string: "2018-09-25 00:00:00".

[CRITICAL] [mi_input.py], line 96: action=loading_input_data_failed input_mode=advanced dbinput="mi_input://sitefweb_test" error="ERROR: java.lang.NumberFormatException: For input string: "2018-09-25 00:00:00"."

[CRITICAL] [ws.py], line 327: [DBInput Service] Exception encountered for entity-name = mi_input://sitefweb_test and type = input with error = ERROR: java.lang.NumberFormatException: For input string: "2018-09-25 00:00:00".

[ERROR] [ws.py], line 318: [DBInput Service] Exception encountered from server on_message for entity-name = mi_input://sitefweb_test and type = input with error = ERROR: java.lang.NumberFormatException: For input string: "2018-09-25 00:00:00".

Why do I get these errors when I start this input?
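The NumberFormatException on "2018-09-25 00:00:00" suggests the input is trying to treat the checkpoint value as a number while the rising column is a date, and the "Missing IN or OUT parameter at index:: 1" with params="None" suggests the query sometimes runs with no checkpoint value at all. One commonly suggested workaround, offered here only as a sketch and not verified against your schema, is to convert explicitly in the query so the ? placeholder always receives a plain string in a known format, and to seed an initial Checkpoint Value in that same format:

SELECT *
FROM "SITEFWEB"."INTERFACECMP_OUT"
WHERE DAT_VENDA > TO_DATE(?, 'YYYY-MM-DD HH24:MI:SS')
ORDER BY DAT_VENDA ASC

With that, the saved checkpoint stays a string such as 2018-09-25 00:00:00 and Oracle does the date comparison, rather than DB Connect trying to parse the value itself.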

Can you help me understand the documentation around the delta replication from a deployer to the search heads cluster?

I'm not fully understanding the documentation around the delta replication from a deployer to the search head cluster. Is it correct to say that when the bundles are copied from the deployer to the captain, only the "changes" are created in the bundle, and that is where the .delta files come in? Those .deltas are then replicated out to the cluster members? Apologies if the question is not clear. I really just want to completely understand the process outlined under "How the cluster applies the configuration bundle" here: http://docs.splunk.com/Documentation/Splunk/7.1.3/DistSearch/PropagateSHCconfigurationchanges

Lookup Over Time with Row Totals for Each Row

Let's say I have a lookup table and I have it formatted and "searched" down to:

_time   | Cat_1 | Cat_2 | Cat_3 | Cat_4 | totalCount
2018-04 | 1     | 1     | 0     | 5     | 7
2018-05 | 2     | 3     | 1     | 0     | 6
2018-06 | 3     | 1     | 0     | 0     | 4

using:

| inputlookup File.csv
| eval _time=strptime(Date, "%m/%d/%Y")
| where _time>relative_time(now(), "-5mon@m")
| timechart span=1mon count by "other_field"
| addtotals fieldname=totalCount Cat_1 Cat_2 Cat_3 Cat_4

What I want is a stacked column chart over time by month: essentially a way to count the TOTAL number of ALL events from month to month, displaying it as an overlay. I have gotten it working with non-lookups, but in this case it requires a lookup. I can modify the lookup as needed. I'd like to be able to show the categories per month and then an overlaid line showing a positive upward trend with the totals. I tried bucket but am not getting the results I desire.
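If the table above already has the numbers you want, the remaining piece is mostly visualization: keep totalCount as its own series and mark it as a chart overlay so it renders as a line over the stacked columns. A sketch of the Simple XML options, assuming a column chart panel (the same settings should be reachable via Format > Chart Overlay in the UI):

<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.overlayFields">totalCount</option>
<option name="charting.axisY2.enabled">true</option>
<option name="charting.axisTitleY2.text">Total per month</option>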

Working on snoweventstream in a subsearch - Any thoughts?

Hi guys, I'm trying to control whether I have to send an event to ServiceNow or not, and this is what I've done so far. Basically, I need to check whether the query results exceed a threshold. If they do, I need to update a lookup with that value and run a snoweventstream command with severity > 0. If the result is below the threshold, I need to do the same update in the lookup and run a snoweventstream command with severity = 0. Have any of you already done something similar, or do you have ideas on how I can do this? This is basically a high-level draft of what it would be:

eval lastStatus=(subsearch: inputlookup x.csv | get status where alert_name = something)
MyQuery
if fieldA > 10 then
    if (lastStatus == 0) then
        append x.csv fieldB, fieldC, 1
        AND eval alerted=(subsearch that evals some fields and triggers the snoweventstream command with severity 1)
    else if (lastStatus == 1) then
        append x.csv fieldB, fieldC, 0
        AND eval alerted=(subsearch that evals some fields and triggers the snoweventstream command with severity 0)

Thank you in advance!
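One pattern worth considering, sketched below under several assumptions: compute the new status with eval, keep only results whose status actually changed, write the lookup back with outputlookup (which passes its results downstream), and let a severity field drive the ServiceNow command. The lookup layout (alert_name/status columns), the stats fields, and especially the snoweventstream arguments are illustrative; check the ServiceNow add-on documentation for the exact parameters it expects:

MyQuery
| stats count as fieldA by fieldB fieldC
| eval alert_name="something"
| lookup x.csv alert_name OUTPUT status as lastStatus
| eval status=if(fieldA > 10, 1, 0)
| eval severity=if(status=1, 1, 0)
| where status != coalesce(lastStatus, -1)
| outputlookup x.csv
| snoweventstream severity=severity description="threshold check for something"

The where clause means nothing is written or sent when the state has not flipped; drop it if you want an update and a ServiceNow event on every run.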

Multiselect dynamic delimiter with token not updated on token element change

Dear All, I have put a token in place of a multiselect's delimiter in order to implement an AND/OR switch for the options. Roughly: the multiselect (token field2, with choices such as ALL/Sec/Acc/Ora) uses $field3$ as its delimiter, field3 is a dropdown with Or/And choices, and the panel search is:

index=* .... | search ... ($field2$)

The problem I have is that the only way to make a change of the dropdown field3 take effect is to also change something in the multiselect field2! How can I have a change of field3 set the right delimiter on field2 and re-run the search? Best regards, Altin
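One workaround, sketched below and not tested against your exact dashboard, is to stop relying on the delimiter itself for the operator: give the multiselect a fixed delimiter (a comma here) and let <change> handlers on both inputs rebuild a separate token ($clause$) that the search uses, so changing either input refreshes the panel. The choice values, the clause token name, and the use of $value$/$form.field2$ inside the eval are assumptions you may need to adjust:

<input type="multiselect" token="field2">
  <delimiter>,</delimiter>
  <!-- your existing choices / populating search go here -->
  <change>
    <!-- rebuild the search clause whenever the selection changes -->
    <eval token="clause">replace("$value$", ",", " $field3$ ")</eval>
  </change>
</input>

<input type="dropdown" token="field3">
  <choice value="OR">OR</choice>
  <choice value="AND">AND</choice>
  <default>OR</default>
  <change>
    <!-- rebuild the same clause whenever the operator changes -->
    <eval token="clause">replace("$form.field2$", ",", " $value$ ")</eval>
  </change>
</input>

<!-- the panel search then uses the rebuilt token -->
<query>index=* .... | search ... ($clause$)</query>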

How do I remove gaps in charts?

Hi Splunkers, I was able to plot a graph that shows all the info I need, but it also contains massive gaps that make it less appealing. Is it possible to eliminate those gaps? I'm not concerned about keeping the timeframe consistent. My search is as follows:

index=crypto CurrencyB="CND" OR CurrencyS="CND"
| timechart sum(eval(if(CurrencyB="CND",Buy,Sell*-1))) as Total, sum(eval(if(CurrencyB="CND",round(Sell/Buy,8),null))) as UnitPrice span=d cont=FALSE
| streamstats sum(Total) as Gtotal

Cheers
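If the gaps are simply empty daily buckets, one option (a sketch, given you don't need a continuous time axis) is to bucket with bin and aggregate with stats instead of timechart, because stats only emits rows for buckets that actually contain events; note that eval's null function is written null():

index=crypto CurrencyB="CND" OR CurrencyS="CND"
| bin _time span=1d
| stats sum(eval(if(CurrencyB="CND",Buy,Sell*-1))) as Total, sum(eval(if(CurrencyB="CND",round(Sell/Buy,8),null()))) as UnitPrice by _time
| streamstats sum(Total) as Gtotal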

When trying to do a CSV data upload and create an index, how come no lines and events are found in the event summary?

Hi, I am trying to upload a .csv file and create an index. I'm afraid I cannot attach the file due to lack of karma points, so here is the start of it:

a002,a003,a005,a006,a007,a008,a101,a001,a009,a010,a012,a034,a046,a054,a055,a058,a072,a073,a075,a077,a078,a098,a100,a102,a103,a104,a105,d003,d004,d005,d006,d041,d042,d043,d044,d046,d047,d048,d049,d050,d051,d052,d053,d059,d115,time,mac_nwpm
46.599998,48.299999,49.400002,32.900002,-999.900024,-999.900024,0,16.6,0,0,0,5,20,31.4,21.4,52,157.100006,156,0,113,0,0,0,0,11,0,0,0,0,0,0,0,1,1,0,0,0,0,1,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:70:ca
39.700001,50,39.099998,15.9,13.2,37.400002,10.8,12.1,32,-999.900024,0,7,20,29.9,22.700001,50,204.300003,0,0,71.099998,0,29.9,17.1,32.700001,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1,1,27.09.2018 10:44,00:0a:5c:1f:86:3b
27.6,51.299999,27.700001,-999.900024,-999.900024,-999.900024,0,13.6,0,0,0,29.299999,20,22.299999,22.299999,50,254.600006,172.199997,7.4,55,15,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,1,0,27.09.2018 10:44,00:0a:5c:10:70:f0
29.9,52.900002,29.1,12.9,12.5,36.900002,10.5,14.8,29.799999,29.9,0,7,20,29.799999,29.799999,50,165,164.800003,26,27.5,44.5,19,7.1,32.700001,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:71:20
25.4,46.700001,25.700001,0,0,10.2,10.2,11.2,25.4,0,0,50,20,26.1,23,49,251.600006,0,6.6,24.4,0,22.200001,11.3,32.900002,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:8d:ca
24.5,48.400002,22.6,19.1,19.299999,-999.900024,0,13.3,0,0,0,99,20,22.299999,22.299999,50,2.7,0,0,0.9,1.6,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,27.09.2018 10:44,00:0a:5c:10:f7:99
22.200001,54.599998,22,58.400002,14.3,43.099998,0,9.7,-999.900024,-999.900024,0,7,20,23.6,23.6,56,255.100006,202.399994,0.6,256.600006,52.599998,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,27.09.2018 10:44,00:0a:5c:10:f6:75
35.200001,48.200001,32.900002,32.5,11.5,32.299999,0,10.1,0,0,0,99,20,31,31,50,222.800003,0,7.1,62.700001,77.5,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,27.09.2018 10:44,00:0a:5c:10:92:3d

Basically, it has a header line with the column names, consisting of register names (the aXXX and dXXX columns), a time field called "time", and an identifier field called "mac_nwpm". I can upload the file, but in the second step, "set source type", Splunk does not seem to be able to identify the file's structure. I also set the parameters so that the header is in line 1, the time format is %d.%m.%Y %H:%M, and the time field is called "time". Still, it seems the structure is not identified correctly, because the event summary says 0 lines and 0 events were found. Are there any other parameters I need to define? I think the .csv itself is OK, because I can do a .csv import in Excel and the columns look fine there. Best, Florian
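For comparison, the settings you described correspond roughly to a structured-data sourcetype like the sketch below (the sourcetype name is made up; %d.%m.%Y %H:%M matches timestamps like 27.09.2018 10:44). If the preview still reports 0 events, it may also be worth checking the FIELD_DELIMITER and the file's line endings:

# props.conf (on the instance doing the upload/parsing)
[nwpm_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = time
TIME_FORMAT = %d.%m.%Y %H:%M
SHOULD_LINEMERGE = false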

Is it normal to receive multiple "invalid key in stanza" errors in the configuration file of the db_connect app?

Good morning. Splunk reports many errors in the configuration file of the db_connect app. These configurations were made via the web UI, and they validate; the connections work correctly. Does anyone know if this is normal, or if there really is a problem with how the connection to the database is defined?

Invalid key in stanza [conex] in /home/splunk/splunk/etc/apps/splunk_app_db_connect/local/db_connections.conf, line 563: jdbcUrlFormat (value: jdbc:oracle:thin:@::).
Invalid key in stanza [conex] in /home/splunk/splunk/etc/apps/splunk_app_db_connect/local/db_connections.conf, line 564: jdbcUrlSSLFormat (value: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=))(CONNECT_DATA=(SERVICE_NAME=)))).

[conex]
connection_type = oracle
cwallet_location = /home/oracle/cwallet.sso
database = database_conex
host = 1.1.1.1
identity = conex_user
jdbcUrlFormat = jdbc:oracle:thin:@::
jdbcUrlSSLFormat = jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=))(CONNECT_DATA=(SERVICE_NAME=)))
port = 1521
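These "Invalid key in stanza" messages are configuration-validation warnings: the keys exist in local/db_connections.conf but are not declared in a matching .conf.spec, which is typically harmless when the app's own UI wrote the values. If you want to silence them, one option (an assumption on my side; files under the app directory can be overwritten on upgrade) is to declare the keys in the app's spec file, e.g. $SPLUNK_HOME/etc/apps/splunk_app_db_connect/README/db_connections.conf.spec:

# added so splunkd/btool validation recognises these keys (illustrative)
[<connection_name>]
jdbcUrlFormat = <string>
jdbcUrlSSLFormat = <string>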

Working on "snoweventstream" in a subsearch - Any thoughts?

$
0
0
Hi guys, I'm trying to control whenever I have to send an event to ServiceNow or not, and that's what I've done so far. Basically, I need to check if the query results exceed a threshold. If it does, I need to update a lookup with that value and run a snoweventstream command with severity > 0. If the result is below the threshold, I need to do the same update in the lookup with that value and run a snoweventstream command with severity = 0. Have any of you guys already done something similar and do you have some ideas on how I can perform that? This is basically a draft of what it would be (in a high level): eval lastStatus=(subsearch inputlookup x.csv | get status where alert_name = something) MyQuery if fieldA > 10 then if (lastStatus == 0) then append x.csv fieldB, fieldC, 1 AND eval alerted=(subsearch that eval some fields and trigger snoweventstream command with severity 1) else if (lastStatus == 1) then append x.csv fieldB, fieldC, 0 AND eval alerted=(subsearch that eval some fields and trigger snoweventstream command with severity 0) Thank you in advance!

How do I use the stats command on a field value that has duplicate entries?

I'm trying to table sales data and would like my quantity field to reflect the total number of times an item_id shows up in a single transaction. Example: let's say a customer buys 3 apples with the same item_id, 2 oranges with the same item_id, and one banana. I would like to know how to blend a stats quantity per item_id in the transaction, so that the three apples don't each show up on their own line in my list()/values(item_id) output. My current search (screenshot: /storage/temp/255060-transaction-panel.png):

index=businesstrans customer=*
| stats list(item_id) as item_id, list(description) as Description, list(quantity) as quantity, list(unit_price) as unit_price, sum(unit_price) as Transaction_Total by customer, _date
| table _date, customer, item_id, Description, quantity, unit_price, Transaction_Total
| sort - Transaction_Total
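One way to collapse the duplicate item_ids, sketched from the field names in your search (swap sum(quantity) for count if each event represents a single unit), is to aggregate per item first and then per transaction:

index=businesstrans customer=*
| stats sum(quantity) as quantity, values(description) as Description, values(unit_price) as unit_price, sum(unit_price) as line_total by customer, _date, item_id
| stats list(item_id) as item_id, list(Description) as Description, list(quantity) as quantity, list(unit_price) as unit_price, sum(line_total) as Transaction_Total by customer, _date
| table _date, customer, item_id, Description, quantity, unit_price, Transaction_Total
| sort - Transaction_Total

The first stats rolls the three apples into a single row with quantity 3; the second rebuilds the per-transaction lists your panel already shows.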

How do I cut a string after a certain text and count the results of the string before the cut?

Here is my current search over the JBoss logs:

index=jboss_app CLASS="foo.bar.bas.classname" MESSAGE="Error doing the thing bob wants to do" OR MESSAGE="Error doing the thing joe wants to do"
| stats count by MESSAGE
| sort - count

The results show:

Error doing the thing for bob :user1@company.com AccountNumber01: 4920406079372    13
Error doing the thing for bob :user2@company.com AccountNumber01: 4079379507040    12
Error doing the thing for joe :user3@company.com AccountNumber01: 1040683729965    11
Error doing the thing for joe :user4@company.com AccountNumber01: 60284967030205   10

What I want is simply to count how many results show "Error doing the thing for bob" and how many show "Error doing the thing for joe", and list them as such. Just a count of each. Thanks!
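A sketch using rex to keep only the part of MESSAGE up to the user's name and then counting on that (the pattern assumes the name is a single word right after "for"):

index=jboss_app CLASS="foo.bar.bas.classname" MESSAGE="Error doing the thing bob wants to do" OR MESSAGE="Error doing the thing joe wants to do"
| rex field=MESSAGE "^(?<short_message>Error doing the thing for \w+)"
| stats count by short_message
| sort - count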

Authentication for searches using REST API

Does REST API-based search support only username/password-based authentication? If I am developing an app that does REST API searches, the customer may not be willing to give a username/password. Is any other form of authentication possible?
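For what it's worth, the REST API does not have to receive the username/password on every call: you can exchange credentials once for a session key and use that for subsequent search requests, and newer Splunk versions also offer long-lived authentication tokens. A hedged curl sketch (host, user, and search are placeholders):

# exchange credentials for a session key once
curl -k https://splunk.example.com:8089/services/auth/login -d username=searchuser -d password='changeme'
# the response contains <sessionKey>...</sessionKey>

# run a search using the session key instead of basic auth
curl -k -H "Authorization: Splunk <sessionKey>" https://splunk.example.com:8089/services/search/jobs -d search="search index=_internal | head 5"

Either way some credential has to exist; if the customer will not share theirs, a dedicated least-privilege search user created for the app is the usual compromise.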

Splunk Enterprise installation fails using official docker image on kubernetes with "Login failed"

We are trying to run Splunk Enterprise on Kubernetes. We have a Helm chart that uses the official docker image (currently 7.1.2). We are using the following env vars to initialize Splunk:

env:
  - name: SPLUNK_START_ARGS
    value: "--accept-license --answer-yes --seed-passwd ourpassword"
  - name: SPLUNK_USER
    value: root
  - name: SPLUNK_ENABLE_LISTEN
    value: "9997"
  - name: SPLUNK_ADD
    value: tcp 1514

Splunk appears to start and displays the message:

Waiting for web server at http://127.0.0.1:8000 to be available..... Done
If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com
The Splunk web interface is at http://container-name:8000

and then a moment later we get:

Login failed
Stopping splunkd...
Shutting down. Please wait, as this may take a few minutes.
...
Stopping splunk helpers...
Done.

What login is failing? What do we need to do to correct this?
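One thing worth checking, offered as an assumption since the log does not say which login is attempted: the 7.1-era image runs splunk CLI commands after startup for SPLUNK_ENABLE_LISTEN and SPLUNK_ADD, and those need admin credentials to already be in place. If --seed-passwd inside SPLUNK_START_ARGS is not taking effect, you can seed the admin password explicitly via user-seed.conf baked into the image or mounted into the pod before first start:

# $SPLUNK_HOME/etc/system/local/user-seed.conf
# (only honoured on the first start, before $SPLUNK_HOME/etc/passwd exists)
[user_info]
USERNAME = admin
PASSWORD = ourpassword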

Best way to number each event in descending time

I need to assign a number to each event, sorted in descending _time order. Example:

Event    _time      Count
Event1   11:54:51   1
Event2   11:53:57   2
Event3   11:53:52   3

I can use | streamstats count, but does this guarantee descending order for historical searches on clustered indexers? Sorting on _time is affecting query performance, so is there any way to assign an incrementing count based on descending _time order?
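For what it's worth, streamstats numbers events in the order they stream through the search, which for a plain historical search is normally descending _time; with clustered indexers, though, the only way to guarantee the order is an explicit sort. A sketch (sort 0 lifts the default 10,000-result cap, but it still carries the sorting cost you mentioned):

your base search
| sort 0 - _time
| streamstats count as event_number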

RHEL 7.5 Start Splunk Forwarder

What is the command to start the Splunk service? Or better, what is the Splunk service name? I tried splunk and splunkd. This is RHEL 7.5 with the Splunk Forwarder package splunkforwarder-7.1.3-51d9cac7b837-linux-2.6-x86_64.rpm.
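Assuming the RPM installed into the default /opt/splunkforwarder location, the forwarder is controlled through the splunk CLI rather than a pre-installed systemd unit; enabling boot-start creates the init script (named splunk) that the service command can then manage:

# start it directly
/opt/splunkforwarder/bin/splunk start --accept-license

# optionally register it as a boot-time service (creates /etc/init.d/splunk by default on RHEL 7)
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk

# after that, the usual service commands work
service splunk status
service splunk restart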

Email alerts - Not receiving all emails

I have configured triggered alerts and an email alert action for an alert that runs every hour, with a custom trigger condition of count > 0 and "trigger for each result". I see triggered alerts for every hour, but I don't see emails for every hour; I get only one email in the morning, and that's it. Can you please help me figure out which configuration I should change so that I receive an email for every triggered alert?
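Before changing the alert itself, it may help to confirm whether Splunk even attempted the extra emails; two hedged checks against the internal logs (the saved-search name is a placeholder, and field names such as suppressed can vary slightly by version):

index=_internal sourcetype=scheduler savedsearch_name="My hourly alert"
| table _time status result_count alert_actions suppressed

index=_internal source=*python.log* sendemail

If the scheduler rows show the alert firing but suppressed, look at the alert's throttle/suppression settings; if the sendemail entries show SMTP errors, the problem is on the mail-server side rather than in the alert configuration.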