Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Why does "show shcluster-status" in my SHC only show the captain, not member information?

After a recent bundle push from the deployer to our SHC members running Splunk Enterprise 7.2.4, the SHC is in a broken state with missing member information:

```
[splunk@SH1 bin]$ ./splunk show shcluster-status
 Captain:
    dynamic_captain : 1
    elected_captain : Wed Feb 20 19:02:42 2019
    id : 718F33BC-E8A5-4EDB-AFAE-279860226B84
    initialized_flag : 0
    label : SH1
    mgmt_uri : https://SH1:8089
    min_peers_joined_flag : 0
    rolling_restart_flag : 0
    service_ready_flag : 0
 Members:
```

SH2 and SH3 show the same output: the captain block is present but the Members section is empty. It appears the election completed successfully, with all members voting SH1 as captain, but the member information just never gets updated. From SHC captain SH1's splunkd.log:

```
02-20-2019 19:02:53.796 -0600 ERROR SHCRaftConsensus - failed appendEntriesRequest err: uri=https://SH3:8089/services/shcluster/member/consensus/pseudoid/raft_append_entries?output_mode=json, socket_error=Connection refused to https://SH3:8089
```

What I have tried so far:

- Cleaned up RAFT and then bootstrapped a static captain per https://docs.splunk.com/Documentation/Splunk/7.2.4/DistSearch/Handleraftissues#Fix_the_entire_cluster, with the same result afterwards.
- Confirmed every member has serverName defined properly as its own name.
- Confirmed there is no network issue: each member can reach every other member's management port 8089 via `curl -s -k https://hostname:8089/services/server/info`.
- Increased the HTTP server threads with the settings below in server.conf and restarted Splunk on all members:

```
[httpServer]
maxSockets = 1000000
maxThreads = 50000
```

The issue remains the same: none of the SHC members are listed under "show shcluster-status", the SHC stays broken, and the KV store cluster is never established.
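For reference, the static-captain bootstrap in the Handleraftissues procedure boils down to a server.conf sketch like the following (host names taken from the question; normally you would apply this with `splunk edit shcluster-config` rather than editing the file by hand, so verify against the docs for your version):

```ini
# On the member chosen as static captain (SH1):
[shclustering]
election = false
mode = captain
captain_uri = https://SH1:8089

# On the other members (SH2, SH3):
[shclustering]
election = false
mode = member
captain_uri = https://SH1:8089
```

Given the "Connection refused to https://SH3:8089" in the RAFT error, it may also be worth confirming that splunkd on SH3 was actually listening on 8089 at the moment the append-entries request was sent, not only during the later curl test.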

Dedup search results not showing for a user

I have a user who is running a search that contains `| dedup`. While I can see the results when I run the search (I'm an admin), she cannot. Is there a certain capability that needs to be enabled for her to see dedup results? This is NOT a real-time search, BTW.

Is it possible to set "Forwarder Management" as the default app on a Deployment Server?

We have a distributed deployment, and the Deployment Server functionality resides on a single-purpose machine. I was able to set the Monitoring Console as the default app for all users/roles on the Cluster Master (also a single-purpose machine) by setting the following in etc/apps/user-prefs/local/user-prefs.conf on the Cluster Master:

```
[general]
default_namespace = splunk_monitoring_console
```

Is it possible to do something similar for the Forwarder Management interface on the Deployment Server? The difference, as far as I can tell, is that the Monitoring Console is an app (etc/apps/splunk_monitoring_console/) while the Forwarder Management functionality is not.

How to use the Splunk REST API in secure mode

I started exploring the Splunk REST API and I'm using Postman to test it. The Splunk REST API throws an error if SSL verification is turned on in Postman.

**Settings** ![alt text][1]

**Error** ![alt text][2]

**If I turn SSL certificate verification off, I get the response from splunkd.**

**Request** https://localhost:8089/services/auth/login

**Response**

```
{ "sessionKey": "ezeocaLe6jyO4BiVwBsKDhDxEvmXi10rg9L0jYJrTzTx_XdFM_4Xsd3zupypHZn8QxCHVtffg9^Mt05dcl_lyIzE7puXrP9DbXSNriYp", "message": "", "code": "" }
```

[1]: /storage/temp/270633-settings-ssl.png
[2]: https://answers.splunk.com/storage/temp/270634-capture.png
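Outside Postman, the same verification trade-off shows up as the SSL context you pass to your HTTP client. A minimal stdlib sketch of the login call from the question (credentials are placeholders; the commented cafile path is an assumption about a default Splunk install):

```python
import ssl
import urllib.parse
import urllib.request

BASE = "https://localhost:8089"

def build_login_request(username, password):
    """Build the POST to /services/auth/login that returns a sessionKey."""
    data = urllib.parse.urlencode({"username": username, "password": password})
    return urllib.request.Request(BASE + "/services/auth/login",
                                  data=data.encode("utf-8"), method="POST")

# Verified context: succeeds only if you trust splunkd's certificate,
# e.g. ssl.create_default_context(cafile="/opt/splunk/etc/auth/cacert.pem").
# With the default self-signed cert and no cafile, the handshake fails,
# which is exactly what Postman reports with SSL verification on.
verified_ctx = ssl.create_default_context()

# Unverified context: the equivalent of turning SSL verification off in Postman.
unverified_ctx = ssl.create_default_context()
unverified_ctx.check_hostname = False
unverified_ctx.verify_mode = ssl.CERT_NONE

req = build_login_request("admin", "changeme")
# urllib.request.urlopen(req, context=unverified_ctx)  # actually sends the request
```

The clean fix for "secure mode" is not disabling verification but importing splunkd's certificate (or your own CA-signed one) into the client's trust store.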

Alert when "rises by" doesn't work

I am trying to raise an alert when the number of results raises by 1. Each result represents a device going offline and I need to send an email every time a device goes offline. I have a scheduled search every 5 minutes because I want to remember the current state in case one device goes online and another one goes offline. When I configured the alert I had 0 results. 10 minutes later one device went offline and then 10 minutes after another one went offline. I didn't receive any email. I'm not sure if I've misinterpreted the "rises by" setting, or it's not working. What are your thoughts?
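The "rises by" trigger compares the current scheduled run's result count with the previous run's. One way to cross-check it is to push the comparison into SPL and have the alert simply fire on "number of results > 0". A sketch, assuming the base search returns one row per offline device and that the device identifier field is called `device` (both assumptions):

```
<your offline-device search> earliest=-10m
| bin _time span=5m
| stats dc(device) AS offline BY _time
| streamstats current=true window=2 first(offline) AS prev_offline
| where offline > prev_offline
```

Scheduled every 5 minutes, this returns a row only when the offline count rose compared with the previous 5-minute bucket, which removes any ambiguity about how the UI interprets "rises by".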

Compare current and last one hour event value in same search.

Hi All, I have to monitor queues, and for that I have made a basic dashboard showing the details: *queueName, inTotalMsgs, outTotalMsgs, pendingMsgCount*, with dedup on queueName.

**Now, what I want is (another, new search):** ***"If the current pendingMsg count is greater than or equal to the count from one hour ago, then display the queueName with the label 'Queue with no processing since last one hour'"*** (or, equivalently, the outTotalMsgs is the same now as in the event from one hour ago).

Example: my basic new search [no dedup applied] will cover a lot of different queues, and I want the above check for all of them, but currently I have written only one queueName:

```
..... | xmlkv | table _time, qName, pendingMsgCount, inTotalMsgs, outTotalMsgs
```

**Timestamp** (last 60 minutes): 22/02/2019 **06:58**:00.000 to 22/02/2019 **07:58**:13.000

**Results**: only one queueName (124 events)
- first two: ![alt text][1]
- last two: ![alt text][2]

So, for this queueName, the pendingMsg count is the same, and hence it should be displayed in the dashboard results as 'Queue with no processing since last one hour'. I am not able to achieve this, please help! Thanks in advance!

[1]: /storage/temp/270636-ssss.png
[2]: /storage/temp/270637-f2.png
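The intended comparison can be expressed as earliest-vs-latest per queue over the hour. A sketch over the fields from the question, run with a "last 60 minutes" time range:

```
..... | xmlkv
| stats earliest(pendingMsgCount) AS pending_1h_ago latest(pendingMsgCount) AS pending_now
        earliest(outTotalMsgs) AS out_1h_ago latest(outTotalMsgs) AS out_now BY qName
| where pending_now >= pending_1h_ago OR out_now = out_1h_ago
| eval label="Queue with no processing since last one hour"
| table qName label pending_1h_ago pending_now
```

Because the `stats ... BY qName` groups every queue in one pass, this covers all queues in a single search rather than one queueName at a time.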

Summary Indexing

Hi, I wonder whether someone may be able to help me please. I'm using the following query:

```
(`company_wmf(Login)` authentication=Success) OR (`login-frontend_wmf(Login)` authentication=Success)
| eval "X-sessionId"=coalesce('tags.X-Session-ID', sessionId)
| eval time=strftime(earliest_time, "%d/%m/%Y %H:%M:%S")
| eval endtime=strftime(_time, "%d/%m/%Y %H:%M:%S")
| eval PTA=if('tags.path'="/account",1,"")
| stats earliest(time) as time latest(endtime) as endtime values(test) as test by X-sessionId
| search login=PTA login=G test!=""
```

I now want to incorporate this into extracting the data into a summary index. I've read a lot of documentation and posts, which do seem to contradict each other. Could someone tell me, please: would I need to change the query so I can then use the stats portion of the query in a dashboard panel, but pulling the data from the SI? Many thanks and kind regards, Chris
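For the summary-index part, one common pattern is to let a scheduled search write only the aggregated rows into the SI with `collect`, and have the dashboard panel read from the SI instead of the raw events. A sketch, where `summary_logins` is a hypothetical summary index you would have to create first:

```
<your query above, through the stats by X-sessionId>
| collect index=summary_logins
```

The dashboard panel then queries the already-aggregated data:

```
index=summary_logins login=PTA login=G test!=""
| table X-sessionId time endtime test
```

The trade-off to keep in mind: `collect` stores the stats output as events, so any field you want to filter on in the panel must survive the stats.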

Relation between mongod and scheduled searches

Hi to all, is there some relation between mongod and scheduled searches? In our environment we always had mongod disabled; recently we enabled it, and afterwards we found that some scheduled searches started to run (for example) every second instead of every 5 minutes (as per their cron schedule). This does not always happen, but especially in certain time ranges. Below is a table with the _time and scheduled_time columns for a particular search:

```
_time                    scheduled_time
2019-02-22 11:23:59.224  02/22/2019 11:20:00
2019-02-22 11:23:58.217  02/22/2019 11:20:00
2019-02-22 11:23:57.212  02/22/2019 11:20:00
2019-02-22 11:23:56.206  02/22/2019 11:20:00
2019-02-22 11:23:55.201  02/22/2019 11:20:00
2019-02-22 11:23:54.195  02/22/2019 11:20:00
2019-02-22 11:23:53.188  02/22/2019 11:20:00
2019-02-22 11:23:52.184  02/22/2019 11:20:00
2019-02-22 11:23:51.177  02/22/2019 11:20:00
2019-02-22 11:23:50.171  02/22/2019 11:20:00
2019-02-22 11:23:49.165  02/22/2019 11:20:00
2019-02-22 11:23:48.159  02/22/2019 11:20:00
2019-02-22 11:23:47.155  02/22/2019 11:20:00
2019-02-22 11:23:46.149  02/22/2019 11:20:00
2019-02-22 11:23:45.143  02/22/2019 11:20:00
2019-02-22 11:23:44.136  02/22/2019 11:20:00
2019-02-22 11:23:43.130  02/22/2019 11:20:00
2019-02-22 11:23:42.124  02/22/2019 11:20:00
2019-02-22 11:23:41.116  02/22/2019 11:20:00
2019-02-22 11:23:40.109  02/22/2019 11:20:00
2019-02-22 11:23:39.104  02/22/2019 11:20:00
```

As you can see, the search was run every second. The search was not made by us but by a third party: what information about the search do you need in order to help me understand this issue? Thanks and regards.

How can we access a Splunk dashboard from within or outside the network?

I have created a dashboard which I want to access from another system. I have tried **http://192.168.27.45:8000/en-US/app/launcher/home** (I just replaced **localhost** with my **IP**) but I am getting this error:

*Network Error (tcp_error) A communication error occurred: "Operation timed out" The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.*

Could you please provide the solution and tell me where I am going wrong? Also, how can we access this dashboard from outside the network?

Splunk role without permissions and capabilities can access jobs and see the search

Hello, to help you understand the problem, I'll quickly explain the requirements for my role:

- Create a role which only has access to one specific dashboard, containing about 5 searches.
- The role has no permissions on any index.
- Users of the role need to be able to change their own password.

We have created a Splunk role with almost no capabilities (change_own_password, list_inputs, rest_properties_get, run_collect, run_mcollect, search). The dashboard is configured within an app which grants the user read permissions, but the user is not allowed to search against the indexes. All of this is already working. To keep the ability to see reports without having permissions to execute searches on the corresponding indexes, we configured in the report settings that it is executed as another user.

The problem is that to keep the ability to change one's own password, you need to grant the user "read" permissions on the Search & Reporting app. But this also enables navigating to "Activity" -> "Jobs", which gives the user permission to see scheduled searches and so on. When the user clicks on one of those searches, they are able to see the search queries. We want to prevent the user from seeing the search query. Does anyone know the best way to solve this?

Help on the join command, please

Hi, I use the search below:

```
index=* sourcetype=* | dedup host | stats count
```

This search returns 87 events. I am trying to combine this result with another search, in order to match the events of the first search with the events of the second search. So I should also end up with 87 events, but it doesn't work. Could you help me please?

```
index=* sourcetype=* | dedup host | stats count
| join type="outer" [ search eventtype=OSBuild
    | eval OS=if(........) Build=if(...........)
    | stats latest(OS) as OS latest(Build) as Build by host]
| stats values(OS) as OS values(Build) as Build by host
| stats count as Total by OS Build
```
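For reference, one likely issue in the posted search: `| stats count` collapses the first search to a single row containing only a `count` field, so there is no `host` left for the join (or the later `by host`) to match on. A sketch that keeps `host`, leaving the elided eval logic as in the question:

```
index=* sourcetype=*
| dedup host
| table host
| join type=outer host
    [ search eventtype=OSBuild
      | eval OS=if(........) Build=if(...........)
      | stats latest(OS) as OS latest(Build) as Build by host ]
| stats count as Total by OS Build
```

With `host` preserved, the outer join keeps all 87 hosts and attaches OS/Build where the subsearch has a match.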

Splunk Add-on for Tenable / Nessus Pro 8

I am trying to set up the Splunk add-on for Tenable to pull scan reports from our Nessus Pro box. I have set up the add-on on a heavy forwarder with the information needed, but I never see anything come over. My fear, after researching, is that this functionality doesn't work smoothly, based on issues I have seen others have. Has anyone successfully gotten this working, and how? My settings are (please note that my heavy forwarder performs no indexing, so the "nessus" index is only created on my actual indexer — hoping this isn't the problem):

```
Metrics            Nessus Host Scans
Nessus Server URL  https://nessuspro:8834
Start Date         1999/01/01
Batch Size         100000
Interval           43200
Index              nessus
Status             Enabled
```

rex and sed with automatic lookups

Hi, this is basically a question about when automatic lookups are applied to data. I have a field `url` that I need to run sed on, and then use an automatic lookup to check whether the sed-ed url is in the list. What are the steps I need to take? Is it easier to use the `| lookup` command after the sed pipe? Ideally I'd have a search that runs `rex` on `url` and then looks for a lookup value that exists in the row for the value of that url in the lookup. If one is found, I know the automatic lookup matched my rexed field.
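On the ordering question: automatic lookups are applied during search-time field resolution, before the commands in your SPL pipeline run, so they only ever see the original `url`, never the sed-ed one. That makes the explicit `| lookup` after the sed pipe the workable route — a sketch, where `my_urls`, `matched`, and the sed expression are placeholder names:

```
... | rex field=url mode=sed "s/<pattern>/<replacement>/g"
| lookup my_urls url OUTPUT matched
| where isnotnull(matched)
```

Rows that survive the `where` are exactly those whose rewritten url exists in the lookup.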

Why is my prebuilt panel included with the Splunk add-on for Symantec DLP returning no results?

I made sure the search can return results within the expected 24h period. ![alt text][1]

I am trying to use the prebuilt panel included with the Splunk add-on for Symantec DLP, "symantec_dlp_top_10_incident_senders_in_last_24h", to show the particular senders of interest who caused incidents. The following is the content of that prebuilt panel — I expect it should be correct without any further modification?

```
sourcetype="symantec:dlp:syslog" earliest=-24h | top limit=10 showperc=false sender
```

Then I added the prebuilt panel to dashboards in order to view the results, but no luck. ![alt text][2]

[1]: /storage/temp/270629-3.png
[2]: /storage/temp/270630-4.png

In fact, I tried all the prebuilt panels included with the Splunk add-on for Symantec DLP:

- symantec_dlp_activities_by_action_in_last_24h
- symantec_dlp_severity_distribution_in_last_24h
- symantec_dlp_top_10_incident_senders_in_last_24h
- symantec_dlp__severity_distribution_in_last_24h

The above panels are found in Splunk Web > Settings > User interface > Prebuilt panels. Again, I expect they should be correct without any further modification?

FYI: as per the official instructions, I have specified the following variables to extract from my Symantec DLP system and send to Splunk:

```
Message = ID: $INCIDENT_ID$, Policy Violated: $POLICY$, Rules: $POLICY_RULES$, Count: $MATCH_COUNT$, Protocol: $PROTOCOL$, Recipient: $RECIPIENTS$, Sender: $SENDER$, Severity: $SEVERITY$, Subject: $SUBJECT$, Target: $TARGET$, Filename: $FILE_NAME$, Blocked: $BLOCKED$, Endpoint: $ENDPOINT_MACHINE$
```

How can I add prebuilt panels to the Splunk add-on for Symantec DLP's dedicated webpage?

I would like to add prebuilt panels to the Splunk add-on for Symantec DLP's dedicated webpage. This is my current Splunk add-on for Symantec DLP dedicated webpage: ![alt text][1] I would like to have all the prebuilt panels shown on the Symantec DLP dedicated webpage as follows; that is what is shown in "Splunk Add-on for Symantec DLP from Splunkbase at http://splunkbase.splunk.com/app/3029." ![alt text][2]

[1]: /storage/temp/270631-5.png
[2]: /storage/temp/270632-1.png

In fact, I already followed every step in https://docs.splunk.com/Documentation/AddOns/released/Overview/Prebuiltpanels, but it does not look the same as the second picture. What other steps should I take?
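For reference, prebuilt panels can also be pulled into a dashboard by reference in Simple XML, which is one way to reproduce the Splunkbase screenshot on a page of your own. A sketch — the `app` value here is an assumption about the add-on's directory name, so check what yours is actually called:

```xml
<dashboard>
  <label>Symantec DLP Overview</label>
  <row>
    <panel ref="symantec_dlp_top_10_incident_senders_in_last_24h" app="Splunk_TA_symantec-dlp"/>
    <panel ref="symantec_dlp_severity_distribution_in_last_24h" app="Splunk_TA_symantec-dlp"/>
  </row>
</dashboard>
```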

Accessing Splunk installed on a server

I have Splunk Enterprise installed on a Windows server. How can various Windows clients use that server without having Splunk installed on them?


ARM Treasure Data integration with Splunk

Hello experts, I am in the process of integrating ARM Treasure Data with Splunk. Is there any standard way of integration provided by Splunk? I have two options: using API connect, or creating some workflows and importing them. Can anyone please guide me? Thanks.

Error while using an add-on developed with Add-on Builder on IPv6

Hi all, we have an add-on which was developed using Add-on Builder. We have configured the input in the add-on and are getting the error below in the logs:

```
2019-02-07 19:37:33,760 ERROR pid=30575 tid=MainThread file=base_modinput.py:log_error:307 | Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-xxx/bin/xxx/modinput_wrapper/base_modinput.py", line 113, in stream_events
    self.parse_input_args(input_definition)
  File "/opt/splunk/etc/apps/TA-xxx/bin/xxx/modinput_wrapper/base_modinput.py", line 152, in parse_input_args
    self._parse_input_args_from_global_config(inputs)
  File "/opt/splunk/etc/apps/TA-xxx/bin/xxx/modinput_wrapper/base_modinput.py", line 170, in _parse_input_args_from_global_config
    global_config = GlobalConfig(uri, session_key, global_schema)
  File "/opt/splunk/etc/apps/TA-xxx/bin/xxx/splunktaucclib/global_config/__init__.py", line 51, in __init__
    port=splunkd_info.port,
  File "/opt/splunk/etc/apps/TA-xxx/bin/xxx/solnlib/net_utils.py", line 129, in wrapper
    'Illegal argument: {}={}'.format(arg, value))
ValueError: Illegal argument: host=::1
```

I'm using Splunk version 7.2.0 as a standalone instance.

/opt/splunk/system/local/server.conf:

```
[sslConfig]
sslPassword = xxxxxxxxxxx

[general]
pass4SymmKey = xxxxxxxxx
listenOnIPv6 = yes
connectUsingIpVersion = 6-first

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free
```

/opt/splunk/system/local/web.conf:

```
[settings]
listenOnIPv6 = yes
```

We have tried to debug this issue, and it looks like the method is_valid_hostname() inside net_utils.py is not able to parse an IPv6 address. Below is the code of that method:

```python
def is_valid_hostname(hostname):
    '''Validate a host name.

    :param hostname: host name to validate.
    :type hostname: ``string``
    :returns: True if is valid else False
    :rtype: ``bool``
    '''
    if len(hostname) > 255:
        return False
    if hostname[-1:] == '.':
        hostname = hostname[:-1]
    allowed = re.compile(r'(?!-)[A-Z\d-]{1,63}(?<!-)$', re.IGNORECASE)
    return all(allowed.match(x) for x in hostname.split('.'))
```

The allowed character class contains only letters, digits, and hyphens, so an IPv6 literal like `::1` can never match, which is why the wrapper raises `Illegal argument: host=::1`.
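To illustrate the gap, here is a sketch of a validation helper that also accepts IPv6 (and IPv4) literals using Python's stdlib `ipaddress` module. This is illustrative only, not the solnlib code — patching a vendored library is something to weigh against reporting the issue upstream:

```python
import ipaddress
import re

# One DNS label: 1-63 chars, alphanumeric or hyphen, no leading/trailing hyphen.
_LABEL = re.compile(r'(?!-)[A-Za-z0-9-]{1,63}(?<!-)$')

def is_valid_host(host):
    """Return True for a valid hostname, IPv4 literal, or IPv6 literal ('::1')."""
    try:
        # Accepts '127.0.0.1', '::1', 'fe80::1', etc.
        ipaddress.ip_address(host)
        return True
    except ValueError:
        pass  # not an IP literal; fall through to hostname rules
    if not host or len(host) > 255:
        return False
    if host.endswith('.'):
        host = host[:-1]
    return all(_LABEL.match(label) for label in host.split('.'))
```

With this shape, `is_valid_host('::1')` is accepted while malformed names like `-bad-` are still rejected.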

How to create a table in which mandatory and optional fields are correctly aligned

Hello, I have a problem extracting data from a log whose format is not fixed. Let me explain: each row of my log contains a mandatory tag (always present) followed by some other optional tags (which may or may not be present). For example:

```
father="A"; sun1="A1"; sun2="A2"; sun3="A3"
father="B"; sun1="B1"; sun3="B3"
father="C"; sun2="C2"; sun3="C3"
```

I need a query returning a table where all values are correctly aligned under their respective tags:

```
father  sun1  sun2  sun3
A       A1    A2    A3
B       B1    -     B3
C       -     C2    C3
```

Unfortunately, using rex syntax, I obtain a table where the value alignment is lost:

```
father  sun1  sun2  sun3
A       A1    A2    A3
B       B1    B3
C       C2    C3
```

Can someone help me?
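Because every tag is already in key="value" form, one way to keep optional fields aligned is to rely on key-value extraction instead of positional rex, so each value lands in the field named by its own key. A sketch using the field names from the example:

```
<your base search>
| extract pairdelim=";" kvdelim="="
| table father sun1 sun2 sun3
| fillnull value="-" sun1 sun2 sun3
```

`extract` (also known as `kv`) assigns B3 to sun3 regardless of how many tags precede it, and `fillnull` puts the "-" placeholder in the missing cells.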

