Indexer was running normally yesterday. We offlined it and, after maintenance, rebooted it. When it came back up, it had a new IP because *reasons*, and it joined the cluster with the new IP. After we realized what had happened, and after much troubleshooting with my NOC, they got the correct IP back in place and I offlined and rebooted it again. Everything looked normal, but today I'm seeing this error:
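In case it's relevant, here is the kind of quick check that confirms what the peer itself thinks its identity is (a sketch assuming a default install with $SPLUNK_HOME set; the grep patterns are just search terms, not exact log strings):

    # What IP does this host resolve to now? (Linux)
    hostname -i

    # The peer's GUID -- the value the master keys registration on
    grep guid $SPLUNK_HOME/etc/instance.cfg

    # Recent registration attempts in the peer's own splunkd log
    grep -i "register with cluster master" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20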
Search peer dc1prsplixap08 has the following message:

    Failed to register with cluster master reason: failed method=POST
      path=/services/cluster/master/peers/?output_mode=json master=DC1PRSPLDP01:8089
      rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500
      expected_response_code=2xx status_line="Internal Server Error" socket_error="No error"
    remote_error=Cannot add peer=11.1.136.166 mgmtport=8089
      (reason: Peer with guid=C395587E-CB3A-4492-8662-71AFD3002A89 is already registered and UP).
    Make sure pass4SymmKey is matching if the peer is running well.
    [ event=addPeer status=retrying AddPeerRequest: { _id=
      active_bundle_id=F24FD19BC912B3FE530FB3917ED1B287 add_type=Initial-Add
      base_generation_id=0 batch_serialno=1 batch_size=20 forwarderdata_rcv_port=9997
      forwarderdata_use_ssl=0 last_complete_generation_id=0
      latest_bundle_id=F24FD19BC912B3FE530FB3917ED1B287 mgmt_port=8089
      name=C395587E-CB3A-4492-8662-71AFD3002A89 register_forwarder_address=
      register_replication_address= register_search_address= replication_port=9000
      replication_use_ssl=0 replications= server_name=dc1prsplixap08 site=default
      splunk_version=6.6.0 splunkd_build_number=e21ee54bc796 status=Up } ].
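If it helps with diagnosis, the master can be asked directly what it still has registered under that GUID, using the same REST endpoint shown in the error above (admin:changeme below is a placeholder credential), and btool will show the effective [clustering] stanza on the peer, including pass4SymmKey, for comparison against the master:

    # Ask the master what it has registered for this peer's GUID
    # (same endpoint as in the error; credentials are placeholders)
    curl -sk -u admin:changeme \
        "https://DC1PRSPLDP01:8089/services/cluster/master/peers/C395587E-CB3A-4492-8662-71AFD3002A89?output_mode=json"

    # Show the effective clustering config on the peer, pass4SymmKey included
    $SPLUNK_HOME/bin/splunk btool server list clustering --debug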
Linux, Splunk version 6.6.3