UFW has the required ports opened for Consul (I also disabled UFW entirely and the same issue remains).
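A quick way to double-check connectivity independent of UFW is to probe the ports from each host. This is only a minimal sketch assuming Consul's default port assignments (8300 server RPC, 8301/8302 Serf gossip, 8500 HTTP, 8600 DNS), and it only covers the TCP side, since a plain connect test cannot confirm the UDP traffic Serf and DNS also use; the host list is simply the three servers shown below.

```python
# Minimal sketch: verify TCP reachability of Consul's default ports.
# Only checks TCP; Serf (8301/8302) and DNS (8600) also use UDP, which a
# plain connect test cannot confirm. Hosts are the three servers from the
# consul members output below.
import socket

HOSTS = ["192.168.90.137", "192.168.90.123", "192.168.90.125"]
PORTS = {
    8300: "server RPC (Raft)",
    8301: "Serf LAN gossip (TCP side)",
    8302: "Serf WAN gossip (TCP side)",
    8500: "HTTP API",
    8600: "DNS (TCP side)",
}

for host in HOSTS:
    for port, label in PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host}:{port} ({label}): open")
        except OSError as exc:
            print(f"{host}:{port} ({label}): unreachable ({exc})")
```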
Consul Members
Node        Address              Status  Type    Build   Protocol  DC     Partition  Segment
PG1-UBUNTU  192.168.90.137:8301  alive   server  1.15.3  2         james  default
PG2-UBUNTU  192.168.90.123:8301  alive   server  1.15.3  2         james  default
PG3-UBUNTU  192.168.90.125:8301  alive   server  1.15.3  2         james  default
192.168.200.40 is not part of the cluster, yet this node seems to think it could be.
Jul 15 16:00:19 PG3-UBUNTU consul[31171]: 2023-07-15T16:00:19.639Z [ERROR] agent.anti_entropy: failed to sync remote state: error="Raft leader not found in server lookup mapping"
How is this possible, and how can I stop it from happening?
I am unfamiliar with the format of the information you have shown above.
In order to understand the current status of your cluster, it would be helpful if you could share the current Raft configuration from each of your nodes.
Normally this would be obtainable via consul operator raft list-peers -stale -http-addr 192.168.x.y:8500, directing the query to each of your nodes in turn.
If you are unable to get it that way, it is also logged during server startup, in a log line mentioning [INFO] agent.server.raft: initial configuration:
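If the CLI is awkward to run from where you are, the same data is exposed by the HTTP API at /v1/operator/raft/configuration. Below is a minimal sketch that queries each server in turn; it assumes the HTTP API is listening on port 8500 with no ACL token or TLS, so adjust to match your agent configuration.

```python
# Minimal sketch: read each server's view of the Raft configuration over the
# HTTP API (the same data `consul operator raft list-peers -stale` prints).
# Assumes the HTTP API listens on port 8500 and no ACL token or TLS is needed.
import json
import urllib.request

SERVERS = ["192.168.90.137", "192.168.90.123", "192.168.90.125"]

for host in SERVERS:
    # ?stale lets the queried server answer from its own state even if it
    # cannot see a Raft leader, which is the situation described here.
    url = f"http://{host}:8500/v1/operator/raft/configuration?stale"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            config = json.load(resp)
    except OSError as exc:
        print(f"{host}: request failed: {exc}")
        continue
    print(f"== {host} ==")
    for server in config.get("Servers", []):
        print(f"  {server['Node']:<12} {server['Address']:<22} "
              f"leader={server['Leader']} voter={server['Voter']}")
```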
Understanding how your cluster got into its current state may not be possible unless you are willing to share historical logs that show when the problem node began to interact with the unexpected host.
2023-07-15T18:22:14.480Z [DEBUG] agent.server.raft: accepted connection: local-address=192.168.90.125:8300 remote-address=192.168.200.40:50945
2023-07-15T18:22:14.480Z [DEBUG] agent.server.raft: lost leadership because received a requestVote with a newer term
I created a new encrypt key and ran the build again, and the error appears to be cleared. I was not knowingly reusing an encrypt key from the other Consul environment, but it appears that if you build clusters multiple times with the same key, this can happen.
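For anyone who lands here with the same symptoms: one way to confirm that two independently built clusters are not sharing a gossip encryption key is to compare what each keyring reports. This is a minimal sketch, assuming the HTTP API is reachable on port 8500 and keyring operations are not blocked by ACLs; the cluster names and the second address are placeholders.

```python
# Minimal sketch: list the gossip encryption keys each cluster has installed,
# so you can confirm two independently built clusters are not sharing a key.
# Assumes the HTTP API on port 8500 is reachable and keyring operations are
# not blocked by ACLs; the cluster names and second address are placeholders.
import json
import urllib.request

CLUSTERS = {
    "cluster-a": "192.168.90.137",
    "cluster-b": "192.168.200.40",   # placeholder for the other environment
}

for name, host in CLUSTERS.items():
    url = f"http://{host}:8500/v1/operator/keyring"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            rings = json.load(resp)
    except OSError as exc:
        print(f"{name} ({host}): request failed: {exc}")
        continue
    print(f"== {name} ({host}) ==")
    for ring in rings:
        scope = "WAN" if ring.get("WAN") else f"LAN dc={ring.get('Datacenter')}"
        for key, count in ring.get("Keys", {}).items():
            print(f"  {scope}: {key} (installed on {count} nodes)")
```

If the same key shows up in both keyrings, agents from the two environments can gossip with each other once any join attempt happens, which would be consistent with the behaviour described above.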