Errors in new Consul cluster

I created a new Consul cluster for testing, along with Nomad. I installed both servers and clients. While Nomad looks fine, Consul has been constantly throwing these errors:

2023-02-24T13:33:02.312-0500 [ERROR] agent: Coordinate update error: error="No cluster leader"

2023-02-24T13:33:04.053-0500 [WARN] agent.cache: handling error in Cache.Notify: cache-type=connect-ca-leaf error="No cluster leader" index=0

2023-02-24T13:33:04.053-0500 [ERROR] agent.server.cert-manager: failed to handle cache update event: error="leaf cert watch returned an error: No cluster leader"

2023-02-24T13:33:06.814-0500 [ERROR] agent.anti_entropy: failed to sync remote state: error="No cluster leader"

I also tried these commands:

consul operator autopilot get-config
Error querying Autopilot configuration: Unexpected response code: 500 (No cluster leader)

consul operator autopilot state
Error querying Autopilot state: Unexpected response code: 500 (No cluster leader)

If there are any solutions, or a way to clean up the Consul state and restart, let me know. I have disabled everything ACL-related, and there is no firewall or any other network issue. Nomad seems to be working fine, or at least I can see the servers and clients communicating. When I open the Consul UI, it also says there is no leader.

So I would assume this is because your Consul cluster has no cluster leader. Can you share your Consul server config?
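On a fresh cluster, "No cluster leader" is often caused by bootstrap_expect being unset or not matching the actual number of servers, or by the servers never joining each other. For reference, a minimal three-server config looks something like the sketch below; the datacenter, data_dir, and addresses are placeholders for your environment:

# /etc/consul.d/server.hcl - minimal sketch, values are placeholders
datacenter       = "dc1"
data_dir         = "/opt/consul"
server           = true
bootstrap_expect = 3        # must equal the number of Consul servers
bind_addr        = "10.0.0.11"
client_addr      = "0.0.0.0"
retry_join       = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]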

It would also be helpful if you could show a recent portion of the log messages from at least one of your Consul servers, and the Raft peer set on each node. You can display that using:

consul operator raft list-peers -stale -http-addr=...

The -stale flag allows it to work without a cluster leader.

You need to run it multiple times, specifying each of your servers as the -http-addr. During normal operation the Raft peer set should be identical on all nodes; nodes disagreeing about it is one possible reason a leader cannot be elected.
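For example, assuming your three servers expose the HTTP API on the default port 8500 at the placeholder addresses below, you could check each one in a loop:

# Addresses are placeholders - substitute your Consul server IPs
for addr in 10.0.0.11 10.0.0.12 10.0.0.13; do
  echo "=== $addr ==="
  consul operator raft list-peers -stale -http-addr="http://$addr:8500"
done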

I kept tweaking a few things in the config, then stopped everything and started only one server manually (not using systemctl), and it became the leader. Then I stopped it, started just that one server again via systemctl, and it became the leader, after which I started the rest of the nodes. This has fixed my cluster state.
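For anyone who runs into the same thing, the sequence that worked for me was roughly the following; the unit name is whatever you use for Consul on your hosts:

# Stop Consul on all servers and clients first
sudo systemctl stop consul

# On a single server only: start it and wait until it reports itself as leader
sudo systemctl start consul
consul info | grep leader        # should eventually show leader = true

# Once that server is the leader, start Consul on the remaining nodes
sudo systemctl start consul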