How does Consul achieve HA?

I'm confused about achieving high availability for a Consul cluster on Docker.

On GitHub, there’s a demo cluster compose file:

```yaml
version: '3'

services:

  consul-agent-1: &consul-agent
    image: consul:latest
    networks:
      - consul-demo
    command: "agent -retry-join consul-server-bootstrap -client 0.0.0.0"

  consul-agent-2:
    <<: *consul-agent

  consul-agent-3:
    <<: *consul-agent

  consul-server-1: &consul-server
    <<: *consul-agent
    command: "agent -server -retry-join consul-server-bootstrap -client 0.0.0.0"

  consul-server-2:
    <<: *consul-server

  consul-server-bootstrap:
    <<: *consul-agent
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
      - "8600:8600/udp"
    command: "agent -server -bootstrap-expect 3 -ui -client 0.0.0.0"

networks:
  consul-demo:
```


So what happens when the consul-server-bootstrap container goes down, for whatever reason?

I'm using .NET Core, which registers services over HTTP, and I'm going to use consul-server-bootstrap:8500.

Hi @murugaratham,

Welcome to the forums!

In this example, there are 3 Consul Server Agents and 3 Consul Client Agents. When the cluster gets bootstrapped, one of the Server Agents will become the Leader of the cluster. In a cluster with 3 Server nodes, you can lose at most 1 Server and still have a healthy cluster. So, to answer your question: your cluster will be fine even if the consul-server-bootstrap container goes down.
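The "lose at most 1 of 3" rule follows from Raft-style majority quorum. A quick sketch of the arithmetic (generic quorum math, not Consul-specific code):

```python
# A cluster of n servers needs a majority (n // 2 + 1) of them alive
# to elect a leader and commit writes.

def quorum(n: int) -> int:
    """Minimum number of servers that must be alive."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """How many servers can fail while the cluster stays healthy."""
    return n - quorum(n)

for n in (1, 3, 5):
    print(f"{n} servers: quorum={quorum(n)}, can lose {fault_tolerance(n)}")
# 1 servers: quorum=1, can lose 0
# 3 servers: quorum=2, can lose 1
# 5 servers: quorum=3, can lose 2
```

This is also why server clusters are typically sized at 3 or 5: going from 3 to 4 servers raises the quorum without improving fault tolerance.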

To get a full understanding of how this works, start with the architecture documentation, followed by the most important pieces: the Consensus Protocol and the Gossip Protocol.


Hi @Ranjandas,

However, in the sample YAML file, ports are mapped only on the consul-server-bootstrap container. If it goes down, how would the other containers be able to connect to those ports?

In the example compose file, the expectation is that all Consul agents are attached to the same Docker network, consul-demo. As a result, all Consul agents can talk to each other without any port mapping.

The port mapping you see on the consul-server-bootstrap container is only there so that you can interact with the cluster from your Docker host over 8500 (Consul HTTP API) and 8600 (Consul DNS port); by the way, 8400 is deprecated and not required.
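As an illustration of what that mapped HTTP port gives you, here is a small sketch of reading service endpoints out of Consul's catalog API from the Docker host. The service name my-service is hypothetical, and the live call is commented out since it assumes the compose stack is running:

```python
import json
from urllib.request import urlopen

def endpoints(body):
    """Turn a /v1/catalog/service/<name> response body into host:port pairs."""
    return [f'{e["ServiceAddress"]}:{e["ServicePort"]}' for e in json.loads(body)]

# With the compose stack up, this would hit the port mapped to the host:
#   body = urlopen("http://localhost:8500/v1/catalog/service/my-service").read()
#   print(endpoints(body))

# Offline demonstration with a response shaped like Consul's catalog output:
sample = '[{"ServiceAddress": "172.18.0.5", "ServicePort": 8080}]'
print(endpoints(sample))  # ['172.18.0.5:8080']
```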

After all, this compose file is only intended for a development environment. And while developing, if the consul-server-bootstrap container fails for some reason, you won't be able to communicate with the cluster from the host, unless you exec into one of the remaining Consul containers and use the Consul CLI (e.g. docker exec <container> consul members).

Hope this helps.

I see. I was planning to create an HA Consul cluster and have application containers connect to it for service discovery, and I was confused by the port mapping on the single (point-of-failure) bootstrap agent. Thanks for the clarification!

One thing to note is that the consul-server-bootstrap container is not a point of failure once the cluster is formed. There is no guarantee that it will become the leader. This is perfectly OK, as in Consul you don't need to explicitly talk to the leader to talk to the cluster (RPC requests will get forwarded to the leader as required).

The reason only one container has the port mapping is that, if you added mappings for the other two server containers, you would have to randomize the host ports, and that in turn would require you to set CONSUL_HTTP_ADDR to the right <host>:<port> combination every time a container fails.

So from a Docker perspective, how do application containers connect to Consul? Right now I'm providing the bootstrap agent's address to the containers.

In a production scenario, you would list all the server nodes in -retry-join to avoid a single point of failure (in this example only consul-server-bootstrap is given). These could be IP addresses, hostnames, or even dynamic lookups (ref: Cloud Auto-join | Consul by HashiCorp).
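For instance, the client agent service from the demo compose file could be adjusted along these lines, since -retry-join can be passed multiple times. This is only a sketch using the demo's own service names; in a real deployment these would typically be stable IPs, hostnames, or cloud auto-join tags:

```yaml
  consul-agent-1: &consul-agent
    image: consul:latest
    networks:
      - consul-demo
    command: "agent -retry-join consul-server-1 -retry-join consul-server-2 -retry-join consul-server-bootstrap -client 0.0.0.0"
```

With all three servers listed, an agent can still join the cluster even if any single server container is down.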

Again, this docker-compose file is aimed at a dev setup; I wouldn't consider it production-ready.