Clarification on multi-account/datacenter strategy

I’ve been reading about using Consul across multiple datacenters. Most of the docs refer to each datacenter/account having its own servers and clients: Consul Architecture | Consul by HashiCorp

Is there a way to have a central consul server cluster, and multiple clients across AWS accounts talk to the central cluster?

A datacenter in Consul is just a data-separation term. You can have as many servers/members as you want in a single datacenter, even if they’re spread across actual data centers or regions, as long as they have network connectivity.
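To illustrate, here is a minimal sketch of a central server agent config where `datacenter` is only a logical label shared by every agent that joins, regardless of which account or region the machine actually lives in. The datacenter name, IPs, and paths below are placeholders, not anything specific to your setup.

```hcl
# server.hcl -- one of the central Consul server agents
datacenter       = "dc1"          # logical grouping; not tied to a physical site or account
server           = true
bootstrap_expect = 3              # expected number of servers in the central cluster
data_dir         = "/opt/consul"
bind_addr        = "10.0.1.10"    # placeholder private IP, must be reachable from the client networks
client_addr      = "0.0.0.0"
```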


Yes, it’s possible to support this deployment architecture. You will need to peer the client VPCs with the central cluster’s VPC (either through VPC peering or a transit gateway) in order to allow communication between the Consul client and server agents.
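As a rough sketch, a client agent in another AWS account would point `retry_join` at the central servers over the peered network and advertise an address that is routable back across the peering/transit gateway. The IPs here are placeholders, and in practice you could also use cloud auto-join instead of static addresses.

```hcl
# client.hcl -- client agent running in a separate AWS account
datacenter     = "dc1"                                   # must match the central cluster's datacenter
server         = false
data_dir       = "/opt/consul"
retry_join     = ["10.0.1.10", "10.0.1.11", "10.0.1.12"] # central server IPs, reachable via peering/TGW
advertise_addr = "10.20.3.15"                            # this node's private IP, routable from the servers
```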

Consul also has latency constraints between agents. We recommend servers have sub-8ms round-trip latency between them (see Consul Reference Architecture: Network connectivity), and clients should have an average of 50ms round-trip latency to the servers.

One thing to note is that even though the clients may exist across multiple AWS accounts, from the perspective of Consul, it is still assumed that these agents will be under common administrative control. That is to say, cluster-level configuration will apply to all users of the system.

If you need each AWS account to operate more autonomously, and have administrative level separation between different tenants on the system, the new Administrative Partitions feature in Consul Enterprise 1.11 allows for this. Check out the Consul 1.11 GA blog post for more info.
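For example (a sketch only, assuming Consul Enterprise 1.11+ and that a partition named `team-a` has already been created on the servers), a client agent belonging to one tenant account would declare its partition in its agent config; the partition name here is purely illustrative.

```hcl
# client.hcl -- Consul Enterprise client agent scoped to an admin partition
datacenter = "dc1"
server     = false
data_dir   = "/opt/consul"
partition  = "team-a"                                    # admin partition this agent (and its services) belongs to
retry_join = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]     # same central servers as before (placeholder IPs)
```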


Awesome, thanks both for the answers, this is helpful!