High Availability Installation

I’m a little bit confused after looking at the official Boundary High Availability installation diagram and reading the “Infrastructure Breakdown” that accompanies it.

Quote from the Breakdown Section:

We recommend placing all Boundary servers in private networks and using load balancing techniques to expose services such as the API and administrative console to public networks

However, the diagram clearly shows that all Boundary servers are placed directly in the public subnets.

What’s happening here? I’m very confused about how exactly to implement this.

Doesn’t the orange dotted box imply a private subnet? The ALB would have an interface into each subnet to provide load balancing and additional layers of security.

I think the diagram just shows the architecture deployed by the GitHub repo hashicorp/boundary-reference-architecture (an example reference architecture for a high availability Boundary deployment on AWS).

The repo uses Terraform “connection” and “remote-exec” blocks to install the controller and worker instances over their public IPs, which is why those instances are created in public subnets.
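
Roughly, the pattern looks like this (a minimal sketch, not the repo’s exact code; the AMI, key name, and install commands are placeholders):

```hcl
# Sketch of the provisioning pattern: Terraform connects to the
# instance over its public IP and runs remote-exec to install Boundary.
# AMI ID, key name, and inline commands are placeholders.
resource "aws_instance" "controller" {
  ami                         = "ami-0123456789abcdef0" # placeholder
  instance_type               = "t3.small"
  subnet_id                   = aws_subnet.public.id
  associate_public_ip_address = true # this is why the instance must sit in a public subnet
  key_name                    = "boundary-demo" # placeholder

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/boundary-demo.pem") # placeholder
    host        = self.public_ip                   # provisioning happens over the public IP
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv /tmp/boundary /usr/local/bin/boundary",  # placeholder install steps
      "sudo systemctl start boundary-controller",
    ]
  }
}
```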

As I understand it, to follow the best practice you could use the user-data feature to bootstrap the EC2 instances, or bake machine images and use them in an auto-scaling launch template.
An additional network load balancer would also be needed in front of the workers.
Then I think you can remove the public IPs and move the instances into private subnets.
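
A minimal sketch of that alternative, assuming an AMI with the Boundary binary already baked in (the AMI ID, subnet names, and service name are placeholders):

```hcl
# Sketch: bootstrap with user data instead of remote-exec, so no public IP
# is needed and the instances can live in private subnets.
resource "aws_launch_template" "worker" {
  name_prefix   = "boundary-worker-"
  image_id      = "ami-0123456789abcdef0" # placeholder: AMI with Boundary pre-installed
  instance_type = "t3.small"

  # user data runs at boot; Terraform never needs SSH access to the instance
  user_data = base64encode(<<-EOF
    #!/bin/bash
    systemctl start boundary-worker # placeholder service name
  EOF
  )
}

resource "aws_autoscaling_group" "worker" {
  name                = "boundary-workers"
  min_size            = 2
  max_size            = 5
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.private_a.id, aws_subnet.private_b.id] # private subnets

  launch_template {
    id      = aws_launch_template.worker.id
    version = "$Latest"
  }
}

# Network load balancer in front of the workers (sketch)
resource "aws_lb" "worker" {
  name               = "boundary-worker-nlb"
  load_balancer_type = "network"
  internal           = false
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}
```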

The orange dotted box is just the auto-scaling group.

I have 2 more questions:

  1. According to the document Server Configuration | Boundary by HashiCorp:
    “controllers” - A list of hosts/IP addresses and optionally ports for reaching controllers. The port will default to :9201 if not specified.
    If I put the controllers and workers in separate auto-scaling groups, how do I configure the “controllers” property in the worker config file?
    Because the private IPs change during scale-out and scale-in, can I set the “controllers” property to the endpoint of an internal ALB in front of the controllers (see the first sketch after these questions)? Do the workers need to know how many controllers are behind the ALB?

  2. The “name” property of a controller or worker must be unique, so in an auto-scaling group I have to use some metadata (for example, the instance ID) as a suffix (see the second sketch below).
    For example, the first two controllers are ControllerA on ServerA and ControllerB on ServerB. When ServerB crashes, the auto-scaling group creates a new ServerC and starts a controller named ControllerC on it.
    Is it OK for the controller and worker names to change like this, and could anyone tell me what the names are used for? Is the controller name saved in the database or somewhere else?
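
To make question 1 concrete, this is the kind of worker config I have in mind; the load balancer DNS name is a hypothetical placeholder, and I don’t know whether pointing at a load balancer here is actually supported:

```hcl
# worker.hcl - sketch of what I'm asking about in question 1.
# The DNS name below is a hypothetical internal load balancer in front of
# the controllers; :9201 is the default cluster port quoted from the docs.
worker {
  name = "worker-example" # see question 2 about uniqueness
  controllers = [
    "boundary-controller-internal.example.internal:9201" # placeholder LB endpoint
  ]
}
```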
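
And for question 2, this is the workaround I mean, sketched with user data; the placeholder token, config path, and service name are all assumptions about how the AMI is built:

```hcl
# Sketch for question 2: derive a unique name from the instance ID at boot.
# Assumes the baked AMI ships a config containing a NAME_PLACEHOLDER token.
resource "aws_launch_template" "controller" {
  name_prefix   = "boundary-controller-"
  image_id      = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.small"

  user_data = base64encode(<<-EOF
    #!/bin/bash
    # Fetch this instance's ID from the EC2 metadata service
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    # Substitute it into the config so each instance gets a unique name
    sed -i "s/NAME_PLACEHOLDER/controller-$INSTANCE_ID/" /etc/boundary/controller.hcl
    systemctl start boundary-controller # placeholder service name
  EOF
  )
}
```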

Thanks.
