Using AWS ECS or good ol' EC2 instances to set up HashiCorp Boundary

Hey all. I am looking into incorporating HashiCorp Vault and Boundary into my architecture. I am currently testing Vault, but I would also like to add Boundary since it can help replace bastion hosts. However, I am having a hard time finding articles on the steps for configuring Boundary and Vault in an AWS architecture.

I have looked at this deployment in AWS and was wondering: is it best practice to run Boundary on EC2 instances or on AWS ECS (which we currently use)?

Would appreciate any help!

Speaking only for myself, I would say that since Boundary raises security considerations similar to Vault’s (with regard to things like protecting sensitive data from being written to disk, which requires additional process capabilities to enforce), the answer to that question will be similar for both. To wit, quoting from the Vault docs: “Vault should be the only main process running on a machine. This reduces the risk that another process running on the same machine is compromised and can interact with Vault. Similarly, running on bare metal should be preferred to a VM, and running in a VM should be preferred to running in a container.”

I think that quoted sentence is equally true if you replace “Vault” with “Boundary”. That doesn’t mean you can’t run it in a container under any circumstances; it just means there are additional exposures and mitigations you have to take into account if you do. A lot of the advice in the Vault on Kubernetes security considerations guide applies equally to Boundary, although specifically in the case of ECS you might have to trust that the platform is handling some things without being able to enforce them explicitly yourself, because the underlying VM is abstracted away from you. (You definitely can, and should, explicitly enforce capabilities and limits in your ECS task definitions, though.)
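For example, here’s a rough boto3 sketch of what I mean by enforcing capabilities and limits in a task definition. The family, image tag, ports, and sizes are placeholders, not recommendations — the point is dropping all Linux capabilities except IPC_LOCK (which the process needs if you keep mlock enabled to keep sensitive data out of swap) and pinning memory explicitly:

```python
# Hedged sketch: register an ECS task definition for a Boundary controller
# with explicit capability and memory settings. Names, image tag, ports,
# and sizes are placeholders -- adjust for your environment.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="boundary-controller",
    requiresCompatibilities=["EC2"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",  # hard ceiling for the whole task (MiB)
    containerDefinitions=[
        {
            "name": "boundary-controller",
            "image": "hashicorp/boundary:latest",  # pin a specific version in practice
            "essential": True,
            # Adjust to however you mount/supply the controller config file.
            "command": ["boundary", "server", "-config", "/boundary/controller.hcl"],
            "portMappings": [
                {"containerPort": 9200, "protocol": "tcp"},  # API
                {"containerPort": 9201, "protocol": "tcp"},  # cluster
            ],
            "linuxParameters": {
                "capabilities": {
                    "add": ["IPC_LOCK"],  # allow mlock so secrets stay off swap
                    "drop": ["ALL"],      # drop every other capability
                },
            },
            "memory": 1024,            # hard limit (MiB) for the container
            "memoryReservation": 512,  # soft reservation for scheduling
        }
    ],
)
```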

Note that with Vault, we (and by extension, with Boundary, I) still recommend single-tenancy even when running in containers, i.e. it should be the only application workload present on the node. It’s up to you whether the overhead of doing that, plus mitigating the container security concerns, is outweighed by the convenience of running in a container (and in fairness, that convenience is pretty massive in my view).
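On the single-tenancy point, one way to approximate it on ECS-on-EC2 is to reserve container instances for Boundary via a custom instance attribute and keep controller tasks on distinct instances. A rough sketch — the cluster, service, subnet, security group, and attribute names are all illustrative:

```python
# Hedged sketch: only schedule Boundary controller tasks onto container
# instances you have reserved for Boundary (custom "workload=boundary"
# attribute), and never place two controller tasks on the same instance.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="boundary-cluster",
    serviceName="boundary-controller",
    taskDefinition="boundary-controller",  # the family registered earlier
    desiredCount=2,
    launchType="EC2",
    placementConstraints=[
        # Instances get this attribute via ECS_INSTANCE_ATTRIBUTES or put-attributes.
        {"type": "memberOf", "expression": "attribute:workload == boundary"},
        # One controller task per instance.
        {"type": "distinctInstance"},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa", "subnet-bbbb"],      # placeholders
            "securityGroups": ["sg-boundary-controllers"],  # placeholder
        }
    },
)
```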

Thank you for replying @omkensey! I absolutely agree. I currently run Vault in a container on each of two EC2 instances that are up constantly. I will do the same with Boundary on four separate EC2 instances: two as controllers and the other two as workers.

Would you know of any documentation or example Dockerfiles showing Boundary deployed with its configs? I would like to see how I can achieve this.

We have a Docker Compose ref arch that should help get you started. Note that if you break it apart across multiple nodes, you’ll probably need to change the Boundary config file since Compose’s service DNS will no longer be in effect.
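For what it’s worth, the piece that usually needs editing when you split it across nodes is the worker’s upstream controller address. A rough sketch of generating a worker config that points at real controller addresses instead of a Compose service name — the addresses, worker name, and KMS key alias are placeholders, and on Boundary releases older than 0.9 the key is `controllers` rather than `initial_upstreams`:

```python
# Hedged sketch: render a Boundary worker config that points at real
# controller cluster addresses instead of a Docker Compose service name.
# Addresses, names, and the KMS key ID below are placeholders.
from string import Template

WORKER_HCL = Template("""\
listener "tcp" {
  purpose = "proxy"
  address = "0.0.0.0:9202"
}

worker {
  name              = "$worker_name"
  public_addr       = "$public_addr"
  # On Boundary releases older than 0.9 this key is "controllers" instead.
  initial_upstreams = [$upstreams]
}

kms "awskms" {
  purpose    = "worker-auth"
  kms_key_id = "$kms_key_id"
}
""")

controllers = ["10.0.1.10:9201", "10.0.2.10:9201"]  # controller cluster addresses

config = WORKER_HCL.substitute(
    worker_name="worker-1",
    public_addr="10.0.3.10",
    upstreams=", ".join(f'"{c}"' for c in controllers),
    kms_key_id="alias/boundary-worker-auth",
)

with open("/etc/boundary/worker.hcl", "w") as f:
    f.write(config)
```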