Managing different AWS accounts and VPCs with Boundary

Hi,

We currently manage a lot of backend servers across different AWS accounts and VPCs.
Do I need to install a Boundary server in each environment, or is it possible to manage different AWS accounts/VPCs from a single management server?


As far as I know, the main thing you need to worry about is whether clients can connect to controllers/workers over the network and whether workers can connect to targets.

To expand on that, one way you can manage it is to have workers in each AWS account/VPC to handle the traffic, and use worker tags and target worker filters to ensure the right workers handle the right traffic.
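As a rough sketch (all names, addresses, and tag values below are made up, and the exact config keys can differ between Boundary versions), the worker in each VPC carries tags in its config, and each target gets a filter that selects on those tags:

```hcl
# worker.hcl for a worker inside VPC 2 (illustrative values only)
worker {
  name        = "worker-vpc2"
  controllers = ["203.0.113.10:9201"]  # controller address reachable from this VPC
                                       # (newer Boundary versions use initial_upstreams instead)
  tags {
    type = ["vpc2"]
  }
}

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}
```

A target for hosts in that VPC would then carry a worker filter such as `"vpc2" in "/tags/type"`, so only the VPC 2 workers proxy its sessions.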

Hi,

I have implemented the AWS deployment from deployment/aws in the hashicorp/boundary-reference-architecture repo on GitHub.

The worker in the first VPC works fine.
I was trying to install another worker in a second VPC (with no routed connection between the two) by manually using worker.hcl, the install.sh worker command, and the boundary binary on the new worker server.
I edited the worker.hcl file so it can communicate with the controllers using their external IPs, but it still tries to connect via their private IPs:
[screenshot: worker log showing connection attempts to the controllers' private IPs]


At first the status showed it was connected to the controller, but then it reports the controller's private IP.
Do I need to configure something on the controller side?
When I try boundary connect, it still wants to connect to the worker in VPC one.

If the workers need to connect to the controller on an IP/hostname different from its private IP/hostname, you’ll need to configure public_cluster_addr in the controller config. Note that all workers talking to that controller will then try to connect on that public address, even the ones within the same VPC, so you may need to make additional changes to your security groups in the controller’s VPC to allow that.
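As a hedged example of what that can look like (addresses are placeholders and only the relevant bits of the config are shown):

```hcl
# controller.hcl (illustrative values only)
controller {
  name                = "demo-controller-0"
  public_cluster_addr = "203.0.113.10"  # address remote workers should dial for the cluster port
}

listener "tcp" {
  address = "10.0.1.5:9201"  # still binds on the controller's private IP
  purpose = "cluster"
}
```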

Hi omkensey, thanks a lot for your help. The worker is now connected after configuring public_cluster_addr as you suggested. Now I'm trying to figure out worker_filter - can it only be configured via Terraform and not from the admin UI? Can you give an example of how to put it in the resource?
These are the worker tags I have added to the worker config:
[screenshot: worker tags in the worker config]

Do I have to use a key like region, or is that optional?

Currently there’s no interface exposed for configuring worker filters in the admin UI; you can configure it via the Terraform provider in the boundary_target resource, or you can configure it on the CLI using the -worker-filter option when creating or updating a target (or using API calls directly, of course).
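A rough sketch of both routes, assuming a worker tagged with type = ["vpc2"] (the IDs and resource references are placeholders, and argument names can differ between provider versions - newer ones may call this egress_worker_filter):

```hcl
# Terraform: boundary_target with a worker filter (illustrative)
resource "boundary_target" "vpc2_ssh" {
  name          = "vpc2-ssh"
  type          = "tcp"
  scope_id      = boundary_scope.project.id
  default_port  = 22
  worker_filter = "\"vpc2\" in \"/tags/type\""
}
```

Or against an existing target from the CLI (the target ID is a placeholder):

```shell
boundary targets update tcp -id ttcp_1234567890 \
  -worker-filter '"vpc2" in "/tags/type"'
```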

As for worker tags, they can be any K/V pair at all; region and type are some of the more common keys people tag resources with in cloud environments, but you could tag them with info about who administers that portion of Boundary or even what you had for lunch the day you created that worker – it’s completely up to you. You can also leave workers completely untagged if you don’t need tags on them – I run a demo environment with an untagged worker running on a VM and a tagged worker running inside a Kubernetes cluster.

I have tried to use the name of that worker and put it in worker_filter:


and tried updating it via the CLI:

How can I do this correctly?
The worker name and tags are here:
[screenshot: worker name and tags]

Another issue: after adding public_cluster_addr I'm not able to connect through the worker to the private machine at 10.0.100.* in the private subnet (created via the Terraform deploy).
I am uploading the config files (controller on the left, worker on the right):

I have only added the public address to the controller and edited the controllers entry in the worker configs to use their public rather than private IPs.
SSH between the worker and the target machine works fine:


But why do I get the error for the previous worker?

Is it because I now have two workers connected? (demo-worker-0/1)

For worker filters, you're not just giving a name; you're defining filter operations based on k/v pairs. Check the link in my previous response; it has some examples of worker filters.
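For example (a hedged sketch with placeholder values), to pin a target to the worker you named demo-worker-1, or to any worker carrying a given tag, the filter expression would look something like:

```text
"/name" == "demo-worker-1"
"vpc2" in "/tags/type"
```

Either expression goes into the target's worker_filter field as a single string.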

For the connect error you’re getting, it looks like your client is trying to connect to the controller on 127.0.0.1 and being refused. Your controller has a specific IP defined in the listener stanza (I assume its private IP), so it won’t bind to 127.0.0.1 by default. Try connecting to it on the private IP.
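By way of illustration (the address here is a placeholder), an api listener stanza like this only answers on that one interface, so the client has to be pointed at that address explicitly:

```hcl
# controller.hcl api listener (illustrative)
listener "tcp" {
  address = "10.0.1.5:9200"  # binds only to this private IP, not 127.0.0.1
  purpose = "api"
}
```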

Hi, for the connection error I'm just using the same BOUNDARY_ADDR, pointing at the ALB created via Terraform. I even tried connecting via the external IP of the controller itself. I have completely destroyed and redeployed all the AWS and Boundary resources using Terraform, rebuilding the whole infrastructure from scratch, and the weird thing is I'm still facing the same issue.


I'm not trying to connect to the controller on 127.0.0.1. I even tried from another machine using the Boundary desktop client and the connection is still refused…

When you set an environment variable assignment before a command, it’s set only for that command. In your screenshot you’re setting BOUNDARY_ADDR for boundary authenticate but then it’s left unset for boundary targets, so the CLI defaults to trying to connect to localhost. You either need to export it (then it’s set persistently for the environment in that shell), or pass it to the Boundary CLI with each command using the -addr option.
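In shell terms, the difference looks roughly like this (the address, auth method ID, and scope ID are placeholders):

```shell
# Set only for this one command - later commands fall back to the default (127.0.0.1)
BOUNDARY_ADDR="http://my-alb.example.com:9200" boundary authenticate password \
  -auth-method-id ampw_1234567890 -login-name admin

# Option 1: export it once so every command in this shell uses it
export BOUNDARY_ADDR="http://my-alb.example.com:9200"
boundary targets list -scope-id p_1234567890

# Option 2: pass the address explicitly with each command
boundary targets list -addr "http://my-alb.example.com:9200" -scope-id p_1234567890
```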