Setting up Boundary Controller and Workers on EKS

Hello all,

I have been exploring what Boundary has to offer in the Kubernetes world and have currently set up 3 Boundary controllers and 3 workers as pods in the same EKS namespace. My goal is simply to access a DB that lives in a separate VPC from the one EKS and Boundary are deployed in; the two VPCs are connected through a transit gateway. I want to connect to this DB through the Boundary desktop app, which should show the target and eventually establish the connection from my local computer. I set up the Boundary controller API as the ingress for accessing the UI, but I am currently stuck on setting up the workers to establish the connection to the DB itself.
My question is: how do I set up workers in EKS that will establish the connection to the database I am trying to reach? I guess I don't fully understand how the controller really works, and most of the material out there shows the opposite direction (connecting through Boundary from a VM or EC2 instance into Kubernetes or a database, etc.). So if you have any insights on how to make this work, or any documentation to share, that would be very helpful!
Oh, and I used the ugns/boundary-chart Helm chart (https://github.com/ugns/boundary-chart) for the deployment. In this setup, workers are self-managed.
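
For reference, here is roughly what I expect the target definition to look like once a worker can reach the DB. This is a minimal sketch using the Boundary Terraform provider; the scope ID and host set ID are placeholders, not values from my setup:

# Sketch: a TCP target for the database in the peered VPC.
# The IDs below are placeholders for illustration only.
resource "boundary_target" "db" {
  name            = "postgres-db"
  type            = "tcp"
  scope_id        = "p_1234567890"      # project scope ID (placeholder)
  default_port    = 5432
  host_source_ids = ["hsst_1234567890"] # host set containing the DB endpoint (placeholder)
}

Since the workers are what proxy the session, I assume it is the worker pods, not my laptop, that need the network route to the DB, which the transit gateway should provide.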

Thanks!

Hey @lucardcoder, I am also trying to set this up, using the same Helm chart with a few modifications.
I have the same doubts. My controller sits behind an ALB, which works well, but when it comes to the workers, I am not sure how to manage them.

Since this is Kubernetes, the workers run inside pods. Below is my worker config file:

disable_mlock = true
log_format    = "standard"

worker {
  name        = "kubernetes-worker"
  description = "Boundary kubernetes-worker"
  controllers = ["boundary.default:9201"]
  public_addr = "localhost:9202"
}

listener "tcp" {
  address     = "0.0.0.0"
  purpose     = "proxy"
  tls_disable = true
}

kms "aead" {
    purpose   = "worker-auth"
    key_id    = "global_worker-auth"
    aead_type = "aes-gcm"
    key       = "somekey"
}

Here, what does public_addr mean? Is it the address of the worker? For multiple workers, should we also put them behind a load balancer and use its public DNS name?
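
From the docs, my understanding is that public_addr is the address the client (CLI or desktop app) uses to reach the worker for the session proxy, not the address the worker uses to reach the controllers. So localhost:9202 would only work from inside the pod itself. A sketch of what I think it should be, assuming a LoadBalancer Service (e.g. an NLB) in front of the worker pods at a hypothetical DNS name:

worker {
  name        = "kubernetes-worker"
  controllers = ["boundary.default:9201"]
  # Clients dial this address for the session proxy, so it must be
  # reachable from outside the cluster. The hostname is a placeholder
  # for an NLB in front of the worker pods, forwarding TCP 9202.
  public_addr = "boundary-worker.example.com:9202"
}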

How do the workers know about the controllers? If we enable autoscaling, the controller addresses will not be fixed, so where do we specify those in the configs?
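
For the controller side, my current thinking is that a regular Kubernetes Service in front of the controller pods solves this, because the Service DNS name stays stable no matter how many controller replicas autoscaling adds or removes. A sketch, assuming a Service named boundary-controller in the default namespace exposing the 9201 cluster port:

worker {
  name = "kubernetes-worker"
  # The Service DNS name stays stable; kube-proxy spreads connections
  # across whatever controller replicas currently exist.
  controllers = ["boundary-controller.default.svc.cluster.local:9201"]
}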

Also, even though I have three workers running, the admin panel only shows one of them in the UI.
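
One thing I noticed in the docs that may explain this: the worker name must be unique per worker, and in recent Boundary versions the name can be read from an env var with the env:// syntax. With name = "kubernetes-worker" hard-coded, all three replicas presumably register under the same name. A sketch, assuming a POD_NAME env var injected into each pod via the Kubernetes downward API (fieldRef: metadata.name):

worker {
  # Each worker needs a unique name; read it from the pod name so
  # every replica registers as its own worker.
  name        = "env://POD_NAME"
  controllers = ["boundary-controller.default.svc.cluster.local:9201"]
  public_addr = "boundary-worker.example.com:9202"
}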