Boundary cluster with workers integrated in the controllers possible?

Hi,

I am still trying to set up a production Boundary cluster with a Vault cluster as the credential store, using Docker.
So far I have a working Vault cluster and a working Boundary instance, connected and configured so that I can log in to Boundary Desktop with a test account and successfully create sessions.

My problem is the following:
I wanted to step up the game and configure this setup for Boundary HA as well.
But I'm confused again, because the documentation makes it sound as if the worker and the controller are two completely separate things (in my case, containers), while on the other hand there is the proxy listener that starts a worker in the same container?

So my question is:

Is it possible to combine, say, three controllers in a cluster, each with a default worker like

listener "tcp" {
  address = "0.0.0.0"
  purpose       = "proxy"
  tls_disable   = true 
}

so that the KMS/PKI worker authentication and further worker configuration are unnecessary and can be left out?

This would make the whole setup so much easier to handle.

What could an example configuration look like, if I am not completely wrong on this?

You can specify both controller and worker config stanzas in the same file, but (as far as I know) you can't define a worker proxy listener without a worker config stanza, and there is no implicit worker authentication in that setup, so you still have to provide worker auth config in the worker stanza.
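
For example, a minimal sketch of such a combined config, assuming KMS-based worker auth – all names, addresses, and the key below are placeholders, and the root/recovery KMS blocks are omitted for brevity:

disable_mlock = true

controller {
  name = "controller-1"              # placeholder name
  database {
    url = "env://BOUNDARY_PG_URL"    # placeholder; read from an env var
  }
}

worker {
  name = "worker-1"                  # placeholder name
  # Where this worker reaches a controller's cluster listener;
  # older releases call this field "controllers" instead.
  initial_upstreams = ["<controller address>:9201"]
}

listener "tcp" {
  address     = "0.0.0.0"
  purpose     = "api"
  tls_disable = true
}

listener "tcp" {
  address     = "0.0.0.0"
  purpose     = "cluster"
  tls_disable = true
}

listener "tcp" {
  address     = "0.0.0.0"
  purpose     = "proxy"
  tls_disable = true
}

# Worker auth still has to be configured, e.g. via a shared KMS:
kms "aead" {
  purpose   = "worker-auth"
  aead_type = "aes-gcm"
  key       = "8fJutTrquNMoPfzfQiTJrRX9fMRKkxDJNM4QSTEaJo0="  # example only; generate your own
  key_id    = "global_worker-auth"
}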

One thing this does is make it easier to migrate later to actually separate worker instances; also, as I understand it, the worker activities run on separate execution threads from the controller activities, so this also helps keep the communication between those threads secure.

This is correct – you can run the worker within the same container by specifying its configuration, but worker and controller currently use the same mechanism for connecting to each other as when they run separately. You can then simply point the worker's upstream at 127.0.0.1, so it's quite simple.
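
For example (a sketch; the field is initial_upstreams on current versions, controllers on older releases, and 9201 is the default cluster port):

worker {
  name = "worker-1"                        # placeholder name
  # Reach the controller running in the same container:
  initial_upstreams = ["127.0.0.1:9201"]
}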

Note that for KMS you can specify multiple values for purpose, so if you want to use the same KMS for multiple purposes you can just put e.g. purpose = ["root", "worker-auth", etc.].
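
For instance, with an aead KMS (a sketch; the key is a placeholder):

kms "aead" {
  purpose   = ["root", "worker-auth"]
  aead_type = "aes-gcm"
  key       = "8fJutTrquNMoPfzfQiTJrRX9fMRKkxDJNM4QSTEaJo0="  # generate your own
  key_id    = "global"
}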

(Some history here: when originally creating this, we wanted to make sure we didn't accidentally rely on the properties of some internal communication mechanism and break things when people actually ran the workers/controllers independently, which is how most people run it in production. We could probably switch to an internal transport at this point, but we haven't had any requests to do so, that I can recall at least.)
