Boundary Reverse Proxy

Are there any plans to allow proxying connections to Boundary via a reverse proxy?

Ideally, I would like to be able to expose the API and worker ports via the standard HTTPS port; however, based on what I can see, this is not currently possible.

It seems like trying to proxy the client-to-worker connection in any manner results in an error such as:

Error dialing the worker: failed to WebSocket dial: failed to send handshake request: Get "https://boundaryw.removed.com:443/v1/proxy": x509: certificate is valid for default, not s_XsSU4VHSW1

My current setup exposes the controller at https://boundary.removed.com and the worker at https://boundaryw.removed.com, using SNI matching in Traefik to route between the two.

Even though Traefik is set to pass-through the TLS session, it seems to present the wrong certificate.
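
For reference, the worker side of the Traefik dynamic config is shaped roughly like this (a sketch, not my exact config; the backend address is a placeholder and websecure is the assumed 443 entrypoint):

tcp:
  routers:
    boundary-worker:
      entryPoints:
        - websecure
      rule: "HostSNI(`boundaryw.removed.com`)"
      service: boundary-worker
      tls:
        passthrough: true

  services:
    boundary-worker:
      loadBalancer:
        servers:
          - address: "10.0.0.5:9202"   # placeholder worker address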

Is there a recommended method of load balancing the worker connections?


@Just-Insane did you ever get this to work?

I was successful at getting Boundary to work using HAProxy through port 443. Initially, I did use the default 9202/tcp port through my firewall but discovered that some networks actively block uncommon ports.

So now, all my HTTPS (ingress/web/boundary/etc.) traffic goes through 443/tcp. Very happy with that.

This works by using HAProxy TCP inspection and SNI routing. I’m not sure if Traefik does this.

My HAProxy config snippet:

frontend SSL_PassThrough
    bind *:{{ env "NOMAD_PORT_https" }}
    mode tcp

    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    use_backend boundary_worker if { req_ssl_sni -m beg s_ }

    default_backend TCP_to_Frontend_SSL_Termination

backend TCP_to_Frontend_SSL_Termination
    mode tcp
    server haproxy-https 127.0.0.1:8443 verify none

frontend SSL_Termination
    bind 127.0.0.1:8443 ssl crt "${NOMAD_SECRETS_DIR}/certs"
    mode http
...

A more complete picture of my HAProxy config:

frontend SSL_PassThrough
    bind *:{{ env "NOMAD_PORT_https" }}
    mode tcp

    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    use_backend boundary_worker if { req_ssl_sni -m beg s_ }

    default_backend TCP_to_Frontend_SSL_Termination

backend TCP_to_Frontend_SSL_Termination
    mode tcp
    server haproxy-https 127.0.0.1:8443 verify none

frontend SSL_Termination
    bind 127.0.0.1:8443 ssl crt "${NOMAD_SECRETS_DIR}/certs"
    mode http

...

backend boundary_worker
    mode tcp
    balance roundrobin
    server-template boundary_worker 4 _boundary-worker-internal._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check verify none
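
Note that the server-template line relies on a resolvers consul section that isn’t shown above. Assuming Consul’s DNS interface on its default port 8600, a minimal sketch would be:

resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192
    hold valid 5s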

I would like to know from the Boundary devs whether the partial host string s_ is safe to use for the SNI routing I set up with HAProxy. I really couldn’t conceive of another way, since the internal Worker TLS stack seems to choose s_ consistently for mTLS.

Intrigued to know if you guys had success with this, as I’m currently working on a Boundary install sitting behind an existing nginx reverse proxy. Ideally I’d like to stick with just an nginx server, but I’m willing to give something like HAProxy or Traefik a go if that helps get things working as an initial PoC.

I have my setup behind Traefik and it has been stable; I imagine nginx would be just as capable.
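
If you want to stay on nginx, the stream module with ssl_preread should be able to do the same SNI split as the HAProxy config above. A sketch (untested; addresses and ports are assumptions):

stream {
    # Route by SNI without terminating TLS (requires ngx_stream_ssl_preread_module).
    map $ssl_preread_server_name $backend {
        ~^s_    boundary_worker;   # Boundary client-to-worker sessions
        default local_https;       # everything else to the TLS-terminating vhost
    }

    upstream boundary_worker {
        server 10.0.0.5:9202;      # assumed worker address
    }

    upstream local_https {
        server 127.0.0.1:8443;     # assumed local HTTPS listener
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}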

I would like to know from the Boundary devs whether the partial host string s_ is safe to use for the SNI routing I set up with HAProxy. I really couldn’t conceive of another way, since the internal Worker TLS stack seems to choose s_ consistently for mTLS.

I’m not a Boundary dev myself, but in my opinion this is probably “good enough”. The failure case would be if something else ever used a hostname starting with s_: you might end up connecting to the Boundary worker instead. That seems unlikely, and if it did happen it would just be a spurious and probably rejected connection to the worker, so you might see an HTTP error code in a log or an error page in a browser.

You might be able to tighten it up, though, for example by telling the worker the external hostname of the load balancer in its config with the public_addr parameter.


After long research, I got it working with ingress-nginx by just enabling the ssl-passthrough annotation on the worker ingress. Also, do not forget to include the --enable-ssl-passthrough argument on the ingress-controller pod to make the ssl-passthrough annotation work.

My worker config:

worker {
  name = "worker"
  description = "Boundary Worker Configuration"
  initial_upstreams = ["controller-cluster.<namespace>.svc.cluster.local:9201"]
  public_addr = "<public-domain>:443"
}

Worker ingress annotations:

 nginx.ingress.kubernetes.io/ssl-redirect: "true"
 nginx.ingress.kubernetes.io/ssl-passthrough: "true"
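
For context, a minimal sketch of the worker Ingress these annotations sit on (the service name and port here are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: boundary-worker
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: <public-domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: boundary-worker   # assumed service name
                port:
                  number: 9202          # assumed worker proxy port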

Ingress-nginx pod: if using the Helm chart, add to your values:

extraArgs: 
  enable-ssl-passthrough: true
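
For reference, in the ingress-nginx Helm chart these flags live under the controller key, so the values file would look something like this (a sketch):

controller:
  extraArgs:
    enable-ssl-passthrough: true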