Are there any plans to allow proxying connections to Boundary via a reverse proxy?
Ideally, I would like to expose both the API and the worker port via the standard HTTPS port; however, from what I can see, this is not currently possible.
It seems like trying to proxy the client-to-worker connection in any manner results in an error such as:

```
Error dialing the worker: failed to WebSocket dial: failed to send handshake request: Get "https://boundaryw.removed.com:443/v1/proxy": x509: certificate is valid for default, not s_XsSU4VHSW1
```
Current setup is to have the controller exposed at https://boundary.removed.com and the worker exposed via https://boundaryw.removed.com and using SNI matching in Traefik to route between the two.
Even though Traefik is set to pass-through the TLS session, it seems to present the wrong certificate.
Is there a recommended method of load balancing the worker connections?
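A quick way to see which certificate the proxy actually presents for a given SNI name is `openssl s_client`. The sketch below spins up a throwaway local TLS server (with `CN=default`, like the error above) so it is self-contained; in practice you would point `-connect` at the proxy (e.g. `boundaryw.removed.com:443`) and `-servername` at the `s_*` name from the error.

```shell
# Throwaway cert and server standing in for the proxy under test
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/worker.key -out /tmp/worker.crt -subj "/CN=default" 2>/dev/null
openssl s_server -accept 4433 -cert /tmp/worker.crt -key /tmp/worker.key -quiet &
SRV_PID=$!
sleep 1

# The actual check: connect with an explicit SNI name and print the subject
# of whatever certificate comes back.
SUBJ=$(echo | openssl s_client -connect 127.0.0.1:4433 \
  -servername s_XsSU4VHSW1 2>/dev/null | openssl x509 -noout -subject)
echo "$SUBJ"

kill $SRV_PID
```

If the printed subject is the static `CN = default` certificate rather than the worker's per-session certificate, the proxy (or something in front of it) is terminating TLS instead of passing it through.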
@Just-Insane did you ever get this to work?
I was successful at getting Boundary to work using HAProxy through port 443. Initially, I did use the default 9202/tcp port through my firewall but discovered that some networks actively block uncommon ports.
So now, all my HTTPS (ingress/web/boundary/etc.) traffic goes through 443/tcp.
Very happy with that.
This works by using HAProxy TCP inspection and SNI routing. I’m not sure if Traefik does this.
My HAProxy config snippet:

```
frontend SSL_PassThrough
    bind *:{{ env "NOMAD_PORT_https" }}
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend boundary_worker if { req_ssl_sni -m beg s_ }
    default_backend TCP_to_Frontend_SSL_Termination

backend TCP_to_Frontend_SSL_Termination
    mode tcp
    server haproxy-https 127.0.0.1:8443 verify none

frontend SSL_Termination
    bind 127.0.0.1:8443 ssl crt "${NOMAD_SECRETS_DIR}/certs"
    mode http
    ...
```
A more complete picture of my HAProxy config:

```
frontend SSL_PassThrough
    bind *:{{ env "NOMAD_PORT_https" }}
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend boundary_worker if { req_ssl_sni -m beg s_ }
    default_backend TCP_to_Frontend_SSL_Termination

backend TCP_to_Frontend_SSL_Termination
    mode tcp
    server haproxy-https 127.0.0.1:8443 verify none

frontend SSL_Termination
    bind 127.0.0.1:8443 ssl crt "${NOMAD_SECRETS_DIR}/certs"
    mode http
    ...

backend boundary_worker
    mode tcp
    balance roundrobin
    server-template boundary_worker 4 _boundary-worker-internal._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check verify none
```
I would like to know from the Boundary devs if the partial host string `s_` is safe to use for the SNI routing I utilized with HAProxy. I really couldn’t conceive of another way, since the internal worker TLS stack seems to choose `s_` consistently for mTLS.
Intrigued to know if you guys had success with this, as I’m currently working on a Boundary install sitting behind an existing Nginx reverse proxy. Ideally I’d like to stick with just an Nginx server, but I’m willing to give something like HAProxy or Traefik a go if that helps get things working as an initial PoC.
I have my setup behind Traefik and it has been stable; I imagine Nginx would be just as capable.
> I would like to know from the Boundary devs if the partial host string `s_` is safe to use for the SNI routing I utilized with HAProxy. I really couldn’t conceive of another way, since the internal worker TLS stack seems to choose `s_` consistently for mTLS.
I’m not a Boundary dev myself, but in my opinion this is probably “good enough”. The failure case would be if something else ever used a hostname starting with `s_`: you might end up connecting to the Boundary worker instead. That seems unlikely, and if it did happen it would just be a spurious and probably rejected connection to the worker, so you might see an HTTP error code in a log or an error page in a browser.
You might be able to tighten it up, though; for example, by telling the worker the external hostname of the load balancer in its config with the `public_addr` parameter.
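As a sketch of that suggestion (the hostname is a placeholder matching the earlier posts, not something from this thread), the worker would advertise the load balancer's external address:

```hcl
# Hypothetical worker stanza: advertise the load balancer's external
# hostname so clients dial it rather than a raw worker address.
worker {
  name        = "worker1"
  public_addr = "boundaryw.removed.com:443"
}
```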
After long research, I got it working with ingress-nginx just by enabling the ssl-passthrough annotation for the worker Ingress. Also, do not forget to include the `--enable-ssl-passthrough` argument on the ingress-controller pod to make the ssl-passthrough annotation work.
My worker config:

```hcl
worker {
  name              = "worker"
  description       = "Boundary Worker Configuration"
  initial_upstreams = ["controller-cluster.<namespace>.svc.cluster.local:9201"]
  public_addr       = "<public-domain>:443"
}
```
Worker Ingress annotations:

```yaml
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
```
Ingress-nginx pod: if using the Helm chart, add to your values:

```yaml
extraArgs:
  enable-ssl-passthrough: true
```
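For reference, a minimal sketch of what the worker Ingress itself might look like; the resource name, service name, and port here are assumptions (the port mirrors the worker's default 9202 proxy listener), not something given in the thread:

```yaml
# Hypothetical worker Ingress; host, service name, and port are assumptions
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: boundary-worker
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: <public-domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: boundary-worker
                port:
                  number: 9202
```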
Hi! Can you share your config? I’m trying to get Traefik working with the worker using these settings:
Traefik labels:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.boundary-worker.entrypoints=websecure"
  - "traefik.tcp.routers.boundary-worker.rule=HostSNI(`worker-1.domain.org`)"
  - "traefik.tcp.routers.boundary-worker.tls.passthrough=true"
  - "traefik.tcp.routers.boundary-worker.service=boundary-worker"
  - "traefik.tcp.services.boundary-worker.loadbalancer.server.port=9202"
```
Worker config:

```hcl
disable_mlock = true

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  public_addr       = "worker-1.domain.org:443"
  auth_storage_path = "/worker-storage"
  initial_upstreams = [
    "172.21.5.6:9201",
    "172.21.5.7:9201",
  ]
  tags {
    type = ["worker1", "vault", "ssh"]
  }
}
```
I see in the Traefik logs that TLS passthrough is working, but SSH sessions don’t work:

```
user@Mac-mini ~ % ssh 127.0.0.1 -p 54535 -o NoHostAuthenticationForLocalhost=yes
kex_exchange_identification: read: Connection reset by peer
Connection reset by 127.0.0.1 port 54535
```
I have it working on port 80 with ``"HostSNI(`*`)"`` and, in the worker config, `public_addr = "worker-1.domain.org:80"`; that works flawlessly, but on port 443 it doesn’t.
I figured it all out.

For those who want to set up Boundary workers through TLS passthrough with Traefik, on whatever port it listens on (443, 80, or anything else), here is what you need. In my example I use 443.
Worker config snippet:

```hcl
worker {
  public_addr       = "your-worker1-domain.org:443"
  auth_storage_path = "/worker-storage"
  initial_upstreams = [
    "controller1-domain.org:9201",
    "controller2-domain.org:9201",
  ]
  tags {
    type = ["worker1", "vault", "ssh"]
  }
}
```
Traefik has a `websecure` entrypoint bound to port `:443` in its config.
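For completeness, the corresponding entrypoint in Traefik's static configuration (e.g. `traefik.yml`) would look something like this; a sketch, not taken from the thread:

```yaml
# Hypothetical traefik.yml snippet defining the websecure entrypoint
entryPoints:
  websecure:
    address: ":443"
```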
If you use labels, put them in the docker-compose file of the worker container:
```yaml
labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.boundary-worker.entrypoints=websecure"
  - "traefik.tcp.routers.boundary-worker.rule=HostSNI(`your-worker1-domain.org`)"
  - "traefik.tcp.routers.boundary-worker.tls=true"
  - "traefik.tcp.routers.boundary-worker.tls.passthrough=true"
  - "traefik.tcp.routers.boundary-worker.service=boundary-worker"
  - "traefik.tcp.services.boundary-worker.loadbalancer.server.port=9202"
```
Or, if you use a dynamic config file:
```yaml
tcp:
  routers:
    boundary-worker:
      entrypoints: "websecure"
      rule: "HostSNI(`your-worker1-domain.org`)"
      service: "boundary-worker"
      tls:
        passthrough: true
  services:
    boundary-worker:
      loadbalancer:
        servers:
          - address: "x.x.x.x:9202"
```

(Note the backend port is the worker’s 9202 proxy listener, matching the label-based example above.)
So, in this case, the TLS connection is established from the client directly to the worker itself through Traefik, using the SNI of your worker instance.

In my company I’m using port 80, because port 443 is intercepted by a firewall that presents its own certificate to decrypt all traffic; that’s why it didn’t work for me at the beginning. Once I figured that out, the jigsaw fell into place.