Target connection issues via Boundary Desktop [server closed the connection unexpectedly]

Hello,

I have deployed Boundary in production mode. While trying to connect to a target via the Boundary Desktop app, I get the error "server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request." However, I am able to connect to the target if I retry a few times. What do you think could cause such an error?

(image: boundary-workflow)

boundary-controller config

    disable_mlock = true
    log_format    = "standard"

    controller {
      name        = "kubernetes-controller"
      description = "Boundary kubernetes-controller"
      database {
        url = "file:///vault/secrets/boundary-database-creds"
      }
      public_cluster_addr = "boundary.hashicorp:9201"
    }

    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "api"
      tls_disable = true
    }
    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "cluster"
    }
    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "ops"
      tls_disable = true
    }

    kms "awskms" {
      purpose    = "worker-auth"
      region     = "{AWS_REGION}"
    }

    kms "awskms" {
      purpose    = "root"
      region     = "{AWS_REGION}"
    }

    kms "awskms" {
      purpose    = "recovery"
      region     = "{AWS_REGION}"
    }

    kms "awskms" {
      purpose    = "config"
      region     = "{AWS_REGION}"
    }

boundary-worker config

    disable_mlock = true
    log_format    = "standard"

    worker {
      name        = "kubernetes-worker"
      description = "Boundary kubernetes-worker"
      controllers = ["boundary.hashicorp:9201"]
      public_addr = "env://BOUNDARY_WORKER_LOAD_BALANCER"
    }

    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "proxy"
      tls_disable = true
    }

    kms "awskms" {
      purpose    = "worker-auth"
      region     = "{AWS_REGION}"
    }

    listener "tcp" {
      address = "0.0.0.0"
      purpose = "api"
      tls_disable = true
    }

    listener "tcp" {
      address = "0.0.0.0"
      purpose = "cluster"
      tls_disable = true
    }

By the way, I have no issues connecting with the Boundary CLI at all. Any help on this issue would be highly appreciated. Thanks!

@ercindemir0 Did you find a root cause for this issue? I am also facing a similar issue with Boundary.

Seems like this issue

What version are you on?

@macmiranda I am on Boundary 0.12.0.
Should I upgrade to 0.12.2?


@macmiranda I am still facing the same issue. I tried upgrading the worker and controller to v0.12.2.

I am using a private worker node to connect to the target inside the AWS VPC, and we have a VPN enabled to take care of that. But I am getting an error while using the private worker.

I have also tried using a public-facing worker node. I got the below error while using the public worker node:

    error fetching connection to send session teardown request to worker: Error dialing the worker: failed to WebSocket dial: failed to send handshake request: Get "http://157.175.49.80:9202/v1/proxy": context deadline exceeded

I am still not able to connect to the PSQL target. Is there any other workaround?

Thanks!

I got it resolved. I found this thread helpful.

The issue was with my worker config file: I was using public_addr, which is not needed for private worker nodes.
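For reference, a minimal private-worker config sketch without public_addr could look like the following (the controller address and KMS region are taken from the configs above as placeholders, not verified values):

    disable_mlock = true
    log_format    = "standard"

    # Private worker sketch: no public_addr in the worker stanza and no
    # api/cluster listeners, only the proxy listener and worker-auth KMS.
    worker {
      name        = "kubernetes-worker"
      description = "Boundary kubernetes-worker"
      controllers = ["boundary.hashicorp:9201"]
    }

    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "proxy"
      tls_disable = true
    }

    kms "awskms" {
      purpose = "worker-auth"
      region  = "{AWS_REGION}"
    }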

Thanks @macmiranda


A couple of things:

The worker does not have api and cluster listener types, but I'm guessing your config is different from @ercindemir0's.

You need to make sure the client machine can access the controller nodes on the api port (default 9200) and the worker nodes on the proxy port (default 9202). Besides, the workers need to reach the controllers on port 9201 (the cluster port). A sketch of listeners with those ports spelled out follows below.
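As a sketch only, assuming the default ports mentioned above, the listener addresses can carry the ports explicitly so the expected flows are easy to audit: client → controller :9200 (api), worker → controller :9201 (cluster), client → worker :9202 (proxy).

    # Controller side (sketch, default ports assumed):
    listener "tcp" {
      address = "0.0.0.0:9200"   # api: reached by clients (Desktop app / CLI)
      purpose = "api"
    }
    listener "tcp" {
      address = "0.0.0.0:9201"   # cluster: reached by workers
      purpose = "cluster"
    }

    # Worker side (sketch):
    listener "tcp" {
      address = "0.0.0.0:9202"   # proxy: reached by clients for session traffic
      purpose = "proxy"
    }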
