Boundary Upgrade Path?

Hello,

We upgraded our self-hosted Boundary cluster, running in Kubernetes, from 0.11.0 to 0.13.1 after confirming in the CHANGELOG that there were no breaking changes. Aside from backing up the Postgres database first, the upgrade followed the workflow recommended here.
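In case it matters, the backup and migration steps were roughly the following (the database URL env var is just a placeholder for our Vault-rendered credentials):

    # Back up the Boundary database before rolling the controllers (placeholder URL)
    pg_dump "$BOUNDARY_PG_URL" > boundary-0.11.0-backup.sql

    # Run the schema migrations with the new 0.13.1 binary before starting the controllers
    boundary database migrate -config /etc/boundary/controller.hcl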

However, Boundary is unable to establish connections after the upgrade.

boundary connect -target-name=test-db-target -target-scope-name="Company Project" -listen-port=5432

Terminal Output:

Proxy listening information:
  Address:             127.0.0.1
  Connection Limit:    -1
  Expiration:          Tue, 09 Apr 2024 00:38:13 EDT
  Port:                5432
  Protocol:            tcp
  Session ID:          s_KX2oDoXcn2
Session credentials were not accepted, or session is unauthorized
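Before the upgrade, this same command worked and we could reach the database through the local proxy with a normal psql invocation (user and database names below are placeholders):

    # Connect through the local Boundary proxy (placeholder user/database)
    psql -h 127.0.0.1 -p 5432 -U app_user app_db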

When I searched for the "Session credentials were not accepted, or session is unauthorized" error, I found some related GitHub issues (#2257) from earlier versions, which made me think it might be correlated with the Boundary CLI version. I updated the CLI to the latest version (0.15.3), which didn't work, and then tried the same CLI version as the Boundary cluster (0.13.1), but that didn't help either.

Version information:
  Build Date:          2023-07-14T13:03:00Z
  Git Revision:        db01791662a7126fbf4ea0a27b23b70acd20b17b
  Version Number:      0.13.1

Extra info that may help:

Controller Config:

    disable_mlock = true
    log_format    = "standard"

    controller {
      name        = "kubernetes-controller"
      description = "Boundary kubernetes-controller"
      database {
        url = "file:///vault/secrets/boundary-database-creds"
      }
      public_cluster_addr = "boundary.hashicorp:9201"
      auth_token_time_to_live = "24h"
      max_idle_time = "2h"
    }

    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "api"
      tls_disable = true
    }
    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "cluster"
    }
    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "ops"
      tls_disable = true
    }

    kms "awskms" {
      purpose    = "worker-auth"
    }

    kms "awskms" {
      purpose    = "root"
    }

    kms "awskms" {
      purpose    = "recovery"
    }
    kms "awskms" {
      purpose    = "config"
    }
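For what it's worth, this is how I've been checking controller health via the ops listener after the upgrade (assuming the default ops port 9203, which the `tls_disable = true` ops listener above should serve):

    # Health check against the controller's ops listener (default port 9203)
    curl -i http://127.0.0.1:9203/health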

Worker Config:

    disable_mlock = true
    log_format    = "standard"

    worker {
      name        = "kubernetes-worker"
      description = "Boundary kubernetes-worker"
      controllers = ["boundary.hashicorp:9201"]
      public_addr = "env://BOUNDARY_WORKER_LOAD_BALANCER"
    }

    listener "tcp" {
      address     = "0.0.0.0"
      purpose     = "proxy"
      tls_disable = true
    }

    kms "awskms" {
      purpose    = "worker-auth"
    }

    listener "tcp" {
      address = "0.0.0.0"
      purpose = "api"
      tls_disable = true
    }

    listener "tcp" {
      address = "0.0.0.0"
      purpose = "cluster"
      tls_disable = true
    }
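The main place I've been looking for clues is the worker logs; the deployment and namespace names below are placeholders for ours:

    # Tail the worker logs and filter for session-related errors (placeholder names)
    kubectl logs deploy/boundary-worker -n boundary --tail=200 | grep -i session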

Has anyone experienced this before? If anyone could share how to fix it, I'd greatly appreciate it.