Kubernetes container is not passed config.env

I’m trying to deploy to Kubernetes; however, the config.env variables are not being written to the Deployment and exposed to the container.

My app is as follows:

project = "my-project"

app "my-project-api" {
  config {
    env = {
      "DATABASE_URL" = dynamic("kubernetes", {
        name   = "production-database-creds"
        key    = "url"
        secret = true
      })
    }
  }

  build {
    use "docker" {
      buildkit           = true
      platform           = "amd64"
      disable_entrypoint = true
      build_args = {
        VERSION = gitrefpretty()
      }
    }

    registry {
      use "aws-ecr" {
        region     = "us-west-2"
        repository = "my-project-api"
        tag        = gitrefpretty()
      }
    }
  }

  deploy {
    use "kubernetes" {
      replicas = 3
      pod {
        container {
          command = ["/bin/server", "server"]
        }
      }
    }
  }
}

Inspecting the deployment, the environment only contains the Waypoint vars:

kubectl describe deployment/my-project-v19:

# Trimmed ...
      WAYPOINT_SERVER_ADDR:             XXX-XXXXX.us-west-2.elb.amazonaws.com:9701
      WAYPOINT_SERVER_TLS:              1
      PORT:                             3000

Any ideas here?

It’s probably worth pointing out that setting container.static_environment does work. Is this a bug in Waypoint?
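For reference, this is roughly the shape that does work for me. static_environment is a plain map of literal values, so there is no way to reference the Secret here; the URL below is a placeholder, not a real value:

    deploy {
      use "kubernetes" {
        pod {
          container {
            # Literal values only; cannot reference a k8s Secret.
            static_environment = {
              "DATABASE_URL" = "postgres://user:pass@host:5432/db" # placeholder
            }
          }
        }
      }
    }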

Did you create the k8s Secret with name production-database-creds manually?

Can you check if it exists in the same namespace as the deployed app and if there is a value with the url key in it?
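For example, assuming the app is deployed to the default namespace:

    # List the keys in the Secret (names and sizes only, values stay encoded)
    kubectl describe secret production-database-creds -n default

    # Decode the url key to verify the value is what you expect
    kubectl get secret production-database-creds -n default \
      -o jsonpath='{.data.url}' | base64 --decode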

I can confirm that the secret exists in that namespace.

Name:         production-database-creds
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
url:  117 bytes

I’ve also tried providing only static variables in config.env, without dynamic(), and they still don’t get exposed to the container.

I dug into the source a little bit (and I might have missed something obvious) but it doesn’t look like config.env is ever applied to the container, or sidecar containers for that matter:

I suspect part of the reason for this is that envVars is a simple map[string]string, whereas to support dynamic() it would need more keys to handle k8s concept of ValueFrom.
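To illustrate what I mean: for a dynamic Secret reference, the generated pod spec would need Kubernetes’ valueFrom form rather than a literal value. This is a hand-written sketch of the target manifest, not what Waypoint emits today:

    # What the k8s plugin would need to render for a
    # dynamic("kubernetes", ...) value; a map[string]string
    # can only express the literal name/value form.
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: production-database-creds
            key: url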

I’ve opened an issue for this: Kubernetes containers should support environment variables from app config · Issue #4589 · hashicorp/waypoint · GitHub


What does waypoint logs say?
Check out Dynamic Values - Application Configuration | Waypoint | HashiCorp Developer

If a configuration value is not available (the environment variable is not set or the value is not what you expect), then the best debugging tool is waypoint logs. The logs command will show entrypoint logs where you will see any errors related to configuration syncing.

There’s no output from logs, but here’s the debug log from waypoint logs:

waypoint logs -local -v
2023-03-27T14:21:48.966+1300 [INFO]  waypoint: waypoint version: full_string="v0.11.0-110-gc0f0e03b1 (c0f0e03b1+CHANGES)" version=v0.11.0-110-gc0f0e03b1 prerelease="" metadata="" revision="c0f0e03b1+CHANGES"
2023-03-27T14:21:48.971+1300 [INFO]  waypoint.server: attempting to source credentials and connect
2023-03-27T14:21:49.804+1300 [INFO]  waypoint: server version info: version=v0.11.0 api_min=1 api_current=1 entrypoint_min=1 entrypoint_current=1
2023-03-27T14:21:49.804+1300 [INFO]  waypoint: negotiated api version: version=1
2023-03-27T14:21:49.806+1300 [INFO]  waypoint.runner: generated a new runner ID: id=01GWGAGFQD351EHP5HEJVQE4FH
2023-03-27T14:21:49.806+1300 [WARN]  waypoint.runner: cookie not set for runner, will skip adoption process
2023-03-27T14:21:50.014+1300 [INFO]  waypoint.runner.config_recv: new configuration received
2023-03-27T14:21:50.517+1300 [INFO]  waypoint.runner: runner registered with server and ready
2023-03-27T14:21:50.518+1300 [INFO]  waypoint.logs: requesting log stream
2023-03-27T14:21:50.520+1300 [INFO]  waypoint.runner: waiting for job assignment