AWS environment variables are not being used when starting a Boundary controller process

I have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set as environment variables, but when I run boundary server -config controller.hcl with the following config I get this error:

Error parsing KMS configuration: error setting configuration on the kms plugin: rpc error: code = Unknown desc = error fetching AWS KMS wrapping key information: InvalidSignatureException: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
	status code: 400, request id: ...

However, if I explicitly specify access_key and secret_key values in the controller.hcl file, I am able to start the controller process without an issue. Is there something I am doing wrong?
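
For reference, the variant that starts successfully just inlines the credentials in each kms block, roughly like this (the values are placeholders; the key ID still comes from the AWSKMS_WRAPPER_KEY_ID environment variable mentioned below):

kms "awskms" {
  purpose    = "root"
  # kms_key_id left unset: it comes from the AWSKMS_WRAPPER_KEY_ID env var
  # region still unset, as in the failing config below
  access_key = "<value of AWS_ACCESS_KEY_ID>"      # placeholder
  secret_key = "<value of AWS_SECRET_ACCESS_KEY>"  # placeholder
}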

This is the configuration file that produces the error.

# controller.hcl

# Disable memory lock: https://www.man7.org/linux/man-pages/man2/mlock.2.html
disable_mlock = true

# Controller configuration block
controller {
  # This name attr must be unique across all controller instances if running in HA mode
  name = "boundary-controller"
  description = "boundary-controller"

  # After receiving a shutdown signal, Boundary will wait 10s before initiating the shutdown process.
  graceful_shutdown_wait_duration = "10s"

  # Database URL for postgres. This can be a direct "postgres://"
  # URL, or it can be "file://" to read the contents of a file to
  # supply the url, or "env://" to name an environment variable
  # that contains the URL.
  database {
      url = "env://BASTION_POSTGRES_URL"
  }
}

# API listener configuration block
listener "tcp" {
  # Should be the address of the NIC that the controller server will be reached on
  address = "127.0.0.1"
  # The purpose of this listener block
  purpose = "api"

  tls_disable = true
}

# Data-plane listener configuration block (used for worker coordination)
listener "tcp" {
  # Should be the IP of the NIC that the worker will connect on
  address = "127.0.0.1"
  # The purpose of this listener
  purpose = "cluster"
}

listener "tcp" {
  # Should be the address of the NIC that your external systems
  # (e.g. a load balancer) will connect to
  address = "127.0.0.1"
  # The purpose of this listener block
  purpose = "ops"

  tls_disable = true
}

# Root KMS configuration block: this is the root key for Boundary
# Use a production KMS such as AWS KMS in production installs
kms "awskms" {
  purpose = "root"
  # kms_key_id = ""
  # region     = ""
  # access_key = ""
  # secret_key = ""
}

# Worker authorization KMS
# Use a production KMS such as AWS KMS for production installs
# This key is the same key used in the worker configuration
kms "awskms" {
  purpose = "worker-auth"
  # kms_key_id = ""
  # region     = ""
  # access_key = ""
  # secret_key = ""
}

# Recovery KMS block: configures the recovery key for Boundary
# Use a production KMS such as AWS KMS for production installs
kms "awskms" {
  purpose = "recovery"
  # kms_key_id = ""
  # region     = ""
  # access_key = ""
  # secret_key = ""
}

P.S.: I also have the AWSKMS_WRAPPER_KEY_ID and BASTION_POSTGRES_URL environment variables set.

Try unsetting any AWS_* environment variables besides the two used for your credentials. It’s possible the AWSKMS_WRAPPER_KEY_ID variable or another AWS-related environment variable is out of sync with your credentials or another part of the config and is causing issues.

It appears that if I set kms_key_id explicitly in the config instead of relying on the AWSKMS_WRAPPER_KEY_ID environment variable, it works fine. Does this seem to be a bug?
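
In other words, with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY still coming from the environment, a block roughly like this starts up cleanly (the key ID below is a placeholder):

kms "awskms" {
  purpose    = "root"
  kms_key_id = "<your KMS key id or ARN>"  # placeholder; previously supplied via AWSKMS_WRAPPER_KEY_ID
  # access_key / secret_key still omitted: picked up from
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the environment
}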