Error on terraform init with AWS access_key/secret_key as env vars

Dear all,

I've run into an issue with Terraform:

│ Error: Failed to get existing workspaces: Unable to list objects in S3 bucket "tf-state-project2" with prefix "env:/": operation error S3: ListObjectsV2, https response error StatusCode: 400, RequestID: cd9a80e3b46b271e, HostID: , api error BadRequest: 400 BadRequest

In the first project I have an S3 backend, like:

terraform {
  backend "s3" {
    endpoints = {
      dynamodb = "<REDACTED>"
      s3       = "<REDACTED>"
    }
    bucket         = "tf-state-project1"
    region         = "<REDACTED>"
    key            = "state/infra.tfstate"
    access_key     = "<REDACTED>"
    secret_key     = "<REDACTED>"
    dynamodb_table = "<REDACTED>"
  }
}

I tried to cut access_key and secret_key out of the file by moving them to environment variables (to keep the secrets out of git):

export AWS_ACCESS_KEY_ID=$(vault kv get -field=AWS_ACCESS_KEY_ID <REDACTED>)
export AWS_SECRET_ACCESS_KEY=$(vault kv get -field=AWS_SECRET_ACCESS_KEY <REDACTED>)
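
With those two removed, the backend block looks like this (same redacted values as above):

terraform {
  backend "s3" {
    endpoints = {
      dynamodb = "<REDACTED>"
      s3       = "<REDACTED>"
    }
    bucket         = "tf-state-project1"
    region         = "<REDACTED>"
    key            = "state/infra.tfstate"
    dynamodb_table = "<REDACTED>"
  }
}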

In the first project, I am able to run terraform init -reconfigure. It works perfectly.

In the same shell, when I do the same thing in project2:

terraform {
  backend "s3" {
    endpoints = {
      dynamodb = "<REDACTED>"
      s3       = "<REDACTED>"
    }
    bucket         = "tf-state-project2"
    region         = "<REDACTED>"
    key            = "state/project2.tfstate"
    access_key     = "<REDACTED>"
    secret_key     = "<REDACTED>"
    dynamodb_table = "<REDACTED>"
  }
}

I get the error mentioned above:

│ Error: Failed to get existing workspaces: Unable to list objects in S3 bucket "tf-state-project2" with prefix "env:/": operation error S3: ListObjectsV2, https response error StatusCode: 400, RequestID: cd9a80e3b46b271e, HostID: , api error BadRequest: 400 BadRequest

I don’t use workspaces in any project.
My guess is that the error occurs because the S3 buckets differ between projects, but separate buckets are a requirement.

There might be another way to manage the AWS secrets, but unfortunately I can’t use the AWS CLI or any other tooling; I only have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as variables.
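
One thing that might fit this constraint is Terraform's partial backend configuration, i.e. passing the secrets at init time instead of keeping them in the file (a sketch; whether it behaves any differently from plain env vars here, I don't know):

terraform init -reconfigure \
  -backend-config="access_key=$AWS_ACCESS_KEY_ID" \
  -backend-config="secret_key=$AWS_SECRET_ACCESS_KEY"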

Please suggest.

Terraform v1.7.5

Thanks,
P.

Hey P, welcome! :wave:

You are using workspaces, and your first project is taking advantage of the default one. (See the docs for more info.) I suspect that assigning a unique workspace to your second project might get you further along. Good luck!
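
Something along these lines should do it (the workspace name is just an example):

$ terraform workspace new project2
$ terraform workspace list

With the default workspace_key_prefix, that workspace's state is stored under env:/project2/ in the bucket rather than at the bucket root.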

Hi John,
appreciate your reply,

You’re absolutely right; my statement about workspaces was wrong :).

I meant that I don’t use any workspace other than the default one.
I’ve created an additional workspace in project1:

$ t init -reconfigure 2>&1 > /dev/null
$ echo $?
0
$ t workspace list
  default
* project1-infra

Then I switched to project2 and tried to init it, but I’m still getting the error:

$ export AWS_ACCESS_KEY_ID=$(vault kv get -format=json -field=access_key <REDACTED>)
$ export AWS_SECRET_ACCESS_KEY=$(vault kv get -format=json -field=secret_key <REDACTED>)
$ t init -reconfigure 2>&1 > /dev/null
╷
│ Error: Failed to get existing workspaces: Unable to list objects in S3 bucket "tf-state-project2" with prefix "env:/": operation error S3: ListObjectsV2, https response error StatusCode: 400, RequestID: 27eaf8e39dfd9ff0, HostID: , api error BadRequest: 400 BadRequest

$ echo $?
1

However, if I roll back the changes in backend.tf (i.e., put access_key and secret_key back into the file), terraform inits perfectly.

Is it just me, or are you using the same workspace prefix (i.e., env:/) for both workspaces? According to the docs (linked above), accessing workspaces other than the default is slightly different:

… Other workspaces are stored using the path <workspace_key_prefix>/<workspace_name>/<key>. The default workspace key prefix is env: and it can be configured using the parameter workspace_key_prefix
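
So, for instance, each project could get its own prefix in its backend block (the value here is just an example):

terraform {
  backend "s3" {
    # ...your existing settings...
    workspace_key_prefix = "project2"
  }
}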

I was also wondering whether it might be an S3 bucket permissions issue… This GitHub issue seems similar and might offer further insights.

Right, I’m using the default workspace (and, accordingly, the default prefix) everywhere, in every project I have.

The main difference across the projects is the S3 bucket name.

If secret_key and access_key exist in plain text in the backend.tf file, terraform inits fine in every project;
when I remove them from the file and set the env vars instead, it fails.

I wonder how there could be a conflict if the buckets are different…

That, to me, suggests that the problem is related to the various IAM roles and profiles, and to the precedence rules that determine which one is used under what circumstances. From the GitHub issue I linked to:

… my default AWS profile was configured to a different account than that which I used with Terraform… add to ~/.aws/credentials another AWS profile with the creds you need for the terraform backend, and switch to this profile with export AWS_PROFILE=profile-name.
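
Concretely, something like this (the profile name is a placeholder):

# ~/.aws/credentials
[tf-backend]
aws_access_key_id     = <REDACTED>
aws_secret_access_key = <REDACTED>

$ export AWS_PROFILE=tf-backend

The S3 backend also accepts a profile argument in the backend block, if you'd rather pin the profile per project.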

I’m wondering whether Terraform variable precedence might be part of the problem as well…
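
If you want to see which credentials the backend actually ends up resolving, Terraform's debug logging might help (a sketch; the grep pattern is just a guess at the relevant log lines):

$ TF_LOG=debug terraform init -reconfigure 2>&1 | grep -iE 'credential|request'

TF_LOG=debug is quite verbose, but it should show what the S3 client is doing when it issues that ListObjectsV2 call.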