`terraform plan -out` with temporary credentials

Hello there,

I have Terraform as part of my CI/CD pipeline – we segregate the plan stage from the apply stage, with the output of the plan stage (`terraform plan -out plan.tfplan`) as the input to the apply stage.
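
For reference, the two stages look roughly like this (job boundaries and the CI artifact mechanism simplified away):

```sh
# Plan stage: produce the plan artifact for the next job to consume
terraform init
terraform plan -out plan.tfplan

# Apply stage: a separate job that receives plan.tfplan as an artifact
terraform init -reconfigure
terraform apply plan.tfplan
```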

This works great – as long as the backend and providers use credentials that are consistent between the stages. It appears that if each job in the pipeline has different credentials (because they are ephemeral and specific to that job), the plan includes the credentials, and when applying – despite running `terraform init -reconfigure` with the new job’s credentials – the credentials stored in the plan are used, resulting in a 401 from my HTTP backend.

I could probably work around this problem by writing fancy scripts that check that the plans match except for the backend configuration – but I’m wondering if there’s a better way to accomplish this, or if the behaviour is expected (it definitely was not to me, despite having used Terraform for quite a while now).

If necessary – I’m on Terraform 0.12.29.

Joel

Hi @lowjoel,

From what you’ve described, it sounds like you are passing credentials through input variables (e.g. `-var` or `-var-file` on the command line) or in the backend configuration (with `-backend-config`, which is essentially the backend-initialization equivalent of `-var`). That is indeed not ideal for credentials, because both input variables and backend settings are captured as part of the plan, and the identical values are always used when applying so that Terraform can minimize what changes between creating the plan and applying it. This is a fundamental part of Terraform’s design.
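
To make that concrete: both of the values below are captured inside `plan.tfplan` and reused verbatim at apply time, regardless of what the apply-time environment provides (the variable names here are just illustrative):

```sh
# Captured in the plan as part of the backend configuration:
terraform init -backend-config="username=${PLAN_JOB_USERNAME}" \
               -backend-config="password=${PLAN_JOB_PASSWORD}"

# Captured in the plan as an input variable value:
terraform plan -var="state_password=${PLAN_JOB_PASSWORD}" -out plan.tfplan
```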

It’s for that reason that we typically recommend setting context-specific settings like credentials via out-of-band mechanisms specific to each provider. Most providers try to integrate with whatever is the conventional out-of-band mechanism for the remote system they represent; for example, the AWS provider and the S3 backend both support the conventional `AWS_ACCESS_KEY_ID` environment variable and the conventional `~/.aws/credentials` file that the AWS CLI also uses.
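
With AWS, for example, the credentials can live entirely outside the configuration that Terraform captures, so each job is free to export its own:

```sh
# Read out-of-band by both the AWS provider and the S3 backend, and
# therefore not captured in the plan file:
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
```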

There isn’t such a strong convention for this sort of thing for HTTP, so the HTTP backend’s options are unfortunately a little more limited in current releases. However, a recent contribution added support for setting the credentials via environment variables: `TF_HTTP_USERNAME` and `TF_HTTP_PASSWORD`. I believe (though I’ve not tested it) that setting those would be sufficiently “out of band” for Terraform not to consider them part of the backend configuration, but rather to re-read them separately during the plan and apply phases.
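
If that works the way I expect, you’d omit the credentials from the backend block entirely and have each job export them instead; something like this (untested, and the address is just a placeholder):

```hcl
terraform {
  backend "http" {
    address = "https://example.com/terraform/state/my-project"
    # username and password intentionally omitted; supplied out-of-band
    # via TF_HTTP_USERNAME / TF_HTTP_PASSWORD in each job's environment
  }
}
```

```sh
# In each CI job, before running init/plan/apply:
export TF_HTTP_USERNAME="${JOB_USERNAME}"   # that job's ephemeral credentials
export TF_HTTP_PASSWORD="${JOB_PASSWORD}"
```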

That change seems to have been included in Terraform v0.13.2.

Thanks @apparentlymart.

That’s a fairly good read of the situation, yes. :slight_smile:

As you’ve said, the HTTP backend doesn’t have these conventions, so I am indeed passing the credentials in using `-backend-config`, and configuring a `terraform_remote_state` data source (pointing at another project that uses the HTTP backend for its state) using a `-var`.
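
Concretely, the data source currently looks something like this (simplified, with an illustrative address), with the password supplied via `-var` at plan time – which is presumably why it ends up in the plan:

```hcl
data "terraform_remote_state" "other" {
  backend = "http"

  config = {
    address  = "https://example.com/terraform/state/other-project"
    username = var.remote_state_username
    password = var.remote_state_password   # passed with -var, so captured in the plan
  }
}
```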

I can try using `TF_HTTP_USERNAME` and `TF_HTTP_PASSWORD` and see how that goes; does that also apply to the `terraform_remote_state` data source? Or is that another contribution that’s needed?

Thanks again!

The `terraform_remote_state` data source runs the same state storage client code that Terraform itself would normally use, so it should pick up the same environment variables. However, that solution is appropriate only if all of your `terraform_remote_state` data resources that use the `http` backend are talking to the same server, and thus it’s safe to share the credentials between them. (If not, you risk sending the credentials from one server to another server if you forget to override the settings in the `terraform_remote_state` block.)
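
So, assuming everything talks to the same server, you should be able to drop the credential arguments from the data source entirely and rely on the environment instead (again, untested on my end):

```hcl
data "terraform_remote_state" "other" {
  backend = "http"

  config = {
    address = "https://example.com/terraform/state/other-project"
    # username/password omitted; the http client should read
    # TF_HTTP_USERNAME and TF_HTTP_PASSWORD from the environment
  }
}
```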

Thanks for your advice. Using the `TF_HTTP_*` variables does stop the credentials from being embedded in the plan.