I’ve set up my plans to now run remotely via TFE. I can see plans execute successfully when they are initiated from the UI, but when I run terraform plan locally I get the error below. Note that I have my .terraformrc credentials set up correctly and have added the TFE_TOKEN env var for good measure (see the sketch after the output).
Output
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
To view this run in a browser, visit:
https://app.terraform.io/app/*/sandbox/runs/run-gaWtmhk3aQrfY9TM
Waiting for the plan to start...
Terraform v0.12.3
Setup failed: Failed to copy tfVars file: scp: /terraform/aws/sandbox: No such file or directory
Waiting for the plan to start...
Terraform v0.12.18
Setup failed: Failed to copy tfVars file: scp: /terraform/packages/terraform/testing: No such file or directory
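For reference, here is a minimal sketch of the credentials setup I’m describing; the token value is obviously redacted and app.terraform.io is assumed as the host:

```hcl
# ~/.terraformrc (CLI credentials file) -- token value is illustrative/redacted
credentials "app.terraform.io" {
  token = "xxxxxxxx.atlasv1.zzzzzzzzzzzzzzzz"
}
```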
I’m not sure which part of this solved it, but after I merged my Terraform Cloud changes into my master branch and ran a plan/apply through the UI, I was able to run terraform plan on branches locally through the CLI.
This error message does seem to expose the implementation details a bit, but I believe what’s going on behind the scenes is that the subsystem that manages the temporary execution environments for Terraform Cloud is trying to upload the .tfvars file it generates from your workspace variables into the configured working directory, so that Terraform CLI will then find it and use it.
Currently the implementation of that is to SSH into the target system and use scp to write the file into place in your configured working directory. For that to succeed, the target directory must already exist in the configuration snapshot.
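To make that concrete, here is a minimal sketch of a CLI-driven setup (the organization and workspace names are placeholders, not taken from your run): the directory named in the workspace’s “Terraform Working Directory” setting, e.g. terraform/aws/sandbox from your error above, has to exist among the files that are uploaded as the configuration.

```hcl
# Sketch only -- organization and workspace names are placeholders.
# CLI-driven remote runs use a backend block like this; the workspace's
# "Terraform Working Directory" setting must point at a directory that
# actually exists in the uploaded configuration snapshot.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"

    workspaces {
      name = "sandbox"
    }
  }
}
```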
If the target directory is already present in your configuration, then I’d suggest contacting the support team directly, because they can (with your permission) inspect your Terraform Cloud workspace settings and give individual help.