Running remote plans locally when using a working directory

Hi,

I’ve set up my plans to run remotely via TFE. I can see plans execute successfully when initiated from the UI, but when I run terraform plan locally I get the error below.
Note: I have my .terraformrc credentials set up correctly and have added the TFE_TOKEN environment variable for good measure.
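For reference, this is roughly what my CLI credentials configuration looks like (the token value is a placeholder; the file lives at ~/.terraformrc on my machine):

credentials "app.terraform.io" {
  # User API token generated in Terraform Cloud; placeholder value here
  token = "xxxxxxxxxxxxxx.atlasv1.zzzzzzzzzzzzzzzzzzzzzzzz"
}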

Output

Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
 
Preparing the remote plan...
 
To view this run in a browser, visit:
https://app.terraform.io/app/*/sandbox/runs/run-gaWtmhk3aQrfY9TM
 
Waiting for the plan to start...
 
Terraform v0.12.3
 
 
Setup failed: Failed to copy tfVars file: scp: /terraform/aws/sandbox: No such file or directory

I’m running into the same error as well. Did you figure out how to fix it?

I have also run into the same issue trying to run a terraform plan as the final step of migrating to Terraform Cloud: https://www.terraform.io/docs/cloud/migrate/workspaces.html#step-8-queue-runs-in-the-new-workspaces:

Waiting for the plan to start...

Terraform v0.12.18

Setup failed: Failed to copy tfVars file: scp: /terraform/packages/terraform/testing: No such file or directory

@vikramparth @kave were either of you able to solve this issue?

I’m not sure which part of this solved it, but after I merged my Terraform Cloud changes into my master branch and ran a plan/apply through the UI, I was then able to run terraform plan on branches locally through the CLI.

I have the same issue. My working directory has been set up as environment/azure/xxxx.

However, the error is "Setup failed: Failed to copy tfVars file: scp: /terraform/environment/azure/xxxx: No such file or directory".

I have these files in GitHub. Do we need to have them in /terraform?
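For context, my backend block looks more or less like this (the organization and workspace names are placeholders); the environment/azure/xxxx working directory itself is only set in the workspace settings in the UI, not anywhere in the code:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"   # placeholder organization name

    workspaces {
      name = "xxxx"           # placeholder workspace name
    }
  }
}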

Hi all!

This error message does seem to be exposing the implementation details a bit, but I believe what’s going on behind the scenes is that the subsystem that manages the temporary execution environments for Terraform Cloud is trying to upload the .tfvars file it generates from your workspace’s variables into the working directory, so that Terraform CLI will then find it and use it.

Currently the implementation of that is to SSH into the target system and use scp to write it into place in your configured working directory. In order for that to succeed, the target directory must already exist in the configuration snapshot.
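In other words, if the working directory configured for the workspace is, say, environment/azure/xxxx, the configuration that gets uploaded needs to actually contain that directory. A minimal sketch of one way to ensure that, assuming a VCS-backed workspace and that hypothetical path, is to commit at least one Terraform file there:

# environment/azure/xxxx/main.tf
# Placeholder so this directory exists in the configuration snapshot;
# the real configuration for this environment would normally live here.
terraform {
  required_version = ">= 0.12"
}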

If the target directory is already present in your configuration, then I’d suggest contacting the support team directly, because they can (with your permission) inspect your Terraform Cloud workspace settings and give individual help.