TF Cloud migration - facing a few issues

We are in the process of migrating to TF Cloud and are working on a POC. To start with, we migrated one sample workload to the cloud along with its state, which looks cool so far. But we are currently facing the issues below:

  1. In the TF Cloud UI, remote-exec calls always go into a loading state with no logs/output on the console, so we can't see what's happening behind the scenes. Even with debug enabled, nothing comes out. However, all remote-exec calls are executed successfully on the target workloads.

  2. Nothing is displayed in the UI for the file provisioner either, even though it completes successfully.

We also need advice on the best approach in the case below:

  1. There are many different environments where the workload is common but the variables/tfvars/config files differ per environment,
    e.g.
    sit/sit.tfvars, JSON
    staging/staging.tfvars, JSON
    prod/prod.tfvars, JSON, etc.

For all of the above, the workload is common,
e.g. common/workload/main.tf, variable.tf, etc.

What is the best approach to implement this in TF Cloud? I don't find any option that supports such an approach, where the config files and the workload can be loaded from different locations, and we don't want to keep a replicated copy of the workload for every workspace. I see the -chdir option was added in a newer version, but how do we use it in the UI?
At the workspace level, we are currently keeping the whole workload and the tfvars/JSON in one place so that they load from the Terraform working directory, which we don't want.

Any better approach?

Hi @pravinksavant,

For your first questions about provisioner output appearing in the Terraform Cloud UI, I think it would be better to contact the support team; if you mention in your ticket which specific workspace you are working in, they can look at what exactly you are referring to and therefore give a more precise answer than I can give here without access to your workspace.

For your question about variables, there are a few different answers to this, but my suggestion to start would be the following:

  1. Create a separate workspace for each of your environments.
  2. Choose the same configuration source and working directory in all of them, referring to your common/workload directory. (The “working directory” setting in the workspace options is the Terraform Cloud feature which corresponds with the Terraform CLI -chdir option, selecting which directory Terraform CLI will run in.)
  3. In the settings for each workspace, configure the values for the input variables that you would previously have maintained in your .tfvars files. When using Terraform Cloud, the remote execution environment will automatically generate a suitable terraform.tfvars, based on the values you configured, before running Terraform CLI.
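
For illustration, here's a minimal sketch of how that fits together; the variable name and values are assumptions for the example, not taken from your configuration:

```hcl
# common/workload/variable.tf -- declare the per-environment input as usual.
variable "instance_count" {
  type        = number
  description = "How many instances to run in this environment"
}

# Then, in each workspace's Variables settings, set a different value for
# instance_count (e.g. sit = 1, staging = 2, prod = 4). Terraform Cloud
# writes those values into a generated terraform.tfvars before invoking
# Terraform CLI, so no per-environment .tfvars file needs to live in the
# repository.
```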

Thanks. On the TF Cloud issue, a ticket has been raised with the cloud team and we are working with them.

On the other part, as you mentioned, we have a similar implementation in place:

We load all the configs and JSON from the working directory where we execute the TF runs, while the common workload is referenced via -chdir. So far it's working as expected.

Now, on TF Cloud, we want to migrate the above implementation with little or no modification/effort, as we have a huge number of inputs/JSON/Chef configuration variables, etc.

In the UI, I see only the Terraform working directory from which the TF run will execute, which I'm guessing would hold the tfconfig/JSON in the case above. Then how are we supposed to provide -chdir in the UI so that it picks up the common workload from a different directory?

Hi @pravinksavant,

For Terraform CLI, the -chdir option specifies the working directory that Terraform should switch to before loading configuration and taking other actions.

The Working Directory setting in the Terraform Cloud workspace options is equivalent to that option: Terraform Cloud will switch to that directory before it runs Terraform CLI.

So for the Working Directory option you should specify the same directory you would’ve previously selected with -chdir, and then Terraform Cloud should run the same configuration that Terraform CLI would’ve run.
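
For example, if you would previously have run `terraform -chdir=common/workload plan` from your repository root (the path here assumes the layout you described earlier), then setting the workspace's Working Directory to `common/workload` should give the equivalent behavior in Terraform Cloud.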

Got it, and we are using exactly the same approach in the current scenario: for loading the common workload we use the -chdir option, and for all the other variables/conf/JSON from different locations we use Makefile customization.

So if we use the same approach, treating the Terraform working directory as -chdir for the common workload, then the question remains the same: how can we load all our variables from the other, environment-specific locations/directories?

I know these variables should logically be workspace variables/variable sets (we have added the few that are required), but they are huge in number and in JSON format. All these variables are referenced in multiple places in the TF code across environments, and they drive Chef post-deployment activities. Hence we are looking for a better approach.

If we get some way to load these variables along with the workload as-is, it will be very easy to migrate with little or no effort.

Hi @pravinksavant,

Terraform Cloud’s design expects that you will represent differences in input variables between workspaces by configuring the variables that differ as part of the workspace settings. Only variable values that are common across workspaces should load directly from your source repository when using Terraform Cloud (using .auto.tfvars files).
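
For example, values shared by every environment could live in a file like the following, committed alongside the shared configuration. The file name and values here are only an illustration:

```hcl
# common/workload/shared.auto.tfvars -- loaded automatically in every run
region       = "us-east-1"
chef_channel = "stable"
```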

However, if you need to configure the workspaces systematically rather than individually, you can use the hashicorp/tfe provider to manage Terraform Cloud with Terraform. That would then allow you to configure only one workspace manually – the one that uses this provider – and then configure the others using the Terraform configuration in that workspace, which can potentially use Terraform language features to pass the same settings systematically over multiple workspaces.
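
As a rough sketch of that pattern, assuming the sit/staging/prod layout from earlier in this thread (the organization name, workspace names, variable name, and file paths are illustrative assumptions, so adjust them to your setup):

```hcl
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

locals {
  environments = toset(["sit", "staging", "prod"])
}

# One workspace per environment, all pointing at the shared configuration.
resource "tfe_workspace" "env" {
  for_each          = local.environments
  name              = "workload-${each.key}"
  organization      = "example-org"
  working_directory = "common/workload"

  # A vcs_repo block (repository identifier plus OAuth token ID) would
  # normally go here so every workspace pulls the same repository; it is
  # omitted for brevity.
}

# Load each environment's large JSON file into a single workspace variable;
# the shared configuration can then read it with jsondecode(var.env_config).
resource "tfe_variable" "env_config" {
  for_each     = tfe_workspace.env
  key          = "env_config"
  value        = file("${path.module}/${each.key}/${each.key}.json") # adjust to wherever the JSON lives
  category     = "terraform"
  workspace_id = each.value.id
  description  = "Environment-specific settings for ${each.key}"
}
```

With something like this, the per-environment JSON stays under version control, but each workspace still receives it as an ordinary Terraform variable; re-applying the management workspace then propagates a change to all environments at once.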