Setting up an infrastructure build pipeline in GitHub using Terraform Cloud

Use case: using a GitHub workflow/pipeline to deploy changes to multiple cloud providers, with Terraform Cloud as the remote backend. I would like to maintain separate, isolated environments (dev, qa, performance, staging, prod, etc.), and I am trying to find the best way to structure my GitHub repo and Terraform workspaces to meet this need.

Doing a little research, my working hypothesis is to create a new Terraform Cloud workspace for each environment (so infrastructure-dev, infrastructure-qa, etc.). The GitHub repo itself would be broken up into folders to mirror each environment, with separate config files and variable definitions per environment. When calling a plan and apply from the workflow file (.yml workflow), all I would have to do is set the Terraform working directory based on which path was changed.
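For reference, a minimal sketch of what that per-environment workflow could look like, assuming a folder layout like `environments/dev` and a Terraform Cloud API token stored as a repo secret — the folder names, workflow name, and secret name are illustrative:

```yaml
# .github/workflows/terraform-dev.yml — illustrative names; adjust paths to your repo layout
name: terraform-dev

on:
  push:
    branches: [main]
    paths:
      - "environments/dev/**"   # only run when the dev folder changes

jobs:
  plan-and-apply:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: environments/dev   # Terraform working directory for this environment
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}  # Terraform Cloud API token
      - run: terraform init
      - run: terraform plan -input=false
      - run: terraform apply -auto-approve -input=false
```

You would duplicate (or templatize with a matrix) this file per environment, changing only the paths filter and working directory.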

Does this sound reasonable? Or is there a more intuitive/efficient way to structure a build pipeline using Terraform Cloud/GitHub? Looking for feedback from any engineers who have practical examples to share. Thanks for the support!

Maybe the TFC VCS-driven workflow is easier in this case? You just need to create one workspace per folder and set the working directory in the workspace settings. You can connect multiple workspaces to one repo.
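If you go that route, the per-folder workspaces can also be managed as code with the hashicorp/tfe provider instead of clicked together in the UI — a sketch, where the organization, repo identifier, and environment list are placeholders:

```hcl
# Illustrative only — organization, repo, and OAuth token ID are placeholders.
resource "tfe_workspace" "env" {
  for_each     = toset(["dev", "qa", "staging", "prod"])
  name         = "infrastructure-${each.key}"
  organization = "my-org"

  working_directory = "environments/${each.key}"   # folder this workspace plans from
  trigger_prefixes  = ["environments/${each.key}"] # only queue runs when this folder changes

  vcs_repo {
    identifier     = "my-org/infrastructure"
    oauth_token_id = var.oauth_token_id
  }
}
```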

That is a good point! Ultimately, we are pushing to go with a CLI-driven workflow because it gives us more flexibility: the freedom to define input parameters, run pre-job tasks, regression tests, etc. What we are considering for the time being is one folder per environment (just like you said), using shared modules to enforce configuration standards across environments. Still curious to hear what others have done, though.
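For what it's worth, the "one folder per environment, shared modules" layout often ends up with each environment folder as a thin wrapper around the shared modules, so only the variable values differ. A sketch, with organization, workspace, module, and variable names as placeholders:

```hcl
# environments/dev/main.tf — each environment folder only pins its
# workspace and supplies environment-specific values.
terraform {
  cloud {
    organization = "my-org"        # placeholder
    workspaces {
      name = "infrastructure-dev"  # one TFC workspace per environment
    }
  }
}

module "network" {
  source      = "../../modules/network"  # shared module enforces the standard config
  environment = "dev"
  cidr_block  = "10.0.0.0/16"            # illustrative per-environment value
}
```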