Recommended workflow for multi-team, multi-env Terraform Cloud Google Provider

I’d love feedback on a best-practices approach for using Terraform Cloud with Google (or AWS, replacing Google projects with AWS accounts) on a multi-team, multi-env setup. The goal is to define infrastructure components as code, with the ability to recreate an isolated set of infrastructure components responsible for running an application within the confines of a GCP project. In this way we can recreate the infrastructure components on a per team/per env basis.

Here’s what I’m envisioning:

  1. Use Terraform to define infrastructure components (say, a Google Project, VPC, Cloud SQL database, Pub/Sub topic, Kubernetes cluster with LB, DNS).
  2. Structure the code as either:
     a) a monorepo for all TF files (which I believe makes the most sense), or
     b) one repo for standard modules, and another repo for the composition of modules that forms an environment.
  3. Teams/individuals operate in a Google Project, which is passed to Terraform as a variable, and
  4. we provision a Terraform Workspace populated with a project variable to stand up an environment.
  5. Optionally, there is a ‘production’ environment which hosts the main application.
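To make the steps above concrete, here is a minimal sketch of what an environment root module might look like, where `project_id` is the per-workspace variable; the module paths, region, and output names are illustrative assumptions, not prescriptions:

```hcl
# Root module for one environment. The only per-environment input
# is the GCP project ID, set as a workspace variable in TF Cloud.
variable "project_id" {
  description = "GCP project this environment lives in (set per workspace)"
  type        = string
}

provider "google" {
  project = var.project_id
  region  = "us-central1" # assumed default region
}

module "network" {
  source     = "./modules/network" # hypothetical shared module
  project_id = var.project_id
}

module "gke" {
  source     = "./modules/gke"     # hypothetical shared module
  project_id = var.project_id
  network    = module.network.vpc_self_link # assumed module output
}
```

Because everything hangs off `project_id`, spinning up a new team/env copy is just a new workspace with a different value for that one variable.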

For the sake of simplicity, let’s forego the deployment of an application and the management of configuration passed to the application (unless you want to tackle it).

In this setup I assume we’d want a central Google Project (say, an ‘ops’ project) that exposes a service account to Terraform Cloud which allows the provisioning of other projects.
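A sketch of that ops-project service account might look like the following; the project ID, folder ID, and account name are placeholder assumptions:

```hcl
# Service account in the central 'ops' project that Terraform Cloud
# uses to create new team/env projects. All IDs here are placeholders.
resource "google_service_account" "provisioner" {
  project      = "ops-project"  # assumed ops project ID
  account_id   = "tfc-provisioner"
  display_name = "Terraform Cloud project provisioner"
}

# Allow the account to create projects under the folder that holds
# all team/env projects.
resource "google_folder_iam_member" "project_creator" {
  folder = "folders/1234567890" # assumed folder for team/env projects
  role   = "roles/resourcemanager.projectCreator"
  member = "serviceAccount:${google_service_account.provisioner.email}"
}
```

Scoping the `projectCreator` grant to a folder (rather than the organization) keeps the blast radius of this fairly powerful account contained.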

I know that an environment variable GOOGLE_CREDENTIALS set in Terraform Cloud lets you pass GCP credentials to Terraform. Given that, it appears we’d want a workspace linked to the ops project which can create a service account, a GCP project, and a TF workspace, and set the variables accordingly in that workspace. With this in place, we can then use the newly generated TF Cloud workspace to provision infrastructure in the new GCP environment.
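This bootstrapping step can be sketched with the `hashicorp/tfe` provider alongside the Google provider; the organization name, naming scheme, and variable names below are assumptions:

```hcl
# Run in the ops workspace: create a project, a per-env service
# account + key, and a TF Cloud workspace wired up with both.
resource "google_project" "env" {
  name       = var.env_name
  project_id = "myco-${var.env_name}" # assumed naming scheme
  folder_id  = var.folder_id
}

resource "google_service_account" "env" {
  project    = google_project.env.project_id
  account_id = "terraform"
}

resource "google_service_account_key" "env" {
  service_account_id = google_service_account.env.name
}

resource "tfe_workspace" "env" {
  name         = "app-${var.env_name}"
  organization = "my-org" # placeholder TF Cloud organization
}

# Terraform-level variable consumed by the environment root module.
resource "tfe_variable" "project_id" {
  workspace_id = tfe_workspace.env.id
  category     = "terraform"
  key          = "project_id"
  value        = google_project.env.project_id
}

# Credentials for the new workspace. private_key is base64-encoded
# JSON; newlines are stripped because TFC env vars must be one line.
resource "tfe_variable" "credentials" {
  workspace_id = tfe_workspace.env.id
  category     = "env"
  key          = "GOOGLE_CREDENTIALS"
  value        = replace(base64decode(google_service_account_key.env.private_key), "\n", "")
  sensitive    = true
}
```

Note this leaves the service account key in the ops workspace’s state, which is one of the rough edges question 3 below gets at.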

There are a number of questions here:

  1. What’s the best way to manage the repo structure so it can map to various other workspaces? I know sub-sections of an entire application make sense from a TF workspace perspective, but how does this map to repos which may contain multiple modules? Are modules discoverable in Terraform Cloud?
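For reference, the two repo layouts from the proposal differ mainly in the `source` argument of each module call; repo URLs and module names below are placeholders:

```hcl
# a) Monorepo: modules referenced by relative path within the repo.
module "database" {
  source = "./modules/cloudsql" # hypothetical module directory
}

# b) Separate modules repo: pinned to a tag via a git source, so
# environments can upgrade modules independently.
module "database_remote" {
  source = "git::https://github.com/my-org/terraform-modules.git//cloudsql?ref=v1.2.0"
}
```

On discoverability: Terraform Cloud does offer a private module registry, which makes option (b)’s shared modules browsable and versioned within the organization.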

  2. What’s a recommended way to balance centralized resources (like a DNS root domain, with subdomain entries mapping to other projects) against per-project resources? In this structure it appears some project-specific items (say, an environment’s DNS name) would be managed in a central project (which manages the entire DNS zone).
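One shape this could take: the ops project owns the root zone, and each environment writes its own record into it. Zone, domain, and variable names here are placeholders:

```hcl
# Look up the root zone owned by the central/ops project.
data "google_dns_managed_zone" "root" {
  project = "ops-project"       # assumed central project
  name    = "example-root-zone" # placeholder zone name
}

# Each environment adds its own subdomain record into that zone.
resource "google_dns_record_set" "env" {
  project      = "ops-project"  # records live in the central project
  managed_zone = data.google_dns_managed_zone.root.name
  name         = "${var.env_name}.${data.google_dns_managed_zone.root.dns_name}"
  type         = "A"
  ttl          = 300
  rrdatas      = [var.lb_ip]    # assumed LB address output
}
```

The tradeoff is that each env workspace then needs DNS write access to the central project; the alternative is delegating a per-project subzone (NS records in the root zone pointing at a zone the environment owns), which keeps credentials fully project-scoped.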

  3. I feel there should be a more fluid experience for mapping GCP (or AWS) credentials to TF workspaces. Specifically, some trust relationship between an entire TF Cloud organization and a GCP project, where account privileges are handled on the GCP side and made available to workspaces, while a TF Cloud (or Sentinel) permission model handles finer-grained trust.

I know this is a lot! I want to start a discussion and gather community feedback.