Using Terraform to create remote state resources

Do you have best practices for automating the creation of remote state resources for the numerous, multivariant Terraform projects at your (big-ish) company?

Do you have best practices for sharing variables like I propose to do in a config module, and trying to DRY and create single source of truth in your Terraform projects?

I’ve got a basic Terraform root module/project that I ultimately want to use with remote S3 state and DynamoDB locking. There will be numerous other tf projects following suit, and numerous teams/environments using them, so I want to automate not only the creation of the state resources but also make it easy and automatic for tf projects to refer to the S3 backend (using bucket/table naming conventions)…

I implemented a separate module to take care of the creation and configuration of the S3 bucket and dynamodb resources.

The root module project invokes the module to create the state resources.

Is there a way to have terraform try to use remote backend, and if it fails to use local as a fallback?

Obviously I have a chicken/egg issue, unless I can figure out a way to make terraform use local state long enough to run the project to create the remote state resources, then henceforth use the remote state backend.
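For context, the bootstrap dance I’m imagining (a sketch — the bucket/table names here are hypothetical) is: apply the state project once with local state and no backend block, then add the backend block and migrate:

```hcl
# Step 1: apply with local state (no backend block) to create the bucket/table.
# Step 2: add this backend block, then run `terraform init -migrate-state`
#         to copy the local state into the newly created S3 bucket.
terraform {
  backend "s3" {
    bucket         = "acme-platform-tfstate"  # hypothetical name
    key            = "s3-backend/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "acme-platform-tf-locks" # hypothetical name
    encrypt        = true
  }
}
```

That works, but it is a manual two-step per project, which is exactly what I am trying to automate away.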

The purpose of this approach was to provide an s3-backend module to standardize and automate how teams at my company create Terraform state resources. This module also served as the single source of truth for obtaining the names of the S3 bucket/DynamoDB table backend resources, which follow a convention based on team/project/etc. input variables. To my knowledge, Terraform doesn’t have anything like global or shared variables aside from module outputs, so it becomes hard to stay DRY - the answer is always “create a module”.
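To sketch what I mean (variable and output names here are illustrative, not my actual code), the module both creates the backend resources and exposes their conventional names as outputs:

```hcl
# Hypothetical sketch of the s3-backend module: it creates the backend
# resources AND serves as the single source of truth for their names.
variable "team"    { type = string }
variable "project" { type = string }

locals {
  bucket_name = "tfstate-${var.team}-${var.project}"
  table_name  = "tflock-${var.team}-${var.project}"
}

resource "aws_s3_bucket" "state" {
  bucket = local.bucket_name
}

resource "aws_dynamodb_table" "lock" {
  name         = local.table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the key Terraform's S3 backend locking expects

  attribute {
    name = "LockID"
    type = "S"
  }
}

output "state_bucket" { value = aws_s3_bucket.state.bucket }
output "lock_table"   { value = aws_dynamodb_table.lock.name }
```

Bundling the resources with the naming outputs is precisely what causes the problem described below.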

I had thought that I could invoke the s3-backend module from the various main tf projects (the ones creating infra targets), but after thinking this through some more, I see that any inclusion of these S3 backend resources (via my s3-backend module) in a main tf project will include them in that project’s plan/state, and therefore potentially put them on the chopping block at destroy time. A prevent_destroy = true lifecycle clause would prevent that, but it would also prevent the destruction of everything else in the project most of the time, which would be problematic. Looking here, I see that what I’m trying to do can’t be done with Terraform: by design, the only way to create a resource with Terraform, then refer to it from Terraform while fully protecting it from being destroyed, is to put it in a completely separate tf project.

I’m but a veteran Software Engineer turned DevOps Engineer, trying to wrap my mind around the Terraform way of doing things and cringing a bit here.

I obviously could create the state resources manually and move on with my life. The only reason I’m trying to use Terraform to create Terraform state resources is that this will be done many, many times by many, many teams at this company, and I wanted to have some automation in place so this isn’t something a human has to do every time.

The other primary motivation here is to provide a mechanism that lets other modules fetch the names of some common variables whose values derive from the account/team/project/etc. input variables. I had planned on doing this with my s3-backend module, which not only calculates the output variables in question but also contains the resources themselves. This latter part is the problem, because as soon as a main module uses this module, the S3 backend resources could potentially be on the chopping block at destroy time.

I can see now that the best way forward is to create a completely separate “S3 backend” tf project to automate creating the s3 backend resources (invoking my s3-backend module) and provide a new config module whose purpose in life is to calculate output variables based on account/team/project inputs. This module is used by the aforementioned standalone S3 backend project, as well as the main tf project(s) that are responsible for creating infra.
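A rough sketch of that config module (names are illustrative): it contains no resources at all, only inputs and computed outputs, so consuming it from any project never puts anything on the chopping block:

```hcl
# Hypothetical "config" module: pure computation, no resources.
variable "account" { type = string }
variable "team"    { type = string }
variable "project" { type = string }

# Outputs encode the company-wide naming convention in one place.
output "state_bucket_name" {
  value = "tfstate-${var.account}-${var.team}-${var.project}"
}

output "lock_table_name" {
  value = "tflock-${var.account}-${var.team}-${var.project}"
}
```

Both the standalone S3 backend project and the main infra projects would call this module with the same inputs and therefore agree on the same names.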

In the end, the documentation for teams will say (and their CI/CD pipelines will do) “apply the “S3 backend” project up front, once, then apply your other tf infra project (which will be automatically configured to use the created remote state resources) to create your infra”.
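Concretely, the per-project wiring would look something like this (names hypothetical, following the convention). Since a backend block can’t compute values itself, each project either commits a literal block or lets the pipeline supply the values via partial configuration:

```hcl
# Option A: commit a literal backend block following the convention.
terraform {
  backend "s3" {
    bucket         = "tfstate-teamx-webapp" # hypothetical, per convention
    key            = "webapp/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tflock-teamx-webapp"  # hypothetical, per convention
    encrypt        = true
  }
}

# Option B: commit an empty `backend "s3" {}` block and have the CI/CD
# pipeline pass the conventional names at init time:
#   terraform init \
#     -backend-config="bucket=tfstate-teamx-webapp" \
#     -backend-config="dynamodb_table=tflock-teamx-webapp" \
#     -backend-config="key=webapp/terraform.tfstate" \
#     -backend-config="region=us-east-1"
```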


Hi @timblaktu

Unfortunately, I believe Terraform doesn’t have an established set of best practices for enterprise Terraform configuration. Each company has to find its own best practices. I have tried different approaches, and this is what I currently use:

Data sources as source of truth:

You can have many Terraform projects creating new “layers” of infrastructure on top of existing resources from other Terraform projects using data sources to access infrastructure resources:


  1. Layer/Terraform project BASE_NET: creates the VPC, some security rules and basic mandatory resources for other layers.
  2. Layer/Terraform project BASE_SRV: creates shared servers, like a DB, ready for other layers to use; these servers are created in the VPC and subnets created by the previous layer (Layer/Terraform project BASE_NET).
  3. Layer/Terraform project WEBSITE: creates specific servers like a Webserver, and uses resources from the other layers (BASE_NET and BASE_SRV).

The BASE_SRV layer includes a Terraform file that uses data sources to find resources created by BASE_NET, example:

Module to find a VPC:

data "aws_vpc" "default" {
  filter {
    name   = "tag:Name"
    values = [var.name] # match the VPC by its Name tag
  }
}

Module to find a Subnet:

data "aws_subnet" "default" {
  vpc_id = var.vpc_id
  filter {
    name   = "tag:Name"
    values = [var.name] # match the subnet by its Name tag
  }
}

Module to find a DB Instance:

data "aws_db_instance" "default" {
  db_instance_identifier = var.db_instance_identifier
}

Module to find a DB subnet group:

data "aws_db_subnet_group" "default" {
  name = var.name
}

Call the modules to find the VPC and Subnets needed for this layer:

module "aws_network_vpc" {
  source = "../modules/aws/data/network/vpc"
  name   = var.base_net["aws_network_vpc_name"]
}
# assumes the module re-exports the data source's id as "vpc_id"
output "aws_network_vpc" { value = module.aws_network_vpc.vpc_id }

# Zone: A, Env: PRO, Type: PUBLIC, Code: 00 (aws_sn_za_pro_pub_00)
module "aws_sn_za_pro_pub_00" {
  source = "../modules/aws/data/network/subnet"
  vpc_id = module.aws_network_vpc.vpc_id # from the VPC lookup above
  name   = var.base_net["aws_sn_za_pro_pub_00_name"]
}
#output "aws_sn_za_pro_pub_00" { value = module.aws_sn_za_pro_pub_00.subnet_id }

In layer WEBSITE, find the DB and a security group created by a previous layer:

module "aws_rds_mariadb_pro_pub_01" {
  source                 = "../modules/aws/data/rds/instance"
  db_instance_identifier = var.base_srvs["aws_rds_mariadb_pro_pub_01_identifier"]
}
output "aws_rds_mariadb_pro_pub_01" { value = module.aws_rds_mariadb_pro_pub_01.resource_id }

module "aws_sg_rds_mariadb_pro_pub_01" {
  source = "../modules/aws/data/security/group"
  vpc_id = module.aws_network_vpc.vpc_id # assumed to come from the BASE_NET VPC lookup
  name   = var.base_srvs["aws_sg_rds_mariadb_pro_pub_01_name"]
}
output "aws_sg_rds_mariadb_pro_pub_01" { value = module.aws_sg_rds_mariadb_pro_pub_01.security_group_id }

So the idea is that teams limit their Terraform code to the new infrastructure they are allowed to create and maintain, and use existing infrastructure created by a different team through data sources.

What do you think?


Thanks @javierruizjimenez, that all makes sense, and I agree that what you describe should be a best practice. Terraform actually supports this, managing the dependencies for us so it knows what’s dependent on what. What you describe simply provides further delineation between infrastructure layers by separating them into distinct tf projects that can be run independently. I agree this is a good idea and consider it a best practice.
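(As an aside, another way for one layer to consume another layer’s outputs, besides tag-based data sources, is terraform_remote_state — a sketch, with hypothetical bucket/key names:)

```hcl
# Hypothetical sketch: read the BASE_NET layer's outputs directly
# from its remote state instead of looking resources up by tag.
data "terraform_remote_state" "base_net" {
  backend = "s3"
  config = {
    bucket = "tfstate-teamx-base-net" # hypothetical
    key    = "base_net/terraform.tfstate"
    region = "us-east-1"
  }
}

# e.g.: vpc_id = data.terraform_remote_state.base_net.outputs.vpc_id
```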

Conversely, what I’m asking about is NOT something Terraform supports, and is specifically about managing Terraform remote state resources using Terraform. I have since run into this roadblock, since Terraform does not allow interpolating variables in backend configuration blocks.

Ultimately, there is no workaround inside Terraform; all of the workarounds require one to “wrap” terraform in scripts, Makefiles, or something like Terragrunt.
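For example, Terragrunt’s remote_state block can generate the backend configuration from variables, which plain Terraform cannot do (a sketch — the names and locals are hypothetical, not my actual config):

```hcl
# terragrunt.hcl (hypothetical sketch): Terragrunt generates the
# backend block that plain Terraform cannot parameterize.
locals {
  team    = "teamx"  # hypothetical
  project = "webapp" # hypothetical
}

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "tfstate-${local.team}-${local.project}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tflock-${local.team}-${local.project}"
    encrypt        = true
  }
}
```

Terragrunt can also create the S3 bucket and DynamoDB table on first run, which addresses the chicken/egg bootstrap problem directly.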