How do you manage configuration variables when you split a Terraform project into multiple configurations?

So it was suggested that I take a global, multi-region, single Terraform configuration with thousands of resources and break it into a global configuration plus isolated region + environment sub-configurations, and then run those independently.

But how do you plug in and manage the configuration variables between the configurations? Say my global configuration sets up a Global Accelerator in AWS, and my region configurations set up the individual regions. I then want my Global Accelerator to be configured with each region’s load balancer. How do I efficiently pass these configuration values around now that they are separate Terraform projects?

I would first run the region code to generate a region, then run the global code with the region’s load balancer ID to set up the global environment. But how do you do this effectively?

I just saw this before logging out, so I thought I’d mention a quick piece.
First, for this type of question another great community is the SweetOps Slack. It’s fantastic for talking about this kind of thing.

If you aren’t already looking into Terragrunt, then maybe someone here or in SweetOps can help.

I started writing something longer, but deleted it. Let me just point to this and ask: what is blocking you from using the endpoint group resource that’s already included?

Can you add the endpoint in each project without issue by just putting your endpoint group there? What configuration values are you referring to specifically? I’ve not worked with Global Accelerator yet. Normally I’d expect any values you need to be included in the data resource, but I’m guessing you have something more complex going on?

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/globalaccelerator_endpoint_group
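For context, here is a minimal sketch of what that endpoint group could look like in the global project. The listener reference and the variable carrying the regional ALB ARN are my own hypothetical names, not from your setup:

```hcl
# Hypothetical sketch: in the global project, one endpoint group per region
# attaches that region's load balancer to the accelerator's listener.
resource "aws_globalaccelerator_endpoint_group" "us_east_1" {
  listener_arn          = aws_globalaccelerator_listener.main.id
  endpoint_group_region = "us-east-1"

  endpoint_configuration {
    # Regional ALB ARN, passed into the global project as a variable
    endpoint_id = var.us_east_1_alb_arn
    weight      = 100
  }
}
```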

Hi @lancejpollard

You can have a shared “outputs.tf” with all the external resources, or you can also use a data source to look up the ID or the ARN:

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpcs

Unfortunately, all the Terraform documentation is currently in transition, and it is difficult to find and follow.

I don’t quite follow; can you show an example with commands and such?

Hi @lancejpollard

I will be writing a tutorial, but in the meantime, and in the absence of a better answer, I will try to explain the “theory” with a short example.

A company can have ALL of its infrastructure in a single Terraform project; that is convenient but dangerous, so the general best practice is to split it into different projects that limit the scope of changes.

And then the problem arises: how do you access existing Terraform resources created in a different project?

I see at least 3 options:

a. Search in AWS for the resources
b. Access the TF state files from other projects to use their outputs
c. Write a .tf file with hardcoded ARNs (a bad idea, I think)

I will write an example for the first option, and will consider other options for a blog post.
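As a preview of option (b), this is roughly how one project could read another project’s outputs via the `terraform_remote_state` data source. This sketch assumes project T0 stores its state in S3 and declares outputs named `vpc_id` and `subnet_id`; the bucket, key, and output names are made up for illustration:

```hcl
# Sketch of option (b): read project T0's state file to use its outputs.
data "terraform_remote_state" "t0" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state" # hypothetical bucket
    key    = "t0/terraform.tfstate"    # hypothetical key
    region = "us-east-1"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t3.micro"
  # Only values that T0 explicitly declares as outputs are available here
  subnet_id     = data.terraform_remote_state.t0.outputs.subnet_id
}
```

The trade-off versus option (a) is that the consuming project needs read access to T0’s state backend, but it gets exactly the values T0 chose to publish instead of searching for them.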

0. Set up the VPC and other shared “basic” resources that will be reused by other Terraform projects company-wide.

Project path \T0
File: Terraform_VPC.tf
Creates a VPC and a Subnet

terraform {
  required_version = "~> 0.12" 
}

provider "aws" {
  shared_credentials_file = pathexpand("~/keys/ditwl_kp_infradmin.pem")
  profile                 = "ditwl_infradmin"
  region                  = "us-east-1"
  version                 = "~> 2.0"
}

resource "aws_vpc" "ditwl-vpc" {
  cidr_block           = "172.17.32.0/19"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "ditwl-vpc"
  }
}

resource "aws_subnet" "ditwl-sn-za-pro-pub-32" {
  vpc_id                  = aws_vpc.ditwl-vpc.id
  cidr_block              = "172.17.32.0/23"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = "true"

  tags = {
    Name = "ditwl-sn-za-pro-pub-32"
  }
}
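If you later want other projects to consume these IDs from T0’s state (option b) instead of searching for them, T0 could also declare outputs. These output names are my own suggestion, not part of the original example:

```hcl
# Optional: expose IDs so other projects can read them from T0's state.
output "vpc_id" {
  value = aws_vpc.ditwl-vpc.id
}

output "subnet_id" {
  value = aws_subnet.ditwl-sn-za-pro-pub-32.id
}
```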

1. Search in AWS for the resources

A different project uses credentials with access to the infrastructure created by the “company-wide project” shown in step 0 (called T0). It doesn’t need access to project T0 itself.

It will search for resources like the VPC or a Subnet using a Terraform data source.
See available Data Sources, like subnet at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnet

Project path \T1
File: Terraform_EC2.tf

terraform {
  required_version = "~> 0.12" 
}

provider "aws" {
  shared_credentials_file = pathexpand("~/keys/ditwl_kp_infradmin.pem")
  profile                 = "ditwl_infradmin"
  region                  = "us-east-1"
  version                 = "~> 2.0"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

# Find a VPC named "ditwl-vpc"
data "aws_vpc" "ditwl-vpc" {  
  filter {
    name = "tag:Name"
    values = ["ditwl-vpc"]
  }  
}

# Find a Subnet in the VPC named "ditwl-vpc" with tag Name="ditwl-sn-za-pro-pub-32"
data "aws_subnet" "ditwl-sn-za-pro-pub-32" {
  vpc_id = data.aws_vpc.ditwl-vpc.id
  tags = {
    Name = "ditwl-sn-za-pro-pub-32"
  }
}

# Create an AWS instance in the Subnet "ditwl-sn-za-pro-pub-32"
resource "aws_instance" "ditwl-web-01" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnet.ditwl-sn-za-pro-pub-32.id
  tags = {
    Name = "HelloWorld"
  }
}

There are many ways to “search” for the resources; I have shown how to find the VPC using a filter and the subnet using a tag.

:warning: Please don’t consider this example code a best practice; I like to use modules, variables, naming standards, and tags that are not used in these short examples.

Interesting, thanks!