Hi @dubgregd,
I’d agree that root module input variables don’t seem like the best approach here. Root module input variables are typically for values that need to vary from one run to the next, used in situations like provider configurations, rather than for fixed values that describe the infrastructure you want to create.
Instead, I’d suggest reframing your current root module as a shared module you can call multiple times. The main requirement for that would be to delete the `provider "aws"` blocks and any backend configuration you have in there, because those should only be declared in the root module.
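For example, the shared module (assumed here to live at `./modules/region`) would declare ordinary input variables for the per-region settings. The variable names below match the arguments used in the module calls later in this reply; the descriptions are just illustrative:

```hcl
# ./modules/region/variables.tf

variable "vpc_cidr" {
  type        = string
  description = "CIDR block for this region's VPC"
}

variable "subnet1_cidr" {
  type        = string
  description = "CIDR block for the first subnet"
}

variable "subnet2_cidr" {
  type        = string
  description = "CIDR block for the second subnet"
}

variable "my_ami" {
  type        = string
  description = "AMI ID to use for EC2 instances in this region"
}
```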
Then you can write a new root module that calls your shared module twice with different settings:
```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }

  # (if you had a backend configuration in your
  # old root module then move that into here too)
}

provider "aws" {
  alias  = "eu-west-1"
  region = "eu-west-1"
}

provider "aws" {
  alias  = "eu-south-1"
  region = "eu-south-1"
}

module "eu-west-1" {
  source = "./modules/region"

  vpc_cidr     = "10.10.10.0/24"
  subnet1_cidr = "10.10.10.0/25"
  subnet2_cidr = "10.10.10.128/25"
  my_ami       = "ami-038d7b856fe7557b3"

  providers = {
    aws = aws.eu-west-1
  }
}

module "eu-south-1" {
  source = "./modules/region"

  vpc_cidr     = "10.10.20.0/24"
  subnet1_cidr = "10.10.20.0/25"
  subnet2_cidr = "10.10.20.128/25"
  my_ami       = "ami-063c648dab7687f2b"

  providers = {
    aws = aws.eu-south-1
  }
}
```
In the above example I used the `providers` meta-argument so that each of the calls to this `./modules/region` module will see a different AWS provider configuration as its default configuration, and thus any AWS provider resources you declare in there will be automatically associated with the appropriate configuration for that region.
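Concretely, that means the resources inside the shared module just refer to the default (unaliased) `aws` provider, and each caller’s `providers` map decides which configuration that default resolves to. A minimal sketch, with purely illustrative resource names:

```hcl
# ./modules/region/main.tf
#
# Note: no provider "aws" block here. The calling module
# supplies the configuration via its providers argument.

resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
}

resource "aws_subnet" "subnet1" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.subnet1_cidr
}
```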
I will note that a somewhat-typical strategy for decomposing infrastructure into multiple Terraform configurations is to split by failure domain, and regions in AWS are the top-level failure domain, so there might still be other good reasons to have a separate root module per region. A nice thing about the above structure, though, is that this `./modules/region` module doesn’t need to know anything about how it was called, so you wouldn’t need to modify it in order to use it from another configuration aimed at a different set of regions later. If you decided to e.g. have one configuration for Europe and another configuration for the USA, you’d just call that shared module from both configurations, with different AWS provider configurations in each case.
Note that if you already have your old configuration deployed as "production" then the refactoring I described above would not be compatible with your existing state. In that case, I think the easiest path forward would be to start with a fresh new state and use `terraform import` to import your existing remote objects into the new resource addresses, and then discard the old separate states once you are done, to preserve the assumption that each remote object is bound to only one Terraform resource instance (across all configurations) at a time.
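Assuming the module structure above, the import commands would target the new module-prefixed resource addresses. The resource names and object IDs below are purely hypothetical; substitute the real ones from your existing states (e.g. found via `terraform state list` and `terraform state show` in the old configurations):

```shell
# Import each existing object into its new address in the new state.
terraform import 'module.eu-west-1.aws_vpc.main'  vpc-0a1b2c3d4e5f67890
terraform import 'module.eu-south-1.aws_vpc.main' vpc-0f9e8d7c6b5a43210
```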