Hi @ShitulGupta,
It isn’t clear to me from your initial message whether you want to produce a “copy” of all of the same objects in two regions, or if you want to deploy some objects in one region and some in another. I’m going to guess the first and answer that, but my answer would be different if your goal were to split different functionality across different regions.
A Terraform “project” (not an official term in Terraform, but I’m using it because you did) typically consists of multiple configurations that share modules, so that you can deploy things in different permutations for different cases. As @javierruizjimenez said, the usual way to get this done would be to write a shared Terraform module representing what you want to deploy in one region and then write three small configurations representing each of your environment+region pairs, which you can then deploy separately:
# staging environment
provider "aws" {
  region = "us-west-2"
}

module "common" {
  source = "../modules/common"

  # (any settings that need to vary per environment/region)
}

# production environment, first region
provider "aws" {
  region = "us-west-2"
}

module "common" {
  source = "../modules/common"

  # (any settings that need to vary per environment/region)
}

# production environment, second region
provider "aws" {
  region = "us-east-1"
}

module "common" {
  source = "../modules/common"

  # (any settings that need to vary per environment/region)
}
The above is what @javierruizjimenez also suggested, so I’m just restating it here for completeness but I know you’ve said in a subsequent comment that you want some other options.
If the infrastructure to be deployed in each environment+region is identical except for the environment name then you can potentially avoid the per-environment+region wrapper configurations by using Workspaces to separate each environment+region, perhaps naming your workspaces like `production_us-west-2` and then splitting out the environment name and region name from the workspace name inside the configuration:
locals {
  environment_region = split("_", terraform.workspace)
  environment        = local.environment_region[0]
  region             = local.environment_region[1]
}

provider "aws" {
  region = local.region
}
You’d then use commands like `terraform workspace select production_us-west-2` to switch between the workspaces, rather than switching to a different working directory.
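For illustration, a hypothetical sequence of workspace commands for three environment+region pairs might look like this (the workspace names here are just examples):

```shell
# Create one workspace per environment+region pair
terraform workspace new staging_us-west-2
terraform workspace new production_us-west-2
terraform workspace new production_us-east-1

# Switch to the pair you want to work on, then plan/apply as usual
terraform workspace select production_us-east-1
terraform plan
```

Each workspace gets its own state snapshot, so the same configuration can track three independent copies of the infrastructure.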
However, if you take the workspace option, be sure to note what I said in the other comment @javierruizjimenez mentioned: if you choose to use AWS S3 for state storage, a whole-region AWS S3 outage could make all of your remote state snapshots unavailable at once.
If you want to manage both regions for your production environment in a single configuration – keeping in mind the problem that an outage in one of those regions may then make it difficult to make changes in your other region to respond to the outage – then your options are constrained a little because the AWS provider is designed to work with only one region per provider configuration, and so you’ll need two provider configurations to work with two regions, and that’s not a difference that can be written dynamically based on input variables. (Provider configurations are static in Terraform.)
Therefore to implement a cross-region configuration while having a different number of regions per environment will require you to have a separate configuration per environment, so that the production configuration can have two providers and two “copies” of your infrastructure, while the staging configuration will have only one. This is therefore similar to the first approach I described above, but you’ll only have one configuration per environment rather than one configuration per environment+region pair:
# staging environment
provider "aws" {
  region = "us-west-2"
}

module "common" {
  source = "../modules/common"

  # (any settings that need to vary per environment)
}

# production environment
provider "aws" {
  region = "us-west-2"
  alias  = "us-west-2"
}

provider "aws" {
  region = "us-east-1"
  alias  = "us-east-1"
}

module "common_us-west-2" {
  source = "../modules/common"
  providers = {
    aws = aws.us-west-2
  }

  # (any settings that need to vary per environment/region)
}

module "common_us-east-1" {
  source = "../modules/common"
  providers = {
    aws = aws.us-east-1
  }

  # (any settings that need to vary per environment/region)
}
Notice that this again uses multiple instances of this “common” module to create three copies of the same infrastructure, but this time both of the production instances live in a single configuration where you can `terraform apply` them together.
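For illustration, here’s a minimal sketch of what the `modules/common` side of this might look like. The `providers` map in each module block above maps one of the aliased provider configurations onto the module’s default `aws` provider, so resources inside the module don’t need to mention an alias at all (the variable and resource names here are just examples, not part of your existing configuration):

```hcl
# modules/common/main.tf (hypothetical example)

variable "environment" {
  type = string
}

# This resource uses the default "aws" provider configuration,
# which the calling configuration maps to one of its aliased
# regional configurations via the "providers" argument.
resource "aws_s3_bucket" "example" {
  bucket_prefix = "${var.environment}-example-"
}
```

Because the module only ever refers to the default `aws` provider, the same module source works unchanged for the single-region staging configuration and for both instances in the two-region production configuration.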
These two environments are now structurally different from one another, so there is no alternative to writing a separate configuration for each one.
I’ve focused on AWS here because you mentioned AWS, but for completeness (in case someone else finds this discussion in future) I want to note that this rigidness is actually a consequence of a design decision in the AWS provider: each instance of the provider can interact with only one region.
Some other providers have different design tradeoffs that allow for some different permutations. For example, the Google Cloud Platform provider interprets its `region` argument as a default region, used for any resource that doesn’t explicitly choose one, but it allows you to override the region on a per-resource basis and thus manage objects across many regions with a single configuration of the provider. That can potentially combine with Terraform 0.13’s module `for_each` to allow dynamically distributing objects across zero or more regions with a single configuration:
variable "regions" {
  type = set(string)
}

provider "google" {
  # no "region" argument here, because we'll set it on
  # a per-resource basis inside the common module.
}

module "common" {
  source   = "../modules/common"
  for_each = var.regions

  region = each.key

  # (...and any other arguments that must vary per
  # environment or region)
}
Inside the “common” module you can then declare a `variable "region"` block and use it in the `region` argument of regional GCP resource types:

region = var.region
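Putting that together, a minimal sketch of the GCP-flavored `modules/common` might look like the following (the subnetwork resource and its settings are just an illustrative example):

```hcl
# modules/common/main.tf (hypothetical example)

variable "region" {
  type = string
}

# Each regional resource sets its region explicitly from the
# module's "region" variable, overriding any provider-level default.
resource "google_compute_subnetwork" "example" {
  name          = "example-${var.region}"
  region        = var.region
  ip_cidr_range = "10.0.0.0/16"
  network       = "default"
}
```

With `for_each = var.regions` in the calling configuration, Terraform then creates one instance of this module – and thus one subnetwork – per region in the set, all managed through a single `google` provider configuration.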
The AWS provider design doesn’t have any equivalent to this at the moment, so this approach is not available there. This more dynamic version works with any provider that allows varying the region/endpoint on a per-resource basis, but not with providers that treat region/endpoint as a per-provider-configuration setting only.