The same code works in eu-west-1 but is failing in eu-west-2

Hi,
I have a fairly straightforward SNS module I'd like to deploy in the eu-west-2 region. Unfortunately, what works fine in eu-west-1 doesn't work in eu-west-2. I'm not sure where the problem is, so I'm asking for help.
Module definition: https://pastebin.com/ACdkkTAM

vars: https://pastebin.com/ffJH58X3

module’s main.tf: https://pastebin.com/v2ZCePd5
module’s policy: https://pastebin.com/Rrens42x
module’s output.tf: https://pastebin.com/rRhkekBM

and the error, which appears in eu-west-2 only: https://pastebin.com/8VY2YNHp
Any ideas what is wrong here from the point of view of eu-west-2 region?

Hi @80kk,

The symptoms you're seeing here look like what can happen if you change the region argument of an existing provider configuration after it has already created some resources. In AWS, the region argument is really just a way to select the base URL for the API, so changing it causes the AWS provider to send requests to a different API endpoint, where it will typically find that all of the existing objects recorded from a previous run now return 404 Not Found or similar.

That can lead to errors like the ones you've seen here when there are complex inter-dependencies between resources, because Terraform expects to be able to correlate the existing objects with the configuration, but some of those objects no longer appear to exist.

If you wish to deploy the same objects in multiple regions then you need to make sure that Terraform uses a separate resource instance to track each one, so that every object has its own identity in Terraform's state.

One way to do that would be to create a new workspace for each region, which will cause the objects for each region to be tracked separately. If you name the workspaces after regions then you can populate the region automatically based on the workspace name:

provider "aws" {
  region = terraform.workspace
}

…although that will be tricky to do in retrospect because you presumably currently have a single workspace called default, which isn’t a valid AWS region name.
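If you do want to adopt workspaces retroactively anyway, a minimal sketch is to map workspace names to regions in a local value, with an assumed fallback of eu-west-1 for the pre-existing default workspace (adjust the mapping to taste):

locals {
  # Assumed mapping: send the pre-existing "default" workspace
  # to eu-west-1; all other workspaces are named after regions.
  workspace_regions = {
    default = "eu-west-1"
  }
}

provider "aws" {
  # Fall back to the workspace name itself when it isn't in the map.
  region = lookup(local.workspace_regions, terraform.workspace, terraform.workspace)
}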

The "When to use Multiple Workspaces" section of the workspaces documentation advises caution when using workspaces like this, and instead proposes the other approach I was going to mention here: put your per-region objects in a shared module and then write one top-level configuration per region that calls into that shared module:

provider "aws" {
  region = "eu-west-1"
}

module "regional" {
  source = "../modules/regional"
}
provider "aws" {
  region = "eu-west-2"
}

module "regional" {
  source = "../modules/regional"
}

You can then run terraform apply once in each of these top-level configuration directories to give each region its own “copy” of the objects described in the “regional” module.
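As an aside: because the shared module inherits the default provider configuration from whichever root calls it, it can also discover the region it was deployed into without any extra wiring. A minimal sketch using the aws_region data source (the output name here is just for illustration):

# Inside the shared "regional" module: read the region from the
# inherited provider configuration, e.g. for use in resource names.
data "aws_region" "current" {}

output "deployed_region" {
  value = data.aws_region.current.name
}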

There are some other variants of these approaches too, but the general idea in all of them is to make sure that Terraform sees each of your objects as being distinct, rather than (as currently seems to be the case) trying to bind the objects from two regions to the same objects in the state.
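One such variant, as a sketch under the assumption that you'd rather keep a single configuration: give the second region an aliased provider configuration and call the shared module twice under two different instance names, so each call gets its own state addresses:

provider "aws" {
  region = "eu-west-1"
}

provider "aws" {
  alias  = "euw2"
  region = "eu-west-2"
}

# Two distinct module instance names keep the two regions'
# objects distinct in Terraform's state.
module "regional_euw1" {
  source = "../modules/regional"
}

module "regional_euw2" {
  source    = "../modules/regional"
  providers = {
    aws = aws.euw2
  }
}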


Sorry, I have that configured above the module definition:

provider "aws" {
  region  = "eu-west-2"
  profile = "bastion"

  assume_role {
    role_arn     = "arn:aws:iam::1234567890:role/STS_Role"
    session_name = "terraform"
  }
}