Move resource to a different provider

I’ve been using a custom module with the AWS provider for a long time, and I now need to create a second instance of the AWS provider using an alias. I have Terraform code like this, which has been working fine with the default (non-aliased) provider:

data "aws_lambda_layer_version" "foo" {
  layer_name = "some_layer"
}

resource "aws_lambda_function" "foo" {
  layers = [data.aws_lambda_layer_version.foo.arn]
  ...
}

I then add a new provider with an alias:

provider "aws" {
  region  = "us-east-1"
  version = "~> 3.3.0"
}

provider "aws" {
  region  = "us-east-1"
  version = "~> 3.3.0"
  alias   = "production"
}

I can run terraform init and terraform providers to confirm that the production aliased provider exists:

$ terraform providers
.
├── provider.aws ~> 3.3.0
├── provider.aws.production ~> 3.3.0
└── module.my_module
    ├── provider.aws (inherited)
    ├── provider.aws.production

Then I change my aws_lambda_layer_version data source to use the new production provider:

data "aws_lambda_layer_version" "foo" {
  provider = aws.production
  layer_name = "some_layer"
}

Doing so breaks terraform plan:

$ terraform plan

Error: Provider configuration not present

To work with module.my_module.data.aws_lambda_layer_version.foo its original
provider configuration at module.my_module.provider.aws.production is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.my_module.data.aws_lambda_layer_version.foo, after which you can
remove the provider configuration again.

How can I move this data source to the aliased provider?

I did try terraform state rm to remove the data source from the state, hoping that it could be re-initialized under the new provider, but a subsequent plan resulted in the same error pasted above.

$ terraform version
Terraform v0.12.28
+ provider.aws v3.3.0

Hi @jrobison-sb,

Do you have a providers argument in your module block to tell Terraform to use the root provider.aws.production also for the resources in module.my_module?

module "my_module" {
  # ...

  providers = {
    aws            = aws
    aws.production = aws.production
  }

  # ...
}

I think right now Terraform is seeing this module as using its own aws.production provider, which isn’t present in your configuration.

@apparentlymart thanks for your reply.

It turns out that this was a case where I needed to do this: https://www.terraform.io/docs/configuration/modules.html#passing-providers-explicitly

I had my providers declared in the root module (e.g. terraform/environments/staging/provider.tf), and I did not have them declared within the child module (e.g. terraform/modules/aws/my_module/). After seeing the above link and adding provider {} blocks to the child module, this seems to be working.

Weirdly, it seems to be working with or without a providers argument inside my module "my_module" {} block. But either way, I think I’m good now.

Thanks.

Hi @jrobison-sb,

Unfortunately for historical reasons (emulating some Terraform 0.10-and-earlier behaviors), if you don’t pass aliased providers explicitly then Terraform assumes you intend to have a module-specific provider configuration, which tends to lead to the annoying situation you found here.
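To make that concrete: with no providers argument in the module block, a child-module reference like the one from earlier in this thread is resolved against the module’s own configurations. A sketch of the failing setup, reusing the data source from above:

```hcl
# Inside module.my_module. Because the caller passes no providers argument,
# Terraform treats this reference as pointing at a provider "aws" block with
# alias = "production" defined inside this module -- which doesn't exist,
# hence the "Provider configuration not present" error.
data "aws_lambda_layer_version" "foo" {
  provider   = aws.production
  layer_name = "some_layer"
}
```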

Hopefully a future release will be able to clean up that old behavior and rationalize things a little, but we’re currently in a mode of trying to minimize breaking changes as we iterate towards a Terraform 1.0, so sadly in this case I expect this particular quirk will be with us for a while yet. :confounded:

@apparentlymart Thanks for your reply. Now that I have circled back to this and begun attempting to create resources across multiple accounts in a single apply, I’m seeing some crazy behavior.

I’m now using two aws providers similar to what I pasted above, but now one of them uses a separate AWS account:

provider "aws" {
  # This should assume-role in AWS account 22222222222222
  region  = "us-east-1"
  version = "~> 3.3.0"
  assume_role {
    role_arn = "arn:aws:iam::22222222222222:role/OrganizationAccountAccessRole"
  }
}

provider "aws" {
  # This should pick up my default IAM keys in ~/.aws, which are hooked up to
  # the parent AWS account 1111111111111
  region  = "us-east-1"
  version = "~> 3.3.0"
  alias   = "production"
}

The weird part is that resources created using the default provider (e.g. resources without an explicit provider = whatever argument) are created in account 1111111111111; they are not created by the assume-role provider. Then when I run terraform destroy, I get an error saying that the assumed role has no access to the resource. It seems like Terraform is not using the role to create resources, but is then attempting to use the role to destroy them.

I put together a small bit of code showing my setup and more verbose steps on how I can reproduce the problem. Any thoughts?

Thanks.

Hi @jrobison-sb,

Unfortunately I only have ready access to one AWS account at the moment and so it wasn’t really practical for me to directly reproduce what you showed in that repository, but I tried to study your configuration anyway and “imagine” how Terraform might’ve interpreted it.

The one thing that stuck out to me was your module/providers.tf file, which at the time I’m writing this has the following contents:

# https://www.terraform.io/docs/configuration/modules.html#passing-providers-explicitly

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  region = "us-east-1"
  alias  = "production"
}

Because these blocks have a region argument set, they don’t meet the definition of a Proxy Configuration Block. With that said, I think you could get the behavior you were intending by removing those region arguments, making the blocks be empty aside from the alias argument in the second one:

provider "aws" {
}

provider "aws" {
  alias  = "production"
}

These empty blocks help Terraform understand that you intend them to be placeholders for the configurations you’re passing from the calling module in the providers argument inside the module block. If they aren’t empty then Terraform believes you intend to define new provider configurations, and since neither of them contain an assume_role block they end up picking up only the credentials from your environment and thus attempting to take actions in the wrong account.
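For completeness, the calling side of this arrangement is the providers argument shown earlier in the thread. A sketch, with the source path assumed from the directory layout you described:

```hcl
module "my_module" {
  # Hypothetical path, based on the terraform/modules/aws/my_module/
  # layout mentioned earlier in this thread.
  source = "../../modules/aws/my_module"

  # Map the root module's real configurations onto the child module's
  # (empty) proxy configuration blocks.
  providers = {
    aws            = aws
    aws.production = aws.production
  }
}
```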

The totally-empty provider "aws" block above isn’t actually needed, as noted in the callout box in the documentation:

Note: Although a completely empty proxy configuration block is also valid, it is not necessary: proxy configuration blocks are needed only to establish which aliased provider configurations a child module expects. Don’t use a proxy configuration block if a module only needs a single default provider configuration, and don’t use proxy configuration blocks only to imply provider requirements.

…so with that said, it would also be valid to omit the first provider block and include only the one with the alias = "production" argument, but I included them both in the above example just for completeness.

@apparentlymart Thanks for your reply. You were right, region is what was blocking me. When I use provider {} blocks that are either empty or contain only an alias, cross-account works as expected.

Coincidentally, the reason region made its way into that bit of code was that my editor incorrectly flagged region as being required. That’s not a Terraform problem though, of course.

Thanks again.

Hi @jrobison-sb,

That is unfortunately another known quirk right now: because of the ambiguity of the current syntax (as I noted earlier, it’s trying to remain backward-compatible), systems like text editor validation have no way to recognize whether a particular provider block is a “real” provider configuration or a proxy provider configuration, and so they do unfortunately tend to report errors in cases like these.

This is another aspect of the current design that we’re unhappy with and hope to address in a future release.