Reading remote state tries to delete resources

I have a module that creates an AWS Backup Plan and Vault; its state is stored in S3 at awsbackup/terraform.tfstate. I use workspaces, and the workspace is set to steve.

I then have a resource that creates an EC2 instance; its state is in myinstance/terraform.tfstate in S3. I want to add that instance to the AWS Backup selection. As there is no data source for AWS Backup in Terraform, I need to read the remote state backend to get the plan ID output.

snippet of code:

terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "mys3bucket1"
    # Replace the name with your name dev, qe, uat or prod
    key = "myinstance/terraform.tfstate"
    # Replace region for production only to us-east-1
    region = "us-west-1"
    dynamodb_table = "mysite-state-lock"
  }
}

data "terraform_remote_state" "db" {
  backend = "s3"
  config = {
    # Replace this with your bucket name!
    bucket = "mys3bucket1"
    key    = "env:/steve/awsbackup/terraform.tfstate"
    region = "us-west-1"
  }
}

resource "aws_backup_selection" "backupinstance" {
  iam_role_arn = data.aws_iam_role.awsbackuparn.arn
  name         = "${terraform.workspace}_myinstance-backup_selection"
  plan_id      = data.terraform_remote_state.db.outputs.backup_plan_id
  resources = [
    aws_instance.web.arn,
  ]
}

The good news is that it reads the plan ID:

+ resource "aws_backup_selection" "backupinstance" {
      + iam_role_arn = "arn:aws:iam::137889747264:role/steve-backup-iamrole"
      + id           = (known after apply)
      + name         = "steve_myinstance-backup_selection"
      + plan_id      = "e58a6ede-3fee-4e11-98dd-f08d3bd7dcbd"
      + resources    = (known after apply)
    }

The bad news is that on apply it tries to delete my previously created AWS Backup Plan and Vault.

Here is a snippet of the apply plan:

 # aws_backup_vault.awsbackup will be destroyed
  - resource "aws_backup_vault" "awsbackup" {
      - arn             = "arn:aws:backup:us-west-1:137889747264:backup-vault:steve-backup_vault" -> null
      - id              = "steve-backup_vault" -> null
      - kms_key_arn     = "arn:aws:kms:us-west-1:137889747264:key/e71636fa-714b-4012-9c5d-946dc722a667" -> null
      - name            = "steve-backup_vault" -> null
      - recovery_points = 0 -> null
      - tags            = {
          - "Name"      = "steve"
          - "managedby" = "terraform"
          - "version"   = "0.0.1"
        } -> null
    }

Plan: 2 to add, 0 to change, 3 to destroy.

I expect the 2 to add but am not expecting it to destroy the items I created earlier.

Hi @caesharley,

If I’m understanding correctly, it seems like you are doing some refactoring where you are replacing some objects that were previously defined inline with some objects that are imported from a separate remote state.

By default, Terraform understands removing an object from your configuration as a request to destroy that object. In the case where you’ve moved a particular object to now be managed by a separate Terraform configuration entirely, you’ll need to override that default by asking your original Terraform configuration to “forget” that object, which will remove it from the state associated with that configuration as if this configuration had not created it in the first place:

terraform state rm aws_backup_vault.awsbackup

After this command succeeds, the state associated with the current configuration will no longer be aware of the steve-backup_vault object. Since you’ve removed the resource "aws_backup_vault" "awsbackup" block from the configuration, a subsequent terraform apply should say nothing about it, because it now exists neither in the configuration nor in the state.

If that object will now be managed by some other Terraform configuration, you can bind it to a resource in that configuration by switching to the directory containing that configuration (using cd, for example) and then importing the remote object named steve-backup_vault into whichever resource will now be considered to be managing it:

terraform import aws_backup_vault.example steve-backup_vault

Each object should be managed by only one Terraform configuration at a time, or the two competing configurations will “fight” one another to apply changes to the same object.
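Taken together, the two steps described above look roughly like this (the second configuration's directory and resource address are illustrative, not from the original thread):

```shell
# In the directory of the configuration that originally created the vault:
# remove it from state without destroying the real object
terraform state rm aws_backup_vault.awsbackup

# In the directory of the configuration that will manage it from now on:
# bind the existing vault to a resource block there
cd ../other-config
terraform import aws_backup_vault.example steve-backup_vault
```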

I am not trying to refactor or replace objects; I am trying to get the plan ID output from the remote Terraform state and then use that to create an AWS Backup resource.

I don’t want to remove the AWS Backup Plan and Vault. I need to keep those so that when I add more EC2 instances or EFS volumes I can just append the resources that I need backed up.

Thanks for responding.


It is not the remote state that is causing this; it is something to do with the resource "aws_backup_selection".

resource "aws_backup_selection" "backupinstance" {
  iam_role_arn = data.aws_iam_role.awsbackuparn.arn
  name         = "${terraform.workspace}_myinstance-backup_selection"
  plan_id      = data.terraform_remote_state.db.outputs.backup_plan_id
  resources = [
    "${aws_instance.web.arn}",
  ]
}

I replaced data.terraform_remote_state.db.outputs.backup_plan_id with the actual ID, removed the backend, and it still tried to delete the resources.

Hi @caesharley,

Terraform will only plan to delete something that is recorded in the Terraform state, so because we’re seeing a delete plan for your aws_backup_vault.awsbackup that means that at some point that state was updated by a configuration containing a resource "aws_backup_vault" "awsbackup" configuration block.

If you didn’t intentionally create that and then remove it, I guess the next most likely cause is that you’ve inadvertently re-used the same state location for two different configurations, and so each time you apply one Terraform will try to remove the objects that were created by the other one.

To see if that is true, could you review the backend "s3" blocks in all of the configurations you’ve been working with – the ones where you’ve been dealing with AWS Backup objects, in particular – and make sure they all have distinct values for the key argument?
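For instance, the two configurations can share a bucket as long as their key values differ, along these lines (bucket and key names here are illustrative, matching the ones mentioned in the thread):

```hcl
# Configuration A: AWS Backup plan and vault
terraform {
  backend "s3" {
    bucket = "mys3bucket1"
    key    = "awsbackup/terraform.tfstate"
    region = "us-west-1"
  }
}

# Configuration B (a separate directory): EC2 instance and backup selection
terraform {
  backend "s3" {
    bucket = "mys3bucket1"
    key    = "myinstance/terraform.tfstate"
    region = "us-west-1"
  }
}
```

If both configurations had used the same key, each apply would plan to destroy whatever the other configuration had recorded in that shared state.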

The backends are different in each: one is awsbackup/terraform.tfstate, which holds the vault, and the other is myinstance/terraform.tfstate, which holds the instance and resource assignment.

My finding is that the AWS Backup Terraform resources are not designed to work independently. The original query I raised about it being the backend was a red herring; to prove that, as I mentioned in a previous comment, I removed the remote state data retrieval and hard-coded the plan ID so that all I had was the resource aws_backup_selection, and it still tried to destroy.

In Summary
One state file that has the aws_backup_plan and the aws_backup_vault
Another state file that has just the aws_backup_selection

This just will not work.

To resolve this, I reverted to tagging and added the aws_backup_selection to the first state file; I now have to enforce the backup tag on the created resources, which is OK.
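A tag-based selection in that first configuration might look roughly like the sketch below, using the selection_tag block of aws_backup_selection (the resource names, role, and tag key/value are illustrative assumptions, not from the thread):

```hcl
# Lives alongside the plan and vault, in the same state file.
resource "aws_backup_selection" "tag_based" {
  iam_role_arn = aws_iam_role.backup.arn
  name         = "${terraform.workspace}_tag-based-backup_selection"
  plan_id      = aws_backup_plan.awsbackup.id

  # Any resource carrying this tag is picked up automatically,
  # so new EC2 instances or EFS file systems only need the tag.
  selection_tag {
    type  = "STRINGEQUALS"
    key   = "backup"
    value = "true"
  }
}
```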

I am going to close; thanks for all the comments and advice.