Terraform test: unexpected destroy order

I have a Terraform test file, with the following (redacted) content:

run "setup" {
  ...
}

run "create_provider" {
  ...
}

run "create_consumer_same_region" {
  ...
}

run "create_consumer_different_region" {
  ...
}
  • The setup run step is a dependency for all the other run steps
  • create_provider is also used by both create_consumer_same_region and create_consumer_different_region (see the sketch after this list for how the run blocks reference each other)
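
For reference, the dependency wiring looks roughly like this (the variable and output names below are placeholders I made up, since the real ones are redacted):

run "create_consumer_same_region" {
  module {
    source = "./"
  }

  variables {
    # Hypothetical references: consuming outputs of earlier run blocks is
    # what makes setup and create_provider dependencies of this step.
    provider_id = run.create_provider.provider_id
    vpc_id      = run.setup.vpc_id
  }
}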

The test creation works fine and all the tests are succeeding. However, when the destroy phase of the test is executed, the create_provider run step can’t be destroyed because other resources still depend on the resources this step creates.

The DEBUG logs of Terraform during a test show the following:

$ grep -e "TestFileRunner: starting plan" -e "TestFileRunner: starting destroy" tf-1.11.log
2025-04-25T13:18:12.929+0200 [DEBUG] TestFileRunner: starting plan for tests/defaults.tftest.hcl/setup
2025-04-25T13:18:13.229+0200 [DEBUG] TestFileRunner: starting plan for tests/defaults.tftest.hcl/create_provider
2025-04-25T13:18:21.085+0200 [DEBUG] TestFileRunner: starting plan for tests/defaults.tftest.hcl/create_consumer_same_region
2025-04-25T13:19:10.772+0200 [DEBUG] TestFileRunner: starting plan for tests/defaults.tftest.hcl/create_consumer_different_region
2025-04-25T13:21:45.517+0200 [DEBUG] TestFileRunner: starting destroy plan for tests/defaults.tftest.hcl/create_consumer_different_region
2025-04-25T13:24:10.068+0200 [DEBUG] TestFileRunner: starting destroy plan for tests/defaults.tftest.hcl/create_provider
2025-04-25T13:24:13.569+0200 [DEBUG] TestFileRunner: starting destroy plan for tests/defaults.tftest.hcl/setup

We can see that the plans (and related applies) initially execute in the correct order. However, the destroy for the create_consumer_same_region run step doesn’t seem to be executed at all. As a result, the destroy phase for create_provider fails because resources created in create_consumer_same_region still exist.

If I comment out either of the 2 create_consumer_* run steps so that only one is executed, all the destroy plans execute correctly and the test suite succeeds. It’s only when both create_consumer_* steps are present in the file that one of them never gets destroyed: the test suite then fails because it can’t destroy all the resources it created, and the leftover resources have to be cleaned up manually.

I’m not sure what could cause this behavior. Is there a specific configuration in Terraform that would prevent the destroy phase from running in specific scenarios?

Hi @multani!

What sources / state keys are you using for your run blocks? If create_consumer_different_region and create_consumer_same_region are referencing the same state, then the destroy will only be called against the latest run block. But I’d still expect that to destroy everything in the state, unless the provider is being changed or something and Terraform can’t find the old resources.


I’m using the same source for both create_consumer_* run steps, but I’m using a different provider for the second one:

run "create_consumer_same_region" {
  module {
    source = "./"
  }
  ...
}

run "create_consumer_different_region" {
  providers = {
    aws = aws.region2
  }

  module {
    source = "./"
  }
  ...
}
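
For completeness, here is a minimal sketch of how the aws.region2 provider alias could be declared in the test file (the regions are assumptions on my part, not the real values):

provider "aws" {
  region = "eu-west-1" # assumed primary region
}

provider "aws" {
  alias  = "region2"
  region = "eu-central-1" # assumed second region used by create_consumer_different_region
}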

I can see how that could be a problem. I suspect the 2nd run doesn’t delete the configuration from the first one because I’m using a different provider instance … I hadn’t noticed that.

Is there a way to instantiate the same module twice in the same test suite, but independently from each other?

And I should read the documentation better :slight_smile:

run "create_consumer_different_region" {
  providers = {
    aws = aws.region2
  }

  module {
    source = "./"
  }

  state_key = "other"

  ...
}

I will try that!

Yes, the state_key attribute is what I’d recommend - it means the two run blocks maintain independent state files, so both should be cleaned up. Bear in mind you need to be using Terraform v1.11 or higher for the state_key attribute to be available.

What’s happening in the original case is that switching providers while using the same underlying state causes Terraform to effectively forget about the resources created in the first run block, since it only tries to clean up each internal state file once.
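
To make it concrete, here is a sketch of the two consumer run blocks with the fix applied (the "other" label is arbitrary; any value distinct from the default state key works, and variables/assertions are omitted for brevity):

run "create_consumer_same_region" {
  module {
    source = "./"
  }
}

run "create_consumer_different_region" {
  providers = {
    aws = aws.region2
  }

  module {
    source = "./"
  }

  # A distinct state key gives this run block its own internal state file,
  # so each consumer state gets its own destroy phase at the end of the test.
  state_key = "other"
}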


Thanks a lot @liamcervante, using the state_key parameter fixes my problem!
