Can we destroy/remove all resources deployed via module

Hi there,

I’m posting this question to ask whether I can destroy/remove all the resources created by a module. Here’s the catch: I am calling that module with count, so multiple sets of resources are created.

ex:
module1 = 10 resources
module2 = 10 resources
module3 = 10 resources

Now I want to delete module2. Is there a simple way to delete all 10 resources created under module2?

Another, similar question: can we delete/remove resources created with count?

ex:
aws_instance.fruit[0]
aws_instance.fruit[1]
aws_instance.fruit[2]

Can I destroy a single indexed instance with something like:
# this doesn't work in terraform
terraform destroy -target aws_instance.fruit[1]

Thanks,

Hi @Let-itGo,

The usual way to destroy something in Terraform is to remove it from your configuration and run terraform apply.

Terraform is a “desired state” system: it will notice that the objects it previously created are no longer in the configuration (which describes the desired state) and propose to delete them to make the remote system match your configuration change.
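For example, a minimal sketch using three module calls like the ones in your question (the module names and source path are illustrative):

```hcl
# main.tf — three module calls, each creating its own set of resources
module "module1" {
  source = "./modules/example"
}

module "module2" {
  source = "./modules/example"
}

module "module3" {
  source = "./modules/example"
}
```

To destroy everything under module2, delete its `module "module2"` block and run `terraform apply`; the plan should show only module2’s resources being destroyed. You can also target a whole module with `terraform destroy -target module.module2` without changing the configuration, though targeting is intended for exceptional situations rather than routine use.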


Hi @apparentlymart

I face a similar issue to the OP. I have defined a module (“db”) that creates multiple resources (14 resources) while provisioning a database. I then invoke the module twice from my main.tf, as “my_db1” and “my_db2”, and pass providers into these invocations. Now I want to delete all the resources created for my_db1. To do this, I simply removed the block:
module "my_db1" {
  source = "..\modules\db"
  ...
  ...
  providers = {
    ....
  }
}
I then ran terraform plan, expecting to see that Terraform would destroy the 14 resources that were created as part of “my_db1”. Instead I get an error about “Provider configuration not present”.

What am I doing wrong? Please let me know if you need any more details.

Kind Regards
Ashwin

Hi @mbapai,

It sounds like you are using a module which contains a provider block, and so removing the module is removing both the resources in the module and the provider configuration that would be required to destroy them, making it impossible for Terraform to proceed.

Including provider blocks in non-root modules is not recommended specifically because it creates this situation, but Terraform unfortunately must continue to allow it for backward compatibility.

If you are able to modify the child module then the best solution would be to redesign it so that it does not declare its own provider configurations and instead uses only provider configurations defined in the root module. If you complete that refactoring before you remove the module then Terraform will be able to use the provider configuration in the root module to destroy the resources in the child module.
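A minimal sketch of that refactoring, assuming an AWS provider (the provider and region are illustrative): the child module declares no provider blocks of its own, and the root module passes its configuration in via the providers meta-argument.

```hcl
# Root module main.tf — the only place a provider block appears
provider "aws" {
  region = "us-east-1"
}

module "my_db1" {
  source = "./modules/db"

  providers = {
    aws = aws
  }
}
```

With this shape, removing the `module "my_db1"` block later still leaves the root `provider "aws"` block in place, so Terraform has the configuration it needs to destroy the module’s resources.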

If you cannot modify the child module then there is a different approach to trick Terraform:

  • Make a new module directory containing only a single .tf file containing a provider configuration equivalent to the one in your existing module.
  • Find the module block you want to remove and instead change its source argument to refer to the new module directory you created in the previous step.
  • Run terraform init to re-initialize with the new module source location.
  • Run terraform apply to apply the change. Terraform should notice that the child module still exists but no longer contains any resources, and so it will propose to delete the existing objects belonging to that module.
  • Once the apply is complete and all of the objects have been destroyed, you can then safely remove the module block because there will no longer be any existing objects depending on the provider configuration inside.

Unless you can modify the real module to not have its own provider configuration, you will need to repeat this process every time you remove a call to this module.
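A sketch of the stub-module trick above, again assuming an AWS provider with illustrative settings:

```hcl
# modules/db-stub/main.tf — contains ONLY a provider configuration
# equivalent to the one inside the real module; no resources
provider "aws" {
  region = "us-east-1"
}
```

```hcl
# Root main.tf — point the existing module block at the stub
module "my_db1" {
  source = "./modules/db-stub" # was "./modules/db"
}
```

After this change, `terraform init` followed by `terraform apply` should plan the destruction of every object recorded under `module.my_db1`, since the module still exists but now declares no resources.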


Hi,

The usual way to destroy something in Terraform is to remove it from your configuration and run terraform apply.

This doesn’t seem to work when using null_resource with a local-exec provisioner configured with when = destroy.

If I simply remove the resource from the Terraform config, the local-exec provisioner never gets called. However, it does get called if I explicitly run a terraform destroy command, or if I update the null_resource’s trigger values prior to running terraform apply.
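A minimal example of the kind of resource in question (the trigger value and command are placeholders):

```hcl
resource "null_resource" "cleanup" {
  # Changing a trigger value forces replacement, which is one of the
  # cases where the destroy-time provisioner does run.
  triggers = {
    script_version = "1"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo 'running cleanup'"
  }
}
```

Deleting this resource block from the configuration removes the provisioner definition along with it, so nothing remains to run the cleanup command during the subsequent apply.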

The added complication is that I have some such resources within reusable modules. The expectation is that if I use one of those modules it will provision desired resources, and if I remove the module from my configuration then the respective resources should be deleted, but this doesn’t happen.

What would you suggest in this situation?

Hi @CostinTanasoiu,

Indeed one of the key disadvantages of provisioners is that they are imperative rather than declarative and so they don’t play well with otherwise-typical usage patterns. That’s part of why we say that Provisioners are a last resort; ideally whatever you are currently doing with a destroy-time provisioner would be done instead by the destroy phase of a custom resource type in a provider, which then allows the managed resource to behave as a sort of “memory” that Terraform can track even when the configuration has been removed.

If possible I’d recommend looking in the registry for a provider that can replace whatever you are doing in that provisioner. If you can’t find something specialized to your goal, you could instead use one of the general-purpose providers with names like “shell” or “exec”, which run arbitrary external commands in response to resource instance lifecycle events, including destroy.

If you can’t replace all of the provisioners in your configuration with resources, then indeed there won’t be any good alternative to running terraform destroy. In that case you may need to split your configuration into two or more parts, so that the parts you regularly need to destroy live in separate configurations from the parts you want to keep indefinitely.