`terraform destroy` - differences from a regular apply

This is just a question of curiosity, not related to any immediate practical issue…

I decided I wanted to understand whether `terraform destroy` differs from simply deleting all the resource blocks from a Terraform configuration and running a normal apply… so I started looking into the code :slight_smile:

It turns out that it is really complicated!

Is there, perhaps, anyone around on these forums who can share how destroy mode came to be written as a significantly different mode of operation, compared to just skipping all the resource blocks when loading the configuration?

Are there subtle semantic differences that I’m overlooking, or is it just the way it is for historical reasons?

Hi @maxb,

Most of the complication in destroy comes from one particular legacy feature of Terraform: providers can reference anything else in the configuration. If providers had been designed to be completely independent of the configuration from the start, then it would be as simple as you say — basically just planning to destroy each resource in the reverse of its create order.

However, since providers can reference other objects in the configuration, the process is more like: “apply a hypothetical configuration with all objects removed, except for those on which a provider depends, but act as if those had been removed after their evaluation, both during plan and again during apply”. To ensure the provider can get these values, we need to be able to evaluate all temporary values as well, in the proper create order, before the provider is configured — which must happen before that provider’s resources begin their destroy operations.
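For concreteness, a minimal configuration of this shape might look like the following (the names are illustrative, and supporting objects such as the IAM role, the subnet variable, and provider authentication are omitted for brevity). The `kubernetes` provider is configured from attributes of a managed resource, so even during destroy, Terraform must evaluate the cluster’s values before it can configure the provider and destroy the namespace:

```hcl
# The provider depends on a managed resource: its configuration
# references attributes of aws_eks_cluster.example.
resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.example.arn # role definition omitted

  vpc_config {
    subnet_ids = var.subnet_ids # variable definition omitted
  }
}

provider "kubernetes" {
  host                   = aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.example.certificate_authority[0].data)
}

# Destroying this resource requires a configured kubernetes provider,
# which in turn requires the cluster's values to have been evaluated.
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}
```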

Naïve application of this process also quickly leads to cycles, due to the evaluation “direction change” that happens at providers. Because a provider is the same logical object for create, read, and destroy, a provider that depends on managed resources can’t be connected to both the create and destroy paths of the graph without creating cycles, so providers are instead placed in the overall topological order indirectly, via the transitive dependencies of their resources.
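To illustrate how the cycle arises (this is a hypothetical sketch, not Terraform’s actual graph code): take a resource A (say, a cluster), a provider P configured from A’s attributes, and a resource B managed by P. Treating A as a single node in the destroy graph produces a cycle, which disappears once A’s evaluation (reading its values from state) is split from its destroy node:

```python
from graphlib import TopologicalSorter, CycleError

# Naive destroy graph, mapping each node to its predecessors:
#   A -> P   (P is configured from A's values)
#   P -> B   (destroying B needs a configured P)
#   B -> A   (destroy in reverse create order: B before A)
naive = {"P": {"A"}, "B": {"P"}, "A": {"B"}}
try:
    list(TopologicalSorter(naive).static_order())
except CycleError as err:
    print("cycle:", err.args[1])

# Splitting A's evaluation from its destroy restores a valid order:
#   eval(A) -> P -> destroy(B) -> destroy(A)
split = {
    "P": {"eval(A)"},
    "destroy(B)": {"P"},
    "destroy(A)": {"destroy(B)"},
}
print(list(TopologicalSorter(split).static_order()))
```

The second graph orders cleanly as `eval(A)`, `P`, `destroy(B)`, `destroy(A)`, which matches the behaviour described above: the provider’s inputs are evaluated in create order, then the provider is configured, then its resources are destroyed in reverse order.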


Gosh… thanks for the insight!

I am so used to conversations in this forum explaining to people that, no, they cannot, in the same plan, plan both the creation of a Kubernetes cluster and the creation of objects whose planning requires the Kubernetes API server to be up… that I’d forgotten that requiring the remote service to be online in order to plan a creation is actually not all that common, and I hadn’t considered practical uses of resource-to-provider dependencies.

I mean, we still don’t recommend this pattern (it’s generically called out in the documentation as being “unsafe”), and a different architecture would likely have been reached if we were designing Terraform today, but we do have to support existing configurations. While there are probably some useful constructions of providers depending on managed resources, the overall behavior is not intuitive for users, and the ability to quickly fall into the same traps you mention with Kubernetes makes it difficult to use reliably.