Instance replacement creates new instance before destroying old one

I have an issue that I cannot explain and I am pretty sure that things used to work.

I have an aws_instance resource. When I update the AMI, the instance gets replaced. But instead of destroying the previous instance and then creating the new one, Terraform creates the new instance first and leaves the destroy for the end.

This causes issues with EBS volume attachments that did not happen a couple of minor versions ago. When the instance that is meant to be destroyed is actually destroyed first, the attachment (skip_destroy = true) goes away with it, and the volume can then be re-attached to the new instance afterwards.
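For context, a minimal sketch of the setup described above (resource and variable names are invented for illustration, not taken from the actual configuration):

```hcl
resource "aws_instance" "app" {
  ami           = var.ami_id # changing this forces replacement
  instance_type = "t3.medium"
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.app.id

  # On destroy, detach is skipped and the volume is left attached;
  # the attachment only disappears when the instance itself is destroyed,
  # which is why destroy-before-create ordering matters here.
  skip_destroy = true
}
```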

Funnily enough, I have a second, similar instance in the same stack that behaves as I expect and gets destroyed before being created.

In all of this there is no create_before_destroy lifecycle directive involved.

Any tips?

Hi @hsanjuan,

Can you confirm that you don’t have create_before_destroy set on any resources in your configuration?

I ask because setting create_before_destroy on one resource requires Terraform to treat other resources in its dependency chain as implicitly create_before_destroy too; otherwise there is no correct way to construct the dependency graph. The usual cause of the behavior you are seeing is adding create_before_destroy to something else in the dependency chain of your aws_instance, thus forcing Terraform to treat the aws_instance as create_before_destroy as well.
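For illustration, a sketch of how this propagation can happen (resource names are invented):

```hcl
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.medium"
  # No lifecycle block here...
}

resource "aws_eip_association" "app" {
  instance_id   = aws_instance.app.id
  allocation_id = aws_eip.app.id

  lifecycle {
    # ...but because this resource depends on aws_instance.app,
    # setting create_before_destroy here forces Terraform to treat
    # aws_instance.app as create_before_destroy too, since no valid
    # replacement ordering exists otherwise.
    create_before_destroy = true
  }
}
```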


Thank you, that was the issue. I had a create_before_destroy resource with an explicit dependency on this instance. I had no idea that the lifecycle setting would extend to everything in the dependency chain.