Data Sources and Destroy Time Provisioners

I’m trying to figure out how to rewrite my destroy-time provisioners so that they no longer reference external data sources (since that usage has been deprecated), and I’m not sure how to go about it.

My use case is this: when I destroy a VM, I have a number of cleanup tasks that use credentials for some external systems. Currently, these credentials come from a Vault data source. I don’t want to hard-code the credentials, because they would then end up in my git repo. I know that with null resources I can store the items I need for destroy in the triggers block. Is there anything equivalent for other resource types? Or is there another way to provide credentials to a destroy-time provisioner that doesn’t require hard-coding them into the Terraform configuration?
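For reference, the null_resource pattern mentioned above looks roughly like this (the resource names, secret path, and script name here are illustrative, not from the original question):

```hcl
# Hypothetical sketch of the "triggers" pattern: values stored in
# triggers at create time remain available via self at destroy time,
# which is the only object a destroy provisioner may reference.
resource "null_resource" "vm_cleanup" {
  triggers = {
    vm_name = var.vm_name
    # Note: storing a credential in triggers writes it to state in
    # plain text, so the state backend must be treated as sensitive.
    api_token = data.vault_generic_secret.cleanup.data["token"]
  }

  provisioner "local-exec" {
    when    = destroy
    command = "./cleanup.sh ${self.triggers.vm_name}"
    environment = {
      API_TOKEN = self.triggers.api_token
    }
  }
}
```

The question is essentially whether this same trick exists for resources other than null_resource, which do not have a free-form triggers map.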


Hi @ajchiarello,

Unfortunately, destroy-time provisioners are a flawed design that is the root cause of numerous graph-related bugs we plan to fix in the next major release. This restriction on what they can access is honestly a compromise to avoid removing the feature altogether: we can at least keep it working as long as destroy-time work never depends on anything else in the configuration.

With that said, my main recommendation would be to find a way to solve your problem that doesn’t involve provisioners at all, if possible. There’s no general alternative that works everywhere, but a common approach we’ve used and seen others use is to make the virtual machines themselves responsible for their own bootstrapping and teardown actions.

The usual way to get that done is to include extra scripting in the virtual machine images that takes appropriate actions on instance boot and instance shutdown, e.g. using systemd or something equivalent to it. That way Terraform’s job is just to start up and shut down the virtual machines, and the VMs can otherwise take care of themselves.
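As a sketch of that idea, a systemd unit baked into the VM image could run a teardown script during shutdown (the unit name and script path here are made up for illustration):

```ini
# /etc/systemd/system/vm-teardown.service (hypothetical)
# ExecStop runs when the unit is stopped, which happens during a
# clean shutdown, so the cleanup script runs before poweroff while
# the network is still up (due to the ordering on network-online).
[Unit]
Description=Run cleanup tasks on VM shutdown
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/local/bin/vm-teardown.sh

[Install]
WantedBy=multi-user.target
```

With something like this enabled in the image, Terraform only needs to start and stop the instance; the teardown logic travels with the VM itself.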

It sounds like in your case that would require the VMs to have access to some credentials from Vault, so this particular pattern might not be a good fit for you. In principle the VMs could be granted access to the relevant Vault credentials using Vault’s cloud auth methods and ACL policies, but if your VMs aren’t already known to Vault then I understand that this would be quite a significant architectural change.

I wound up migrating my cleanup actions to an Ansible playbook that pulls the necessary credentials from Vault itself, and calling that playbook from a local-exec destroy provisioner that passes only the resource’s own name.
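That arrangement might look something like this (the resource type, playbook path, and variable names are illustrative, not taken from the actual configuration):

```hcl
# Hypothetical sketch: the destroy provisioner references only self,
# which keeps it within the restriction. The playbook is responsible
# for fetching its own credentials from Vault at run time, so no
# secret ever appears in the Terraform configuration or state.
resource "vsphere_virtual_machine" "example" {
  name = var.vm_name
  # ... other VM arguments ...

  provisioner "local-exec" {
    when    = destroy
    command = "ansible-playbook cleanup.yml --extra-vars 'vm_name=${self.name}'"
  }
}
```

Because the only interpolation is `self.name`, the provisioner has no dependency on data sources or other resources, and the credential handling moves entirely into the playbook.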