Option to use existing resource

I have a possibly idealistic goal to create/destroy/create/destroy/… an environment regularly, so that I am sure no dependencies exist from one environment build to another.

It sounds simple enough, but in my experience with GCP, several things get in the way. For example, crypto key rings are “create only”: they cannot be deleted and recreated. Certain Google service accounts are created behind the scenes when features are enabled, and similarly can’t be removed once created. And some resources live in common projects that belong to other teams and carry delete-permission restrictions, such as a shared VPC.

From what I understand, Terraform’s approach to existing resources is “import”, which is a one-off mechanism per resource. I could maintain a batch file with a bunch of import commands, though building resource addresses from values known only to Terraform is awkward. And in other cases, like the crypto key ring, the resource won’t exist on the first run and so can’t be imported.

Another approach has been to use two blocks for the same resource, a “data” block and a “resource” block, with logic to use the pre-existing “data” resource if it isn’t null and create it otherwise. But that runs into problems with “depends_on”, which only accepts static references and can’t depend on a conditionally computed resource.
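A minimal sketch of that “two blocks” workaround, assuming a key ring as the example and a caller-supplied flag saying whether it already exists (all names here are placeholders):

```hcl
variable "key_ring_exists" {
  type    = bool
  default = false
}

# Look up the key ring only when the caller says it already exists.
data "google_kms_key_ring" "existing" {
  count    = var.key_ring_exists ? 1 : 0
  name     = "my-key-ring"
  location = "us-central1"
}

# Otherwise, create it.
resource "google_kms_key_ring" "new" {
  count    = var.key_ring_exists ? 0 : 1
  name     = "my-key-ring"
  location = "us-central1"
}

locals {
  # Pick whichever block actually produced the key ring.
  key_ring_id = var.key_ring_exists ? data.google_kms_key_ring.existing[0].id : google_kms_key_ring.new[0].id
}
```

Downstream blocks can consume `local.key_ring_id` as an expression, but a `depends_on` cannot point at this conditional pair, which is exactly the limitation described above.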

Both of these are ugly.

I’d like to see a way to declare a “create or use existing” mechanism for a resource. For example, let me place import_if_exists = true in the “resource” block.
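Something like this hypothetical syntax (not valid Terraform, just an illustration of the wish):

```hcl
resource "google_kms_key_ring" "main" {
  name     = "my-key-ring"
  location = "us-central1"

  # Hypothetical: adopt the existing resource if one matches,
  # otherwise create it.
  import_if_exists = true
}
```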

What is best practice for this scenario?

Import is only to take control of a resource that happened to be created elsewhere, but now should be managed by Terraform. It doesn’t sound like that is what you want.

Instead, for resources which are maintained elsewhere (whether with Terraform or not), the approach would be to use data sources to fetch information as needed. You can also use remote state if the resource is still maintained by Terraform (but by another team, for example). For some things it can also be simplest to just put the IDs, etc. directly into your code, with the caveat that if they were to change you’d need to update your code - so this is useful for things that “never change”, like a shared VPC.
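The three options above could look roughly like this (project, bucket and network names are placeholders):

```hcl
# Option 1: a data source looking up the shared VPC directly.
data "google_compute_network" "shared_vpc" {
  name    = "shared-vpc"
  project = "host-project-id"
}

# Option 2: remote state, if another team still manages it with Terraform.
data "terraform_remote_state" "network" {
  backend = "gcs"
  config = {
    bucket = "network-team-tf-state"
    prefix = "shared-vpc"
  }
}

# Option 3: hard-code the ID for things that "never change".
locals {
  shared_vpc_self_link = "projects/host-project-id/global/networks/shared-vpc"
}
```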

Sorry I don’t see the solution.

Another factor for me is the destroy process. I can’t use terraform destroy, because I use GKE, and (A) Terraform wants to destroy the Kubernetes apps one by one before taking out the cluster, which takes unnecessarily long, (B) some GCP features create resources that Terraform doesn’t know about and doesn’t destroy, and (C) the GCP Terraform provider hangs while deleting the Kubernetes cluster.

So I have to use a Python script that invokes gcloud to delete the environment, and then delete the Terraform state.

Consider data storage. On the first run in a new GCP project, I need to create a storage bucket for app data. Next, I want to be able to rebuild without destroying the data, so I destroy the environment with the Python script but keep the storage bucket. On the second run, Terraform has brand-new state, and it needs to attach to that storage bucket instead of creating it.

The solution for declaring the storage bucket in tf is - what?

It sounds like you want to create the storage bucket separately (possibly via Terraform, but with a different set of code) and then reference it from your main code (via remote state, a data source or just a hard-coded name).

You don’t want a resource for the storage bucket in your main code at all, as you don’t want a terraform destroy of that code to touch the bucket.
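A sketch of that split, with placeholder names: the bucket is created once outside the main code, and the main code only reads it through a data source.

```hcl
# The bucket lives in separate code (or was created by hand);
# the main code never declares it as a resource.
data "google_storage_bucket" "app_data" {
  name = "my-project-app-data"
}

# Example consumer: an object written into the long-lived bucket.
resource "google_storage_bucket_object" "example" {
  name   = "config/app.json"
  bucket = data.google_storage_bucket.app_data.name
  source = "files/app.json"
}
```

With this layout, a terraform destroy of the main code removes the object but leaves the bucket (and its data) untouched, and the second run attaches to it via the data source instead of trying to create it.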

Gosh how complex. I give.