I have a Terraform GCE resource being created using the following module:
resource "google_compute_address" "gke_proxy1" {
...
}
resource "google_compute_disk" "gke_proxy1" {
...
}
resource "google_compute_instance" "gke_proxy1" {
...
}
However, at some point the google_compute_address for this specific GCE instance was created outside of the Terraform scripts, so when I try to apply the TF script it attempts to create the compute address again, which is not what I want. For example, using:
terraform plan -var "env=dev" -target module.gke-proxy-primary;
If I run the following to show the real-world state:
terraform state show module.gke-proxy-secondary.google_compute_instance.gke_proxy1
The output shows the compute address on the network interfaces as expected:
network_interface {
name = "nic0"
...
}
network_interface {
name = "nic1"
...
}
So I guess I need to rebuild the TF state from the real-world infra, since there's been drift between the state and the real world. I have tried running:
terraform refresh -var "env=dev" -target module.gke-proxy-primary;
which gives me:
module.gke-proxy-primary.google_compute_disk.gke_proxy1: Refreshing state... [id=projects/proj1/zones/europe-west2-a/disks/gke-proxy1]
module.gke-proxy-primary.google_compute_instance.gke_proxy1: Refreshing state... [id=projects/proj1/zones/europe-west2-a/instances/gke-proxy1]
So it's not refreshing the remote state for google_compute_address.gke_proxy1 from the real world as expected. Any ideas what I'm doing wrong?
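In case it helps, my understanding is that `terraform refresh` only updates resources already tracked in state, and that `terraform import` is the way to adopt a pre-existing resource into state. A sketch of what I'd expect to run, assuming the standard import ID format for google_compute_address and inferring the region europe-west2 from the zone europe-west2-a shown above:

```shell
# Sketch: adopt the pre-existing address into the module's state so
# a subsequent plan no longer tries to create it.
# Import ID format assumed: projects/{project}/regions/{region}/addresses/{name}
terraform import -var "env=dev" \
  'module.gke-proxy-primary.google_compute_address.gke_proxy1' \
  projects/proj1/regions/europe-west2/addresses/gke-proxy1
```

After the import, a `terraform plan -var "env=dev" -target module.gke-proxy-primary` should show no create action for the address if the configuration matches the real resource.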