module "instance" {
  source = "../module/instances"
  name   = "vm1"
  # Fill variables
}
Expected Behavior
No changes. Your infrastructure matches the configuration.
Actual Behavior
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
  # module.gerrit-instance.google_compute_instance.vm_instance must be replaced
-/+ resource "google_compute_instance" "vm_instance" {
      # ...
      ~ id          = "~~" -> (known after apply)
      ~ instance_id = "~~" -> (known after apply)
      # ...
        name        = "vm1"
      # ...
      ~ scheduling {
          # (5 unchanged attributes hidden)
        }

      # (3 unchanged blocks hidden)
    }
Steps to Reproduce
terraform init
terraform apply
terraform plan
Additional Context
Create a “vm1” instance on GCP.
To check the result of the apply, run $ terraform plan
The planned action is to remove “vm1” and create a replacement “vm1”.
Should Terraform destroy and create a replacement, instead of updating in place, every time I modify an argument?
See the “forces replacement” comment? That’s telling you why Terraform believes it needs to replace this resource.
I don’t work with GCE so I can’t easily test, but here’s my guess as to what is going on:
In your configuration you’re setting "false" as a string, and this is apparently being accepted and silently converted somewhere … but stored in a way that makes it register as a change when terraform-provider-google reads back the instance state on the next run.
Changing it to an actual boolean false in your configuration might fix this.
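To make that concrete, here is a minimal sketch of the two forms. The block and attribute names (confidential_instance_config, enable_confidential_compute) are my assumption about which argument is quoted in your configuration; substitute whichever argument you actually set:

```hcl
resource "google_compute_instance" "vm_instance" {
  name = "vm1"
  # ...

  # Assumed block/attribute names - replace with the argument
  # you quoted as a string in your own configuration.
  confidential_instance_config {
    # enable_confidential_compute = "false"  # string: coerced on write, can
    #                                        # read back as a perpetual diff
    enable_confidential_compute = false      # boolean: matches the schema type
  }
}
```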
Assuming I’m right, such a minor mistake producing such confusing behaviour is probably worth a bug report - it’s pretty user-unfriendly - but it’s consistent with other problems I’ve seen caused by mistakes in Terraform provider code.
This seems like something that might happen if the provider schema doesn’t match the underlying API: if the remote API only returns its “confidential instance config” when that nested flag is true, then the provider needs some extra logic to avoid telling Terraform Core that the block has been removed when it refreshes the object from the remote API.
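Until the provider handles that case, one possible workaround on the configuration side is to tell Terraform to ignore changes to the offending block. This is only a sketch, and it assumes the perpetual diff really does come from confidential_instance_config:

```hcl
resource "google_compute_instance" "vm_instance" {
  name = "vm1"
  # ...

  lifecycle {
    # Assumes this block is the source of the spurious diff. Ignoring it
    # stops Terraform from planning a replacement, at the cost of no
    # longer managing the block's contents through Terraform.
    ignore_changes = [confidential_instance_config]
  }
}
```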
I would suggest reporting this in an issue in this provider’s own repository.