Hello,
It’s strange, so let me back up and provide more context. I’m using Azure. The Azure resource group that houses the resources is already created, and I don’t have permissions to edit or alter it, but I do have permissions to add to it. The same goes for the Azure service principal / service connection that Terraform uses as its identity.
I declare the resource group in my manifest (azurerm_resource_group.mygroup), and my workflow is to run terraform init and then terraform import <resource address> <resource group ID> so that Terraform can manage it going forward.
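Concretely, the relevant piece of my manifest looks roughly like this (resource name and tag taken from the plan output below; the location value is illustrative):

```hcl
# Declared in my manifest, but the group already exists in Azure,
# so it has to be imported rather than created.
resource "azurerm_resource_group" "k8s" {
  name     = "DEV-Lift_Stihl-Dev_CentralUS"
  location = "centralus"

  tags = {
    environment = "stihldevlift"
  }
}
```

with the import step being terraform import azurerm_resource_group.k8s /subscriptions/<subscription-id>/resourceGroups/DEV-Lift_Stihl-Dev_CentralUS.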
From there I run terraform validate, plan, and apply, and they all work fine. But when I add a separate resource (one that has nothing to do with the azurerm_kubernetes_cluster, like an external database), terraform plan shows the following:
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the
last "terraform apply":
  # azurerm_resource_group.k8s has changed
  ~ resource "azurerm_resource_group" "k8s" {
        id   = "/subscriptions/578e0f86-0491-4137-9a4e-3a3c0ff28e91/resourceGroups/DEV-Lift_Stihl-Dev_CentralUS"
        name = "DEV-Lift_Stihl-Dev_CentralUS"
      ~ tags = {
          - "environment" = "stihldevlift" -> null
        }
        # (1 unchanged attribute hidden)
        # (1 unchanged block hidden)
    }
Notice that tags is shown being set to null, despite my manifest defining it as non-empty. Right after this I get the following.
Terraform will perform the following actions:
  # azurerm_kubernetes_cluster.k8s will be created
  + resource "azurerm_kubernetes_cluster" "k8s" {
      + dns_prefix        = "stihldevliftrgk8s"
      + fqdn              = (known after apply)
      + id                = (known after apply)
      + kube_admin_config = (known after apply)
      ... (etc)
It just built this cluster resource, so why does it think it has to build it again? I suspect the tags drift happens because I don’t have permissions to manipulate the resource group, and I also think this is contributing to the cluster issue. I can work around the tags drift by adding tags = {}. But by that point, Terraform already thinks it has to build a new cluster, and apply then sees the existing resource and tells me I need to import it, per the errors above. So somehow it “forgot” that it just created it. How can I debug what’s happening there?
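The only state-inspection steps I know to try so far are these (a sketch; the resource address assumes the naming from the plan output above):

```sh
# Was the cluster actually recorded in state after the apply?
terraform state list | grep kubernetes

# If it's there, see what Terraform recorded for it
terraform state show azurerm_kubernetes_cluster.k8s

# Plan without refreshing from Azure, to check whether the refresh
# step is what makes Terraform "forget" the resource
terraform plan -refresh=false
```

None of these have turned up anything obviously wrong yet.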
There are a couple of contributing factors I’m trying to sniff out here. Terraform warns that registering resource providers manually can lead to hard-to-decipher errors; maybe I’m missing a provider registration I need but am not aware of? Or maybe it’s a permissions issue, where being unable to fully manage an imported resource causes Terraform to “malfunction” in a way that confuses its state?
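For reference, the provider-registration warning comes from this setting, which we use because the service principal can’t register Azure resource providers itself (a sketch, assuming azurerm 3.x; I believe 4.x renamed this to resource_provider_registrations):

```hcl
provider "azurerm" {
  features {}

  # We register Azure resource providers manually because the service
  # principal lacks permission; azurerm warns that a missing registration
  # can then surface later as a hard-to-decipher error.
  skip_provider_registration = true
}
```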
Thanks for any direction!