Hi,
I have a Kubernetes cluster and I want to have some sort of zero downtime on my cluster when I change the OS size, for example.
So I tried to use the lifecycle feature "create_before_destroy". Since I guessed that the cluster name could be an issue, I used a "random_id" resource and combined it with my cluster name.
My initial cluster name is "eolementhe-top-k8s-cluster-143e" (with “143e” being the hex value of the random_id).
So part of my code looks like this:
resource "random_id" "server" {
byte_length = 2
}
resource "azurerm_kubernetes_cluster" "eolementhe" {
lifecycle {
ignore_changes = [default_node_pool[0].node_count]
create_before_destroy = true
}
name = "${local.eolementhe.azure.kubernetes_cluster_name}-${random_id.server.hex}"
location = azurerm_resource_group.eolementhe.location
resource_group_name = azurerm_resource_group.eolementhe.name
# The dns_prefix must contain between 3 and 45 characters, and can contain
# only letters, numbers, and hyphens. It must start with a letter and must end
# with a letter or a number.
dns_prefix = local.eolementhe.name_prefix
kubernetes_version = var.orchestrator_version
role_based_access_control {
enabled = true
}
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = file(local.eolementhe.ssh_public_key)
}
}
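For reference, my understanding is that random_id only generates a new value when the resource itself is replaced, or when one of its "keepers" changes; my actual code has no keepers at all. A hypothetical variant that ties the suffix to the node VM size would look like the sketch below (var.node_vm_size is an assumption for illustration, it is not in my real code):

resource "random_id" "server" {
  # Hypothetical: force a new suffix whenever the node VM size changes, so the
  # replacement cluster gets a fresh name before the old one is destroyed.
  keepers = {
    vm_size = var.node_vm_size
  }
  byte_length = 2
}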
When launching my “terraform apply” through GitHub Actions, it seems that a new random_id is properly created, but I get an error saying that my cluster resource already exists and needs to be imported into the state. However, that resource is already present in the state (I checked the tfstate file twice).
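For reference, the state can also be double-checked from the CLI instead of reading the raw tfstate file (the resource address comes from my config above):

terraform state list
terraform state show azurerm_kubernetes_cluster.eolementhe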
Did I miss something related to the use of this lifecycle feature?