AAD-integrated Kubernetes cluster no longer able to use the kubernetes provider after disabling the local admin account

On an AAD-integrated Kubernetes cluster, we are no longer able to use the kubernetes provider after disabling the local admin account. Terraform now fails with:

Error: Invalid index
│ 
│   on provider.tf line 15, in provider "kubernetes":
│   15:   client_certificate     = base64decode(module.aks.aks.kube_admin_config.0.client_certificate)
│     ├────────────────
│     │ module.aks.aks.kube_admin_config is empty list of object
│ 
│ The given key does not identify an element in this collection value.
╵
╷
│ Error: Invalid index
│ 
│   on provider.tf line 16, in provider "kubernetes":
│   16:   client_key             = base64decode(module.aks.aks.kube_admin_config.0.client_key)
│     ├────────────────
│     │ module.aks.aks.kube_admin_config is empty list of object
│ 
│ The given key does not identify an element in this collection value.
╵
╷
│ Error: Invalid index
│ 
│   on provider.tf line 17, in provider "kubernetes":
│   17:   cluster_ca_certificate = base64decode(module.aks.aks.kube_admin_config.0.cluster_ca_certificate)
│     ├────────────────
│     │ module.aks.aks.kube_admin_config is empty list of object
│ 
│ The given key does not identify an element in this collection value.

Our provider block is shown below:

provider "kubernetes" {
  host                   = module.aks.aks.kube_admin_config.0.host
  username               = module.aks.aks.kube_admin_config.0.username
  password               = module.aks.aks.kube_admin_config.0.password
  client_certificate     = base64decode(module.aks.aks.kube_admin_config.0.client_certificate)
  client_key             = base64decode(module.aks.aks.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(module.aks.aks.kube_admin_config.0.cluster_ca_certificate)
}

The error messages certainly make sense, as there are no local admin credentials available once local account access is disabled.

How should we therefore structure our provider block so that Terraform uses its own service principal (which has access to the cluster through AAD) to manage Kubernetes objects on that cluster?

Thanks in advance!

This is the solution that worked for us:

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.this.kube_config.0.host
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.this.kube_config.0.cluster_ca_certificate)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "./kubelogin"
    args = [
      "get-token",
      "--login",
      "spn",
      "--environment",
      "AzurePublicCloud",
      "--tenant-id",
      var.tenant_id,
      "--server-id",
      var.aad_server_id,
      "--client-id",
      var.client_id,
      "--client-secret",
      var.client_secret
    ]
  }
}
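
For reference, the solution above assumes a data source along these lines (the name/resource-group wiring via variables is illustrative, not from the original post). Note that on an AAD-integrated cluster with local accounts disabled, kube_config (unlike kube_admin_config) is still populated with the host and CA certificate; only authentication is delegated to the exec plugin.

```hcl
# Hypothetical data source backing the provider block above;
# var.cluster_name and var.resource_group_name are placeholders
# for your own cluster's details.
data "azurerm_kubernetes_cluster" "this" {
  name                = var.cluster_name
  resource_group_name = var.resource_group_name
}
```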

This method of authentication relies on the kubelogin utility (GitHub - Azure/kubelogin: a Kubernetes credential (exec) plugin implementing Azure authentication), which allows a non-interactive cluster login using an Azure service principal.

Note that because we're using Terraform Cloud, the workaround was to include the kubelogin binary in the source repo.
