EKS cluster not destroyed completely. Getting `Error: Unauthorized`

I’m following the instructions from the Provision an EKS Cluster (AWS) tutorial.

  1. git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster
  2. terraform init
  3. terraform apply
  4. aws eks --region $(terraform output region) update-kubeconfig --name $(terraform output cluster_name)
    • First problem: Provided region_name '"us-east-2"' doesn't match a supported format. (terraform output prints string values wrapped in quotes, which the AWS CLI rejects.)
    • Easily resolved by passing the parameters directly: aws eks --region 'us-east-2' update-kubeconfig --name 'training-eks-orAxPAav' (see the sketch after this list).
    • But it would be nicer if this just worked.
  5. I follow the rest of the examples, metrics server, dashboard, authenticating. All work fine.
  6. Last step: terraform destroy
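
(As referenced in step 4 above, a minimal sketch of the quoting fix using terraform output -raw, assuming Terraform 0.14 or later where -raw is available. It prints output values without the surrounding quotes, so the commands compose directly:)

# terraform output quotes string values by default; -raw prints them bare
aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)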

During the destroy (terraform-destroy.out.txt, 100.7 KB), things just stop somewhere in the middle and I get this on stderr:

Error: Unauthorized

I run a second terraform destroy (terraform-destroy-2.out.txt, 40.1 KB). More resources get destroyed, but it again stops in the middle with Error: Unauthorized on stderr.

I run a third terraform destroy, and this time it halts immediately (terraform-destroy-3.out.txt, 3.8 KB), with this on stderr:

Error: Delete "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused

Any further attempts to destroy end the same way.
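
For context, the tutorial configures the Kubernetes provider from data sources that read the cluster, roughly like this (a sketch from memory; the block names and the module output are assumptions and may differ from the actual tutorial code):

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  # endpoint, CA cert, and token all come from reads against the live cluster
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

Once the cluster is partially destroyed and those data sources can no longer be read, host and token come up empty and the provider falls back to its default endpoint, http://localhost, which would explain the connection-refused error above.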

As a workaround, I manually remove the offending resource from the state:

$ terraform state rm module.eks.kubernetes_config_map.aws_auth[0]
Removed module.eks.kubernetes_config_map.aws_auth[0]
Successfully removed 1 resource instance(s).

After that, a final destroy succeeds with 0 resources destroyed.

Could this be a bug in Terraform or the EKS module, or is there a better example I could be following as a basis for my first Terraform-managed EKS environment?

Can you confirm you’re running this with the following IAM permissions?

Yes. I also tried creating an IAM role with that exact policy (and adding an assume_role block to the provider) and got the same result as reported.

I am getting the same error. I don't think this is an AWS IAM-related error, since Terraform is trying to reach a local endpoint: http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth.

It seems that removing all version constraints and pulling in the latest provider versions fixes the issue, as sketched below.
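
For example (a sketch; assuming the tutorial pins the provider in a required_providers block), drop the version constraint and re-run init with -upgrade to pull the newest release:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # version constraint removed so init -upgrade can pick the latest release
    }
  }
}

$ terraform init -upgrade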

After spending more time on this, it seems the issue is still intermittent even with the latest provider versions: sometimes it works, sometimes it doesn't.

My current best guess is expired Kubernetes auth tokens. After I get the first Unauthorized error, running terraform refresh before the next terraform destroy seems to get things working again, with no need to force-remove anything from state.

I hope this gets fixed, but until then I'll make a habit of running a refresh before a destroy.
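
One way to sidestep token expiry altogether (a sketch, not from the guide; assumes the AWS CLI is on PATH and a kubernetes provider version that supports the exec block) is to have the provider mint a fresh token on every run instead of reading one from a data source:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a short-lived token each time Terraform talks to the cluster,
  # so a long apply or destroy never runs with a stale credential.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}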

The guide includes (now?) a -raw option for terraform output, which addresses the quoting problem in step 4.