EKS cluster not destroyed completely. Getting `Error: Unauthorized`

I’m following the instructions from the Provision an EKS Cluster (AWS) tutorial.

  1. git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster
  2. terraform init
  3. terraform apply
  4. aws eks --region $(terraform output region) update-kubeconfig --name $(terraform output cluster_name)
    • First problem: Provided region_name '"us-east-2"' doesn't match a supported format.
    • Easily resolved by providing the parameters directly: aws eks --region 'us-east-2' update-kubeconfig --name 'training-eks-orAxPAav'. Apparently terraform output prints the value wrapped in quotes, and the AWS CLI rejects the quoted string.
    • But it would be nicer if this just worked.
  5. I follow the rest of the tutorial (metrics server, dashboard, authenticating) and it all works fine.
  6. Last step: terraform destroy

During the destroy (terraform-destroy.out.txt, 100.7 KB), things just stop somewhere in the middle and I get this on stderr:

Error: Unauthorized

I do a second terraform destroy (terraform-destroy-2.out.txt, 40.1 KB). More things get destroyed, but it again stops in the middle with Error: Unauthorized on stderr.

I do a third terraform destroy and this time it halts immediately (terraform-destroy-3.out.txt, 3.8 KB), with this on stderr:

Error: Delete "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused

Any further attempts to destroy end the same way.

As a workaround, I manually remove the offending resource from the state:

$ terraform state rm module.eks.kubernetes_config_map.aws_auth[0]
Removed module.eks.kubernetes_config_map.aws_auth[0]
Successfully removed 1 resource instance(s).

After that, a final destroy succeeds with 0 resources destroyed.

Could this be a bug in Terraform or the EKS module, or is there a better example I could be following as a basis for my first Terraform-managed EKS environment?

Can you confirm you’re running this with the following IAM permissions?

Yes. I also tried creating an IAM role with that exact policy (and added assume_role to the provider) and got the same result as reported.
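For completeness, the provider block looked roughly like this (the account ID and role name below are just placeholders):

provider "aws" {
  region = "us-east-2"

  assume_role {
    # Placeholder ARN; the real role had the policy mentioned above attached
    role_arn = "arn:aws:iam::111111111111:role/terraform-eks-admin"
  }
}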

I am getting the same error. I do not think this is an AWS IAM related error, since Terraform is trying to reach a local endpoint: http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth.
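For context, the tutorial wires the Kubernetes provider to the cluster roughly like this (a sketch from memory, so the exact names may differ):

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

If those data sources cannot be read during the destroy, the provider ends up with no host configured and falls back to its default of localhost, which would explain the connection-refused error.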

It seems that removing all version constraints and getting the latest versions of providers fixes the issue.
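Concretely, that can look something like this (a sketch; the file name follows the tutorial's layout and may differ in your setup):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # no version constraint
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
      # no version constraint
    }
  }
}

followed by terraform init -upgrade to pull in the latest provider versions.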

After spending more time with this, it seems it is still an intermittent issue even with the latest provider versions. Sometimes it works, sometimes it doesn't.

My current best guess is expired Kubernetes auth tokens. So, after I get the first Unauthorized error, doing a terraform refresh before the next terraform destroy seems to get things working again. No need to force-remove anything from state.
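In other words, the sequence that has been working for me is simply:

terraform refresh   # re-reads the data sources, which should issue a fresh EKS auth token
terraform destroy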

I hope this gets fixed, but until then I'll build a habit of doing a refresh before a destroy.

The guide includes (now?) a -raw option on the terraform output calls, which avoids the quoting problem from step 4.
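With it, the kubeconfig step from the original post becomes (assuming the same output names, region and cluster_name):

aws eks --region "$(terraform output -raw region)" update-kubeconfig --name "$(terraform output -raw cluster_name)"

-raw prints the bare string value, so the AWS CLI no longer sees the surrounding quotes.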

I have the exact same error. It seems related to the aws-auth config map.
It is the same Terraform template I have been using for a year, and this error only started appearing a few months ago. I don't know whether it is due to upgrading the provider versions or to the new Terraform version.

So far, doing terraform refresh before terraform destroy has been working for me.

I just ran into this. Performing a refresh before a destroy did not solve my problem. I had to remove module.eks.kubernetes_config_map.aws_auth[0] from my state in order to proceed with the destroy.

Terraform v0.14.8

I hit the same issue as you, and when I tried deleting it from the state with
terraform state rm module.eks.kubernetes_config_map.aws_auth[0]
it complained that it could not find the resource, even though terraform state list shows it. I was only successful when I put the address in quotes: terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]'. Most likely the shell otherwise treats the [0] as a glob pattern.

To this date, this issue has still not been resolved.