I’ve been stuck on a persistent issue for a few days now and could really use some guidance. I have a set of Terraform files that set up an EKS cluster, including the VPC, IAM roles, policies, and other configuration. I also have a GitHub Actions workflow that interacts with this setup to deploy various Kubernetes resources.
My challenge is adding a manually created IAM role and policy (with OIDC) to the aws-auth ConfigMap through Terraform. When I update aws-auth manually, everything works perfectly. However, when I try to automate the update through Terraform during the initial setup, it fails. Specifically, if I tie the update to the node group creation, Terraform errors out, reporting that the aws-auth ConfigMap already exists.
Here’s what I’ve tried:
Applying the aws-auth update manually — this works without issues.
Automating the update through Terraform, which either fails during node group creation or reports that aws-auth already exists if I add a dependency on the node group (simplified sketch below).
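For context, the automated attempt looks roughly like the sketch below. It’s simplified, and the resource and role names are placeholders rather than my actual code:

```hcl
# Simplified sketch of the automated aws-auth update that fails (placeholder names).
# As I understand it, EKS creates/updates the aws-auth ConfigMap itself when the
# managed node group is created, so declaring it as a plain kubernetes_config_map
# collides with the object EKS already wrote.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = aws_iam_role.github_actions.arn # the manually created OIDC role
        username = "github-actions"
        groups   = ["system:masters"]
      }
    ])
  }

  depends_on = [aws_eks_node_group.default]
}
```

With the depends_on in place I get the “already exists” error; without it, the apply fails during node group creation.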
What could I be doing wrong, and what are the best practices for successfully automating this process?
I’ve already searched for solutions but haven’t found anything conclusive.
I have just run into the same problem. Have you figured out a solution yet? I can’t even update aws-auth manually, because I’m creating my EKS cluster with Terraform Cloud and only the assumed role that Terraform Cloud used to create the cluster has access to it.
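For anyone hitting the same wall: at this point the only identity that can touch aws-auth is the role that created the cluster, so the kubernetes provider has to authenticate as that role from inside the run. A minimal sketch of that wiring, assuming placeholder resource names, looks something like this:

```hcl
# Sketch only (placeholder resource names). The token comes from the run's own
# AWS credentials, i.e. the assumed role that created the cluster, which by
# default is the only IAM identity EKS maps to cluster admin.
data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```

So any change to aws-auth has to happen inside the Terraform run itself, not from my workstation.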
I found this discussion from two years ago. It has some options, but none of them seem great.