AWS EKS - Terraform: Error configmap "aws-auth" does not exist

Hello Team,

I am trying to create an EKS cluster with the terraform-aws-modules/terraform-aws-eks module (Terraform module to create an Elastic Kubernetes (EKS) cluster and associated resources). The EKS cluster itself is created successfully, but the apply fails at the mapUsers step, where I have added one IAM user that should be added to the kubeconfig file.

provider "aws" {
region = "us-east-1"
}

//data "aws_eks_cluster" "default" {
// name = module.eks_cluster_creation.cluster_id
//}

data "aws_eks_cluster_auth" "default" {
name = module.eks_cluster_creation.cluster_name
}

provider "kubernetes" {
host = module.eks_cluster_creation.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_cluster_creation.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.default.token
}

terraform {
backend "s3" {
bucket = "statefile"
key = "tf/eks.tf"
region = "us-east-1"
}
}

locals {
name = "Sandbox-Cluster-Test"
region = "us-east-1"
azs = slice(data.aws_availability_zones.myaz.names, 0, 3)
tags = {
Example = local.name
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}

data "aws_vpc" "myvpc"{
filter {
name = "tag:Name"
values = ["VPC-DevOps"]
}
}
data "aws_availability_zones" "myaz" {
state = "available"
}

resource "aws_subnet" "public-subnets" {
count = 3
vpc_id = data.aws_vpc.myvpc.id
cidr_block = var.public-subnet-cidr1[count.index]
tags = {
Name = "Public-k8s-subnet"
}
availability_zone = "${data.aws_availability_zones.myaz.names[count.index]}"
map_public_ip_on_launch = true
}

data "aws_route_table" "publicrt" {
vpc_id = data.aws_vpc.myvpc.id
filter {
name = "tag:Name"
values = ["public-route-table"]
}
}

resource "aws_route_table_association" "public-route-1" {
count = "${length(var.public-subnet-cidr1)}"
subnet_id = "${element(aws_subnet.public-subnets.*.id, count.index)}"
route_table_id = data.aws_route_table.publicrt.id
}

module "eks_nodegroup_role" {
source = "./eks-role"
}

module "eks_cluster_creation" {
source = "terraform-aws-modules/eks/aws"
version = "19.13.1"
cluster_name = local.name

iam_role_arn = module.eks_nodegroup_role.eks_role
cluster_endpoint_public_access = true
cluster_endpoint_private_access = false
subnet_ids = flatten([aws_subnet.public-subnets[*].id])
vpc_id = data.aws_vpc.myvpc.id
manage_aws_auth_configmap = true
aws_auth_users = [
{
userarn = "arn:aws:iam::xxxxxxxxx:user/usertest"
username = "usertest"
groups = ["system:masters"]
}
]

aws_auth_accounts = [
"xxxxxxxxx"
]

depends_on = [module.eks_nodegroup_role]
}

resource "null_resource" "kubectl" {
provisioner "local-exec" {
command = "aws eks --region us-east-1 update-kubeconfig --name ${local.name}"
}
depends_on = [module.eks_cluster_creation]
}
==================================================================
**Output**
[DEBUG] [aws-sdk-go]
╷
│ Error: The configmap "aws-auth" does not exist
│
│   with module.eks_cluster_creation.kubernetes_config_map_v1_data.aws_auth[0],
│   on .terraform/modules/eks_cluster_creation/main.tf line 553, in resource "kubernetes_config_map_v1_data" "aws_auth":
│  553: resource "kubernetes_config_map_v1_data" "aws_auth" {
│
╵
2023-04-28T12:07:03.961Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-04-28T12:07:03.961Z [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/kubernetes/2.20.0/linux_amd64/terraform-provider-kubernetes_v2.20.0_x5 pid=15195
2023-04-28T12:07:03.966Z [DEBUG] provider: plugin exited

Apart from the aws-auth does not exist error, I also see the error below:
[DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
Could anyone please help me figure out whether this is an issue with the way the mapUsers parameter is defined, or something else?


Thanks @macmiranda, I have updated the post and tried to improve the formatting compared to the initial one.

Have you tried setting the input variable create_aws_auth_configmap to true?
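For reference, a minimal sketch of what that would look like in your module block (only the aws-auth related arguments are shown; the source and version are the ones from your post):

module "eks_cluster_creation" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.13.1"

  # ... the rest of the arguments from your post ...

  # Create the aws-auth ConfigMap first, then let the module manage its entries
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true
}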

Thanks @macmiranda for the reply. I have not set the input variable create_aws_auth_configmap yet, but let me give it a try and run the Terraform code again.
I did not set it because I was only creating the EKS cluster, without any node groups. My understanding from what I have read is that the input variable `create_aws_auth_configmap` is used when creating self-managed node groups, whereas with AWS managed node groups (the default selection) the aws-auth ConfigMap is created by AWS itself; a sketch of that case is below.
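For illustration, the managed node group case I am referring to would look roughly like this with the same module (the node group name, instance type and sizes are just made-up example values):

module "eks_cluster_creation" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.13.1"

  # ... existing arguments ...

  # With an EKS managed node group, the aws-auth entry for the node role is
  # (as I understand it) created on the AWS side, so the module only needs to
  # manage the ConfigMap data
  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 2
      desired_size   = 1
    }
  }
}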

@macmiranda, I updated the Terraform code with the parameter `create_aws_auth_configmap` set to true and I do not see any error now. The EKS cluster is created successfully.

But I do not see the IAM user below added to the auth file:
aws_auth_users = [
  {
    userarn  = "arn:aws:iam::123456789:user/usertest"
    username = "usertest"
    groups   = ["system:masters"]
  }
]

I ran kubectl config view; below is the output, but I do not see the IAM user:
kind: Config
preferences: {}
users:

You’re not going to find the user ARN in the kubeconfig.
You’re supposed to log in to AWS as usual on the CLI. Just make sure that when you run aws sts get-caller-identity, the calling user matches the one in aws_auth_users.
Every time a call is made using kubectl, it executes the credential helper that you see in the kubeconfig to get temporary Kubernetes creds for the current AWS user.
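For context, the user entry that aws eks update-kubeconfig writes for this cluster looks roughly like the following; the exact apiVersion and argument list vary with the AWS CLI version, so treat this as illustrative only:

users:
- name: arn:aws:eks:us-east-1:xxxxxxxxx:cluster/Sandbox-Cluster-Test
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - Sandbox-Cluster-Test

kubectl runs that aws eks get-token command on every call, and the returned token is what EKS maps to an IAM identity via the aws-auth ConfigMap, which is why the IAM user never appears in the kubeconfig itself.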


Thanks @macmiranda, I will try it out by configuring the AWS CLI with the same IAM user that is in the code and then running the kubectl commands.

I have a similar issue. I'm using EKS 1.26.
If I enable create_aws_auth_configmap and add users in aws_auth_users, I keep getting the error configmaps "aws-auth" already exists.

I have tried multiple config combinations; one way or another, aws_auth_users does not work.

You should probably use manage_aws_auth_configmap instead of create_aws_auth_configmap
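That is, something along these lines (a minimal sketch showing only the aws-auth related arguments; the module label and user mapping are just placeholders following the v19 inputs of terraform-aws-modules/eks):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.13.1"

  # ... other arguments ...

  # The ConfigMap already exists (e.g. created when a managed node group was
  # added), so don't try to create it again - just manage its data
  create_aws_auth_configmap = false
  manage_aws_auth_configmap = true

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::xxxxxxxxx:user/usertest"
      username = "usertest"
      groups   = ["system:masters"]
    }
  ]
}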

Sorry to jump into this thread. I have used manage_aws_auth_configmap = true today, and yet I get the same error as the OP:

"msg": "\nError: The configmap \"aws-auth\" does not exist\n\n  with module.eks.kubernetes_config_map_v1_data.aws_auth[0],\n  on .terraform/modules/eks/main.tf line 553, in resource \"kubernetes_config_map_v1_data\" \"aws_auth\":\n 553: resource \"kubernetes_config_map_v1_data\" \"aws_auth\" {",
    "rc": 1,
    "stderr": "\nError: The configmap \"aws-auth\" does not exist\n\n  with module.eks.kubernetes_config_map_v1_data.aws_auth[0],\n  on .terraform/modules/eks/main.tf line 553, in resource \"kubernetes_config_map_v1_data\" \"aws_auth\":\n 553: resource \"kubernetes_config_map_v1_data\" \"aws_auth\" {\n\n",
    "stderr_lines": [
        "",
        "Error: The configmap \"aws-auth\" does not exist",
        "",
        "  with module.eks.kubernetes_config_map_v1_data.aws_auth[0],",
        "  on .terraform/modules/eks/main.tf line 553, in resource \"kubernetes_config_map_v1_data\" \"aws_auth\":",
        " 553: resource \"kubernetes_config_map_v1_data\" \"aws_auth\" {",
        ""

Unfortunately, this thread has been hijacked many times and is dealing with 2 different issues, one where aws-auth already exists and another where it doesn’t.

In your case @devang.sanghani, see my reply from May 4th:

Oh yes, I tried that too, which led me to some other error:

  "msg": "\nError: Post \"https://xxxxxxxxxxxxx.gr7.eu-west-1.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps\": dial tcp 100.68.84.210:443: i/o timeout\n\n  with module.eks.kubernetes_config_map.aws_auth[0],\n  on .terraform/modules/eks/main.tf line 536, in resource \"kubernetes_config_map\" \"aws_auth\":\n 536: resource \"kubernetes_config_map\" \"aws_auth\" {",
    "rc": 1,

This is still an issue for many:

It would be helpful if you created a new topic and shared your complete module block.

Alright, thanks! Will do it.