Errors with coredns and configmap when deploying the AWS EKS module via Terraform

I receive two errors when I deploy the AWS EKS module via Terraform. How can I solve them?

Error: unexpected EKS Add-On (my-cluster:coredns) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)

Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp [::1]:80: connect: connection refused

What role should I put in the aws_auth_roles parameter: AWSServiceRoleForAmazonEKS, or a custom role with the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy?
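
From examples I have seen, a custom worker-node role is usually mapped roughly like this (the role reference below is only a placeholder, not something from my config):

  aws_auth_roles = [
    {
      rolearn  = aws_iam_role.workers.arn   # hypothetical custom node role
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }
  ]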

What role should I attach to the instance profile: AWSServiceRoleForAmazonEKS, or a custom role with the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy?
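
For completeness, this is roughly how I imagine such a custom node role and instance profile would be defined (all names here are placeholders, not my actual config):

resource "aws_iam_role" "workers" {
  name = "eks-worker-node-role"   # placeholder name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "workers" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
  ])
  role       = aws_iam_role.workers.name
  policy_arn = each.value
}

resource "aws_iam_instance_profile" "workers" {
  name = "eks-worker-node-profile"   # placeholder name
  role = aws_iam_role.workers.name
}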

Terraform deploys EC2 machines for the worker nodes, but I don't see a node group with worker nodes in EKS; the coredns issue probably comes from this.

My config:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.20.2"
  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = false
  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }
  subnet_ids = ["...","..."] 
  self_managed_node_group_defaults = {
    instance_type                          = "t2.micro"
    update_launch_template_default_version = true
  }
  self_managed_node_groups = {
    one = {
      name         = "test-1"
      max_size     = 2
      desired_size = 1
      use_mixed_instances_policy = true
      mixed_instances_policy = {
        instances_distribution = {
          on_demand_base_capacity                  = 0
          on_demand_percentage_above_base_capacity = 10
          spot_allocation_strategy                 = "capacity-optimized"
        }
      }
    }
  }
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true
  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::...:user/..."
      username = "..."
      groups   = ["system:masters"]
    }
  ]
  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::...:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS"
      username = "AWSServiceRoleForAmazonEKS"
      groups   = ["system:masters"]
    }
  ]
  aws_auth_accounts = [
    "..."
  ]
}

Hi,

Question: have you resolved this? I'm facing the same issue.

As I remember, I started using EKS managed node groups.
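
Roughly something like this in place of the self_managed_node_groups block (just a sketch against the same module; sizes and instance type are only examples):

  eks_managed_node_groups = {
    default = {
      min_size       = 1
      max_size       = 2
      desired_size   = 1
      instance_types = ["t3.small"]
    }
  }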

Hi,

thanks for the answer, I'm using the managed node group.