Hello Team,
I am trying to create an EKS cluster with the terraform-aws-modules/terraform-aws-eks module (Terraform module to create an Elastic Kubernetes Service (EKS) cluster and associated resources). The EKS cluster is created successfully, but the apply fails at the mapUsers section, where I have added one IAM user that should be added to the kubeconfig file.
provider "aws" {
region = "us-east-1"
}
//data "aws_eks_cluster" "default" {
// name = module.eks_cluster_creation.cluster_id
//}
data "aws_eks_cluster_auth" "default" {
name = module.eks_cluster_creation.cluster_name
}
provider "kubernetes" {
host = module.eks_cluster_creation.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_cluster_creation.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.default.token
}
terraform {
backend "s3" {
bucket = "statefile"
key = "tf/eks.tf"
region = "us-east-1"
}
}
locals {
name = "Sandbox-Cluster-Test"
region = "us-east-1"
azs = slice(data.aws_availability_zones.myaz.names, 0, 3)
tags = {
Example = local.name
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}
data "aws_vpc" "myvpc"{
filter {
name = "tag:Name"
values = ["VPC-DevOps"]
}
}
data "aws_availability_zones" "myaz" {
state = "available"
}
resource "aws_subnet" "public-subnets" {
count = 3
vpc_id = data.aws_vpc.myvpc.id
cidr_block = var.public-subnet-cidr1[count.index]
tags = {
Name = "Public-k8s-subnet"
}
availability_zone = "${data.aws_availability_zones.myaz.names[count.index]}"
map_public_ip_on_launch = true
}
data "aws_route_table" "publicrt" {
vpc_id = data.aws_vpc.myvpc.id
filter {
name = "tag:Name"
values = ["public-route-table"]
}
}
resource "aws_route_table_association" "public-route-1" {
count = "${length(var.public-subnet-cidr1)}"
subnet_id = "${element(aws_subnet.public-subnets.*.id, count.index)}"
route_table_id = data.aws_route_table.publicrt.id
}
module "eks_nodegroup_role" {
source = "./eks-role"
}
module "eks_cluster_creation" {
source = "terraform-aws-modules/eks/aws"
version = "19.13.1"
cluster_name = local.name
iam_role_arn = module.eks_nodegroup_role.eks_role
cluster_endpoint_public_access = true
cluster_endpoint_private_access = false
subnet_ids = flatten([aws_subnet.public-subnets[*].id])
vpc_id = data.aws_vpc.myvpc.id
manage_aws_auth_configmap = true
aws_auth_users = [
{
userarn = "arn:aws:iam::xxxxxxxxx:user/usertest"
username = "usertest"
groups = ["system:masters"]
}
]
aws_auth_accounts = [
"xxxxxxxxx"
]
depends_on = [module.eks_nodegroup_role]
}
resource "null_resource" "kubectl" {
provisioner "local-exec" {
command = "aws eks --region us-east-1 update-kubeconfig --name ${local.name}"
}
depends_on = [module.eks_cluster_creation]
}
==================================================================
**Output**
[DEBUG] [aws-sdk-go]
╷
│ Error: The configmap "aws-auth" does not exist
│
│   with module.eks_cluster_creation.kubernetes_config_map_v1_data.aws_auth[0],
│   on .terraform/modules/eks_cluster_creation/main.tf line 553, in resource "kubernetes_config_map_v1_data" "aws_auth":
│  553: resource "kubernetes_config_map_v1_data" "aws_auth" {
│
╵
2023-04-28T12:07:03.961Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-04-28T12:07:03.961Z [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/kubernetes/2.20.0/linux_amd64/terraform-provider-kubernetes_v2.20.0_x5 pid=15195
2023-04-28T12:07:03.966Z [DEBUG] provider: plugin exited
Apart from the "aws-auth does not exist" error, I also see the error below:
[DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
Could anyone please help me work out whether this is an issue with the way the mapUsers parameter is defined, or something else?
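For reference, a minimal sketch of the alternative, exec-based authentication for the Kubernetes provider is shown below; it fetches a fresh token on every call instead of relying on the `aws_eks_cluster_auth` data source token, which is short-lived and can expire during a long apply. It assumes the AWS CLI is installed where Terraform runs and reuses the module name from the code above; the same pattern appears in the GitHub issue quoted further down the thread.

```
# Sketch only: exec-based auth so a fresh EKS token is fetched on every
# provider call (assumes the AWS CLI is available locally).
provider "kubernetes" {
  host                   = module.eks_cluster_creation.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster_creation.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks_cluster_creation.cluster_name]
  }
}
```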
Thanks @macmiranda, I have updated the post and tried to improve the formatting compared to the initial version.
Have you tried setting the input variable `create_aws_auth_configmap` to `true`?
Thanks @macmiranda for the reply. I have not yet set the input variable `create_aws_auth_configmap`, but let me try setting it and run the Terraform code again.
I did not set it because I was only creating the EKS cluster, without any node groups. My understanding from what I have read is that `create_aws_auth_configmap` is used when creating self-managed node groups, whereas for EKS managed node groups (the default) the aws-auth ConfigMap is created by AWS itself.
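Concretely, the suggested change amounts to something like the following in the module block (a sketch based on the configuration above, with the account ID masked as in the original post):

```
module "eks_cluster_creation" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.13.1"

  # ... other arguments unchanged ...

  # With no node groups, nothing has created the aws-auth ConfigMap yet,
  # so the module must create it before it can manage entries in it.
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::xxxxxxxxx:user/usertest"
      username = "usertest"
      groups   = ["system:masters"]
    }
  ]
}
```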
@macmiranda, I updated the Terraform code with `create_aws_auth_configmap = true` and I no longer see the error. The EKS cluster is created successfully.
However, I do not see the IAM user below added to the auth file:

aws_auth_users = [
  {
    userarn  = "arn:aws:iam::123456789:user/usertest"
    username = "usertest"
    groups   = ["system:masters"]
  }
]

I ran `kubectl config view`; the output is below, but I do not see the IAM user:
kind: Config
preferences: {}
users:
You’re not going to find the user ARN in the kubeconfig.
You’re supposed to log in to AWS as usual on the CLI. Just make sure that when you run `aws sts get-caller-identity`, the calling user matches the one in `aws_auth_users`.
Every time a call is made using `kubectl`, it executes the credential helper that you see in the kubeconfig to get temporary Kubernetes credentials for the current AWS user.
Thanks @macmiranda, I will try it out by running kubectl as the IAM user from the code and configuring the AWS CLI with that same IAM user.
Clasyc (June 27, 2023, 2:26pm):
I have a similar issue. I’m using EKS 1.26.
If I enable `create_aws_auth_configmap` and add users in `aws_auth_users`, I keep getting the error `configmaps "aws-auth" already exists`.
I have tried multiple configuration combinations, but one way or another the aws-auth users do not work.
You should probably use `manage_aws_auth_configmap` instead of `create_aws_auth_configmap`.
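Roughly, `create_aws_auth_configmap` makes Terraform create the ConfigMap itself, which fails with "already exists" once something else (for example an EKS managed node group) has created it, while `manage_aws_auth_configmap` only takes ownership of its entries. A sketch of that "already exists" case is below; it is illustrative only, not Clasyc's actual configuration, and the account ID and user are placeholders.

```
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # The aws-auth ConfigMap already exists (EKS creates/updates it when a
  # managed node group joins), so only manage it; creating it again fails
  # with 'configmaps "aws-auth" already exists'.
  create_aws_auth_configmap = false # module default
  manage_aws_auth_configmap = true

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::111122223333:user/example"
      username = "example"
      groups   = ["system:masters"]
    }
  ]

  # ... cluster and node group configuration ...
}
```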
Sorry to jump into this thread. I have used `manage_aws_auth_configmap = true` today, and yet I get the same error as the OP.
"msg": "\nError: The configmap \"aws-auth\" does not exist\n\n with module.eks.kubernetes_config_map_v1_data.aws_auth[0],\n on .terraform/modules/eks/main.tf line 553, in resource \"kubernetes_config_map_v1_data\" \"aws_auth\":\n 553: resource \"kubernetes_config_map_v1_data\" \"aws_auth\" {",
"rc": 1,
"stderr": "\nError: The configmap \"aws-auth\" does not exist\n\n with module.eks.kubernetes_config_map_v1_data.aws_auth[0],\n on .terraform/modules/eks/main.tf line 553, in resource \"kubernetes_config_map_v1_data\" \"aws_auth\":\n 553: resource \"kubernetes_config_map_v1_data\" \"aws_auth\" {\n\n",
"stderr_lines": [
"",
"Error: The configmap \"aws-auth\" does not exist",
"",
" with module.eks.kubernetes_config_map_v1_data.aws_auth[0],",
" on .terraform/modules/eks/main.tf line 553, in resource \"kubernetes_config_map_v1_data\" \"aws_auth\":",
" 553: resource \"kubernetes_config_map_v1_data\" \"aws_auth\" {",
""
Unfortunately, this thread has been hijacked many times and is dealing with two different issues: one where `aws-auth` already exists and another where it doesn’t.
In your case @devang.sanghani, see my reply from May 4th:
Oh yes, I tried that too, which led me to some other error.
"msg": "\nError: Post \"https://xxxxxxxxxxxxx.gr7.eu-west-1.eks.amazonaws.com/api/v1/namespaces/kube-system/configmaps\": dial tcp 100.68.84.210:443: i/o timeout\n\n with module.eks.kubernetes_config_map.aws_auth[0],\n on .terraform/modules/eks/main.tf line 536, in resource \"kubernetes_config_map\" \"aws_auth\":\n 536: resource \"kubernetes_config_map\" \"aws_auth\" {",
"rc": 1,
This is still an issue for many. The GitHub issue quoted below (opened 02:55AM, 18 Mar 23 UTC, labeled "upstream blocker") describes the same problem:
## Description
An EKS cluster with managed nodes is being deployed, but it always fails.
Please review or provide a solution.
The code and the error that appears are attached below.
Steps to reproduce the behavior:
```
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.default.token
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
data "aws_caller_identity" "current" {}
data "aws_availability_zones" "available" {}
locals {
name = "ekstest"
cluster_version = "1.25"
region = "eu-west-2"
tags = {
Cluster = local.name
Project = "culqi"
PCI= "false"
}
}
resource "aws_eks_addon" "coredns" {
cluster_name = module.eks.cluster_id
addon_name = "coredns"
resolve_conflicts = "OVERWRITE"
depends_on = [
module.eks
]
}
################################################################################
# EKS Module
################################################################################
module "eks" {
source = "../../module/eks"
cluster_name = local.name
cluster_version = local.cluster_version
cluster_enabled_log_types= [
"api",
"audit",
"authenticator",
"controllerManager",
"scheduler"
]
cluster_endpoint_public_access = true #double ccheck
# IPV6
cluster_ip_family = "ipv6"
create_cni_ipv6_iam_policy = true
vpc_id = data.aws_vpc.vpc_name-application.id
subnet_ids = ["${data.aws_subnet.subnets01.id}", "${data.aws_subnet.subnets02.id}", "${data.aws_subnet.subnets03.id}"]
#control_plane_subnet_ids = module.vpc.intra_subnets
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
instance_types = ["m5.xlarge"]
attach_cluster_primary_security_group = true
iam_role_attach_cni_policy = true
}
cluster_addons = {
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
before_compute = true
service_account_role_arn = module.vpc_cni_irsa.iam_role_arn
configuration_values = jsonencode({
env = {
# Reference docs https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
ENABLE_PREFIX_DELEGATION = "true"
WARM_PREFIX_TARGET = "1"
}
})
}
}
manage_aws_auth_configmap = true
eks_managed_node_groups = {
complete = {
name = "default_node"
use_name_prefix = true
subnet_ids = ["${data.aws_subnet.subnets01.id}", "${data.aws_subnet.subnets02.id}", "${data.aws_subnet.subnets03.id}"]
use_custom_launch_template = false
min_size = 2
max_size = 10
desired_size = 4
capacity_type = "SPOT"
force_update_version = true
instance_types = ["m5.xlarge"]
disk_size = 180
labels = local.tags
ami_id = data.aws_ami.eks_default.image_id
enable_bootstrap_user_data = true
pre_bootstrap_user_data = <<-EOT
export FOO=bar
EOT
post_bootstrap_user_data = <<-EOT
echo "you are free little kubelet!"
EOT
taints = [
{
key = "dedicated"
value = "gpuGroup"
effect = "NO_SCHEDULE"
}
]
update_config = {
max_unavailable_percentage = 33 # or set `max_unavailable`
}
description = "EKS managed node group launch template"
ebs_optimized = true
disable_api_termination = false
enable_monitoring = true
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 180
volume_type = "gp3"
iops = 3000
throughput = 150
encrypted = true
kms_key_id = module.ebs_kms_key.key_arn
delete_on_termination = true
}
}
}
# Remote access cannot be specified with a launch template
remote_access = {
ec2_ssh_key = module.key_pair.key_pair_name
source_security_group_ids = [aws_security_group.remote_access.id]
}
metadata_options = {
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 2
instance_metadata_tags = "disabled"
}
create_iam_role = true
iam_role_name = "eks-managed-node-group-complete-example"
iam_role_use_name_prefix = false
iam_role_description = "EKS managed node group complete example role"
iam_role_tags = {
Purpose = "Protector of the kubelet"
}
iam_role_additional_policies = {
AmazonEC2ContainerRegistryReadOnly = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
additional = aws_iam_policy.node_additional.arn
}
tags = {
ExtraTag = "EKS managed node group"
}
}
}
# Extend cluster security group rules
cluster_security_group_additional_rules = {
ingress = {
description = "EKS Cluster allows 443 port to get API call"
type = "ingress"
from_port = 443
to_port = 443
protocol = "TCP"
cidr_blocks = ["0.0.0.0/0"]
source_node_security_group = false
}
}
tags = local.tags
}
################################################################################
# Supporting Resources
################################################################################
module "vpc_cni_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "~> 5.0"
role_name_prefix = "VPC-CNI-IRSA"
attach_vpc_cni_policy = true
vpc_cni_enable_ipv6 = true
oidc_providers = {
main = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["kube-system:aws-node"]
}
}
tags = local.tags
}
module "ebs_kms_key" {
source = "terraform-aws-modules/kms/aws"
version = "~> 1.5"
description = "Customer managed key to encrypt EKS managed node group volumes"
# Policy
key_administrators = [
data.aws_caller_identity.current.arn
]
key_service_roles_for_autoscaling = [
# required for the ASG to manage encrypted volumes for nodes
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
# required for the cluster / persistentvolume-controller to create encrypted PVCs
module.eks.cluster_iam_role_arn,
]
# Aliases
aliases = ["eks/${local.name}/ebs"]
tags = local.tags
}
module "key_pair" {
source = "terraform-aws-modules/key-pair/aws"
version = "~> 2.0"
key_name_prefix = local.name
create_private_key = true
tags = local.tags
}
resource "aws_security_group" "remote_access" {
name_prefix = "${local.name}-remote-access"
description = "Allow remote SSH access"
vpc_id = data.aws_vpc.vpc_name-application.id
ingress {
description = "SSH access"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = merge(local.tags, { Name = "${local.name}-remote" })
}
resource "aws_iam_policy" "node_additional" {
name = "${local.name}-additional"
description = "node additional policy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"ec2:Describe*",
]
Effect = "Allow"
Resource = "*"
},
]
})
tags = local.tags
}
data "aws_ami" "eks_default" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amazon-eks-node-${local.cluster_version}-v*"]
}
}
data "aws_vpc" "vpc_name-application" {
filter {
name = "tag:Name"
values = ["${var.vpc_name-application}"]
}
}
data "aws_subnet" "subnets01" {
filter {
name = "tag:Name"
values = ["vpc-application-private-us-west-2b"]
}
}
data "aws_subnet" "subnets02" {
filter {
name = "tag:Name"
values = ["vpc-application-private-us-west-2a"]
}
}
data "aws_subnet" "subnets03" {
filter {
name = "tag:Name"
values = ["vpc-application-private-us-west-2c"]
}
}
data "aws_eks_cluster_auth" "default" {
name = local.name
}
resource "aws_iam_policy" "additional" {
name = "${local.name}-additional"
description = "Example usage of node additional policy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"ec2:Describe*",
]
Effect = "Allow"
Resource = "*"
},
]
})
tags = local.tags
}
```
- [ ] ✋ I have searched the open/closed issues and my issue is not listed.
## Versions
- Module version [Required]:
terraform-aws-eks 19.10.1
- Terraform version:
Terraform v1.3.9
## Reproduction Code [Required]
## Expected behavior
## Actual behavior
### Terminal Output Screenshot(s)
![2023-03-17 21_40_25-main tf - eks - Visual Studio Code](https://user-images.githubusercontent.com/42852702/226080187-52070c59-4337-4e59-a577-8e0115f4329b.png)
## Additional context
It would be helpful if you created a new topic and shared your complete module block.
Alright, thanks! Will do it.