I have an EKS cluster:
```hcl
resource "aws_eks_cluster" "main" { # takes 10 minutes to deploy
  name     = var.cluster_name
  role_arn = aws_iam_role.eks-cluster-role.arn

  vpc_config {
    subnet_ids              = values(aws_subnet.subnets)[*].id
    endpoint_public_access  = true
    endpoint_private_access = true
  }

  # https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  version = var.eks_version
  #platform_version = "eks.1" # https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html

  # Ensure that IAM role permissions are created before and deleted after EKS cluster handling.
  # Otherwise, EKS will not be able to properly delete EKS-managed EC2 infrastructure such as security groups.
  depends_on = [
    aws_iam_role_policy_attachment.eks-cluster-policy,
    aws_iam_role_policy_attachment.eks-vpc-resource-controller,
    aws_cloudwatch_log_group.eks,
  ]
}
```
This will automatically generate a Kubernetes ConfigMap named aws-auth in the kube-system namespace that I need to modify.
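For context, a minimal sketch of what EKS typically populates aws-auth with when a managed node group joins; the account ID and role name below are placeholders, not from my configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # placeholder role ARN; EKS maps the node role to the node bootstrap groups
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

It is this mapRoles section (plus additional mapRoles/mapUsers entries) that I want to own from Terraform.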
Is there any way to replace that aws-auth ConfigMap directly from the Terraform configuration (without having to use terraform import)?