I successfully created an EKS cluster with managed node groups in private subnets using CloudShell and the YAML configuration below. Now I want to achieve the same setup with Terraform. I tried the example code for managed node groups, but the node group creation fails with the error: Instances failed to join the Kubernetes cluster. When creating the cluster with the YAML, I ran into the same error unless I included privateNetworking: true. Unfortunately, this parameter is not available in the Terraform module, so I assume there is something else I need to specify to resolve the issue.
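From the eksctl docs, my understanding is that privateNetworking: true simply launches the node group in the private subnets without assigning public IPs. My guess at the closest Terraform equivalent is to pin the node group explicitly to the private subnets; the per-node-group subnet_ids below is an assumption on my side, taken from the module documentation as I read it:

eks_managed_node_groups = {
  example = {
    # assumption: passing only the private subnets here should mimic
    # eksctl's privateNetworking: true (the nodes get no public IPs)
    subnet_ids = ["subnet-234", "subnet-345"]
  }
}

I am not sure whether that is the right mapping, or whether the module already does this when only private subnets are passed at the module level.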
The eksctl YAML configuration that successfully creates the cluster with managed node groups:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: clusterFoo
  region: eu-central-1
  version: "1.31"

vpc:
  id: vpc-123
  subnets:
    private:
      eu-central-1a:
        id: subnet-234
      eu-central-1b:
        id: subnet-345
  securityGroup: sg-567

cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]

managedNodeGroups:
  - name: nodeFoo
    amiFamily: AmazonLinux2
    ami: ami-678
    instanceType: t2.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 1
    volumeSize: 41
    privateNetworking: true
    overrideBootstrapCommand: |
      #!/bin/bash
      /etc/eks/bootstrap.sh clusterFoo

addons:
  - name: vpc-cni
    version: latest
    attachPolicyARNs:
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
  - name: kube-proxy
    version: latest
The Terraform configuration that fails with the error 'Instances failed to join the Kubernetes cluster':
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "clusterFoo"
cluster_version = "1.31"
bootstrap_self_managed_addons = false
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
cluster_security_group_id = "sg-567"
# Optional
cluster_endpoint_public_access = false
cluster_endpoint_private_access = true
# Optional: Adds the current caller identity as an administrator via cluster access entry
enable_cluster_creator_admin_permissions = true
vpc_id = "vpc-123"
subnet_ids = [
"subnet-234",
"subnet-345"
]
# control_plane_subnet_ids = ["subnet-xyzde987", "subnet-slkjf456", "subnet-qeiru789"]
# EKS Managed Node Group(s)
eks_managed_node_group_defaults = {
instance_types = ["t2.medium"]
}
eks_managed_node_groups = {
example = {
# Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
ami_id = "ami-678"
# ami_type = "AL2023_x86_64_STANDARD"
instance_types = ["t2.medium"]
min_size = 1
max_size = 1
desired_size = 1
}
}
tags = {
Environment = "dev"
Terraform = "true"
}
}
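One other difference I noticed: the eksctl config combines a custom AMI with overrideBootstrapCommand, while in Terraform I only set ami_id. My current guess, based on my reading of the module's eks-managed-node-group submodule (so the variable name is an assumption on my part), is that with a custom ami_id the module does not inject any bootstrap user data on its own, so the instance never registers with the cluster. This is roughly what I was planning to try next:

eks_managed_node_groups = {
  example = {
    ami_id         = "ami-678"
    instance_types = ["t2.medium"]

    # assumption: with a custom AMI the module must be told to add the
    # bootstrap user data itself, analogous to eksctl's overrideBootstrapCommand
    enable_bootstrap_user_data = true

    min_size     = 1
    max_size     = 1
    desired_size = 1
  }
}

Is the missing bootstrap user data the likely cause, or is the private-networking side the real problem here?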