Questions about aws_eks_node_group


just playing around with Kubernetes on AWS. When I tried to set up my own cluster using Terraform I used the aws_eks_node_group resource (see 1.). That mostly worked. With the instructions I found on the internet, which suggested configuring the nodes manually, I had less success. Besides that, the “node group” approach seemed shorter and more elegant. That leads to my first question:

Is the aws_eks_node_group resource now the way to go?
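For reference, this is roughly what my configuration looks like — a minimal sketch, where all names, ARNs, and subnet IDs are placeholders, not my real values:

```hcl
resource "aws_eks_cluster" "example" {
  name     = "my-cluster"                      # placeholder
  role_arn = aws_iam_role.cluster.arn          # assumes a suitable IAM role exists

  vpc_config {
    subnet_ids = ["subnet-aaaa", "subnet-bbbb"] # placeholders
  }
}

resource "aws_eks_node_group" "example" {
  cluster_name    = aws_eks_cluster.example.name
  node_group_name = "my-nodes"                 # placeholder
  node_role_arn   = aws_iam_role.node.arn      # assumes a node IAM role exists
  subnet_ids      = ["subnet-aaaa", "subnet-bbbb"]

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}
```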

When I try to destroy my cluster with “terraform destroy” I run into some problems. The workaround seems to be to delete some AWS load balancer (classic) manually. I believe it was created implicitly by the aws_eks_node_group resource.

Is it true that this load balancer is created by aws_eks_node_group?
Is the “destroy” problem a bug, or did I forget something (like a depends_on clause …)?

Getting a grip on that load balancer seems to be essential when I want to configure DNS (e.g. via Route 53) for my cluster. However, I failed to figure out how to do this with Terraform. My ideas so far:

  • Use some data source connected to one of the Terraform resources related to EKS (aws_eks_node_group or aws_eks_cluster). However, I found no suitable information in these data sources.

  • Find it via the aws_elb data source (see 2.). However, this data source does not support filters like, e.g., the aws_ami data source does (see 3.). So I can only find it if I have found it already …
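To illustrate the second idea: aws_ami can search by filter, while aws_elb takes only the load balancer's name (all names below are placeholders):

```hcl
# aws_ami supports filter blocks, so one can search for a match:
data "aws_ami" "example" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*"] # placeholder pattern
  }
}

# aws_elb has no filter block — you must already know the generated name,
# which is exactly the value I am trying to discover:
data "aws_elb" "k8s_lb" {
  name = "a1b2c3d4e5f6" # placeholder
}
```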


Any advice would be welcome.

Best regards & thanks
Harald Wilhelmi

Actually I managed to answer my questions myself:

The problem occurred not with the destruction of the node group but with the underlying VPC.
The load balancer was not created by Terraform but by a later helm command adding a deployment in Kubernetes. When the deployment is deleted before running
“terraform destroy”, the destroy works fine.
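In case anyone hits the same issue, the sequence that worked for me was roughly the following (the release name is a placeholder for whatever was installed via helm):

```shell
# Remove the Kubernetes deployment (and with it the Service of type
# LoadBalancer), so AWS deletes the classic ELB that blocks VPC deletion:
helm delete my-release   # placeholder release name

# Afterwards, destroying the Terraform-managed infrastructure succeeds:
terraform destroy
```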