Hello,
I haven’t used Terraform much and I have a question about best practice for importing resources.
My scenario is that I need to create an AWS Elastic Kubernetes Service (EKS) cluster and then update the AWS authentication config map on that cluster. Because the config map is generated automatically when the EKS cluster is created, I have to import it into the Terraform state before I can perform an update.
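For context, the resource I’m trying to import is a kubernetes_config_map in my user-management module, along these lines (a simplified sketch; the role ARN below is a placeholder, as the real module builds mapRoles from the node role ARN it is given):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Placeholder ARN; the real module renders this from node_role_arn.
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::123456789012:role/eks-node-group"
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      },
    ])
  }
}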
So my current process is:
- Use Terraform to create a VPC, EKS cluster and node group.
- Import the AWS authentication config map.
- Use Terraform to update the AWS authentication config map.
Now, as far as I can see there is no way to import a resource from within the Terraform configuration itself, which leaves me no option other than to use the -target option to first create the VPC, EKS cluster and node group.
However, the output after a successful apply suggests that the -target option should not be used:

> The -target option is not for routine use
So I’m not exactly sure what the best practice here is. Obviously there is the option of splitting this into separate Terraform configurations, but that is undesirable because it would mean multiple Terraform states for a single underlying resource (the EKS cluster, which contains the AWS authentication config map). That would just be confusing and could cause issues where there are dependencies across states (for example, the AWS authentication config map implicitly depends on the node group), as the sketch below illustrates.
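To make that concrete, the cross-state wiring would look something like this hypothetical second configuration (it assumes node_user_arn is exposed as an output of the cluster state):

data "terraform_remote_state" "cluster" {
  backend = "s3"

  config = {
    bucket = "terraform-state.#{Cluster.Url}"
    key    = "terraform/cluster/cluster.tfstate"
    region = "#{Cluster.AwsRegion}"
  }
}

module "user-management" {
  source             = "../../modules/user-management"
  # Assumes the cluster configuration exports node_user_arn as an output.
  node_role_arn      = data.terraform_remote_state.cluster.outputs.node_user_arn
  sso_admin_role_arn = var.sso_admin_role_arn
}

Every implicit dependency, like the one on the node group, would have to be re-plumbed through outputs in this way.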
Here is my script. As you can see, I’m having to plan and apply twice, which slows the process down, and because I’m using Octopus it also means I can’t simply use the built-in step to apply a template.
Is it possible to complete these steps in a single apply? Or is there some other best practice that I’m missing? Any advice would be appreciated.
Script
### Initialize Terraform
terraform init -get-plugins=true
### Create/update the VPC, EKS cluster and node group, but not the AWS authentication config map.
terraform plan -out=octopus1.tfplan -target=module.vpc -target=module.kubernetes-cluster -target=module.node-groups
terraform apply -auto-approve octopus1.tfplan
### Import the AWS authentication config map into the Terraform state if it is not already there.
### `terraform state show` exits non-zero when the resource is missing from the state.
terraform state show -no-color module.user-management.kubernetes_config_map.aws_auth
[[ $? -gt 0 ]] && terraform import module.user-management.kubernetes_config_map.aws_auth kube-system/aws-auth
### Create/update all resources, including the AWS authentication config map.
terraform plan -out=octopus2.tfplan
terraform apply -auto-approve octopus2.tfplan
Template
provider "aws" {
region = var.cluster_region
}
terraform {
required_version = "~> 0.13.1"
required_providers {
aws = {
version = "~> 3.4.0"
source = "hashicorp/aws"
}
kubernetes = {
version = "~> 1.13.2"
source = "hashicorp/kubernetes"
}
}
backend "s3" {
bucket = "terraform-state.#{Cluster.Url}"
key = "terraform/cluster/cluster.tfstate"
region = "#{Cluster.AwsRegion}"
dynamodb_table = "terraform-locks.#{Cluster.Url}"
encrypt = true
}
}
module "vpc" {
source = "../../modules/vpc"
cluster_name = var.cluster_name
cidr_block = var.vpc_cidr_block
node_availability_zone = var.node_availability_zone
additional_availability_zone = var.additional_availability_zone
}
module "kubernetes-cluster" {
source = "../../modules/kubernetes-cluster"
cluster_name = var.cluster_name
cluster_region = var.cluster_region
subnet_ids = [
module.vpc.public_subnet_id,
module.vpc.private_subnet_id,
module.vpc.additional_private_subnet_id
]
kubernetes_version = var.eks_kubernetes_version
}
module "node-groups" {
depends_on = [module.kubernetes-cluster]
source = "../../modules/node-groups"
cluster_name = var.cluster_name
subnet_id = module.vpc.private_subnet_id
max_nodes = tonumber(var.max_nodes)
min_nodes = tonumber(var.min_nodes)
instance_types = var.node_instance_types
disk_size = tonumber(var.node_disk_size)
kubernetes_version = var.node_kubernetes_version
}
module "user-management" {
source = "../../modules/user-management"
node_role_arn = module.node-groups.node_user_arn
sso_admin_role_arn = var.sso_admin_role_arn
}