Best practice when importing a resource is required before a module can be applied

Hello,

I’ve not used Terraform much and I have a question around best practice for importing resources.

My scenario is that I need to create an AWS Elastic Kubernetes Service (EKS) cluster and then update the AWS authentication config map on that cluster. As the config map is automatically generated when the EKS cluster is created, I have to import it into the Terraform state before I can perform an update.

So my current process is:

  1. Use Terraform to create a VPC, EKS cluster and node group.
  2. Import the AWS authentication config map.
  3. Use Terraform to update the AWS authentication config map.

Now, as far as I can see there is no way to import a resource from within a Terraform configuration, so that leaves me with no option other than to use the -target option to first create the VPC, EKS cluster and node group.

However, the output after a successful apply suggests that the -target option should not be used:

The -target option is not for routine use

So I’m not exactly sure what the best practice here is. Obviously there is the option to split this into separate Terraform configurations, but that is undesirable because it would mean multiple Terraform states for a single underlying resource (the EKS cluster, which contains the AWS authentication config map). This would just be confusing and could potentially cause issues where there are dependencies across states (for example, the AWS authentication config map implicitly depends on the node group).
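To illustrate, if the config map were managed in its own configuration, that configuration would have to read the cluster details back out of the first state, e.g. with a terraform_remote_state data source. A sketch only (the node_role_arn output is hypothetical and would have to be added to the cluster configuration):

data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "terraform-state.#{Cluster.Url}"
    key    = "terraform/cluster/cluster.tfstate"
    region = "#{Cluster.AwsRegion}"
  }
}

module "user-management" {
  source             = "../../modules/user-management"
  # Assumes the cluster configuration exports the node role ARN as an output.
  node_role_arn      = data.terraform_remote_state.cluster.outputs.node_role_arn
  sso_admin_role_arn = var.sso_admin_role_arn
}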

Here is my script. As you can see, I’m having to plan and apply twice, which slows the process down, and as I’m using Octopus it also means that I can’t simply use the built-in step to apply a template.

Is it possible to complete the following steps in a single apply? Or is there some other best practice that I’m missing? Any advice would be appreciated.

Script

### Initialize Terraform
terraform init -get-plugins=true

### Create/update the VPC, EKS cluster and node group, but not the AWS authentication config map.
terraform plan -out=octopus1.tfplan -target=module.vpc -target=module.kubernetes-cluster -target=module.node-groups
terraform apply -auto-approve octopus1.tfplan

### (Maybe) import the AWS authentication config map in the Terraform state.
terraform state show -no-color module.user-management.kubernetes_config_map.aws_auth
[[ $? -gt 0 ]] && terraform import module.user-management.kubernetes_config_map.aws_auth kube-system/aws-auth

### Create/update all resources, including the AWS authentication config map.
terraform plan -out=octopus2.tfplan
terraform apply -auto-approve octopus2.tfplan

Template

provider "aws" {
  region = var.cluster_region
}

terraform {
  required_version = "~> 0.13.1"
  required_providers {
    aws = {
      version = "~> 3.4.0"
      source  = "hashicorp/aws"
    }
    kubernetes = {
      version = "~> 1.13.2"
      source  = "hashicorp/kubernetes"
    }
  }
  backend "s3" {
    bucket         = "terraform-state.#{Cluster.Url}"
    key            = "terraform/cluster/cluster.tfstate"
    region         = "#{Cluster.AwsRegion}"
    dynamodb_table = "terraform-locks.#{Cluster.Url}"
    encrypt        = true
  }
}

module "vpc" {
  source = "../../modules/vpc"
  cluster_name = var.cluster_name
  cidr_block                   = var.vpc_cidr_block
  node_availability_zone       = var.node_availability_zone
  additional_availability_zone = var.additional_availability_zone
}

module "kubernetes-cluster" {
  source = "../../modules/kubernetes-cluster"
  cluster_name   = var.cluster_name
  cluster_region = var.cluster_region
  subnet_ids = [
    module.vpc.public_subnet_id,
    module.vpc.private_subnet_id,
    module.vpc.additional_private_subnet_id
  ]
  kubernetes_version = var.eks_kubernetes_version
}

module "node-groups" {
  depends_on = [module.kubernetes-cluster]
  source = "../../modules/node-groups"
  cluster_name = var.cluster_name
  subnet_id = module.vpc.private_subnet_id
  max_nodes = tonumber(var.max_nodes)
  min_nodes = tonumber(var.min_nodes)
  instance_types = var.node_instance_types
  disk_size = tonumber(var.node_disk_size)
  kubernetes_version = var.node_kubernetes_version
}

module "user-management" {
  source = "../../modules/user-management"
  node_role_arn = module.node-groups.node_user_arn
  sso_admin_role_arn = var.sso_admin_role_arn
}

I’d suggest looking at the community EKS module (GitHub - terraform-aws-modules/terraform-aws-eks: Terraform module to create an Elastic Kubernetes), either to use it or at least to see what they do, as the aws-auth config map is managed by the module and nothing special is needed when creating the cluster (no -target, etc.).
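For example, with that module the aws-auth entries are plain inputs, so there is no import step at all. A rough sketch from memory of the v12/v13-era inputs, so double-check the names against the version you pin (variable names are borrowed from your template, and the vpc_id output is assumed):

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "~> 13.0"

  cluster_name    = var.cluster_name
  cluster_version = var.eks_kubernetes_version
  vpc_id          = module.vpc.vpc_id   # assumes your vpc module exposes this output
  subnets         = [module.vpc.public_subnet_id, module.vpc.private_subnet_id]

  # The module writes the aws-auth config map itself, so nothing has to be imported.
  manage_aws_auth = true
  map_roles = [
    {
      rolearn  = var.sso_admin_role_arn
      username = "sso-admin"            # example username
      groups   = ["system:masters"]
    }
  ]

  # ... node group definitions and other required inputs omitted
}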

Thanks for the reply; the link you provided has helped with the specific scenario I used as an example.

However, the crux of the issue remains: there are other resources that AWS automatically creates when a new EKS cluster is provisioned that also need to be managed by Terraform.

For example, a storage class called gp2 is created, and it needs to be updated. Using exactly the same process as above, I’m importing the resource into the Terraform state and then running terraform apply again to configure it as required.
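Concretely, that just means another conditional import in the wrapper script, mirroring the aws-auth one (the resource address matches my storage-class module further down):

### (Maybe) import the gp2 storage class into the Terraform state.
terraform state show -no-color module.storage-class.kubernetes_storage_class.gp2
[[ $? -gt 0 ]] && terraform import module.storage-class.kubernetes_storage_class.gp2 gp2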

We just don’t use the auto-created storage class and instead create our own.

We also don’t use it, but AWS makes it the default storage class, so an annotation needs to be updated.

I’ve tried to import the resource using the local-exec provisioner within the module, but this fails because the state is understandably locked.

resource "null_resource" "import_gp2" {
  provisioner "local-exec" {
    command = "terraform import -no-color module.storage-class.kubernetes_storage_class.gp2 gp2"
  }
}

resource "kubernetes_storage_class" "gp2" {
  depends_on = [null_resource.import_gp2]
  metadata {
    name = "gp2"
    annotations = {
      "storageclass.kubernetes.io/is-default-class" = "false"
    }
  }
  ...
}
Acquiring state lock. This may take a few moments...
module.storage-class.null_resource.import_gp2: Creating...
module.storage-class.null_resource.import_gp2: Provisioning with 'local-exec'...
module.storage-class.null_resource.import_gp2 (local-exec): Executing: ["/bin/sh" "-c" "terraform import -no-color module.storage-class.kubernetes_storage_class.gp2 gp2"]

module.storage-class.null_resource.import_gp2 (local-exec): Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed