Cannot 'import' resource from K8S before 'apply'

Hi,

I cannot import a Kubernetes service account (terraform import) until I have invoked terraform apply at least once.
After an ‘apply’ (which fails because the resource already exists), the import succeeds, and a subsequent ‘apply’ succeeds as well.
Could you shed more light on this behaviour? I’d expect ‘import’ to succeed on the first run, without the need for a prior ‘apply’.

My use case:
I create an EKS cluster using one Terraform module.
Then I want to annotate the ‘aws-node’ service account as described in https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html
I have another Terraform module that does that (among other things). However, the ‘aws-node’ service account already exists in EKS, so I need to import it first.
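For reference, the resource in that module looks roughly like this (the variable holding the role ARN is illustrative, not my actual code):

resource "kubernetes_service_account" "kube-system_aws-node" {
  metadata {
    name      = "aws-node"
    namespace = "kube-system"

    annotations = {
      # illustrative: in my module the ARN comes from the IAM role resource
      "eks.amazonaws.com/role-arn" = var.cni_role_arn
    }
  }
}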
Expected flow (which does not work):
module eks: apply
module service account: import
module service account: apply
Flow required to succeed:
module eks: apply
module service account: apply (fails as service account already exists)
module service account: import
module service account: apply

The output (shortened) is something like:
module eks> terraform apply
Apply complete! Resources: 22 added, 0 changed, 0 destroyed.

module serviceaccount> terraform import module.eks-activate.kubernetes_service_account.kube-system_aws-node kube-system/aws-node
Error: Unable to fetch service account from Kubernetes: Get http://localhost/api/v1/namespaces/kube-system/serviceaccounts/aws-node: dial tcp 127.0.0.1:80: connect: connection refused

module serviceaccount> terraform apply
Error: serviceaccounts “aws-node” already exists

module serviceaccount> terraform import module.eks-activate.kubernetes_service_account.kube-system_aws-node kube-system/aws-node
Import successful!

module serviceaccount> terraform apply
Apply complete! Resources: 1 added, 1 changed, 0 destroyed.

This is how the Kubernetes provider is set up in module ‘serviceaccount’:

data "external" "aws_iam_authenticator" {
  program = ["sh", "-c", "aws-iam-authenticator token -i ${local.cluster_name} | jq -r -c .status"]
}
 
provider "kubernetes" {
  host                      = data.terraform_remote_state.eks.outputs.eks_cluster_endpoint
  cluster_ca_certificate    = "${base64decode(data.terraform_remote_state.eks.outputs.eks_cluster_certificate_authority.data)}"
  token                     = "${data.external.aws_iam_authenticator.result.token}"
  load_config_file          = false
  version = "~> 1.9"
}

data "terraform_remote_state" "eks" {
  backend = "s3"

  config = {
    bucket = "foo"
    key = "bar"
    region = var.region
  }
}
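For context, the external data source expects a flat JSON object of strings on stdout; the .status object emitted by aws-iam-authenticator looks roughly like this (values are illustrative, token truncated), which is why result.token resolves to the bearer token:

$ aws-iam-authenticator token -i my-cluster | jq -r -c .status
{"expirationTimestamp":"2020-01-01T00:14:00Z","token":"k8s-aws-v1...."}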

The workaround of running a (failing) ‘apply’ before ‘import’ is rather ugly, so it would be good to know whether there is a better way of achieving the same effect.

Regards,
Marek


I also had this issue.
I was able to work around it like this:

resource "null_resource" "aws-node-iam" {
  provisioner "local-exec" {
    command = "aws eks --region ${var.region} update-kubeconfig --name ${var.cluster_name} && kubectl annotate sa aws-node -n kube-system eks.amazonaws.com/role-arn=${aws_iam_role.cni-role.arn}"
  }

  provisioner "local-exec" {
    when = "destroy"
    command = "aws eks --region ${var.region} update-kubeconfig --name ${var.cluster_name} && kubectl annotate sa aws-node -n kube-system eks.amazonaws.com/role-arn-"
  }
  
  depends_on = [
    "spotinst_ocean_aws.eks" # or eks cluster or autoscale group
  ]
}

It’s not ideal, but it works.
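If the role ARN can ever change, a triggers block would make the provisioners re-run on the new value; kubectl annotate would then also need --overwrite. A sketch (untested):

resource "null_resource" "aws-node-iam" {
  # Re-run the provisioners whenever the role ARN changes.
  triggers = {
    role_arn = aws_iam_role.cni-role.arn
  }

  # ... provisioners as above, using 'kubectl annotate --overwrite' ...
}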

Cheers,
Amitay