Import of Kubernetes resource fails with 'Error: Unauthorized'

Hello,

I have a Terraform deployment that deploys a Kubernetes cluster and then later configures ingress. The basic order of events is this -

  1. Create VPC, EKS cluster and node group. Also deploy the nginx-ingress-controller helm chart to the Kubernetes cluster, which will create the nginx namespace on the Kubernetes cluster and provision a classic load balancer in AWS.
  2. Import the nginx namespace into the Terraform state. This is required for two reasons -
    • Most importantly, when terraform destroy is run the nginx namespace is destroyed, ensuring that the classic load balancer provisioned in AWS is removed and thus allowing the VPC to be deleted. Without importing the nginx namespace, automated destruction of the cluster is impossible.
    • So that a kubernetes_service data source can be included, which in turn allows an aws_elb data source to be included, which is then used to retrieve the hostname of the classic load balancer (see the sketch after this list).
  3. Create a Route53 hosted zone and add an A record for each of the ingresses that will be required.
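
For context, the data-source chain from step 2 and the record creation from step 3 look roughly like this (the names, the hosted zone and the hostname parsing are illustrative, not my exact code):

data "kubernetes_service" "nginx" {
  metadata {
    name      = "nginx-ingress-controller"   # illustrative name of the service created by the helm chart
    namespace = "nginx"
  }
}

data "aws_elb" "ingress" {
  # the classic ELB name is the first label of the hostname reported by the service;
  # the exact status attribute path depends on the kubernetes provider version
  name = split("-", data.kubernetes_service.nginx.status.0.load_balancer.0.ingress.0.hostname)[0]
}

resource "aws_route53_record" "app" {
  zone_id = aws_route53_zone.main.zone_id    # illustrative hosted zone
  name    = "app.example.com"                # illustrative record name
  type    = "A"

  alias {
    name                   = data.aws_elb.ingress.dns_name
    zone_id                = data.aws_elb.ingress.zone_id
    evaluate_target_health = false
  }
}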

So because I have to use terraform import there is absolutely no way for me to run terraform apply just once. Instead, I have to run it for step 1 and step 3 above, using the -target option (which I know isn’t recommended).
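
Roughly, the split workflow ends up looking like this (the target addresses are illustrative):

terraform apply -target=module.vpc -target=module.kubernetes-cluster   # step 1
terraform import module.ingress.kubernetes_namespace.nginx nginx       # step 2
terraform apply                                                         # step 3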

While that’s not ideal, I could cope with it. However, I’m running into an issue with the import command where I see Error: Unauthorized.

Acquiring state lock. This may take a few moments...
module.ingress.kubernetes_namespace.nginx: Importing from ID "nginx"...
module.ingress.kubernetes_namespace.nginx: Import prepared!
    Prepared kubernetes_namespace for import
module.ingress.kubernetes_namespace.nginx: Refreshing state... [id=nginx]
Error: Unauthorized
Releasing state lock. This may take a few moments...

I’ve looked into the issue and it seems there are severe limitations with the import command, meaning it doesn’t work properly where a provider configuration depends on a data source.

The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.

This seems to be my issue, as can be seen from my provider configuration.

terraform {
  required_version = "1.0.6"
  ...
}

...

module "kubernetes-cluster" {
  source         = "../../modules/kubernetes-cluster"
  cluster_name   = var.cluster.name
  cluster_region = var.cluster.region
  subnet_ids = [
    module.vpc.private_subnet_1_id,
    module.vpc.private_subnet_2_id,
    module.vpc.public_subnet_1_id
  ]
  kubernetes_version = var.cluster.k8s_version
}

data "aws_eks_cluster" "cluster" {
  name = module.kubernetes-cluster.id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.kubernetes-cluster.id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

...

So my question is this… How can I possibly use Terraform when I need to import a resource from my Kubernetes cluster into the Terraform state?

  • It is not possible to implicitly import a resource through terraform apply, meaning that I have to split my deployment into two pieces.
  • It is not possible to import a resource into a provider that is not statically configured. But as I’m creating the Kubernetes cluster within the same configuration, it’s not possible to know those values ahead of time.

In an ideal world this would work with just one run of terraform apply, but I don’t think that’s possible. Any tips and (hopefully!) solutions welcome.

Thanks,
David

If you are using Terraform Cloud as the backend, you could separate your code into two pieces, then use remote state to source the tokens, and use run triggers to trigger the second workspace when the first one is applied successfully.
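
A rough sketch of the remote state data source in the second workspace, assuming Terraform Cloud and illustrative organization/workspace names:

data "terraform_remote_state" "cluster" {
  backend = "remote"
  config = {
    organization = "my-org"          # illustrative
    workspaces = {
      name = "eks-cluster"           # illustrative name of the first workspace
    }
  }
}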

Thanks for the reply.

While using a terraform_remote_state data source may prove useful to me in some situations, sadly this is not one of them. Ultimately the import limitation around provider configurations that depend on a data source still remains.

The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.

To prove that this unfortunately isn’t the solution for me, I created a cluster using one deployment, and then a short while after I attempted to create a storage class using another deployment.

│ Error: Unauthorized
│ 
│   with module.storage-class.kubernetes_storage_class.efs,
│   on ../../modules/storage-class/storage-class.tf line 35, in resource "kubernetes_storage_class" "efs":
│   35: resource "kubernetes_storage_class" "efs" {

The unauthorised error in this case is because the token output by the first deployment has expired. That’s an additional issue: even if the provider configuration re-read the data source, the token stored in the remote state would already be invalid/expired.
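
In other words, the failing second deployment was consuming the token roughly like this (backend and output names are illustrative), so by the time it ran the token was already stale:

data "terraform_remote_state" "cluster" {
  backend = "local"                                    # illustrative; the first deployment's state
  config = {
    path = "../cluster/terraform.tfstate"
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.cluster.outputs.endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.ca_data)
  token                  = data.terraform_remote_state.cluster.outputs.token   # minted during the first apply; EKS tokens are only valid for roughly 15 minutes
}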

At the moment, Terraform is simply unusable to me as it can’t be used without some kind of manual intervention, so I really hope there is a solution somewhere.

Thanks,
David

A run trigger schedules the run of the second piece of code immediately after the first one has finished successfully. There shouldn’t be a delay between the two runs.

Also, the aws_eks_cluster_auth data source should be in the second set of code so that it will get a new token whenever the apply of the second set of code is triggered.

To make this clear:

First repo ----->
aws provider
VPC, EKS cluster and node group

Second repo ------>
aws provider
aws_eks_cluster_auth data source <— this will get a temp token for the k8s provider
kubernetes provider
nginx-ingress-controller
aws_elb, route53 etc. (see the sketch below)
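
A minimal sketch of the Kubernetes auth in that second repo, assuming the first workspace exports a cluster_name output that is read via the remote state data source from my earlier reply:

data "aws_eks_cluster" "cluster" {
  name = data.terraform_remote_state.cluster.outputs.cluster_name   # assumed output of the first workspace
}

data "aws_eks_cluster_auth" "cluster" {
  name = data.terraform_remote_state.cluster.outputs.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token   # a fresh short-lived token is minted on every apply of this repo
}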

Hmm, the more I look into Terraform the more I feel that it’s not the right solution here. It’s just not flexible enough, particularly around importing existing resources. I think the better thing to do in this case would be to move to a combination of CloudFormation and K8s manifests.

I had the same error with a local terraform apply. I ran:

rm -rf .terraform
terraform init

and then I was able to import the resource.

I tried

rm -rf .terraform
terraform init

but it didn’t work for me.

I then tried terraform refresh, and that did work.

> terraform refresh
...
data.aws_eks_cluster_auth.cluster: Reading...
data.aws_eks_cluster.cluster: Reading...
data.aws_eks_cluster_auth.cluster: Read complete after 0s [id=my-cluster]
data.aws_eks_cluster.cluster: Read complete after 0s [id=my-cluster]
...

> terraform import 'module.env0-agent-eks.module.eks[0].module.eks.kubernetes_config_map.aws_auth[0]' kube-system/aws-auth
Acquiring state lock. This may take a few moments...

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Releasing state lock. This may take a few moments...

terraform refresh was the answer for me on this error. Whether local kubectl is caching tokens, or Terraform is caching them, or maybe remote state is caching a data source output from a co-worker’s apply, I’m not sure. I suspect the latter.