Cycle dependency in plan

Currently I’m using:
terraform-aws-modules/terraform-aws-eks
By default, the manage_aws_auth_configmap and create_aws_auth_configmap variables in the module are false.
I am creating my own configmap via the kubernetes_config_map.aws_auth resource.
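
Roughly, that configmap is just the usual aws-auth shape; the role ARN below is only a placeholder:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Placeholder node role mapping
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111111111111:role/eks-node-group-role"
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}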

To configure the kubernetes provider I need the cluster endpoint, so I am using the aws_eks_cluster and aws_eks_cluster_auth data sources.

Running a plan on a fresh setup fails because Terraform can’t get data about the cluster before the cluster is created.

╷
│ Error: reading EKS Cluster (test-cluster): couldn't find resource
│
│   with data.aws_eks_cluster.default,
│   on k8s.tf line 20, in data "aws_eks_cluster" "default":
│   20: data "aws_eks_cluster" "default" {
│

So I put a depends_on = [module.eks] on the data sources.

However, even with the module not managing the configmap, Terraform still sees a cycle.

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}

data "aws_eks_cluster" "default" {
  name = module.eks.cluster_name

  depends_on = [
    module.eks
  ]
}
data "aws_eks_cluster_auth" "default" {
  name = module.eks.cluster_name

  depends_on = [
    module.eks
  ]
}
│ Error: Cycle: data.aws_eks_cluster.default, module.eks.kubernetes_config_map_v1_data.aws_auth, module.eks (close), kubernetes_config_map.aws_auth, data.aws_eks_cluster_auth.default, provider["registry.terraform.io/hashicorp/kubernetes"], module.eks.kubernetes_config_map.aws_auth

You’ve specified that the kubernetes provider cannot be configured until module.eks is done processing… but at the same time, the module.eks makes use of the kubernetes provider! In other words, you’re saying A must happen before B and B must also happen before A, which is logically impossible to satisfy.

I realise you might expect the module’s use of the kubernetes provider to be nullified by the count = 0, but due to the way Terraform works, that doesn’t actually happen - the dependency is still there.

Ultimately I think this approach cannot work, and it needs to be split into two separate Terraform configurations (see the sketch after the steps below).

The reason I think that is that Terraform intrinsically works around the idea of “plan, then apply”, whereas in your situation, on a fresh configuration, there’s absolutely no way for this to work without two “plan, then apply” cycles:

  • plan the EKS cluster creation
  • apply the EKS cluster creation
  • Only now, after the apply has finished, is it possible to get a working kubernetes provider to plan what you want to do with that
  • apply the results of that kubernetes plan
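
As an illustration of that split, the second configuration could look the cluster up purely by name (using "test-cluster" from your error message as a placeholder) and configure the kubernetes provider from those lookups:

# Second configuration: by the time this is planned, the cluster already
# exists, so these data sources resolve cleanly.
data "aws_eks_cluster" "default" {
  name = "test-cluster"
}

data "aws_eks_cluster_auth" "default" {
  name = "test-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}

# The aws-auth configmap and any other kubernetes resources live here,
# not alongside the EKS cluster itself.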

But the module you linked to contains kubernetes resources in the same module that creates the cluster, which seems to contradict what I said… maybe those always error the first time through? I’m not sure how that’s supposed to work.

When it’s inside the eks module, the aws_eks_cluster resource can be a direct dependency, but since I’m trying to depends_on a whole module, that’s where the dependency cycle gets tricky.
I tried to change the dependency from module.eks to aws_eks_cluster.this (the underlying resource), but Terraform wouldn’t allow that since it isn’t a resource declared in my own configuration.

Yes, that is what I was hoping for. If the count were 0, the resource (at least the copy inside the module) shouldn’t be in the plan, and then the only one left would be the one outside the module, avoiding the cycle.

I know it’s the same project, but I have the EKS and aws provider configuration in eks.tf and the kubernetes provider in k8s.tf. I don’t think you can run a plan/apply against specific files; I believe it always operates on the whole working directory (.).

I tried terraform plan -target=module.eks, but it still complains about the data source in the k8s file.

Hi @cdenneen,

The trick here is that the count and for_each arguments themselves create dependencies, and so the nodes in the graph are for the resource blocks themselves, not the dynamic instances you’re declaring using count.

Terraform needs to know when the appropriate time is to evaluate the count argument, so it needs to build the dependency graph before it has evaluated count, and therefore the graph must be cycle-free regardless of what value count ends up having in the end.
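
A contrived illustration of what that means (resource name is made up):

# Even though count is statically 0 here, this resource block is still a
# node in the dependency graph, and that node depends on the kubernetes
# provider configuration, so whatever that provider configuration depends
# on must still be resolvable first.
resource "kubernetes_config_map" "example" {
  count = 0

  metadata {
    name      = "example"
    namespace = "default"
  }
}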

I think @maxb’s idea of splitting into two configurations would be the most robust, since then you can ensure the cluster is up and running before trying to do anything with it.

I’m not familiar with this module so I’m not sure if there’s a more “surgical” option that would allow this to all be in one configuration. I think regardless of the dependencies of the data resource there is still the fundamental problem here that Terraform will need to configure the Kubernetes provider before asking it to create any plans, and some features of the Kubernetes provider require access to the server during planning in order to fetch dynamic schema, so they cannot possibly work before the cluster is running.

However, one general rule I’ll share – even though I’m not 100% sure if it will help in this case – is that a single Terraform configuration should typically not include both a resource block and a data block that refer to the same object. From what you’ve described it sounds like your call to the EKS module is indirectly declaring a resource "aws_eks_cluster" block, but you’re also using a data "aws_eks_cluster" block to access the same object, and in that situation you need to take a lot of care to describe the correct order to Terraform; treating the entire module as a dependency isn’t sufficient because (as @maxb pointed out) the module uses the Kubernetes provider itself and so there is no valid order of operations here.

I think this module has a somewhat tricky design in that it seems to expect the calling module to define a provider configuration that itself depends on the output of the module. If the module is written very carefully then that can work in principle because Terraform treats each individual input variable and output value as a separate dependency node, so a module author can carefully design a module to have essentially two independent dependency chains, but if you use module-level depends_on then you’d effectively break that because you force Terraform to treat everything in the module together as a dependency.

The “complete” example shows using the outputs of the module to configure the Kubernetes provider for other parts of the module to use, and so I think the author intended for this module to be one of those “two separate dependency chains” designs, which means that you will need to avoid doing anything which forces Terraform to treat the entire module together as a single dependency node. Perhaps you can adapt that example from the module’s repository to get a working solution within a single module.
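
A sketch of that pattern, wiring the kubernetes provider directly to the module’s outputs (assuming the module exposes cluster_endpoint, cluster_certificate_authority_data, and cluster_name outputs) with exec-based authentication:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Requires the AWS CLI wherever Terraform runs
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

Because the exec command is only run when the provider actually needs to talk to the cluster, there is no data source that has to be read before the cluster exists.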

I solved it by doing this:

resource "null_resource" "execute_command" {
  provisioner "local-exec" {
    command = <<-EOT
      aws eks get-token \
        --cluster-name ${module.eks.cluster_name} \
        --region ${var.region} \
        --profile ${var.profile} > eks_token.txt
    EOT
  }
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
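    # Read back the token file written by null_resource.execute_command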
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "cat"
    args        = ["eks_token.txt"]
  }
}