Terraform executing module blocks before the EKS cluster is ready

I have two modules. The first is "base", which configures the VPC AND EKS and exposes output variables such as the EKS cluster id. The second module configures the Kubernetes namespaces, deployments and jobs. The second module almost ALWAYS fails: its work (such as creating a namespace) times out because it runs BEFORE the EKS API endpoint is actually available. I tried to create a "serial" link by exposing a variable (the EKS cluster id) in module 1's outputs.tf and importing it as a variable into module 2, but it still tries to create the namespace way too early. Rerunning immediately with the same settings obviously works, because by then the cluster is up.

From googling, this appears to be a common issue, but I am not clear on how anyone actually solves it. The usual "answer" is to use a null_resource, but I don't understand how that works, and all the examples I've found are non-working, for example referencing a "first" resource that doesn't exist. Even if I got that working, I don't know how to tell whether Kubernetes in AWS is actually "responding" or not, so I don't know what to put in the exec line.

    module "us-east-1-base" {
      source = "./modules/base"

      providers = {
        aws = aws.us-east-1
      }

      tenancy     = var.tenancy
      vpc_name    = var.class
      environment = var.environment
      aws_region  = "us-east-1"
      aws_az      = ["us-east-1a", "us-east-1c"]
      vpc_cidr    = "10.255.64.0/20"
    }

    provider "kubernetes" {
        # if below not provided then pulls data from kubectl
        load_config_file       = false
        alias = "us-east-1-k8s"

        host                   = module.us-east-1-base.eks_cluster_endpoint
        cluster_ca_certificate = module.us-east-1-base.eks_cluster_ca
        token                  = module.us-east-1-base.eks_cluster_token
    }
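
Aside: a pattern I've seen elsewhere (but have not tried here) is to pull the token from the aws_eks_cluster_auth data source at apply time rather than exporting it from the base module. Roughly, in place of the provider block above, and assuming the base module also exports the cluster name as eks_cluster_id (a hypothetical output name):

    data "aws_eks_cluster_auth" "us_east_1" {
      provider = aws.us-east-1

      # hypothetical output carrying the EKS cluster name
      name = module.us-east-1-base.eks_cluster_id
    }

    provider "kubernetes" {
      alias                  = "us-east-1-k8s"
      load_config_file       = false
      host                   = module.us-east-1-base.eks_cluster_endpoint
      cluster_ca_certificate = module.us-east-1-base.eks_cluster_ca
      token                  = data.aws_eks_cluster_auth.us_east_1.token
    }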

    # base webserver
    module "us-east-1-webserver" {
      source = "./modules/webserver"
      providers = {
        aws = aws.us-east-1
        kubernetes = kubernetes.us-east-1-k8s
      }

      vpc_id = module.us-east-1-base.vpc_id
      customer              = var.customer
      tenancy               = var.tenancy
      environment           = var.environment
      aws_region            = "us-east-1"
      eks_arn               = module.us-east-1-base.eks_arn
    }

Outputs from the base module:

    # returned to force serialization so that the EKS cluster will be built for any subsequent module
    output "eks_arn" {
      depends_on = [helm_release.nginx_ingress]
      value      = module.eks.cluster_arn
    }

I am not sure how to get the kubernetes_namespace resource in module 2 to wait for the EKS cluster to actually become available.

ideas?

Looks like I fixed it by stealing this code from the EKS module to wait for the EKS cluster to come online before I create namespaces or deployments.

    resource "null_resource" "wait_for_cluster" {
      # No explicit depends_on needed: referencing var.eks_cluster_endpoint below
      # is what ties this resource to the base module's EKS cluster.
      provisioner "local-exec" {
        command     = var.wait_for_cluster_cmd
        interpreter = var.wait_for_cluster_interpreter
        environment = {
          ENDPOINT = var.eks_cluster_endpoint
        }
      }
    }

    resource "kubernetes_namespace" "webserver" {
      # Only create the namespace once the poll above has seen the API server respond.
      depends_on = [null_resource.wait_for_cluster]

      metadata {
        name = local.namespace
      }
    }
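
The two wait_for_cluster_* variables aren't shown above; the defaults I carried over look roughly like this (the exact command in the upstream module may differ slightly), and the reference to var.eks_cluster_endpoint is what actually makes the null_resource wait for the base module:

    variable "eks_cluster_endpoint" {
      description = "EKS API server endpoint, passed in from the base module"
      type        = string
    }

    variable "wait_for_cluster_cmd" {
      description = "local-exec command used to poll the cluster; ENDPOINT holds the API server URL"
      type        = string
      # keep polling the unauthenticated /healthz endpoint until the API server answers
      default     = "until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done"
    }

    variable "wait_for_cluster_interpreter" {
      description = "interpreter used to run the wait command"
      type        = list(string)
      default     = ["/bin/sh", "-c"]
    }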

Though it seems silly to me that there is no way to tell Terraform to wait for something to finish before using it (i.e. depends_on). Another option Google seems to suggest is to NOT have multiple modules in the same configuration and to do separate runs instead, but that just pushes the "programming" into bash or some other script to make sure everything occurs in order, which also seems counter-intuitive; a sketch of what that would look like is below.

Is it really the case that you cannot use EKS in modules? (Again, googling seems to confirm this is an issue.) That just seems wrong to me, so I may be missing something, because it would imply that you pretty much cannot use modules for anything complicated, which is the whole point of modules! "Oh, create a VPC in one module and use it in another? Sorry, you can't do that, because the second module runs in parallel and accesses the VPC before it's created." Or maybe this only applies to things that execute local-exec?
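
For completeness, the "separate runs" option would presumably be a second root configuration that reads the base outputs through a terraform_remote_state data source (the bucket/key below are made up), which does work but just moves the ordering into whatever script runs the two applies:

    # second root config, applied only after the base config has been applied
    data "terraform_remote_state" "base" {
      backend = "s3"
      config = {
        bucket = "example-tf-state"   # hypothetical backend settings
        key    = "base/terraform.tfstate"
        region = "us-east-1"
      }
    }

    provider "kubernetes" {
      load_config_file       = false
      host                   = data.terraform_remote_state.base.outputs.eks_cluster_endpoint
      cluster_ca_certificate = data.terraform_remote_state.base.outputs.eks_cluster_ca
      token                  = data.terraform_remote_state.base.outputs.eks_cluster_token
    }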

I had already tried to serialize the modules, which Google seems to suggest should work.

In modules/base/outputs.tf:

    # returned to force serialization so that the EKS cluster will be built for any subsequent module
    output "eks_arn" {
      depends_on = [helm_release.nginx_ingress]
      value      = module.eks.cluster_arn
    }

And then in main.tf:

    module "us-east-1-webserver" {
      source = "./modules/webserver"

      providers = {
        aws = aws.us-east-1
        kubernetes = kubernetes.us-east-1-k8s
      }
      vpc_id = module.us-east-1-base.vpc_id
      eks_cluster_endpoint = module.us-east-1-base.eks_cluster_endpoint

      customer              = var.customer
      tenancy               = var.tenancy
      environment           = var.environment
      aws_region            = "us-east-1"
      eks_arn               = module.us-east-1-base.eks_arn
    }

but that didn't do anything: the webserver module still tried to create the namespace before the Helm install finished, meaning the depends_on didn't help and EKS wasn't up and running yet (presumably because nothing inside the webserver module ever references var.eks_arn, so there is no dependency chain down to the namespace resource).
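
One thing I have since read about but not yet tested: Terraform 0.13 added depends_on on module blocks, which sounds like the direct way to say "wait for all of base before doing anything in webserver":

    module "us-east-1-webserver" {
      source = "./modules/webserver"

      # 0.13+ only: every resource in this module waits for everything in base
      depends_on = [module.us-east-1-base]

      # ... providers and variables as above ...
    }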