Kubernetes provider error when using aws-eks as a submodule

Hello, I’m using the terraform-aws-eks module v1.11.0 and the Kubernetes provider v1.11.1. I’ve put my cluster configuration in a submodule and configured the kubernetes provider in the root module like this:

provider "kubernetes" {
  host                   = module.eks-cluster.cluster_host
  cluster_ca_certificate = module.eks-cluster.cluster_ca_certificate
  token                  = module.eks-cluster.cluster_token
  load_config_file       = false
  version                = "= 1.11.1"
}

I’m exposing outputs from the submodule like this:

output "cluster_host" {
  value = data.aws_eks_cluster.cluster.endpoint
}
output "cluster_ca_certificate" {
  value = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
output "cluster_token" {
  value = data.aws_eks_cluster_auth.cluster.token
}

And I’m getting this error:
Error: Get http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth: dial tcp [::1]:80: connect: connection refused
This doesn’t happen if I use the aws-eks module as the root module, as described in the documentation. How do I fix this? Thanks.


This seems related to https://github.com/terraform-providers/terraform-provider-kubernetes/issues/708
The Terraform docs mention this:
https://www.terraform.io/docs/providers/kubernetes/#stacking-with-managed-kubernetes-cluster-resources
My goal is to make cluster provisioning easier, so I’m refactoring the current setup from a root module into submodules. Can anyone give advice on this? It seems like separate applies would only make things harder.
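For reference, the pattern from the stacking docs would look something like this in the root module: read the cluster data sources at the root level (keyed off the submodule’s cluster name) instead of passing the endpoint, CA, and token through module outputs. This is only a sketch — it assumes my submodule exposes a cluster_id output with the EKS cluster name:

```hcl
# Root module: look up the cluster directly instead of
# forwarding connection details through submodule outputs.
data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "= 1.11.1"
}
```

I’m not sure whether this fully avoids the unknown-values-at-plan-time problem from issue 708, but it at least keeps the provider configuration out of the submodule’s output chain.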