EKS Cluster Unreachable

Hi all,

I’ve been using the Helm provider’s exec plugin authentication in Terraform to authenticate to EKS. As of today, it has stopped working, and none of my jobs can connect to either of my two clusters.

Error: Kubernetes cluster unreachable: 
Get "https://<api-server-endpoint>.gr7.us-east-1.eks.amazonaws.com/version?timeout=32s"
getting credentials: exec: executable aws failed with exit code 1

I can successfully run the get-token command from my machine, so the cluster is reachable. I’ve tried several different provider versions and API versions and continue to see the same error.
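
For reference, the check I run locally is roughly this (cluster name is a placeholder; the region matches the endpoint in the error above):

aws eks get-token --cluster-name <cluster-name> --region us-east-1

That prints a valid ExecCredential JSON, so the aws CLI and my credentials appear fine on my machine.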

Does anyone have any idea what could be happening or how to troubleshoot this further? Or any workarounds? Thanks in advance.

Config:

provider "helm" {
  kubernetes {
    host                   = data.terraform_remote_state.eks.outputs.cluster_endpoint
    cluster_ca_certificate = base64decode(data.terraform_remote_state.eks.outputs.cluster_certificate_authority_data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", local.cluster_name]
      command     = "aws"
    }
  }
}

Did you ever resolve this? And if so, how? I just ran into the same error with little to no indication why.


Hello! I have the same issue. Did you find a solution?

For me, what’s strange is that if I -target the failing resource, it passes :grimacing:

If I disable remote execution on Terraform Cloud and run it locally, it passes too.
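
Since local runs pass but remote execution fails, my guess is the Terraform Cloud worker doesn’t have the aws CLI (or AWS credentials) in its environment, so the exec plugin exits with code 1. One workaround that has worked for some of us (just a sketch adapted from the config above, not a confirmed fix; the data source label is arbitrary) is to drop the exec block and fetch a token with the aws_eks_cluster_auth data source instead:

# Fetch a short-lived EKS auth token via the AWS provider
data "aws_eks_cluster_auth" "this" {
  name = local.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = data.terraform_remote_state.eks.outputs.cluster_endpoint
    cluster_ca_certificate = base64decode(data.terraform_remote_state.eks.outputs.cluster_certificate_authority_data)
    # Token auth only needs AWS provider credentials, not the aws executable on the worker
    token                  = data.aws_eks_cluster_auth.this.token
  }
}

The trade-off is that the token is short-lived, so very long applies can still hit expiry, but it removes the dependency on the aws binary being present wherever Terraform runs.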

I posted a solution to this problem here, hope it helps:

Solution: Error getting credentials