Is there any way to reuse the configuration of provider kubernetes in provider helm?

Currently I have

provider "kubernetes" {
  load_config_file = false

  host = aws_eks_cluster.main.endpoint

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command = "aws"
    args = [
      "--region",
      var.aws_region,
      "eks",
      "get-token",
      "--cluster-name",
      aws_eks_cluster.main.name,
    ]
    env = {
      AWS_PROFILE = var.aws_profile
    }
  }
}

provider "helm" {
  kubernetes {
    load_config_file = false

    host = aws_eks_cluster.main.endpoint

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command = "aws"
      args = [
        "--region",
        var.aws_region,
        "eks",
        "get-token",
        "--cluster-name",
        aws_eks_cluster.main.name,
      ]
      env = {
        AWS_PROFILE = var.aws_profile
      }
    }
  }
}

I just copy-and-pasted the configuration from provider "kubernetes" {...} into provider "helm" { kubernetes {...} }, but in the spirit of DRY I would like to write this once and tell the helm provider to use the kubernetes provider config.

Is there any way to do that?
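The closest partial workaround I'm aware of (a sketch, not a full solution — the helm provider has no way to reference another provider's config directly) is to hoist the shared values into a locals block, so the exec block still appears twice but its contents are defined in one place:

```
locals {
  eks_auth = {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "--region", var.aws_region,
      "eks", "get-token",
      "--cluster-name", aws_eks_cluster.main.name,
    ]
    env = {
      AWS_PROFILE = var.aws_profile
    }
  }
}

provider "kubernetes" {
  load_config_file = false
  host             = aws_eks_cluster.main.endpoint

  exec {
    api_version = local.eks_auth.api_version
    command     = local.eks_auth.command
    args        = local.eks_auth.args
    env         = local.eks_auth.env
  }
}

provider "helm" {
  kubernetes {
    load_config_file = false
    host             = aws_eks_cluster.main.endpoint

    exec {
      api_version = local.eks_auth.api_version
      command     = local.eks_auth.command
      args        = local.eks_auth.args
      env         = local.eks_auth.env
    }
  }
}
```

This only deduplicates the values, not the block structure itself, so it's still not truly DRY.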


Same here. Any elegant/better solutions? Dynamic blocks, maybe?