Series of root modules with provider config from outputs

I’m wondering if there’s a better pattern I can adopt.

I have a series of root modules receiving their provider configuration from the prior root module’s outputs. This appears to be necessary if I wish to avoid placing provider configuration in child modules.

This works, but it requires another tool to orchestrate the series of root modules, and the secrets are transferred via the remote state. I would prefer to use only Terraform and to avoid persisting secrets in the remote state.

Provider secrets would normally be managed outside of Terraform. For example, you log in to AWS and then Terraform picks up the credentials from the ~/.aws/credentials file or environment variables.

This would normally be achieved with CLI tools (such as saml2aws or the AWS CLI, for example) or within your CI system (e.g. Jenkins, GitHub Actions, etc.).
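
For illustration, this is roughly what that looks like on the Terraform side (a minimal sketch; the region is just a placeholder):

# Nothing secret in the configuration: the AWS provider discovers
# credentials at runtime from environment variables (AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY, AWS_PROFILE), the shared ~/.aws/credentials file,
# or an instance/CI role.
provider "aws" {
  region = "eu-west-1" # placeholder region
}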

Like this, @stuart-c?

Well, there isn’t normally anything coming out of a root module, as the authentication is separate.

For example, for an EKS cluster you would log in to AWS using the AWS CLI, which creates the authentication tokens needed to access the cluster via the Kubernetes provider. There are no tokens stored anywhere (and none coming from a previous root module); for AWS the tokens are all short-lived, so they would expire within a few hours of creation.
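
For instance, a minimal sketch of that pattern (the var.* references are placeholders for however the cluster details are supplied):

provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_data)

  # A fresh short-lived token is generated on every plan/apply by the AWS
  # CLI, using whatever AWS credentials are already in the environment.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}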

So in your diagram I’d remove all the downward arrows, and also have something going upwards from the root module 1 box to log in to your cloud provider.

Going back to your original message, remember that anything handled within Terraform gets persisted in the state file (e.g. anything that is a local, a passed-in variable, or resource details). This is why providers are generally configured via other methods: environment variables or files created by your CI system or a CLI.
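
For example (a minimal sketch with a made-up value), even an output marked sensitive is only redacted in the CLI output; its value is still written to the state file in plain text:

# "sensitive" hides the value in plan/apply output, but it is still
# stored in plain text in the state file.
output "cluster_token" {
  value     = "example-token-value" # made-up value for illustration
  sensitive = true
}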

Cool, we can use EKS managed with Terraform in CI as an example. In the TF module where the kubectl and helm providers are configured, the kube-config (as a file, or separate values for CA, token, etc.) must be present when that root module is invoked, correct? That is, we wouldn’t have an opportunity to invoke an AWS provider to obtain a kube-config for the EKS cluster.

I read in the TF docs that data sources are not guaranteed to be available for provider configuration, so only remote state and root module variables can be relied on. Before that, through trial and error, I found that data sources were sometimes available at the plan phase, but there were corner cases when they were not, so I was looking for a simpler, more reliable pattern to follow.

Thank you very much for your insights, @stuart-c.

So the kube-config file would often be created by something run before Terraform. Terraform then picks up that file to be able to access the cluster.

We actually use this code a lot:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["token", "--cluster-id", local.cluster_name]
      command     = "aws-iam-authenticator"
    }
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  experiments {
    manifest_resource = true
  }

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["token", "--cluster-id", local.cluster_name]
    command     = "aws-iam-authenticator"
  }
}

data "aws_eks_cluster" "cluster" {
  name = local.cluster_name
}

I see. Have you had any issues with sourcing the EKS endpoint from the aws_eks_cluster data source? That’s just the sort of provider configuration dependency I was aiming to avoid. There may be no problem in this case because it is not a circular dependency, and the IAM credential is presumed to be available. I take it that TF is able to infer the dependency on the data source and “knows” to configure the AWS provider before the kubernetes provider. Another possibility, which isn’t clear from the snippet, is that there is no AWS provider configuration in this plan at all, and the aws-iam-authenticator command invoked by the kubernetes provider obtains the AWS credential from the default/shared credentials discovery path (env vars, ~/.aws/credentials, etc.).

All of this seems to point to continuing to orchestrate a series of root modules with another tool like a script or CI or both. I was hoping for something a little more bespoke to Terraform itself, perhaps Terragrunt or Terraspace, but I haven’t looked closely at those yet.

Thanks again, @stuart-c!

In our case we use Jenkins and that sets up the env vars, etc. for access to AWS, Vault, etc.

You can use an externally created kubeconfig file if you want, instead of the way we are doing it.
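
A minimal sketch of that approach, assuming an earlier step (e.g. aws eks update-kubeconfig) has already written the file; the path is a placeholder:

# Terraform only reads the kubeconfig created by an earlier step;
# no cluster credentials appear in the Terraform configuration or state.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}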

But yes in general you need something outside of Terraform to do the various setup tasks, such as authenticating with your cloud provider and pointing at the cluster/Vault server/etc.
