Provider configuration through interpolation


I have my own Terraform module which creates an EKS cluster as documented here, plus a couple of services on top of Kubernetes in the form of Helm releases using the Helm provider.

A few days after the cluster was created successfully, I added some outputs to the code and tried to apply again, but got this error:

Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

I read here this statement from the Terraform docs:


When using interpolation to pass credentials to the Kubernetes provider from other resources, these resources SHOULD NOT be created in the same Terraform module where Kubernetes provider resources are also used. This will lead to intermittent and unpredictable errors which are hard to debug and diagnose. The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources. Please refer to this section of Terraform docs for further explanation.

I would like to know if it is sufficient to reorganize the code in this way:

module "infra" {
  source = "./modules/infra"
}

module "services" {
  source = "./modules/services"
  # ... (using outputs from the module above)
}

The first module outputs the EKS parameters, which are in turn passed to the second module and used to configure its Kubernetes and Helm providers.
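Concretely, what I have in mind is that the services module declares the cluster parameters as input variables and uses them in its own provider blocks. Something like this (the variable names here are just illustrative):

```hcl
# modules/services/providers.tf -- sketch of the idea

variable "cluster_endpoint" { type = string }
variable "cluster_ca_cert"  { type = string } # base64-encoded CA certificate
variable "cluster_token"    { type = string }

provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_cert)
  token                  = var.cluster_token
}

provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_cert)
    token                  = var.cluster_token
  }
}
```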

Is this the right way to operate, or am I forced to use 2 separate projects, with 2 separate runs and tfstates?

Thanks in advance,


I would love to know this too. Can anyone help?

Hi @fabiomarinetti,

I think the behavior that this warning is referring to is that Terraform must evaluate the provider configuration and pass it to the provider before asking that provider to produce a plan. To get a correct result, you must make sure that any external attributes you refer to in the provider configuration have values that are either defined directly in the configuration or that the provider (of the referred-to object) can determine during planning.

Unfortunately that typically doesn’t apply to IP addresses or hostnames assigned dynamically as part of creating an object. Unless the remote system uses a documented systematic scheme for constructing such a hostname, the provider must wait until it’s already created the object in order to know what its IP address or hostname will be, and so that value typically won’t be available during the plan phase.
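To make that concrete, this is the sort of configuration the warning is about (a sketch; the resource name is illustrative):

```hcl
# The provider configuration refers to attributes of a resource managed in
# the same configuration. During planning, these attributes are unknown
# values until the cluster has actually been created.
provider "kubernetes" {
  host                   = aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.example.certificate_authority[0].data)
}
```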

This rule applies regardless of how you assign the values. If you export the hostname from one module and then pass it into another, the result is still an unknown value, so this caveat still applies.

For this reason, the typical answer is to separate the different layers into separate configurations, which you can then terraform apply in the correct order yourself, ensuring that the services the second configuration needs are available before you apply it.
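With that separation, the second configuration can read the first one's outputs via the terraform_remote_state data source. For example (a sketch assuming an S3 backend; the bucket, key, and output names are illustrative):

```hcl
# In the second (services) configuration: read the outputs of the
# already-applied infra configuration from its state.
data "terraform_remote_state" "infra" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"
    key    = "infra/terraform.tfstate"
    region = "eu-west-1"
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.infra.outputs.cluster_endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.infra.outputs.cluster_ca_cert)
  token                  = data.terraform_remote_state.infra.outputs.cluster_token
}
```

Because the infra configuration has already been applied by the time you plan this one, those values are known during planning rather than unknown.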
