Kubernetes provider depending on another resource


I’m trying to configure a Kubernetes cluster that was previously created with Rancher. To do that, I thought I’d get the kubeconfig file straight from Rancher with the rancher2_cluster data source, then write it to a standard place with a local_file resource.

Unfortunately, the terraform kubernetes provider is failing with:

Error: Failed to load config (; default context):
invalid configuration: no configuration has been provided
in provider "kubernetes"

It seems that the provider configuration must be resolvable before anything is executed, including the kubeconfig retrieval.

My provider definition file:

provider "rancher2" {
  api_url   = "https://rancher.example.com"
  token_key = "token-xxxxx:MyTOKEN"
}

# Get kubeconfig file (from rancher)
data "rancher2_cluster" "my_cluster" {
  name = "my_cluster"
}

resource "local_file" "kubeconfig" {
  filename = "/root/.kube/config"
  content  = data.rancher2_cluster.my_cluster.kube_config
}

provider "kubernetes" {
  config_path = local_file.kubeconfig.filename
}

Is this expected? Can I do something about it?

Yes, this is expected. From the Terraform documentation on provider configuration: since provider configurations must be evaluated in order to perform any resource type action, provider configurations may refer only to values that are known before the configuration is applied. In particular, avoid referring to attributes exported by other resources unless their values are specified directly in the configuration.

If you only use the Rancher provider to retrieve the kube_config for the Kubernetes provider, then you probably don’t need to manage resources with both providers in the same apply operation. In that case you can simply split it into two configurations, one for Rancher and one for Kubernetes, and hard-code the config_path in the Kubernetes provider.
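As a sketch of that split (directory layout, file paths, and the `file_permission` argument are my additions; the rest follows the original snippets):

```hcl
# rancher/main.tf — first configuration: fetch the kubeconfig from Rancher
provider "rancher2" {
  api_url   = "https://rancher.example.com"
  token_key = "token-xxxxx:MyTOKEN"
}

data "rancher2_cluster" "my_cluster" {
  name = "my_cluster"
}

resource "local_file" "kubeconfig" {
  filename        = "/root/.kube/config"
  content         = data.rancher2_cluster.my_cluster.kube_config
  file_permission = "0600"
}
```

```hcl
# k8s/main.tf — second configuration, applied after the first one has
# written the file, so the path is known before this apply starts
provider "kubernetes" {
  config_path = "/root/.kube/config"
}
```

You would run `terraform apply` in the Rancher directory first, then in the Kubernetes one.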

If you want to export more values from Rancher than the kube_config and reuse them in the Kubernetes provider, I’d recommend using the terraform_remote_state data source.
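A minimal sketch of that approach, assuming the Rancher configuration uses a local backend and the relative state path shown is a placeholder:

```hcl
# In the Rancher configuration: export the values you want to reuse
output "kube_config" {
  value     = data.rancher2_cluster.my_cluster.kube_config
  sensitive = true
}
```

```hcl
# In the Kubernetes configuration: read the Rancher configuration's state
data "terraform_remote_state" "rancher" {
  backend = "local"

  config = {
    path = "../rancher/terraform.tfstate"
  }
}

# Any exported output is then available, e.g.:
#   data.terraform_remote_state.rancher.outputs.kube_config
```

With a shared backend (S3, Terraform Cloud, etc.) the same pattern works across machines; only the `backend` and `config` arguments change.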


Indeed, that’s good to know. Thank you.