Module does not support depends_on

terraform --version
Terraform v0.13.0-beta3
+ provider kyma-project.io/kyma-incubator/gardener v0.0.9
+ provider registry.terraform.io/hashicorp/helm v1.2.3
+ provider registry.terraform.io/hashicorp/kubernetes v1.11.3
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/tls v2.1.1
Directory structure:

├── main.tf
├── modules
│   ├── flux
│   │   ├── chart-values
│   │   │   ├── flux-helm-operator-values.yaml.tpl
│   │   │   └── flux-values.yaml.tpl
│   │   ├── locals.tf
│   │   ├── main.tf
│   │   ├── providers.tf
│   │   ├── templates
│   │   │   ├── artifactory-chart-repo-list-development.tpl
│   │   │   ├── artifactory-chart-repo-list-poc.tpl
│   │   │   └── artifactory-chart-repo-list-staging.tpl
│   │   ├── variables.tf
│   │   └── version.tf
│   └── sealed-secrets
│       ├── chart-values
│       │   └── sealed-secrets-values.yaml.tpl
│       ├── locals.tf
│       ├── main.tf
│       ├── outputs.tf
│       ├── providers.tf
│       ├── variables.tf
│       └── version.tf

main.tf
terraform {
  required_version = ">= 0.13.0"
  required_providers {
    gardener = {
      source = "kyma-project.io/kyma-incubator/gardener"
      version = "0.0.9"
    }
  }
}

module "sealed-secrets" {
  source = "./modules/sealed-secrets"
  shoot_credentials_file_path = module.base.shoot_credentials_file_path
  adm_namespace_name = module.base.adm_namespace
  sealed_secrets_chart_version = var.sealed_secrets_chart_version
  sealed_secrets_image_version = var.sealed_secrets_image_version
  depends_on = [module.base]
}

module "flux" {
  source = "./modules/flux"
  shoot_credentials_file_path = module.base.shoot_credentials_file_path
  flux_chart_version = var.flux_chart_version
  flux_helm_operator_chart_version = var.flux_helm_operator_chart_version
  flux_image_version = var.flux_image_version
  flux_helm_operator_image_version = var.flux_helm_operator_image_version
  flux_gitops_auth_user = var.flux_gitops_auth_user
  flux_gitops_auth_key = var.flux_gitops_auth_key
  flux_git_repo_url = var.flux_git_repo_url
  flux_git_branch = var.flux_git_branch
  flux_git_paths = var.flux_git_paths
  flux_git_label = var.flux_git_label
  artifactory_helm_repo_url = var.artifactory_helm_repo_url
  artifactory_helm_repo_username = var.artifactory_helm_repo_username
  artifactory_helm_repo_password = var.artifactory_helm_repo_password
  artifactory_helm_repo_environment = var.artifactory_helm_repo_environment
  depends_on = [module.sealed-secrets]
}

cat modules/flux/providers.tf 
provider "helm" {
  kubernetes {
    config_path = var.shoot_credentials_file_path
    load_config_file = "true"
  }
}

provider "kubernetes"{
  alias = "shootconfig"
  config_path = var.shoot_credentials_file_path
  load_config_file = "true"
}
Error: Module does not support depends_on

  on main.tf line 82, in module "sealed-secrets":
  82:   source = "./modules/sealed-secrets"

Module "sealed-secrets" cannot be used with depends_on because it contains a
nested provider configuration for "kubernetes.shootconfig", at
modules/sealed-secrets/providers.tf:8,10-22.

This module can be made compatible with depends_on by changing it to receive
all of its provider configurations from the calling module, by using the
"providers" argument in the calling module block.


Error: Module does not support depends_on

  on main.tf line 91, in module "flux":
  91:   source = "./modules/flux"

Module "flux" cannot be used with depends_on because it contains a nested
provider configuration for "helm", at modules/flux/providers.tf:1,10-16.

This module can be made compatible with depends_on by changing it to receive
all of its provider configurations from the calling module, by using the
"providers" argument in the calling module block.


Error: Module does not support depends_on

  on main.tf line 91, in module "flux":
  91:   source = "./modules/flux"

Module "flux" cannot be used with depends_on because it contains a nested
provider configuration for "kubernetes.shootconfig", at
modules/flux/providers.tf:8,10-22.

This module can be made compatible with depends_on by changing it to receive
all of its provider configurations from the calling module, by using the
"providers" argument in the calling module block.

My question is: how do I organize the providers for the modules, since the connection settings are common to each module?

Kevin

@apparentlymart Apologies for asking you once more for your input. I am trying to understand what I am doing incorrectly regarding the providers. Do you have a practical example to help me understand the configuration?

Thanks,
Kevin

Hi @linuxbsdfreak,

Terraform 0.11 introduced the concept of passing providers explicitly to child modules as a way to support different modules using different provider configurations without each module declaring its own provider blocks.
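
As a minimal sketch of that mechanism (using a hypothetical kubernetes.cluster_a alias purely for illustration), the calling module block maps its own configurations onto the names the child module expects:

module "example" {
  source = "./modules/example"

  # The child module's default kubernetes configuration is
  # bound to the root module's aliased cluster_a configuration.
  providers = {
    kubernetes = kubernetes.cluster_a
  }
}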

We did this in order to address a long-standing problem: if a child module contains its own provider configuration and you remove that module from your configuration, you simultaneously remove both the resources in that module and the provider configurations needed to destroy the remote objects associated with those resources. That creates a situation where removing the module from the configuration is impossible without unusual interventions such as manually editing state snapshots.

As part of the Terraform 0.11 release we updated the documentation to describe provider configurations in child modules as not recommended and to show the newer patterns. This documentation was revised for Terraform 0.12 as part of a general revamp of the documentation in light of the language changes in that release, but the substance of the advice remained the same:

In all cases it is recommended to keep explicit provider configurations only in the root module and pass them (whether implicitly or explicitly) down to descendent modules. This avoids the provider configurations from being “lost” when descendent modules are removed from the configuration. It also allows the user of a configuration to determine which providers require credentials by inspecting only the root module.

Terraform 0.13 is taking the deprecation of provider configurations in nested modules further by disallowing them altogether when used with the new module for_each, count and depends_on mechanisms, because using those creates additional situations where changes to configuration can become blocked due to the dependencies between resources and their providers. Using provider blocks in nested modules remains valid for modules not using those features in the interests of backward compatibility with existing configurations, but the recommendation against that still stands.

I don’t know the details of your use-case, so I can’t really offer more specific advice than this, except to say that the providers argument that the error message is referring to is what’s described under Passing Providers Explicitly.


I think the main design decision you’ll need to make here is how many provider configurations for each distinct provider your “flux” module will need.

If it needs only one configuration for kubernetes and one for helm, then you can use default (unaliased) provider configurations for both: move the two provider blocks from the child module into the root module, remove the provider = kubernetes.shootconfig arguments from the resources, and then pass the two providers to the flux module in the calling module block like this:

module "flux" {
  source = "./modules/flux"
  providers = {
    helm       = helm
    kubernetes = kubernetes.shootconfig
  }

  # ...
}

The providers block above means that inside this module the default kubernetes provider configuration is an alias for the root module’s kubernetes.shootconfig, and the default helm provider configuration is an alias for the root module’s default helm configuration. This providers argument is the module block equivalent of provider in a resource block, and it’s a mapping rather than a single reference because modules can use multiple provider configurations while resources always use exactly one.
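
For comparison, here is the resource-level equivalent, which takes a single configuration reference rather than a mapping (this is the same provider argument you'd be removing from the resources in your modules):

resource "kubernetes_namespace" "example" {
  # A resource always uses exactly one provider configuration,
  # so this is a single reference rather than a mapping like
  # the module-level providers block above.
  provider = kubernetes.shootconfig

  metadata {
    name = "example"
  }
}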

If your flux module actually needs two kubernetes provider configurations (you showed only one here, but I expect you wouldn't have set alias = "shootconfig" if there weren't also a default kubernetes configuration in that module), then you can pass in both a default and an additional (aliased) configuration by assigning both in the providers argument:

  providers = {
    helm                   = helm
    kubernetes             = kubernetes
    kubernetes.shootconfig = kubernetes.shootconfig
  }

However, the above won’t be allowed unless the child module declares that it is expecting to receive an additional kubernetes provider configuration called “shootconfig”, so in this case the module must include what the documentation calls a “proxy provider configuration”, which is a placeholder for a configuration that will be populated by the calling module:

# This block _does_ go in the child module,
# and is a proxy configuration because it
# contains only `alias` and no other arguments.
provider "kubernetes" {
  alias = "shootconfig"
}

Once your shared module contains either no provider blocks or only “proxy” provider blocks, you should no longer see the error message you mentioned.

Hi @apparentlymart

Thanks for the detailed explanation. Here is my current scenario, with the detailed configuration and the use case:

main.tf

module "base" {
  source = "./modules/base"
  shoot_cluster_name = var.shoot_cluster_name
  project_name = var.project_name
  robot_kubeconfig_file_path = "/path/to-robot.kubeconfig"
  shoot_kubeconfig_dir_path = var.shoot_kubeconfig_dir_path
}


module "sealed-secrets" {
  source = "./modules/sealed-secrets"
  shoot_credentials_file_path = module.base.shoot_credentials_file_path
  adm_namespace_name = module.base.adm_namespace
  sealed_secrets_chart_version = var.sealed_secrets_chart_version
  sealed_secrets_image_version = var.sealed_secrets_image_version
  depends_on = [module.base]
}

module "flux" {
  source = "./modules/flux"
  shoot_credentials_file_path = module.base.shoot_credentials_file_path
  memcached_image_version = var.memcached_image_version
  flux_chart_version = var.flux_chart_version
  flux_helm_operator_chart_version = var.flux_helm_operator_chart_version
  flux_image_version = var.flux_image_version
  flux_helm_operator_image_version = var.flux_helm_operator_image_version
  flux_gitops_auth_user = var.flux_gitops_auth_user
  flux_gitops_auth_key = var.flux_gitops_auth_key
  flux_git_repo_url = var.flux_git_repo_url
  flux_git_branch = var.flux_git_branch
  flux_git_paths = var.flux_git_paths
  flux_git_label = var.flux_git_label
  artifactory_helm_repo_url = var.artifactory_helm_repo_url
  artifactory_helm_repo_username = var.artifactory_helm_repo_username
  artifactory_helm_repo_password = var.artifactory_helm_repo_password
  artifactory_helm_repo_environment = var.artifactory_helm_repo_environment
  depends_on = [module.sealed-secrets]
}

cat modules/base/providers.tf

provider "kubernetes"{
  alias = "robotconfig"
  config_path = var.robot_kubeconfig_file_path
  load_config_file = "true"
}

provider "kubernetes"{
  alias = "shootconfig"
  config_path = "${var.shoot_kubeconfig_dir_path}${var.shoot_cluster_name}.kubeconfig"
  load_config_file = "true"
}

cat modules/base/outputs.tf

output "shoot_credentials_file_path" {
  value = local_file.credentials.filename
}

output "adm_namespace" {
  value = kubernetes_namespace.namespace_adm.id
}

output "shoot_cluster_name" {
  value = var.shoot_cluster_name
}

output "project_name" {
  value = var.project_name
}

cat modules/base/main.tf

data "kubernetes_secret" "robot_kubeconfig" {
  provider = kubernetes.robotconfig
  metadata {
    name = "${var.shoot_cluster_name}.kubeconfig"
    namespace = var.project_name
  }
}

resource "local_file" "credentials" {
    content  = data.kubernetes_secret.robot_kubeconfig.data.kubeconfig
    filename = "${var.shoot_kubeconfig_dir_path}${var.shoot_cluster_name}.kubeconfig"
}

resource "kubernetes_namespace" "namespace_adm" {
  provider = kubernetes.shootconfig
  metadata {
    annotations = {
      name = local.adm_namespace_name
      managedby = local.managedby
    }
    name = local.adm_namespace_name
  }
}

cat modules/sealed-secrets/providers.tf

provider "helm" {
  kubernetes {
    config_path = var.shoot_credentials_file_path
    load_config_file = "true"
  }
}

provider "kubernetes"{
  alias = "shootconfig"
  config_path = var.shoot_credentials_file_path
  load_config_file = "true"
}

cat modules/sealed-secrets/main.tf

resource "kubernetes_secret" "sealed_secrets_k8s_secret" {
  provider = kubernetes.shootconfig
  metadata {
    name      = local.sealed_secrets_tls_name
    namespace = var.adm_namespace_name
    labels = {
       "sealedsecrets.bitnami.com/sealed-secrets-key" = "active"
    }
  }

  data = {
    "tls.crt" = tls_self_signed_cert.sealed_secrets_cert.cert_pem
    "tls.key" = tls_private_key.sealed_secrets_pvt_key.private_key_pem
  }

  type = "tls"
}

resource "helm_release" "helm_chart_sealed_secrets" {
  name       = local.sealed_secrets_release_name
  repository = local.k8s_stable_chart_repo
  chart      = local.sealed_secrets_chart_path
  namespace  = var.adm_namespace_name
  version    = var.sealed_secrets_chart_version
  timeout    = 900

  values = [local.sealed_secrets_chart_values]

  depends_on = [
    kubernetes_secret.sealed_secrets_k8s_secret
  ]
}

My question, based on your explanation, is: what would I change in the files below to fix the issue? Could you provide me a sample?

modules/sealed-secrets/providers.tf
modules/base/providers.tf
main.tf 

The process is that I use a default kubeconfig (robot_kubeconfig_file_path = "/path/to-robot.kubeconfig") to connect to the seed K8s cluster, retrieve the kubeconfig of the target cluster from it, and then use that kubeconfig for the helm and kubernetes connections.

Thanks once again.
Kevin

Hi @linuxbsdfreak,

This is a lot to absorb all at once, so I'm afraid I can't give a full example here, but from these examples I can at least make some observations and share some ideas.

I’m understanding that you have two different Kubernetes configurations here: “robot” and “shoot”. Your “base” module uses both of them, while the other modules seem to only use the “shoot” one. Furthermore, the configuration of the “shoot” one seems to depend on a local file created by the base module which contains its credentials.

If all of the above is true, then I think the first step would be to write out both kubernetes configurations and the shoot helm configuration in the root module:

provider "kubernetes" {
  alias = "robot"

  config_path      = "/path/to-robot.kubeconfig"
  load_config_file = true
}

provider "kubernetes" {
  alias = "shoot"

  config_path      = module.base.shoot_credentials_file_path
  load_config_file = true
}

provider "helm" {
  alias = "shoot"

  kubernetes {
    config_path      = module.base.shoot_credentials_file_path
    load_config_file = true
  }  
}

Notice that the two “shoot” configurations both depend on the base module to get their credentials file path. That means that the base module can still be responsible for creating that file.

When you call the base module you’ll now pass in the two providers it needs like this:

module "base" {
  source = "./modules/base"
  providers = {
    kubernetes.robot = kubernetes.robot
    kubernetes.shoot = kubernetes.shoot
  }

  # ...
}

Now we can move on to changes in the base module itself. First we need to replace the blocks in modules/base/providers.tf to be proxy provider configurations rather than real provider configurations, so that they can be aliases for the two configurations in the root module:

provider "kubernetes" {
  alias = "robot"
}

provider "kubernetes" {
  alias = "shoot"
}

We’ll also need to make sure there’s a declaration for the shoot_credentials_file_path output value that the shoot provider configuration in the root module depends on, which you could put in a modules/base/outputs.tf file:

output "shoot_credentials_file_path" {
  # Referring to local_file.credentials here
  # ensures that anything using this output
  # will be delayed until the file is created.
  value = local_file.credentials.filename
}

Your sealed-secrets and flux modules seem to need only the “shoot” configuration, so we can use default (un-aliased) provider configurations in those modules and thus avoid the need for proxy provider configurations:

You can delete the provider "helm" and provider "kubernetes" blocks from modules/sealed-secrets/providers.tf completely, because this module will now use the two “shoot” configurations from the root module instead. That also means you can remove all of the provider = kubernetes.shootconfig annotations from your resource blocks: the default provider configuration will be sufficient now. The same should be true for the flux module.
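
As an illustration (a sketch based on the sealed-secrets resource you shared), the kubernetes_secret block would then begin like this:

resource "kubernetes_secret" "sealed_secrets_k8s_secret" {
  # No provider argument: the default kubernetes configuration,
  # passed in from the root module as kubernetes.shoot, is used
  # automatically.
  metadata {
    name      = local.sealed_secrets_tls_name
    namespace = var.adm_namespace_name
    labels = {
      "sealedsecrets.bitnami.com/sealed-secrets-key" = "active"
    }
  }

  # ... remaining arguments unchanged ...
}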

To finish, change the two module blocks in the root module to pass the “shoot” provider configurations in as the child modules’ default provider configurations:

module "sealed-secrets" {
  source = "./modules/sealed-secrets"
  providers = {
    helm       = helm.shoot
    kubernetes = kubernetes.shoot
  }

  # ...
}

module "flux" {
  source = "./modules/flux"
  providers = {
    helm       = helm.shoot
    kubernetes = kubernetes.shoot
  }

  # ...
}

With all of that done, you should have three populated provider blocks in your root module, two “proxy provider configurations” (empty provider blocks) in your base module, and no provider blocks in the other two modules. As a result, all of your provider configurations will originate in your root module, as recommended.
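
Mapped onto your file layout, the provider blocks end up distributed like this:

main.tf                              → kubernetes.robot, kubernetes.shoot, helm.shoot (real configurations)
modules/base/providers.tf            → kubernetes.robot, kubernetes.shoot (proxy blocks containing only alias)
modules/sealed-secrets/providers.tf  → no provider blocks
modules/flux/providers.tf            → no provider blocks

With the provider blocks out of the child modules, the depends_on = [module.base] and depends_on = [module.sealed-secrets] arguments in your module blocks are now accepted.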

I hope that’s enough to set you in the right direction. If you run into problems trying the above, please let me know and be sure to share exactly what you tried and exactly what error messages Terraform returned, so that I can make sure I’m answering the right question.

Hi @apparentlymart,

Thanks for your support. I understood the process and it works fine now.

Kevin

I removed the provider "google" {}, provider "google-beta" {}, and similar provider declarations from my child modules. That solved it.