Providers in Modules?

I vaguely recall modules that would have providers specified in the modules themselves, but I don’t see this pattern anymore. What are the tradeoffs of specifying providers in modules versus in the root module?

One use case I can see is having a Kubernetes context that is different, and passing in the cluster name before running a module that uses either the Kubernetes or Helm provider.

It is still possible to define provider instances inside modules, but I believe it is considered deprecated to do so.

Two notable problems with defining provider instances in modules are:

  • Modules defining their own provider instances cannot be used with count or for_each in their module blocks.

  • Once a module defining its own provider instances has been added to a configuration, it becomes difficult to ever remove it: removing the module leaves resources in the state file for which Terraform no longer has the provider instance configuration needed to destroy them, so it raises an error. Once you get into this mess, you are left needing to craft a workaround to get out of it:

    • Either you modify the module to support a boolean input variable (e.g. destroy) which is used in count or for_each expressions on every resource block throughout the module, causing them to have zero instances when the variable is set.

    • Or you create a second ‘stub’ version of the module which contains the provider instance configuration but no resources, and switch the source in the referencing module block over to the stub module.

    • Once you’ve run an apply using one of the above options, the resources created by the module have been destroyed. Only after this apply can you go back and commit another change to remove the module block.
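The first workaround above can be sketched as follows; the variable name destroy and the namespace resource are purely illustrative:

```hcl
# Inside the legacy module that defines its own provider instance:
# a hypothetical "destroy" flag that empties every resource in the
# module while leaving the provider configuration in place.
variable "destroy" {
  type    = bool
  default = false
}

resource "kubernetes_namespace" "app" {
  count = var.destroy ? 0 : 1 # zero instances when destroying

  metadata {
    name = "app"
  }
}
```

After an apply with destroy = true, the module’s resources are gone from state, and only then can a follow-up change safely remove the module block itself.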


The second problem you described here is the reason for the first: if it were possible to declare a provider configuration in a child module using for_each then removing an instance of that module would hit the same problem you described with removing the module block: the provider configuration would no longer exist to destroy the objects Terraform is currently managing from inside that module instance.

We noticed this problem too late for single-instance modules, and so we had to resort to merely documenting it as a bad idea, to avoid breaking things for people who had already set up modules this way and, having no need to destroy them, had no problem.

But by the time we implemented multiple instances of the same module this problem was already well understood, and with multiple-instance modules the trap is even harder to escape because all instances of a module must share the same source code, so you can’t do the trick you described of replacing one instance with an empty module that includes only the provider configuration.

Now that provider configurations in shared modules have been suitably discouraged, it seems plausible to me that a future edition of the Terraform language will finally remove that capability altogether, but I wouldn’t expect that to happen for at least a couple more years, since a new language edition is expensive to support (while continuing to also support the old one for at least a little while) and so would need to deliver enough benefits to justify that cost.
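For contrast, the now-recommended arrangement keeps every provider configuration in the root module and passes it down explicitly via the providers meta-argument; the alias, context name, and module path here are illustrative:

```hcl
# Root module: the provider configuration lives here, so removing
# the module block later still leaves Terraform able to destroy
# the module's resources.
provider "kubernetes" {
  alias          = "primary"
  config_path    = "~/.kube/config"
  config_context = "primary-cluster" # hypothetical context name
}

module "app" {
  source = "./modules/app"

  # Hand the root-level configuration to the module explicitly.
  providers = {
    kubernetes = kubernetes.primary
  }
}
```

The child module then declares its dependency only through required_providers and contains no provider blocks of its own, which is what keeps it usable with count and for_each.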


In our organization, with Kubernetes, we support AKS, EKS, and GKE, and have a separate workspace per region. The original code base passed the credentials to the module that configured the Kubernetes provider.

I wouldn’t typically have a scenario where we would create, say, five applications with different names (though that could happen in a multi-tenant setup). We would use this, for example, to create GKE, AKS, and EKS clusters each in a separate region, and destroy them when no longer needed. So the module would be tightly coupled to the provider; in that case, I would want to use a provider alias if I were in a scenario where I needed to use a module on two different Kubernetes clusters.

We may get into a situation where we have 2+ Kubernetes clusters in the same region, and would need to apply the same single application, such as the Fission helm chart, to each of those clusters.
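That scenario maps naturally onto provider aliases in the root module, with one module call per cluster; the context names and module path are hypothetical:

```hcl
provider "kubernetes" {
  alias          = "cluster_a"
  config_path    = "~/.kube/config"
  config_context = "gke-us-east1-a" # hypothetical
}

provider "kubernetes" {
  alias          = "cluster_b"
  config_path    = "~/.kube/config"
  config_context = "gke-us-east1-b" # hypothetical
}

# The same application module applied once per cluster in the region.
module "fission_a" {
  source    = "./modules/fission"
  providers = { kubernetes = kubernetes.cluster_a }
}

module "fission_b" {
  source    = "./modules/fission"
  providers = { kubernetes = kubernetes.cluster_b }
}
```

Because the provider configurations stay in the root, either module block can later be removed on its own without hitting the orphaned-state problem described earlier.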