Dynamically switching between different providers

Hi all,

We have been looking forward to the count + for_each feature on modules for quite some time and are very happy that it has now made it into 0.13. Thanks for your great work!

But we are struggling a bit with the implementation for our use case. We have one Terraform repo with modules for different cloud providers (AWS, GCP, Azure). The modules share the same input variables, and only one of them is applied, depending on the user's decision. At the moment we use resource targeting to run only the module for the chosen cloud provider.

Now, with the new count capability, we wanted to get rid of the targeting, as it has some other side effects. The idea is that, depending on an input variable, only the desired cloud provider's module gets enabled and applied (count = 1) and the others don't.
But this does not work, because all of the Terraform providers have to be configured at the top level (nested provider configurations are not allowed) and each needs valid settings (e.g. auth credentials).

How can we achieve this? Our users can only provide settings for one specific provider.
Any ideas on how we might restructure the modules so that count on modules works here, without having to target the module on every apply?

Thanks & Greetings,
Tom

Hi @tpatzig,

Having a single module be able to dynamically switch between providers is not an intended use-case of module count and for_each.

Our Module Composition guide has a section Multi-cloud Abstractions which describes the recommended way to achieve that: write a separate module for each provider while keeping the module interfaces (input variables and output values) similar, and then the calling module can decide which implementation to use by choosing the appropriate module.
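
To sketch the shape of that (the paths, variable names, and outputs below are only illustrative, not taken from the guide), each platform gets its own small root configuration that configures only its own provider and calls only its own implementation module:

# environments/aws/main.tf (illustrative layout)
variable "region" {
  type = string
}

variable "name" {
  type = string
}

provider "aws" {
  region = var.region
}

module "app" {
  # modules/app-aws, modules/app-azure, and modules/app-gcp would all accept
  # the same input variables and declare the same output values, by convention.
  source = "../../modules/app-aws"
  name   = var.name
}

output "endpoint" {
  value = module.app.endpoint
}

The Azure and GCP roots would look the same apart from the provider block and the module source, so each user only configures credentials for the platform they actually chose.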

If an object oriented programming analogy is helpful to you, you could perhaps think of this as like the difference between having a single class that handles all possible variations of a particular kind of functionality (what you tried) vs. defining an interface and then implementing it several times with the caller deciding which implementation to use. Terraform’s structural type system means that it’s slightly different in the details (e.g. “interface” in Terraform is a matter of conventions, not a named concept in its own right), but the design principle is similar.

If you’d like to talk about that some more then I’m happy to get into more details in a separate forum topic!

I noticed that some providers accept inputs defining safe, do-nothing configurations. This appears to allow the following:

locals {
  # Placeholder credentials; an empty value means "this provider is not in use".
  aws = {
    region     = ""
    access_key = ""
    secret_key = ""
  }
  azurerm = {
    subscription_id = ""
    client_id       = ""
    client_secret   = ""
    tenant_id       = ""
  }
  digitalocean = {
    token = ""
  }
}

provider "aws" {
  region     = local.aws.region
  access_key = local.aws.access_key
  secret_key = local.aws.secret_key
}

module "use_aws" {
  source = "./aws"
  count  = local.aws.region != "" ? 1 : 0
}

provider "azurerm" {
  features {}
  subscription_id = local.azurerm.subscription_id
  client_id       = local.azurerm.client_id
  client_secret   = local.azurerm.client_secret
  tenant_id       = local.azurerm.tenant_id
}

module "use_azurerm" {
  source = "./azurerm"
  count  = local.azurerm.subscription_id != "" ? 1 : 0
}

provider "digitalocean" {
  token = local.digitalocean.token
}

module "use_digitalocean" {
  source = "./digitalocean"
  count  = local.digitalocean.token != "" ? 1 : 0
}

provider "null" {}

resource "null_resource" "testresource" {
  provisioner "local-exec" {
    command = "echo this works"
  }
}

Move the locals block into input variables, and user input should be able to dictate which providers are used without creating a new root module.
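
For example (a minimal sketch mirroring the locals above; the variable name is up to you), the AWS settings could become:

variable "aws" {
  type = object({
    region     = string
    access_key = string
    secret_key = string
  })
  # Empty strings mean "AWS is not in use", matching the count guards above.
  default = {
    region     = ""
    access_key = ""
    secret_key = ""
  }
}

with the provider block and the count expression referencing var.aws instead of local.aws, and the same pattern repeated for azurerm and digitalocean.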

It’s unfortunate that all of the provider blocks need to be declared explicitly, but at least a single root module containing the most extensive configuration should be the only one necessary.

Hi Jeremy,

Thanks for your reply. Nice idea, but this only works if there is no code inside the modules.
Once you start adding AWS resources to the use_aws module, you get:

Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.

But having actual resources inside those modules is exactly what we want to achieve. Any other ideas?

Greetings,
Tom

We do have different modules: one for Azure, one for GCP, and one for AWS. They share the same user inputs and have the same outputs.
Now the idea is to use count, based on the user input, to run either the Azure module or the GCP or AWS one. For example, if an Azure subscription_id is given, we set count = 1 on the Azure module and count = 0 on the other hyperscaler modules.

The count on the modules themselves works: only the resources of the desired module are applied. The problem is with the providers. Each configured provider needs auth credentials, so a user who only wants to deploy to Azure would still have to supply AWS and GCP credentials as well. Our users can only provide credentials for the one target provider that should be applied.

Any idea how to achieve that?

Apparently the AWS provider falls back to the credentials configured for the AWS CLI tool if the access and secret keys supplied are empty. It’s not a very appealing solution, but creating a default user that lacks permission to create or manipulate resources could work.

Were you able to resolve this? We’re seeing the same issue.

Unfortunately, not really. My root modules now instantiate providers next to the modules that use them. The user base and Terraform configuration sizes involved are relatively small.
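
Roughly, something like this layout (names are illustrative):

environments/
  aws/           # provider "aws" config + the AWS module call
  azurerm/       # provider "azurerm" config + the Azure module call
  digitalocean/  # provider "digitalocean" config + the DigitalOcean module call

so each root module only ever needs credentials for its own provider.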

Laaaaaaame. :weary: That’s not good. I was thinking that we needed to reconfigure our setup to be more “v0.13 compliant”, but I guess that’s not the case. :open_mouth: