Multiple Provider.tf files

Hi.
We have two provider.tf files: one in the root and one in the module. The issue here is that I upgraded the provider version in the root, but I see that when making changes it uses the version that is mentioned in the module one. I'm also just trying to figure out why the module one overrides the root one.

Each module must declare its own provider requirements. When init is run, Terraform will try to find a single version of the provider which meets the provider requirements across all modules being used (root and any child modules) that utilise that provider.

Depending upon the provider constraints in the root and the child, it is possible that the child module's version constraint is the one that has to be honoured.

As an example,

Given:
root provider constraint = ">= 2.7.0"
module provider constraint = "~> 2.7.0"

When:

  • Latest provider version = "2.7.14"
    • Version used = "2.7.14"
  • As
    • "2.7.14" >= "2.7.0" == TRUE
    • "2.7.14" ~> "2.7.0" == TRUE

When:

  • Latest provider version = "2.8.1"
  • Highest "2.7.x" version = "2.7.14"
    • Version used = "2.7.14"
  • As
    • "2.8.1" >= "2.7.0" == TRUE
    • "2.8.1" ~> "2.7.0" == FALSE
      • which excludes v2.8.1
  • But still
    • "2.7.14" >= "2.7.0" == TRUE
    • "2.7.14" ~> "2.7.0" == TRUE
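
To make that concrete, here is a minimal sketch of the two constraints as they might appear in the configuration (hashicorp/azurerm is used purely for illustration, and the file names are just a common convention):

# Root module, e.g. ./versions.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.7.0"
    }
  }
}

# Child module, e.g. ./modules/example/versions.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.7.0"
    }
  }
}

terraform init then selects the highest available version that satisfies both constraints at once.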

Also, if you are updating/upgrading your providers, ensure that you run
terraform init -upgrade
otherwise the versions recorded in your dependency lock file will be selected.
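
For reference, the lock file records both the selected version and the combined constraints. Following the example above, a (hypothetical) entry would look something like this, and terraform init without -upgrade will keep re-selecting that pinned version:

provider "registry.terraform.io/hashicorp/azurerm" {
  version     = "2.7.14"
  constraints = ">= 2.7.0, ~> 2.7.0"
  # hashes omitted for brevity
}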

Hope that helps

Happy Terraforming.

Wow, thank you for that super quick response. It seems like the providers were split (provider A in root and provider B in the module):
root provider constraint 4.0 (upgraded successfully from 3.5 without issues)
other root provider commented out

module provider constraint 3.5
module other provider constraint 3.4

When running terraform plan against the module provider.tf I get the error: Error: Backend initialization required, please run "terraform init"

Hope this makes sense.

No problem,

It is fine, and quite normal, to have providers in your sub-modules that are not in your root module, if the root module doesn't require them itself.

You should still be running your init/plan/apply in the root module. If you run the plan/apply in the directory of the sub-module, Terraform will try to treat the sub-module you are now in as a root module and require an init, create its own state, etc.

Running terraform init -upgrade in the root module will take care of updating the providers and the lock file for any sub-modules called from your root module, if you have up-versioned the sub-module provider requirements.
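
As a rough sketch (the provider names and versions here are placeholders, not a recommendation), a layout like this is perfectly valid, as long as init/plan/apply are always run from the root:

# ./main.tf (root module) - declares only the provider it uses itself
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

module "example" {
  source = "./modules/example"
}

# ./modules/example/versions.tf - declares a provider the root does not use directly
terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.0"
    }
  }
}

Running terraform init -upgrade from the root then resolves and locks both providers together.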

Cheers!

Morning. The upgrade to both providers has been completed in the root, but the run in DevOps tasks still shows it's using the module provider versions. What I tried to do was to comment out that entire module provider code and then do another terraform plan on the root directory, but I get the message: Error: Backend initialization required, please run "terraform init". This is being run from the root directory of the provider. Worried that this will wipe the settings.

Hi @naidoob79,

Given you are seeing this in DevOps tasks, this sounds like the Dependency Lock File (.terraform.lock.hcl) - Configuration Language | Terraform | HashiCorp Developer possibly has not been checked in to/merged with the branch you are deploying from.

It is good practice to check this file in with your codebase, as it ensures that the versions of providers that modules have been developed and tested with as part of your project do not unexpectedly change when you deploy from pipeline agents that download providers each time.

The .terraform.lock.hcl file should be in the root of the root module (where you are running the terraform command from).

Confirm that the version of this file in your repo has the expected (upgraded) providers and version constraints. If it does not, you will need to check in an updated version after you have run your terraform init -upgrade.

If you are still seeing issues, perhaps you can provide the following:

  • The details of your required_providers blocks for the root module and the sub-modules from the files in the repo branch that you are deploying from. Example:
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.100.0"
    }
  }
  • The output from the terraform init section below, from the pipeline run where you are seeing the versions different from what you expect. Example:
Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v3.100.0

Terraform has been successfully initialized!
  • The details in the .terraform.lock.hcl file in the root module (example below) for all providers (the hashes information is not required) from the repo branch you are deploying from.
provider "registry.terraform.io/hashicorp/azurerm" {
  version     = "3.100.0"
  constraints = "3.100.0"
}

That should be sufficient to determine what Terraform is actually doing (as opposed to what you expect it to be doing) in the pipeline runs.