Error: Provider configuration not present in version 0.14.11

Hi, after upgrading from Terraform 0.13.7 to 0.14.11, I get the following error message while executing terraform plan. In version 0.13.7, this error didn't appear.

Error: Provider configuration not present
To work with module.qubole-cluster-omega.restapi_object.qubole-cluster (orphan) its original provider configuration at module.qubole-cluster-omega.provider["registry.terraform.io/fmontezuma/restapi"]
is required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy module.qubole-cluster-omega.restapi_object.qubole-cluster (orphan), after which you can remove the provider configuration again.

Each of the error messages relates to an existing resource. (For this example I am simplifying to just one resource.)

To upgrade to version 0.13.7 I had to execute the following command:

terraform state replace-provider registry.terraform.io/-/restapi registry.terraform.io/fmontezuma/restapi
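For reference, a sketch of that replacement step, with an optional preliminary check (the -auto-approve flag skips the interactive confirmation prompt; omit it to review the proposed change first):

```shell
# Check which providers the state currently records:
terraform providers

# Rewrite the legacy unqualified provider address to the
# fully-qualified registry address recorded after the 0.13 upgrade.
terraform state replace-provider -auto-approve \
  registry.terraform.io/-/restapi \
  registry.terraform.io/fmontezuma/restapi
```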

The structure of my Terraform directories is the following:

local-modules/
    qubole-api/
        main.tf
        default.tpl
        outputs.tf
        variables.tf
        versions.tf
hde/
    qubole-api-cluster/
        dev/
            qubole-clusters.tf

In local-modules/qubole-api/main.tf, the following is defined:

provider "restapi" {
  uri = "https://us.qubole.com/api/v1.3"
  headers = {
    "X-AUTH-TOKEN" = var.qubole_api_key
    "Content-Type" = "application/json"
    "Accept"       = "application/json"
  }
  debug                = true
  write_returns_object = true
}
locals {
  qubole-cluster = templatefile("${path.module}/${var.template}.tpl", {
    vpc_id                = var.vpc_id
    aws_region            = var.aws_region
    role_instance_profile = var.role_instance_profile

    [snip]
  })
}
resource "restapi_object" "qubole-cluster" {
  path  = "/clusters"
  debug = true
  data  = local.qubole-cluster
}

In local-modules/qubole-api/versions.tf:

terraform {
  required_providers {
    restapi = {
      source  = "fmontezuma/restapi"
      version = ">=1.9.3"
    }
  }
}

In hde/qubole-api-cluster/dev/qubole-clusters.tf:

module "qubole-cluster-omega" {
  source      = "../../../../local-modules/qubole-api"
  environment = "dev"

  [snip]
}

How can I fix this error? Thanks

Hi @scuellar,

Can you run terraform providers in the same directory where you've run terraform plan, and share the output? It seems like not all of your configuration agrees on which provider "restapi" refers to, and the terraform providers command should hopefully show which provider each of your modules is referring to.

Sure @apparentlymart , here it is the output:

Edit: I ran the command in the wrong dir.

This is the correct one:

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws]
├── provider[terraform.io/builtin/terraform]
├── module.qubole-cluster-omega
│   ├── provider[registry.terraform.io/fmontezuma/restapi] >= 1.9.3
│   ├── module.default-tags
│   └── module.qubole_cluster_secrets
│       ├── provider[registry.terraform.io/hashicorp/aws]
│       └── provider[registry.terraform.io/hashicorp/external]
├── module.tfstate
│   ├── provider[registry.terraform.io/hashicorp/aws]
│   ├── provider[registry.terraform.io/hashicorp/local]
│   ├── provider[registry.terraform.io/hashicorp/external]
│   ├── provider[registry.terraform.io/hashicorp/template]
│   ├── module.s3-policies-tfstate
│   │   ├── provider[registry.terraform.io/hashicorp/aws]
│   │   ├── provider[registry.terraform.io/hashicorp/null]
│   │   └── module.constants
│   │       └── provider[registry.terraform.io/hashicorp/aws]
│   ├── module.s3-tfstate
│   │   ├── provider[registry.terraform.io/hashicorp/aws]
│   │   └── module.constants
│   │       └── provider[registry.terraform.io/hashicorp/aws]
│   └── module.constants
│       └── provider[registry.terraform.io/hashicorp/aws]

Providers required by state:

    provider[registry.terraform.io/hashicorp/aws]
    provider[registry.terraform.io/hashicorp/null]
    provider[terraform.io/builtin/terraform]
    provider[registry.terraform.io/hashicorp/local]
    provider[registry.terraform.io/fmontezuma/restapi]
    provider[registry.terraform.io/hashicorp/external]
    provider[registry.terraform.io/hashicorp/template]

Thanks for that additional information, @scuellar!

It's strange to see the error you reported when we can see in the terraform providers output that the correct provider configuration is listed under the module.qubole-cluster-omega module.

The other thing I find surprising here is that Terraform seems to have detected that module.qubole-cluster-omega.restapi_object.qubole-cluster is removed from your configuration (that's what "orphan" means in this context), and yet I can see the resource "restapi_object" "qubole-cluster" declaration block in the example you shared.

You mentioned that you accidentally ran terraform providers in the wrong directory at first. Is it possible that you were also running terraform plan in the wrong directory? That could potentially cause the situation you saw here, because it might appear to Terraform that the module as a whole doesn’t exist anymore and thus Terraform must plan to destroy all of the objects declared within it.

Thanks for your answer @apparentlymart. No, I can confirm that terraform plan is being executed in the correct directory.

It's an odd situation and I don't know where else to look. The terraform plan does not report that it will destroy anything; it just ends with the errors and releases the state lock.

Hi @scuellar,

The error message you saw here implies that Terraform thinks it needs to delete the object, but indeed you can’t see the final plan describing that because Terraform needs to configure the fmontezuma/restapi provider in order to create the plan, and thus it’s failing partway through the planning operation.

Unfortunately I’m not sure what to ask next in order to understand why Terraform thinks that the provider configuration and resource are both removed from the configuration. I think in order to understand better what’s happening the best path would be to run terraform plan with the environment variable TF_LOG=trace set, which will produce a very detailed log of Terraform’s internal behavior during the planning operation.

The log mentions a lot of Terraform implementation details though, since it’s there primarily to help the Terraform team with debugging. If you can create and share a GitHub Gist with all of the contents of that log (because these trace logs are typically too long to share directly in the forum) then I will try to interpret it. Note though that the log will imply lots of details about the full structure of your configuration, including parts that you haven’t shared with me yet, so I’d encourage you to review the output yourself before sharing it to exclude any parts which disclose details that you’d rather not share publicly.
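A sketch of one way to capture that log (the output file name here is arbitrary; Terraform writes the log to stderr):

```shell
# Run the plan with trace-level logging enabled and redirect the log,
# which goes to stderr, into a file suitable for sharing as a Gist.
# -no-color keeps ANSI escape codes out of the captured output.
TF_LOG=trace terraform plan -no-color 2> plan-trace.log
```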


Thank you @apparentlymart, I will take a look at the trace and let you know of any new findings.