Terraform AWS Provider Version Conflict and State Issues

Terraform Version

Terraform v0.13.0.

Terraform Configuration Files

terraform {
  required_providers {
    aws = {
      version = "~> 4.65.0"
    }
  }
}

Debug Output

Initializing provider plugins…

  • Finding hashicorp/template versions matching ">= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0"...

  • Finding hashicorp/random versions matching ">= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1"...

  • Finding hashicorp/aws versions matching "~> 4.65.0, ~> 5.0, >= 2.47.0, >= 2.47.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, ~> 5.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, >= 2.47.0, >= 2.47.0"...

  • Installing hashicorp/template v2.2.0…

  • Installed hashicorp/template v2.2.0 (self-signed, key ID 34365D9472D7468F)

  • Installing hashicorp/random v3.7.2…

  • Installed hashicorp/random v3.7.2 (self-signed, key ID 34365D9472D7468F)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html

Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints ~> 4.65.0, ~> 5.0, >=
2.47.0, >= 2.47.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, >= 2.47.0, ~> 5.0,
~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, ~> 5.0, ~> 5.0, >= 2.47.0, >=
2.47.0, ~> 5.0, ~> 5.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~>
5.0, >= 2.47.0, >= 2.47.0

Cleaning up file based variables
00:01
ERROR: Job failed: exit status 1

Expected Behavior

terraform init and terraform plan should complete successfully, with a single AWS provider version selected and no errors decoding the existing state.

Actual Behavior

Error: Invalid resource instance data in state
on .terraform/modules/vpc_vas/vpc/main.tf line 3:
3: resource "aws_vpc" "vpc_vc01" {
Instance module.vpc_vas.aws_vpc.vpc_vc01 data could not be decoded from the
state: unsupported attribute "enable_classiclink".
Cleaning up file based variables
00:00
ERROR: Job failed: exit status 1

Steps to Reproduce

  1. Removed .terraform and .terraform.lock.hcl

  2. Ran terraform init -upgrade

  3. Removed resources from the state

  4. Re-imported the resources

Additional Context

Hello Team,

I am using Terraform v0.13.0 for my project and encountered the following issue during initialization.

Error during terraform init:

Initializing provider plugins…
Terraform failed to retrieve the list of available versions for provider hashicorp/aws due to conflicting version constraints:

  • ~> 4.65.0

  • ~> 5.0

  • >= 2.47.0

As a result, Terraform could not find a single AWS provider version that satisfies all constraints, and the job failed.
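
For illustration, the conflict can be reproduced with a root module and a child module that pin different major versions. Terraform intersects the required_providers constraints from the root module and from every module it calls, so a root pin of ~> 4.65.0 cannot be satisfied at the same time as a module requiring ~> 5.0 (the module names and file layout below are illustrative, not my real code):

  # root module
  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = "~> 4.65.0"
      }
    }
  }

  # modules/vpc/versions.tf (hypothetical child module)
  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = "~> 5.0"   # conflicts with the root pin of ~> 4.65.0
      }
    }
  }

Aligning every module on one compatible constraint (either keeping everything on ~> 4.65 or moving everything to ~> 5.0) is what lets terraform init select a single provider version.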

Initially, my AWS provider configuration was:

terraform {
  required_providers {
    aws = {
      version = "~> 4.65.0"
    }
  }
}

Later, I upgraded the AWS provider to:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

After upgrading, I encountered the following error:

Error: Invalid resource instance data in state
unsupported attribute "enable_classiclink"

To resolve this, I performed the following workaround (a rough command-line sketch follows the list):

  • Removed .terraform and .terraform.lock.hcl

  • Ran terraform init -upgrade

  • Removed resources from the state

  • Re-imported the resources
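
The commands were roughly the following, using the VPC resource from the error message as an example (the VPC ID here is a placeholder, not the real one):

  rm -rf .terraform .terraform.lock.hcl
  terraform init -upgrade

  # repeated for each affected resource; the ID is a placeholder
  terraform state rm module.vpc_vas.aws_vpc.vpc_vc01
  terraform import module.vpc_vas.aws_vpc.vpc_vc01 vpc-0123456789abcdef0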

After this workaround, terraform plan shows that Terraform wants to destroy and recreate almost all resources:

Plan: 330 to add, 29 to change, 329 to destroy

I do not understand why Terraform is attempting to recreate all resources after the provider upgrade.
Please help me understand the root cause and advise on the correct way to resolve this issue without recreating the entire infrastructure.

Thank you.

Hi James Bardin,

Thank you for your response.

Let me clarify my environment and the steps I followed, as I believe some important context may be missing.

I am running Terraform in a self-managed GitLab setup, where:

  • GitLab is used for source code management
  • Terraform is executed via GitLab Runners
  • S3 backend is used to store the Terraform state file
  • DynamoDB is used for state locking (a representative backend block is sketched after this list)
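
For reference, the backend configuration looks roughly like this (bucket, key, region, and table names below are placeholders, not the real values):

  terraform {
    backend "s3" {
      bucket         = "example-terraform-state"
      key            = "prod/terraform.tfstate"
      region         = "eu-west-1"
      dynamodb_table = "terraform-locks"
      encrypt        = true
    }
  }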

Before applying any changes to production, I followed a standard promotion approach:

  1. I first tested the AWS provider upgrade in a non-production (test) environment.

  2. In test, I performed the following steps:

    • Removed .terraform and .terraform.lock.hcl
    • Ran terraform init -upgrade
    • Removed affected resources from the state
    • Re-imported the resources
  3. After these steps, terraform plan and apply behaved as expected in the test environment, with no large-scale resource recreation.

Only after validating this successfully in test did I implement the same procedure in the production project.

However, in production—despite using the same backend (S3 + DynamoDB), workflow, and codebase—Terraform now produces the following plan:

Plan: 330 to add, 29 to change, 329 to destroy

This is the main point of concern, as I do not understand why Terraform is now treating almost all resources as needing replacement, given that:

  • The infrastructure already exists
  • The resources were re-imported
  • The same process worked correctly in the test environment

I understand your point regarding the very old Terraform version (v0.13.0), and I agree that upgrading Terraform itself is likely a necessary step. However, I would like to better understand:

  • The root cause of why the provider upgrade combined with state re-import leads to such widespread diffs in production
  • Whether this behavior is expected due to schema/state incompatibilities between AWS provider v4 and v5 when using Terraform 0.13
  • The correct and safest migration path to avoid full resource recreation, especially in a production environment

Any guidance on how to stabilize the state and provider versions—or a recommended step-by-step upgrade path—would be greatly appreciated.

Thank you for your time and support.

Best regards,
Prodipto

You have changed some unknown number of things in your environment, so it’s really not possible to guess what is going wrong here. You need to narrow down the scope, and change one piece at a time. Changing all modules, modifying the state, and upgrading the provider all at the same time makes it impossible to troubleshoot.

As far as we can tell, deleting the module sources and then upgrading could have changed all of the resource configurations. You did mention that you couldn't upgrade the provider because of conflicting version constraints, so something was already in place to try to prevent exactly this. If you need to upgrade a module, upgrade that one module first and confirm you still have a stable configuration. Again, the key is to change one thing at a time whenever possible.

Then you can upgrade the provider, preferably without making any configuration changes. This will be made easier with a current Terraform release, since there are likely multiple old bugs showing up here. Once you’ve removed all the other variables from the situation, the plan output will be more useful for tracking down what is going wrong.

If the unsupported attribute error is all that ends up blocking you, you could try editing the JSON state directly, removing only that attribute from the instance object. Modern Terraform handles this transparently, but such an early, unpatched release may not yet do so.
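
Roughly, with a remote backend that would look like the following (a sketch only; take a backup of the remote state first, and the exact edit depends on your state contents):

  terraform state pull > state.json
  cp state.json state.json.backup
  # in state.json, delete the "enable_classiclink" attribute from the affected
  # aws_vpc instance(s) and increment the top-level "serial" value
  terraform state push state.json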

Hi James,

Thank you for the clarification — that helps.

I’d like to validate my understanding and ask one focused question about recovery.

In my setup:

  • Terraform state is securely stored in an S3 backend (with DynamoDB locking)

  • All Terraform source code is versioned in a self-managed GitLab instance

Given that both the state file and the source code history are preserved, I want to understand whether it should be possible to recover the project to a stable state (i.e., a plan with no adds and no destroys) by:

  1. Restoring a previously working commit of the Terraform configuration

  2. Pinning the same module and provider versions that were used at that time

  3. Re-initializing Terraform without upgrading providers or modules (a rough sketch of this sequence follows)
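
For concreteness, the sequence I have in mind looks roughly like this (the commit reference is a placeholder, and this assumes the remote state still matches that older configuration):

  git checkout <previously-working-commit> -- .
  rm -rf .terraform
  terraform init    # no -upgrade, so the pinned provider and module versions are used
  terraform plan    # goal: "No changes. Infrastructure is up-to-date."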

In other words, is it correct that the large plan proposing:

Plan: 330 to add, 29 to change, 329 to destroy

is not an unavoidable outcome, but rather the result of multiple upgrades and state/configuration changes occurring at the same time?

I want to confirm that, with the state intact and configuration versions aligned, it should be possible to return to a no-op plan before attempting any incremental upgrades.

Thanks again for your guidance.

Best regards,
Prodipto

Yes, if you can roll back the configuration and state to the point where you previously could plan with no changes, then it follows that you will again be able to run a plan with no changes.
I can’t say what is unavoidable in your case without working with the same data, but the replacement of most of the resources should not be required.

I would strongly suggest working on the Terraform update first, since having a current release with the most predictable behavior will probably make the other updates easier. The upgrade guides have information on changes you may need to make. You should move to 0.14 next, but the goal is to get past v1.0, where compatibility is broadest, and then on to the latest releases. As always, check the upgrade guides and CHANGELOGs for issues you might encounter; for example, there were some changes to the S3 backend in the last year or two.
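
As a rough outline only (the exact point releases, and the use of tfenv to switch between versions, are assumptions on my part, not a prescribed procedure):

  tfenv install 0.14.11 && tfenv use 0.14.11
  terraform init && terraform plan    # aim for a clean, no-change plan before moving on
  tfenv install 1.0.11 && tfenv use 1.0.11
  terraform init && terraform plan
  # continue to the latest 1.x release, reviewing the upgrade guide and
  # CHANGELOG (including the 0.15/1.0 guides) at each step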