Terraform AWS Provider Version Conflict and State Issues

Terraform Version

Terraform v0.13.0.

Terraform Configuration Files

terraform {
  required_providers {
    aws = {
      version = "~> 4.65.0"
    }
  }
}

Debug Output

Initializing provider plugins...

  • Finding hashicorp/template versions matching ">= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0, >= 2.1.0"...

  • Finding hashicorp/random versions matching ">= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1, >= 2.2.1"...

  • Finding hashicorp/aws versions matching "~> 4.65.0, ~> 5.0, >= 2.47.0, >= 2.47.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, ~> 5.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, >= 2.47.0, >= 2.47.0"...

  • Installing hashicorp/template v2.2.0...

  • Installed hashicorp/template v2.2.0 (self-signed, key ID 34365D9472D7468F)

  • Installing hashicorp/random v3.7.2...

  • Installed hashicorp/random v3.7.2 (self-signed, key ID 34365D9472D7468F)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/plugins/signing.html

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints ~> 4.65.0, ~> 5.0, >= 2.47.0,
>= 2.47.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0,
>= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0, ~> 5.0, ~> 5.0, >= 2.47.0, >= 2.47.0,
~> 5.0, ~> 5.0, >= 2.47.0, ~> 5.0, >= 2.47.0, >= 2.47.0, ~> 5.0, ~> 5.0,
>= 2.47.0, >= 2.47.0

Cleaning up file based variables
00:01
ERROR: Job failed: exit status 1

Expected Behavior

terraform init should select a single AWS provider version that satisfies all declared constraints, and terraform plan should decode the existing state without errors and without proposing to destroy or recreate resources that are already deployed.

Actual Behavior

Error: Invalid resource instance data in state
on .terraform/modules/vpc_vas/vpc/main.tf line 3:
3: resource "aws_vpc" "vpc_vc01" {
Instance module.vpc_vas.aws_vpc.vpc_vc01 data could not be decoded from the
state: unsupported attribute "enable_classiclink".
Cleaning up file based variables
00:00
ERROR: Job failed: exit status 1

Steps to Reproduce

  1. Removed .terraform and .terraform.lock.hcl

  2. Ran terraform init -upgrade

  3. Removed resources from the state

  4. Re-imported the resources

Additional Context

Hello Team,

I am using Terraform v0.13.0 for my project and encountered the following issue during initialization.

Error during terraform init:

Initializing provider plugins...
Terraform failed to retrieve the list of available versions for provider hashicorp/aws due to conflicting version constraints:

  • ~> 4.65.0

  • ~> 5.0

  • >= 2.47.0

As a result, Terraform could not find a single AWS provider version that satisfies all constraints, and the job failed.
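
For context, Terraform intersects the provider requirements declared in the root module with those declared by every child module, so the ~> 5.0 and >= 2.47.0 entries in the error presumably come from the modules referenced by this configuration rather than from the root block. As a rough illustration only (the file and module are placeholders, not my actual code), a child module pinned like the following can never agree with a root constraint of ~> 4.65.0, because no single release satisfies both:

# versions.tf inside a child module (illustrative placeholder)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # requires a 5.x release, which cannot also satisfy ~> 4.65.0
    }
  }
}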

Initially, my AWS provider configuration was:

terraform {
  required_providers {
    aws = {
      version = "~> 4.65.0"
    }
  }
}

Later, I upgraded the AWS provider to:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}
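
For what it's worth, I also realise that >= 5.0 is open-ended and would accept a future 6.x major release as well; if the intent is to stay within the 5.x series, a pessimistic constraint along these lines is presumably safer (a sketch only, not something I have applied yet):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # allows any 5.x release but excludes 6.0 and later
    }
  }
}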

After upgrading, I encountered the following error:

Error: Invalid resource instance data in state
unsupported attribute "enable_classiclink"
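
My understanding, which may well be wrong, is that enable_classiclink was removed from the aws_vpc resource in the 5.x provider as part of the EC2-Classic retirement, so the value recorded in state by the 4.x provider can no longer be decoded by 5.x. If the expected path is to move to the newest 4.x release first and let a plan/apply refresh the state before raising the constraint to 5.x, I assume the interim pin would look roughly like this (the exact patch version is my guess):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.67.0"   # assumed newest 4.x series; refresh state here before moving to ~> 5.0
    }
  }
}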

To resolve this, I performed the following workaround:

  • Removed .terraform and .terraform.lock.hcl

  • Ran terraform init -upgrade

  • Removed resources from the state

  • Re-imported the resources (commands sketched below)
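
For clarity, the state removal and re-import in the last two steps were done per resource address with the standard commands, roughly as follows; the VPC ID shown is only a placeholder:

terraform state rm module.vpc_vas.aws_vpc.vpc_vc01
terraform import module.vpc_vas.aws_vpc.vpc_vc01 vpc-0123456789abcdef0   # placeholder VPC ID, not the real one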

After this workaround, terraform plan shows that Terraform wants to destroy and recreate almost all resources:

Plan: 330 to add, 29 to change, 329 to destroy

I do not understand why Terraform is attempting to recreate all resources after the provider upgrade.
Please help me understand the root cause and advise on the correct way to resolve this issue without recreating the entire infrastructure.

Thank you.

Hi James Bardin,

Thank you for your response.

Let me clarify my environment and the steps I followed, as I believe some important context may be missing.

I am running Terraform in a self-managed GitLab setup, where:

  • GitLab is used for source code management
  • Terraform is executed via GitLab Runners
  • S3 backend is used to store the Terraform state file (backend block sketched below)
  • DynamoDB is used for state locking
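
For reference, the backend configuration behind this is the standard S3 + DynamoDB locking block; the bucket, key, region, and table names below are placeholders rather than the real values:

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # placeholder bucket name
    key            = "prod/terraform.tfstate"    # placeholder state file key
    region         = "eu-west-1"                 # placeholder region
    dynamodb_table = "terraform-locks"           # placeholder lock table name
    encrypt        = true
  }
}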

Before applying any changes to production, I followed a standard promotion approach:

  1. I first tested the AWS provider upgrade in a non-production (test) environment.

  2. In test, I performed the following steps:

    • Removed .terraform and .terraform.lock.hcl
    • Ran terraform init -upgrade
    • Removed affected resources from the state
    • Re-imported the resources
  3. After these steps, terraform plan and apply behaved as expected in the test environment, with no large-scale resource recreation.

Only after validating this successfully in test did I implement the same procedure in the production project.

However, in production—despite using the same backend (S3 + DynamoDB), workflow, and codebase—Terraform now produces the following plan:

Plan: 330 to add, 29 to change, 329 to destroy

This is the main point of concern, as I do not understand why Terraform is now treating almost all resources as needing replacement, given that:

  • The infrastructure already exists
  • The resources were re-imported
  • The same process worked correctly in the test environment

I understand your point regarding the very old Terraform version (v0.13.0), and I agree that upgrading Terraform itself is likely a necessary step. However, I would like to better understand:

  • The root cause of why the provider upgrade combined with state re-import leads to such widespread diffs in production
  • Whether this behavior is expected due to schema/state incompatibilities between AWS provider v4 and v5 when using Terraform 0.13
  • The correct and safest migration path to avoid full resource recreation, especially in a production environment

Any guidance on how to stabilize the state and provider versions—or a recommended step-by-step upgrade path—would be greatly appreciated.

Thank you for your time and support.

Best regards,
Prodipto