Terraform upgrade 0.13.6 to 0.14.6 not picking up

My main.tfstate file sits in an AWS S3 bucket, and it reports terraform_version = "0.13.6". When I run terraform plan and terraform apply with v0.13.6, everything looks good: no errors and no changes required. Then I try to upgrade to Terraform v0.14.6 and run terraform init; terraform plan; terraform apply. Again, no errors and no changes reported. But when I look at the main.tfstate file, I see that the version is still reporting 0.13.6. I ran this across multiple different code deployments using the same upgrade procedure; some upgraded successfully and now report 0.14.6, while others did not upgrade and still report 0.13.6.

  "version": 4,
  "terraform_version": "0.13.6",
  "serial": 6,
  "lineage": "9733d965-d4c8-4ef7-6c82-ca37078b35c2",
  "outputs": {
    "acl": {
      "value": "private",
      "type": "string"

Was there any resolution for this? I am seeing the same issue. I’m guessing it’s because nothing is actually being changed in the state file. I want to know whether this will cause any issues with upgrading in the future, as my resources that won’t upgrade to 0.14 may not be changed for a long time.

Nope, nobody provided a solution. I am still experiencing the same problem…

I’m facing the same issue utilising an Azure Storage account for my backend. I have around 300 deployments that I upgraded to 0.13.6 (from 0.12.28) without experiencing this issue. During the upgrade to 0.14.6, I noticed that 99% of these weren’t updating the state file (though oddly the occasional one would). I changed a tag within each deployment and re-ran the apply, which did update the state file for the remainder.

I’m now facing the same issue in trying to upgrade to 0.15.3.

@ryanps, the state file is only updated if there are changes to be written. What you are seeing is expected if the data in the state is unchanged.

Thanks for getting back to me. I was under the impression that no change was required to the resources to perform a Terraform version upgrade (as was the case upgrading to 0.13.6). For a very small minority of the upgrades to 0.14.6 (and initial testing with 0.15.3) these also upgraded the state file version with no changes to resources, so something doesn’t seem quite right in any case.

Terraform may update any of the structures within the state when upgrading, and if the resulting file differs in any way, the new version will be recorded in the state as well. These are all internal implementation details, so allowing Terraform to write a new state, whether it changes or not, is all that is required for upgrading.


@jbardin made the statement that “the state file is only updated if there are changes to be written. What you are seeing is expected if the data in the state is unchanged.” This does not make sense in the context of the documented upgrade process.

This statement would be correct for a “normal” terraform apply. However, the Upgrading to Terraform v0.14 – Before You Upgrade documentation implies otherwise.

When upgrading between major releases, we always recommend ensuring that you can run terraform plan and see no proposed changes on the previous version first, because otherwise pending changes can add additional unknowns into the upgrade process. Terraform v0.14 has the additional requirement of running terraform apply, as described above, because that allows Terraform v0.13 to commit the result of its automatic state format upgrades. [Emphasis added.]

The term “automatic state format upgrades” suggests to me a use case different from the ordinary terraform apply that updates infrastructure to match the definition. By my reading of the document, Terraform v0.14 is supposed to always upgrade a v0.13 state file to the v0.14 format when terraform apply is run.

The opening sentence of the documentation quoted above recommends having “clean” state before using terraform apply to upgrade state to a newer version. This further suggests to me that the upgrade is intended to take place independently of whether state changes are produced by the terraform apply.

Surely HashiCorp doesn’t envision an “automatic state format upgrade” that requires the end user to fiddle with resource definitions just to trigger a state file rewrite to the new version.

I’ve recently migrated a repository with ~200 infrastructure directories, all of which were “clean” before I ran the terraform apply to v0.14. However, after going through the motions, it’s apparent that some 75 of the state files are “stuck” at 0.13. There seems to be no particular pattern to this odd behavior. Interestingly, even later versions of Terraform (v1.0.5) can read this v0.13 format file.

Hi @JonRoma,

You are emphasizing the “always” portion here, which is still correct. The upgrade process does always happen, but if there are no changes to the state, no new state is written, which leaves the last compatible version in the terraform_version field. These state versions are all forward compatible, with no major differences in structure, so there is no problem having 0.13 shown in the state file when using a newer version of the CLI.

@jbardin, thank you for the insight you’ve added in response to my comments.

Your explanation was comforting, and helped explain the anomalies I observed in a migration (sequentially from v0.12 to v0.13 to v0.14, and thence to v1.0) for two GitHub repos containing 200+ infrastructure directories apiece.

Having said that, the decision to leave the minimum compatible version in the terraform_version field in the saved state strikes me as odd – at least, it violates the principle of least surprise.

This design choice is a bit confounding to anyone like me who has several hundred directories to migrate and wants to audit the migration. One would think that this behavior would at least have been documented: When I run the command to upgrade state to v1.0, I would expect the state to report that version!
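For an audit of this kind, tallying the versions recorded in the raw state files can be scripted. The following is a minimal Python sketch, not an official tool: it assumes the raw *.tfstate files have already been copied locally (e.g. via aws s3 cp into a local directory), and the directory layout is hypothetical.

```python
# Hypothetical audit sketch: tally the terraform_version recorded in each
# locally pulled state file. Paths and layout are assumptions, not a
# documented Terraform workflow.
import json
from collections import Counter
from pathlib import Path

def audit_state_versions(root):
    """Count how many local *.tfstate files record each terraform_version."""
    counts = Counter()
    for path in Path(root).rglob("*.tfstate"):
        state = json.loads(path.read_text())
        counts[state.get("terraform_version", "unknown")] += 1
    return counts
```

Running this over the pulled state files would show at a glance how many deployments still record 0.13.x versus 0.14.x.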

Interestingly, I had previously tried two different techniques to retrieve the state version, and was a bit puzzled that they yielded different results.

$ terragrunt state pull | jq .terraform_version
$ aws s3 cp s3://remote_state_path/terraform.tfstate - | jq .terraform_version

The former method didn’t seem trustworthy because the reported result varied depending on which Terraform version I used to do the state pull, so I opted to use the latter command in order to get the state from the horse’s mouth, as it were.

Given your explanation, I would certainly have been less concerned and confused had I made the opposite choice. At least state upgrades will be less tedious going forward. Thanks.

If it helps to clarify further, the terraform_version field is an internal implementation detail that shows what is essentially the minimum compatible version that can handle the state appropriately, but it’s not intended to be exposed for external use beyond that. If Terraform can read the state, then the state is structurally compatible with the running Terraform version. This is unrelated to whether any updates needed to be done to the resources within the configuration.

Reading the state automatically updates terraform_version to match the running version, which is stored when any state is written, hence state pull will show whatever version you are running.

Unfortunately there is no direct indication of which major terraform release the resources in the state were last updated by. This would normally be tracked indirectly via the required_version in the config corresponding to the state.
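That indirect tracking can be checked mechanically. Here is a rough Python sketch that scans configurations for their required_version constraints; the regex is a naive assumption, and a real audit should parse HCL with a proper parser (for example the python-hcl2 package).

```python
# Naive sketch: map each *.tf file to the required_version constraint it
# declares, e.g.  required_version = ">= 0.14"  inside a terraform block.
# A regex scan is an assumption for illustration; it ignores comments and
# other HCL subtleties.
import re
from pathlib import Path

REQUIRED_VERSION_RE = re.compile(r'required_version\s*=\s*"([^"]+)"')

def find_required_versions(root):
    """Return {tf_file_path: required_version constraint} for files that declare one."""
    found = {}
    for path in Path(root).rglob("*.tf"):
        match = REQUIRED_VERSION_RE.search(path.read_text())
        if match:
            found[str(path)] = match.group(1)
    return found
```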


@jbardin, for those of us who have used Terraform since its early days, things were a bit rough around the edges, so it’s sometimes been necessary to poke around in the internals. I’m much comforted by your findings, even though the behavior was not necessarily intuitive.

I subsequently found the note at the bottom of the terraform state pull documentation helpful. Its unequivocal statement is what I sought elsewhere and didn’t previously find.
