Upgrade from 0.11 to latest

Hi There,

I am planning to upgrade from 0.11 to latest. However, I had to upgrade it to 0.13 first before upgrading to latest. I have followed the steps in the documentation for the upgrade, and I am still getting the following error when I run a plan.

Error: Unable to Read Previously Saved State for UpgradeResourceState

There was an error reading the saved resource state using the current resource
schema.

If this resource state was last refreshed with Terraform CLI 0.11 and earlier,
it must be refreshed or applied with an older provider version first. If you
manually modified the resource state, you will need to manually modify it to
match the current resource schema. Otherwise, please report this to the
provider developer:

flatmap states cannot be unmarshaled, only states written by Terraform 0.12
and higher can be unmarshaled

Appreciate any feedback

I’m sorry, but you haven’t. You’ve omitted the mandatory step of upgrading from 0.11 to 0.12, and running an apply, before upgrading to 0.13.

Hi @maxb,

Thanks for sharing it.

So the upgrade path is 0.11 → 0.12 → 0.13 → latest?

Yes, or at least in my experience that is fine.

HashiCorp’s docs make the claim that you need to go via 0.14 as well, but I skipped it, and I have not found any reason to justify why it should be needed.


Unfortunately, because it’s been so long since Terraform v0.12 was released, some other things have changed in the greater ecosystem that impact the upgrade process.

In this particular case, it seems like you are using a relatively new version of one of the providers your configuration depends on, and that new version no longer contains the support code necessary to automatically upgrade the provider-specific parts of the old Terraform state format to the new structure, because from the provider developer’s perspective that upgrade code is no longer needed for modern usage.

To proceed here I’d suggest looking up the release date for whatever version of Terraform CLI you used to see this error, and then look in the provider’s changelog to see which provider version was current at that time. Change your configuration to require exactly that version and try the process again.
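For example, an exact version pin in the provider block (valid in v0.12-era syntax) could look like the following sketch, where the version number is only a placeholder for whatever you find in the changelog:

provider "aws" {
  # Hypothetical pin: the exact release that was current when your
  # Terraform CLI version shipped.
  version = "= 2.20.0"
}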

Unfortunately software tends to be written for and tested in the world that existed at the time of its development, and most software is dependent on other surrounding systems such as operating systems, plugins, and CPU architectures. If you are running old software then it’ll typically work best to be roughly consistent about the age of everything involved, because old software cannot anticipate changes made to other systems in its environment at a later date.


Hi @maxb @apparentlymart ,

I am not sure what I am doing wrong. Can you let me know if I am missing anything in the following steps?

  1. Created a new branch from master (tf code is in 0.11.15)
  2. Ran terraform plan and terraform 0.12checklist while the Terraform version was 0.11.15; everything was OK, no errors
  3. Switched to Terraform version 0.12.31
  4. Ran terraform 0.12upgrade; the configuration was updated with no errors
  5. Updated the required_version to 0.12.31 in main.tf
  6. Ran terraform init -reconfigure and it initialized without any errors
  7. Then ran terraform plan; no errors, and the plan was complete: “No changes. Infrastructure is up-to-date.”
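In other words, the command sequence was roughly as follows (a sketch; it assumes switching Terraform versions by swapping the terraform binary, e.g. via tfenv):

# with the Terraform 0.11.15 binary
terraform plan
terraform 0.12checklist

# after switching to the Terraform 0.12.31 binary
terraform 0.12upgrade          # rewrites the configuration to 0.12 syntax
terraform init -reconfigure
terraform plan                 # reported “No changes. Infrastructure is up-to-date.”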

Then comes the issue: when I delete the .terraform/ directory and run terraform init and terraform plan, I get the following error:

Error: Unable to Read Previously Saved State for UpgradeResourceState

There was an error reading the saved resource state using the current resource
schema.

If this resource state was last refreshed with Terraform CLI 0.11 and earlier,
it must be refreshed or applied with an older provider version first. If you
manually modified the resource state, you will need to manually modify it to
match the current resource schema. Otherwise, please report this to the
provider developer:

flatmap states cannot be unmarshaled, only states written by Terraform 0.12
and higher can be unmarshaled

Am I doing anything wrong?

Appreciate your response.

Hi @waqifiman,

What you’ve seen here is the same problem I described in my previous reply: you are using a version of a provider that is too new and the provider development team have removed the logic to automatically upgrade the state from v0.11 format to modern format, because the v0.11 format has been obsolete for many years.

To proceed you will need to find an older version of each of the providers you are using which is of a similar age to the Terraform v0.12.x release you used to perform the upgrade, and then specify those provider versions as constraints in your configuration.

I don’t know which provider versions are appropriate to use. If you can share the output of terraform providers when you run it in your initialized working directory then I may be able to suggest a suitable version for each provider you are using by referring to each provider’s release history.

Thanks for the response @apparentlymart,

Please find the output below.

# terraform version
Terraform v0.12.31
+ provider.aws v2.70.4
+ provider.local v2.4.0
+ provider.random v3.5.1
+ provider.template v2.2.0
+ provider.vault v2.2.0
# terraform providers
.
├── provider.aws ~> 2.4
├── provider.aws.xxx ~> 2.4
├── provider.aws.xxx
├── provider.random
├── provider.vault 2.2.0
├── module.<postgres-module>
│   ├── module.db_instance
│   │   └── provider.aws (inherited)
│   ├── module.db_option_group
│   │   └── provider.aws (inherited)
│   ├── module.db_parameter_group
│   │   └── provider.aws (inherited)
│   └── module.db_subnet_group
│       └── provider.aws (inherited)
terraform-aws-modules/rds/aws --> 2.3.0

Hi @waqifiman,

My methodology here is to find, for each of the providers you are using, a version that was current in mid-2019, because that’s the timeframe when most folks were dealing with their v0.12 upgrades and so when the providers are most likely to have support for upgrading.

The versions I found were:

  • aws 2.20.0
  • random 2.2.0
  • vault 2.2.0

I don’t have any v0.11-based states using these providers to test with, so the above are just a best guess from me. If you find problems with these then you could try earlier versions by referring to each provider’s changelog to see when each one introduced Terraform v0.12 support; the release when that was added is the earliest possible version to select, but using one from a month or two later will make it more likely that any weird migration-related bugs were fixed.

If you fix your provider requirements to exactly these versions until you are finished upgrading to Terraform v0.12 then I think this will work better. You can then remove the rigid provider requirements as part of upgrading to Terraform v0.13, since there were no more provider-specific upgrade behaviors in Terraform v0.13.
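In v0.12 syntax, one way to express those exact pins is a required_providers block; this is just a sketch using the versions listed above:

terraform {
  required_providers {
    aws    = "= 2.20.0"
    random = "= 2.2.0"
    vault  = "= 2.2.0"
  }
}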

To add to @apparentlymart’s response:

It’s probably a good strategy to start by looking at the providers for which you’re using a newer major version than the major version from mid-2019.

In particular, the local provider v2.0.0 and random provider v3.0.0 both say:

Upgrade to version 2 of the Terraform Plugin SDK, which drops support for Terraform 0.11.

in their changelogs, which seems potentially relevant, so the first thing to try should be to go back to the last local v1.x.x and last random v2.x.x.

I suspect that might be enough, as the versions of the aws, template, and vault providers you are using are already quite vintage; but if not, also try decreasing those versions as per @apparentlymart’s message.
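As a sketch, pinning back to those major versions in v0.12 syntax could look like this (~> 1.0 allows any 1.x release, ~> 2.0 any 2.x release):

terraform {
  required_providers {
    # Last major versions that, per the changelogs above, still
    # support Terraform 0.11 and its state format.
    local  = "~> 1.0"
    random = "~> 2.0"
  }
}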

Oh yes, sorry, I missed the “local” and “template” providers on my first read because I was reading the terraform providers output instead of the terraform version output.

But @maxb is right that some of the providers do document in their changelogs exactly when they removed the Terraform v0.11 support, which is helpful in this situation because that also implies removing support for upgrading from v0.11, since the upgrade process requires running some code that originated in Terraform v0.11.

Thanks @apparentlymart and @maxb,

I will make the changes and let you know how it went

Thanks @apparentlymart and @maxb,

Really appreciate your feedback; it helped me resolve the issue.

The main issue was with the random provider; after setting it to 2.2.0 it looks good.

I had some changes that I needed to apply due to the terraform-aws-modules/rds/aws module that I use.

Following is the terraform providers output:

.
├── provider.aws ~> 2.4
├── provider.aws.xxx
├── provider.aws.xxx
├── provider.random 2.2.0
├── provider.vault 2.2.0
├── module.postgres-module
│   ├── provider.aws >= 2.49
│   ├── module.db_instance
│   │   └── provider.aws >= 2.49
│   ├── module.db_option_group
│   │   └── provider.aws >= 2.49
│   ├── module.db_parameter_group
│   │   └── provider.aws >= 2.49
│   └── module.db_subnet_group
│       └── provider.aws >= 2.49