Terraform version upgrades via import block

Scenario:

Let’s say I have created some resources in AWS using Terraform version 1.1.0.
Now, in another environment, I’ve set up version 1.9.1.

Now, if I use an import block from the new environment with the -generate-config-out flag, it should create Terraform config files for that resource along with a new probable state.

So my question is: is the new state equivalent to an upgraded state? In other words, will my next terraform plan/apply run be a no-op, while internally my state has been upgraded to that of version 1.9.1?

In summary, the Terraform state and the generated HCL config will be based upon the Terraform executable version and the Terraform AWS provider version. But the real-world ‘state’ of the resource in AWS will not change as a result of using differing versions of the Terraform executable and the Terraform AWS provider.

There are broadly three things here that need to be untangled:

The ‘version’ of the resource within your cloud provider (AWS) is typically not defined by Terraform (unless the provider exposes such a thing as an attribute). It is whatever the cloud provider creates based upon the API calls sent to its management/control-plane APIs by the Terraform AWS provider at apply time. This is largely independent of the provider version (although different versions of a provider may expose more features of the underlying cloud provider’s resources, support different resources, etc.) and is independent of the Terraform version.

The config that is generated by terraform when using -generate-config-out is based upon whatever the cloud provider (AWS) API exposes to the terraform provider (as in the configuration currently ‘within’ AWS) and the provider version that you have defined in your module and downloaded during terraform init.

The format (which has had some changes during development of Terraform) of the state that is written to your state store does depend upon the terraform executable version you are using, as it is the executable that is ‘in charge’ of generating and persisting the state file.
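To keep those three moving parts explicit, it can help to pin both versions in the module itself; a minimal sketch (the exact version constraints here are illustrative, not from the thread):

```hcl
terraform {
  # Constrains the Terraform executable version, which is what governs
  # the format of the state snapshot that gets written.
  required_version = ">= 1.9.0"

  required_providers {
    # Constrains the AWS provider version, which governs what
    # -generate-config-out is able to produce.
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```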

Hope that helps

Happy Terraforming

Is there a way to create separate resources for deprecated arguments that are not being emitted as part of the generated resource config?

For Example:

  1. I created a s3 bucket manually from my AWS console.
  2. Created a new directory from terminal and added import block for the same resource.

```
$ cat import.tf
import {
  to = aws_s3_bucket.test-bucket
  id = "test-bucket-poc-01"
}

resource "aws_s3_bucket" "test-bucket" {
}
```

  3. Ran the command: terraform plan -generate-config-out=somefile.tf

```
$ cat somefile.tf
# __generated__ by Terraform
# Please review these resources and move them into your main configuration files.

# __generated__ by Terraform from "test-bucket-poc-01"
resource "aws_s3_bucket" "test-bucket" {
  bucket              = "test-bucket-poc-01"
  bucket_prefix       = null
  force_destroy       = null
  object_lock_enabled = false
  tags = {
    key1 = "val1"
    key2 = "val2"
  }
  tags_all = {
    key1 = "val1"
    key2 = "val2"
  }
}
```

So, as you can see, the arguments present in the aws_s3_bucket.test-bucket resource are the optional ones that are not marked as “to be deprecated in the next major release” in the AWS provider docs. As a result I am not getting the full info, like versioning and ACL details. These deprecated arguments are instead meant to be created as resources themselves.
Ref Link: Terraform Registry

However, I wish to get the various dependencies attached to the bucket because I want to automate this. Currently I need to know the infrastructure’s properties beforehand, which is quite tricky because I would need to create resource blocks for all probable properties (versioning, acl, object_lock, etc.) in advance. Also, the properties/dependencies change for every resource.

So Is there any way to do this such that I get all the resources generated?

Hi @nimiye5967,

I suspect that the config is generated based upon that provider version’s ‘best practice.’ If, in 5.x.x, the attributes are deprecated because those elements of the configuration are expected to be defined by separate linked sub-resources, then that is all you are seeing.

I have not tried in this scenario but you may get the output you desire by using a 4.x.x version of the aws provider. And, technically, you should then be able to bump the aws provider to a 5.x.x version and still have the generated config supported (although the attributes are deprecated).

However, to prepare for the next major release (v6.x.x) you will need to refactor your module to use the resources indicated in the documentation’s deprecation notices. This will be a manual step (but might be ‘scriptable’, as from the brief look I have taken at the docs it looks like the existing attributes/blocks have just been lifted into the separate resources as-is).
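As a rough sketch of that refactor (resource names and the versioning status are illustrative, not taken from the thread), the versioning that used to be a nested block on aws_s3_bucket moves to its own resource:

```hcl
resource "aws_s3_bucket" "test-bucket" {
  bucket = "test-bucket-poc-01"
}

# Versioning is configured via a separate linked resource rather than
# the deprecated nested block on aws_s3_bucket.
resource "aws_s3_bucket_versioning" "test-bucket" {
  bucket = aws_s3_bucket.test-bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
```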

The other approach is, as you seem to suggest, to create an import block for the aws_s3_bucket and for each of the other aws_s3_bucket_... ‘sub-resources’ indicated in the deprecation notices, and then generate the config using the current 5.x.x provider.
This may be more automatable, but could well result in a lot of bloat in your module where there is no need to actually define those other ‘sub-resources’ because the defaults are adequate.
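A sketch of that second approach, assuming these sub-resources exist for the bucket (the addresses are illustrative, and note that the exact import ID format for each sub-resource is documented per-resource in the provider docs):

```hcl
import {
  to = aws_s3_bucket.test-bucket
  id = "test-bucket-poc-01"
}

# Several of the S3 'sub-resources' import using the bucket name as
# their ID, but check each resource's import documentation.
import {
  to = aws_s3_bucket_versioning.test-bucket
  id = "test-bucket-poc-01"
}

import {
  to = aws_s3_bucket_acl.test-bucket
  id = "test-bucket-poc-01"
}
```

Running `terraform plan -generate-config-out=generated.tf` with these blocks would then generate config for the bucket and each sub-resource together.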

I don’t know how terraform/provider decides what should be output as attributes for generating config. It may be completely valid to not output an attribute if the value currently configured within the cloud provider matches the terraform provider default or is not set. But alternatively it may output a verbose config with all attributes even if, were you writing the module from scratch, they would not be included as they are not required / default.


The details that you’ve both been discussing above seem good to me and I’m not meaning to disagree with any of it with this comment, but I did want to highlight one specific thing that is often not obvious to folks who are new to Terraform:

Importing something that was previously managed by Terraform in a different place is a potentially-lossy operation.

The most common way I’ve seen people frame this is that they want to use terraform state rm in one configuration and terraform import in another configuration as a way to move the management responsibility for something between configurations. What this thread is discussing is a little different to that, but I think it has the same consequence, which @ExtelligenceIT highlighted:

The config that is generated by terraform when using -generate-config-out is based upon whatever the cloud provider (AWS) API exposes to the terraform provider.

The Terraform state for an object that was originally created with Terraform has some additional information that importing (whether using the terraform import command or import blocks) cannot reproduce, because it isn’t visible through the underlying platform’s API, including but not limited to:

  • The dependencies each object had on other objects in the configuration that was most recently applied. This can be important for destroying objects in the correct order if they are later removed from the configuration, because removing something from the configuration also deletes the dependency information from the configuration and so the copy of that information in the state must be used instead.

  • If a provider allows you to write the same information in multiple ways – for example, if a particular argument takes a case-insensitive string – then the Terraform state remembers exactly how you’d written that string in the original configuration, whereas the remote API might only be able to return a normalized version where everything was converted to lowercase.

    This isn’t necessarily a problem, but if you also used a reference to that name when populating an argument in some other resource then that other resource would currently have the version of the string you wrote in the original configuration, rather than the normalized version that was converted to lowercase, and so after importing the downstream resource configuration would appear to have changed from your original casing to lowercase, which may cause Terraform to propose an update to it.

  • If a provider API has information that can only be written but not read – a database password, for example – then import cannot populate it at all.

These differences are not always a problem, but I’m mentioning them because the question was about whether the result of importing would exactly match the original state, and the answer to that is “not necessarily”.
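As a concrete illustration of the first point, the dependency information lives only in the state snapshot, not in AWS; in a v4-format state file, each resource instance carries something like this (abridged, with illustrative names):

```json
{
  "mode": "managed",
  "type": "aws_s3_bucket_versioning",
  "name": "test-bucket",
  "instances": [
    {
      "attributes": {
        "bucket": "test-bucket-poc-01"
      },
      "dependencies": [
        "aws_s3_bucket.test-bucket"
      ]
    }
  ]
}
```

An import can repopulate `attributes` from the remote API, but `dependencies` is reconstructed only from whatever references exist in the current configuration.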

With all of that said, you should not typically need to perform any importing just because you’ve upgraded to a newer version of Terraform CLI. Terraform CLI v1.9.1 should accept a state snapshot generated by Terraform CLI v1.1.0 without you needing to perform any special steps aside from installing the new Terraform CLI executable on your system and making sure you’re running that new version instead of the old one.

As ever, thanks for the additional detail and context @apparentlymart!