`terraform import` - is it allowed to clean up verbose code (arguments/properties) in the aftermath?

I plan to look into using terraform import to convert aws_s3_bucket_object resources to aws_s3_object (to allow going from AWS provider version 3 to 4). If there is an actual guide to this specific change, I’d be glad to see it.

I do not have much experience with import and plan to test, but on some other resources I have seen terraform import used for, the import can result in quite verbose resource code, pulling every single argument/attribute/property of a resource into the configuration. This can make a .tf file quite long.

My question is: after terraform import is done, can I eliminate some of the arguments/properties of the imported resource in code, so that it is less verbose and doesn’t take up so much space in the file? I assume it would be OK as long as the argument/property is not required, but I’m not sure.

I hope my question makes sense, and thank you in advance.

terraform import does not generate resource code.

A human writes the resource code manually first, and then executes terraform import to inform Terraform that the resource block should be linked to an existing object instead of creating a new one.
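In outline, the flow looks something like this (a sketch with placeholder names; check the aws_s3_object docs for the exact import ID format):

# Step 1: write the resource block by hand first.
resource "aws_s3_object" "example" {
  bucket = "my-existing-bucket"
  key    = "path/to/existing-object.txt"
}

Then link it to the real object, and review what Terraform thinks differs from your written configuration:

terraform import aws_s3_object.example <import ID>
terraform plan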

Hmm, perhaps I’m doing the import wrong.

I had this code from which I made the original aws_s3_bucket_object:

resource "aws_s3_bucket_object" "my_cool_object" {
  key    = "my_cool_object"
  bucket = aws_s3_bucket.my_cool_bucket.id

  content = templatefile("s3files/my_cool_file.txt", {
    db_port     = var.db_port
    db_name     = var.db_name
    db_username = var.db_username
    db_password = random_password.db_password.result
  })
  lifecycle {
    ignore_changes = [content, metadata]
  }
}

resource "aws_s3_bucket" "my_cool_bucket" {
  bucket_prefix = "awesome"
  acl           = "private"

  lifecycle {
    prevent_destroy = true
  }

  versioning {
    enabled = true
  }
}

resource "random_password" "db_password" {
  length  = 16
  special = false
}

Then I upgraded the AWS provider from ~> 3.0 to ~> 4.0 and updated the lock file.

Then I added this to my code, a minimal new aws_s3_object from AWS provider 4:

resource "aws_s3_object" "my_cool_object" {
  key    = "my_cool_object"
  bucket = aws_s3_bucket.my_cool_bucket.id
}

I ran

terraform import aws_s3_object.my_cool_object <s3 object link for aws_s3_bucket_object>

The import took, and then I had this in state for the new aws_s3_object:

z@Mac-Users-Apple-Computer upgrade-aws-provider-3-to-4 % terraform state show aws_s3_object.my_cool_object
# aws_s3_object.my_cool_object:
resource "aws_s3_object" "my_cool_object" {
    bucket             = "awesome20221007185838388900000001"
    bucket_key_enabled = false
    content_type       = "binary/octet-stream"
    etag               = "1095401de32f9cda9f0a81e6505137c8"
    id                 = "my_cool_object"
    key                = "my_cool_object"
    metadata           = {}
    storage_class      = "STANDARD"
    tags               = {}
    tags_all           = {}
    version_id         = "I0PnGIDWJyvDbqaU2J2vUbYhCtPbxiUB"
}

Did I do my code for the aws_s3_object resource wrong in my main.tf?

I want it to look more like the more verbose code in my aws_s3_bucket_object resource.

Also, the AWS console didn’t show a new object, but I guess that’s the idea: my aws_s3_object is referencing an existing resource.

Perhaps I should have had it like this before doing the import:

resource "aws_s3_object" "my_cool_object" {
  key    = "my_cool_object"
  bucket = aws_s3_bucket.my_cool_bucket.id

  content = templatefile("s3files/my_cool_file.txt", {
    db_port     = var.db_port
    db_name     = var.db_name
    db_username = var.db_username
    db_password = random_password.db_password.result
  })
  lifecycle {
    ignore_changes = [content, metadata]
  }
}

I haven’t tried it yet, so I don’t know if it would work.

I ended up destroying the aws_s3_bucket_object, which destroyed the object in the AWS console, and was left with only the aws_s3_object in terraform state list, but with nothing in AWS for it to manage.

I have to get this process down better; I’d really like there to be a guide. I appreciate any tips.

I’ve never used the AWS Terraform provider myself, but I can extrapolate from general Terraform concepts…

The actual process you need to follow here is:

  1. Change your code by turning your existing resource "aws_s3_bucket_object" into a resource "aws_s3_object". Do NOT add a new resource; modify the existing one. By having two resource blocks dealing with the same S3 object, you effectively had Terraform trying to manage the same thing twice, leading to weird behaviour.

  2. Tell Terraform to forget the aws_s3_bucket_object from the state, using terraform state rm (commands for steps 2-4 are sketched after this list).

  3. Tell Terraform to re-learn about the existing object this time as an aws_s3_object, using terraform import.

  4. Check the terraform plan to confirm it’s not proposing any unwanted actions.
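Using the names from your configuration, that would be something like this (a sketch on my part; the import ID is whatever identifier the aws_s3_object documentation expects):

terraform state rm aws_s3_bucket_object.my_cool_object
terraform import aws_s3_object.my_cool_object <import ID for the existing object>
terraform plan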

Hi @maxb, thank you. I plan to try your steps.

I will note that it is still a bit more intricate, at least for me. I ran the terraform validate command below, and it’s not just a matter of changing the aws_s3_bucket_object to an aws_s3_object; the warnings also call for new standalone resources for the bucket’s versioning and acl properties:

terraform validate -json | jq '.diagnostics[] | {summary: .summary, detail: .detail, address: .address, filename: .range.filename, start_line: .range.start.line}'

{
  "summary": "Argument is deprecated",
  "detail": "Use the aws_s3_bucket_versioning resource instead",
  "address": "aws_s3_bucket.my_cool_bucket",
  "filename": "main.tf",
  "start_line": 16
}
{
  "summary": "Argument is deprecated",
  "detail": "Use the aws_s3_bucket_acl resource instead",
  "address": "aws_s3_bucket.my_cool_bucket",
  "filename": "main.tf",
  "start_line": 18
}
{
  "summary": "Argument is deprecated",
  "detail": "Use the aws_s3_object resource instead",
  "address": "aws_s3_bucket_object.my_cool_object",
  "filename": "main.tf",
  "start_line": 2
}

So it seems I could create the aws_s3_bucket_object and then change it to an aws_s3_object by swapping the resource type in place (not create a separate/new aws_s3_object resource in addition to the aws_s3_bucket_object), and then, per the warnings, add standalone aws_s3_bucket_versioning and aws_s3_bucket_acl resources to my main.tf that reference the bucket. I plan to experiment and try.

Then I can try the apply. I’m tinkering around on my own, so no harm done, which is good.

It seems like there should be a smoother way to transition to this new resource type, without destroying the bucket object or these intricacies, but all good.


EDIT: It worked for the aws_s3_object!

I updated the Terraform AWS provider from 3 to 4, removed the lock file and re-ran terraform init, and changed my bucket object from aws_s3_bucket_object to aws_s3_object:

resource "aws_s3_object" "my_cool_object" {
  key    = "my_cool_object"
  bucket = aws_s3_bucket.my_cool_bucket.id

  content = templatefile("s3files/my_cool_file.txt", {
    db_port     = var.db_port
    db_name     = var.db_name
    db_username = var.db_username
    db_password = random_password.db_password.result
  })
  lifecycle {
    ignore_changes = [content, metadata]
  }
}

I did the terraform state rm aws_s3_bucket_object.my_cool_object, and then ran

terraform import aws_s3_object.my_cool_object <S3 object URL>

and got:

aws_s3_object.my_cool_object: Import prepared!
  Prepared aws_s3_object for import
aws_s3_object.my_cool_object: Refreshing state... [id=my_cool_object]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

My terraform plan now shows:

  # aws_s3_object.my_cool_object will be updated in-place
  ~ resource "aws_s3_object" "my_cool_object" {
      + acl                = "private"
      + force_destroy      = false
        id                 = "my_cool_object"
        tags               = {}
        # (9 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
╷
│ Warning: Argument is deprecated
│ 
│   with aws_s3_bucket.my_cool_bucket,
│   on main.tf line 16, in resource "aws_s3_bucket" "my_cool_bucket":
│   16: resource "aws_s3_bucket" "my_cool_bucket" {
│ 
│ Use the aws_s3_bucket_versioning resource instead
│ 
│ (and 3 more similar warnings elsewhere)

Would I do a terraform apply here?

My other issue now is updating the bucket code from:

resource "aws_s3_bucket" "my_cool_bucket" {
  bucket_prefix = "awesome"
  acl           = "private"

  lifecycle {
    prevent_destroy = true
  }

  versioning {
    enabled = true
  }
}

to using the new aws_s3_bucket_acl and aws_s3_bucket_versioning resources.

EDIT: The only thing I would add to your steps, @maxb: after step 2, before we can do the import, we need to update from aws ~> 3.0 to ~> 4.0 (or whatever exact version constraint one wants when moving to AWS 4), remove the .terraform.lock.hcl file, and run terraform init again to install the AWS 4 provider. That is what gives us access to the new aws_s3_object resource so we can actually do the terraform import.
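For reference, the version bump itself is just the constraint in the required_providers block (a sketch, assuming the provider is pinned there; your block may differ):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # was "~> 3.0"
    }
  }
}

As an aside, terraform init -upgrade can also update the lock file in place, without deleting .terraform.lock.hcl.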

Separate Response for S3 Bucket Updates

I originally had this for the bucket:

resource "aws_s3_bucket" "my_cool_bucket" {
  bucket_prefix = "awesome"
  acl           = "private"

  lifecycle {
    prevent_destroy = true
  }

  versioning {
    enabled = true
  }
}

Now I need to use these two new resources:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_acl
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_versioning

It looks like I can remove this line from the resource "aws_s3_bucket" "my_cool_bucket":

  acl           = "private"

and these lines

  versioning {
    enabled = true
  }

and at the same time add the new acl and versioning resources, giving me this new code:

resource "aws_s3_bucket" "my_cool_bucket" {
  bucket_prefix = "awesome"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_acl" "my_cool_bucket_acl" {
  bucket = aws_s3_bucket.my_cool_bucket.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "my_cool_bucket_versioning" {
  bucket = aws_s3_bucket.my_cool_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

I plan to try this; the terraform plan says:

Terraform will perform the following actions:

  # aws_s3_bucket_acl.my_cool_bucket_acl will be created
  + resource "aws_s3_bucket_acl" "my_cool_bucket_acl" {
      + acl    = "private"
      + bucket = "awesome20221008131355233100000001"
      + id     = (known after apply)

      + access_control_policy {
          + grant {
              + permission = (known after apply)

              + grantee {
                  + display_name  = (known after apply)
                  + email_address = (known after apply)
                  + id            = (known after apply)
                  + type          = (known after apply)
                  + uri           = (known after apply)
                }
            }

          + owner {
              + display_name = (known after apply)
              + id           = (known after apply)
            }
        }
    }

  # aws_s3_bucket_versioning.my_cool_bucket_versioning will be created
  + resource "aws_s3_bucket_versioning" "my_cool_bucket_versioning" {
      + bucket = "awesome20221008131355233100000001"
      + id     = (known after apply)

      + versioning_configuration {
          + mfa_delete = (known after apply)
          + status     = "Enabled"
        }
    }

  # aws_s3_object.my_cool_object will be updated in-place
  ~ resource "aws_s3_object" "my_cool_object" {
      + acl                = "private"
      + force_destroy      = false
        id                 = "my_cool_object"
        tags               = {}
        # (9 unchanged attributes hidden)
    }

Plan: 2 to add, 1 to change, 0 to destroy.

What’s even better, terraform validate gives me no more warnings:

z@Mac-Users-Apple-Computer upgrade-aws-provider-3-to-4 % terraform validate -json | jq '.diagnostics[] | {summary: .summary, detail: .detail, address: .address, filename: .range.filename, start_line: .range.start.line}'

and listing state gives:

z@Mac-Users-Apple-Computer upgrade-aws-provider-3-to-4 % terraform state list
aws_s3_bucket.my_cool_bucket
aws_s3_object.my_cool_object
random_password.db_password

I ran the terraform apply and things look good:

Plan: 2 to add, 1 to change, 0 to destroy.
aws_s3_bucket_acl.my_cool_bucket_acl: Creating...
aws_s3_bucket_versioning.my_cool_bucket_versioning: Creating...
aws_s3_object.my_cool_object: Modifying... [id=my_cool_object]
aws_s3_bucket_acl.my_cool_bucket_acl: Creation complete after 0s [id=awesome20221008131355233100000001,private]
aws_s3_object.my_cool_object: Modifications complete after 0s [id=my_cool_object]
aws_s3_bucket_versioning.my_cool_bucket_versioning: Creation complete after 2s [id=awesome20221008131355233100000001]

Apply complete! Resources: 2 added, 1 changed, 0 destroyed.
z@Mac-Users-Apple-Computer upgrade-aws-provider-3-to-4 % terraform state list
aws_s3_bucket.my_cool_bucket
aws_s3_bucket_acl.my_cool_bucket_acl
aws_s3_bucket_versioning.my_cool_bucket_versioning
aws_s3_object.my_cool_object
random_password.db_password

Appreciate the help in getting me unstuck, @maxb. If I made any mistakes or if you have any feedback for improvements, please let me know. After getting the import resolved (and without having to destroy any bucket object), this was really quite simple.

@apparentlymart, I think I’d like to write up a guide for the above process if there isn’t one, transitioning an aws_s3_bucket_object (AWS 3 Provider) to an aws_s3_object (AWS 4 provider).

I was thinking maybe Medium, an .md doc, or HashiCorp’s site itself, vetted by folks at HashiCorp. I can go ahead and do it on my own. I think having the above in the nicer format of a doc/article would be good for people.

Hi @aaa,

Thanks for the interest in contributing documentation!

The AWS provider team are ultimately responsible for the documentation for that provider and the documentation lives in the same codebase as the provider code, so I think the best way to get started here would be to open an issue in the AWS provider’s repository to discuss what you are planning to do and see how the AWS provider team reacts to it.

I think what you’re describing could be an extension of the existing upgrade guide section for aws_s3_bucket_object, giving more details about the migration steps for that particular resource type. However, I don’t work directly on the AWS provider, so I must defer to the AWS provider team on what strategy feels best here; raising it in an issue in their repository is the best way to discuss different ways to organize this documentation.

Thanks again!

Cool. Appreciate the feedback @apparentlymart and I just may do that. Thank you and hope all is well.