Hi @maxb, thank you. I plan to try your steps.
I'll add that it's still a bit more intricate, at least for me. I ran the terraform validate command below, and it's not just a matter of changing aws_s3_bucket_object to aws_s3_object; per the warnings, new resources are also needed for the bucket's versioning and acl arguments:
terraform validate -json | jq '.diagnostics[] | {summary: .summary, detail: .detail, address: .address, filename: .range.filename, start_line: .range.start.line}'
{
  "summary": "Argument is deprecated",
  "detail": "Use the aws_s3_bucket_versioning resource instead",
  "address": "aws_s3_bucket.my_cool_bucket",
  "filename": "main.tf",
  "start_line": 16
}
{
  "summary": "Argument is deprecated",
  "detail": "Use the aws_s3_bucket_acl resource instead",
  "address": "aws_s3_bucket.my_cool_bucket",
  "filename": "main.tf",
  "start_line": 18
}
{
  "summary": "Argument is deprecated",
  "detail": "Use the aws_s3_object resource instead",
  "address": "aws_s3_bucket_object.my_cool_object",
  "filename": "main.tf",
  "start_line": 2
}
It seems I could keep my existing aws_s3_bucket_object and migrate it to aws_s3_object by swapping the resource type in place (rather than creating a separate/new aws_s3_object resource in addition to the aws_s3_bucket_object), and then, per the warnings, add aws_s3_bucket_versioning and aws_s3_bucket_acl resources to my main.tf; those appear to apply to the bucket itself rather than to the aws_s3_object. I plan to experiment and try.
Then I can try the apply. I'm tinkering around on my own, so there's no harm done, which is good.
It seems like there should be a smoother way to transition to this new resource type than destroying the bucket object or working through these intricacies, but all good.
EDIT: It worked for the aws_s3_object!
I updated the Terraform AWS provider from 3 to 4, removed the lock file, and changed my bucket object from aws_s3_bucket_object to aws_s3_object:
resource "aws_s3_object" "my_cool_object" {
key = "my_cool_object"
bucket = aws_s3_bucket.my_cool_bucket.id
content = templatefile("s3files/my_cool_file.txt", {
db_port = var.db_port
db_name = var.db_name
db_username = var.db_username
db_password = random_password.db_password.result
})
lifecycle {
ignore_changes = [content, metadata]
}
}
Then I ran terraform state rm aws_s3_bucket_object.my_cool_object, followed by terraform import aws_s3_object.my_cool_object <S3 object URL>, and got:
aws_s3_object.my_cool_object: Import prepared!
Prepared aws_s3_object for import
aws_s3_object.my_cool_object: Refreshing state... [id=my_cool_object]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
My terraform plan now shows:
  # aws_s3_object.my_cool_object will be updated in-place
  ~ resource "aws_s3_object" "my_cool_object" {
      + acl           = "private"
      + force_destroy = false
        id            = "my_cool_object"
        tags          = {}
        # (9 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
╷
│ Warning: Argument is deprecated
│
│ with aws_s3_bucket.my_cool_bucket,
│ on main.tf line 16, in resource "aws_s3_bucket" "my_cool_bucket":
│ 16: resource "aws_s3_bucket" "my_cool_bucket" {
│
│ Use the aws_s3_bucket_versioning resource instead
│
│ (and 3 more similar warnings elsewhere)
Would I do a terraform apply here?
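If so, I'll probably save the plan to a file first so the apply runs exactly the diff I reviewed (standard plan/apply workflow, nothing specific to this migration):

terraform plan -out=tfplan
terraform apply tfplan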
My other issue now is updating the bucket code from:
resource "aws_s3_bucket" "my_cool_bucket" {
bucket_prefix = "awesome"
acl = "private"
lifecycle {
prevent_destroy = true
}
versioning {
enabled = true
}
}
to using the separate aws_s3_bucket_acl and aws_s3_bucket_versioning resources.
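From the v4 provider docs, I believe the split-out version would look something like this (a sketch I haven't applied yet; the resource names my_cool_bucket_acl and my_cool_bucket_versioning are just my own placeholders):

resource "aws_s3_bucket" "my_cool_bucket" {
  bucket_prefix = "awesome"

  lifecycle {
    prevent_destroy = true
  }
}

# The acl argument moves out of aws_s3_bucket into its own resource in v4
resource "aws_s3_bucket_acl" "my_cool_bucket_acl" {
  bucket = aws_s3_bucket.my_cool_bucket.id
  acl    = "private"
}

# versioning { enabled = true } becomes a versioning_configuration block
resource "aws_s3_bucket_versioning" "my_cool_bucket_versioning" {
  bucket = aws_s3_bucket.my_cool_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}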
EDIT: The only thing I would add to your steps @maxb is that after step 2, before we can do the import, we need to update the provider constraint from aws ~> 3.0 to ~> 4.0 (or whatever exact version constraint one wants when moving to AWS 4), remove the .terraform.lock.hcl file, and run terraform init again to install the AWS 4 provider. That gives us access to the new aws_s3_object resource so we can actually do the terraform import.