Hmm, perhaps I’m doing the import wrong. I had this code, from which I made the original aws_s3_bucket_object:
resource "aws_s3_bucket_object" "my_cool_object" {
key = "my_cool_object"
bucket = aws_s3_bucket.my_cool_bucket.id
content = templatefile("s3files/my_cool_file.txt", {
db_port = var.db_port
db_name = var.db_name
db_username = var.db_username
db_password = random_password.db_password.result
})
lifecycle {
ignore_changes = [content, metadata]
}
}
resource "aws_s3_bucket" "my_cool_bucket" {
bucket_prefix = "awesome"
acl = "private"
lifecycle {
prevent_destroy = true
}
versioning {
enabled = true
}
}
resource "random_password" "db_password" {
length = 16
special = false
}
Then I upgraded the AWS provider from ~> 3.0 to ~> 4.0 and updated the lock file.
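For reference, that bump was just a version-constraint change plus a re-init; my required_providers block isn’t shown above, but it looked roughly like this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # was "~> 3.0"
    }
  }
}

followed by terraform init -upgrade to rewrite .terraform.lock.hcl.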
Then I added an empty new aws_s3_object from AWS provider v4 to my code:
resource "aws_s3_object" "my_cool_object" {
key = "my_cool_object"
bucket = aws_s3_bucket.my_cool_bucket.id
}
I ran:

terraform import aws_s3_object.my_cool_object <s3 object link for aws_s3_bucket_object>
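For an aws_s3_object, the import ID is the bucket name and key joined by a slash, so with my generated bucket name and key that worked out to something like:

terraform import aws_s3_object.my_cool_object awesome20221007185838388900000001/my_cool_object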
The import took, and then I had this in state for the new aws_s3_object:
z@Mac-Users-Apple-Computer upgrade-aws-provider-3-to-4 % terraform state show aws_s3_object.my_cool_object
# aws_s3_object.my_cool_object:
resource "aws_s3_object" "my_cool_object" {
    bucket             = "awesome20221007185838388900000001"
    bucket_key_enabled = false
    content_type       = "binary/octet-stream"
    etag               = "1095401de32f9cda9f0a81e6505137c8"
    id                 = "my_cool_object"
    key                = "my_cool_object"
    metadata           = {}
    storage_class      = "STANDARD"
    tags               = {}
    tags_all           = {}
    version_id         = "I0PnGIDWJyvDbqaU2J2vUbYhCtPbxiUB"
}
Did I write my code for the aws_s3_object resource wrong in my main.tf? I want it to actually look more like the more verbose code in the aws_s3_bucket_object resource.
Also, the AWS console didn’t show a new object, but I guess that’s the idea: my aws_s3_object is referencing an existing resource, not creating a second one.
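If I wanted to be sure nothing new got created, I suppose I could have checked the bucket directly instead of squinting at the console, e.g.:

aws s3api head-object --bucket awesome20221007185838388900000001 --key my_cool_object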
Perhaps I should have had it like this before doing the import:
resource "aws_s3_object" "my_cool_object" {
key = "my_cool_object"
bucket = aws_s3_bucket.my_cool_bucket.id
content = templatefile("s3files/my_cool_file.txt", {
db_port = var.db_port
db_name = var.db_name
db_username = var.db_username
db_password = random_password.db_password.result
})
lifecycle {
ignore_changes = [content, metadata]
}
}
I haven’t tried it yet, so I don’t know whether it would work.
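I assume the way to check would be to import against that fuller config and then run a plan, hoping for a clean result since content and metadata are in ignore_changes:

terraform plan
# hoping for: "No changes. Your infrastructure matches the configuration."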
I ended up destroying the aws_s3_bucket_object, which deleted the object in the AWS console, and was left with only the aws_s3_object in terraform state list, but with nothing to show for it in AWS.
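In hindsight, I’m guessing the right move was to drop the old resource from state without destroying the real object, i.e. something like:

terraform state rm aws_s3_bucket_object.my_cool_object

and let the imported aws_s3_object be the object’s only owner going forward.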
I have to get this process down better, and I kinda wish there were a guide for it. Appreciate any tips!