I have the following environment here:
CI server does its build steps, creates a zip file and places it in the “sources” S3 bucket. This cannot be changed easily, the “zip all” part in particular, because it is faster to upload a single zip file than hundreds of small files.
What I need to achieve with terraform is this:
- retrieve the zip file from the S3 bucket, using the `read_sources` profile
- place the unzipped content of the source zip inside a destination S3 bucket, using the `production` profile
The `production` profile cannot be given read access to the sources S3 bucket, in particular because many such profiles exist, and they are switched via AWS SSO configuration.
I can declare the sources bucket like this:
```hcl
data "aws_s3_bucket" "sources" {
  bucket   = "sources"
  provider = aws.read_sources
}
```
But then I cannot access any of its files when the other resources use the `production` profile.
What I have thus done is to retrieve the zip file via a `local-exec` provisioner like this:
```sh
aws --profile ${var.profile_read_sources} s3 sync s3://${data.aws_s3_bucket.sources.bucket}/${local.s3_sources_key} ${local.s3_sources_sync_dir} --quiet --delete --include "*"
```
With that, I get the zip file just fine. Then, because there is no way (yet) to enumerate the content of that zip file from within Terraform, I added another command to my `local-exec` provisioner that unzips that file.
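Put together, the provisioner looks roughly like this (a simplified sketch: the trigger and the zip file name `sources.zip` are placeholders, not my exact code):

```hcl
resource "null_resource" "unzipper" {
  # Placeholder trigger: re-run the sync + unzip when the source key changes.
  triggers = {
    s3_key = local.s3_sources_key
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws --profile ${var.profile_read_sources} s3 sync s3://${data.aws_s3_bucket.sources.bucket}/${local.s3_sources_key} ${local.s3_sources_sync_dir} --quiet --delete --include "*"
      unzip -o ${local.s3_sources_sync_dir}/sources.zip -d ${path.root}/unzipped_content
    EOT
  }
}
```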
And finally, in my Terraform, I’m using `hashicorp/dir/template` to enumerate the files inside the extraction folder and then apply that to an `aws_s3_bucket_object` resource like so:
```hcl
module "template_files" {
  source     = "hashicorp/dir/template"
  base_dir   = "${path.root}/unzipped_content"
  depends_on = [null_resource.unzipper]
}

resource "aws_s3_bucket_object" "doc_files" {
  for_each = module.template_files.files

  bucket       = aws_s3_bucket.doc.id
  key          = each.key
  content_type = each.value.content_type
  source       = each.value.source_path
  content      = each.value.content
  etag         = each.value.digests.md5
}
```
As I experimented with various solutions, I came to the conclusion that the above works just fine. But this is only true because I never applied it from within a “clean” module.
What I have discovered is that if the `${path.root}/unzipped_content` folder does not yet exist when `terraform apply` is started, then no files will be uploaded to the S3 bucket.
This may appear logical to people used to Terraform, but since I declared a `depends_on`, I am a bit taken aback.
What I would like is a way to indicate “delayed evaluation” on the enumeration, so that it does not read the content of `unzipped_content` until it has been prepared by the resource it depends upon.
If that is not possible (yet?), I can see two workarounds:
- Call `terraform apply` twice
- Call `aws s3 sync` in the `local-exec` provisioner once the unzipping has been done
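(For the first workaround, the initial run would presumably not even need to be a full apply; targeting the unzip resource should be enough, something like:)

```sh
# First pass: only run the sync + unzip step
terraform apply -target=null_resource.unzipper

# Second pass: unzipped_content now exists, so dir/template can enumerate it
terraform apply
```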
Neither of these two options is appealing to me: the first because it does not really make sense to require everybody to apply twice, the second because it introduces an external dependency which I would like to avoid as much as possible. I mean, even the documentation says to avoid `local-exec` provisioners.
Do you see any other ways to achieve what I want, given my constraints?