Let’s assume I have a module that looks something like this:
resource "null_resource" "deployment_package" {
  triggers = {
    source_code = base64sha256(file("code/lambda_function.py"))
  }

  provisioner "local-exec" {
    command = "code/create_deployment_package.sh"
  }
}
Here the code (lambda_function.py & create_deployment_package.sh) is located in the “code” directory inside the Terraform module.
The module itself is hosted in a git repo, so I use it like this:
module "foo" {
  source = "git::git@mygitlab.my.domain:terraform-modules/my_module.git"
}
Now when I run terraform init, a cache is created at .terraform/modules/foo/. Is there a way to reference that directory in code?
The problem is that code/lambda_function.py of course won’t resolve to the correct file; I had to use .terraform/modules/foo/code/lambda_function.py instead. But “foo” here obviously depends on how I name my module.
So how would I resolve this issue?
Is there a way to determine the cache directory dynamically or to otherwise reference files in the module code?
Well, I found a workaround … but I’m not really happy with it:
resource "null_resource" "deployment_package" {
  provisioner "local-exec" {
    command = "find .terraform/modules/ -name create_deployment_package.sh -exec {} \\;"
  }
}
This will find the executable in the .terraform cache, but it will have issues if there is another “create_deployment_package.sh” in a different module or the like …
So still, if there are better approaches, any help is appreciated.
I guess I found what I was looking for.
If anyone else has this problem, I now have the following code:
resource "null_resource" "deployment_package" {
  triggers = {
    code_checksum = base64sha256(file(format("%s/code/lambda_function.py", path.module)))
    deps_checksum = base64sha256(file(format("%s/code/requirements.txt", path.module)))
  }

  provisioner "local-exec" {
    command = format("%s/code/create_deployment_package.sh", path.module)
  }
}
Hi @Heiko-san,
path.module is indeed the way to do this. It’s more commonly written like this:
command = "${path.module}/code/create_deployment_package.sh"
… but the two are functionally equivalent, so you can use either depending on which you find to be most readable.
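For anyone comparing the available path references: Terraform exposes a few related values, and which one you want depends on where the file lives. A quick sketch of the difference (the locals block is just for illustration):

```hcl
locals {
  module_dir = path.module # directory of the module containing this .tf file
  root_dir   = path.root   # directory of the configuration's root module
  cwd_dir    = path.cwd    # directory Terraform was invoked from
}
```

For files shipped inside a reusable module, as in this thread, path.module is the right choice, because it resolves correctly no matter where the module is cached after terraform init.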
OK, unfortunately there is another problem left.
The null_resource is now working as expected, but of course this is just the first step.
After creating the deployment ZIP I want to deploy the Lambda function using the AWS provider with “aws_lambda_function”.
However, even if I define a depends_on, Terraform dies saying the ZIP file does not exist.
It seems to evaluate this before running any “resources”.
So I would have to create the ZIP file using a “data source”, not a “resource”.
I found this: https://www.terraform.io/docs/providers/archive/d/archive_file.html.
However, it does not suit my needs, since I have to pip install some dependencies along with my .py file.
Is there any way to do this?
resource "null_resource" "deployment_package" {
  triggers = {
    code_checksum = base64sha256(file(format("%s/code/lambda_function.py", path.module)))
    deps_checksum = base64sha256(file(format("%s/code/requirements.txt", path.module)))
    force_rebuild = var.force_rebuild
  }

  provisioner "local-exec" {
    command = format("%s/code/create_deployment_package.sh", path.module)
  }
}
resource "aws_lambda_function" "lambda" {
  ...

  # .build/bm_send_status.zip is created by create_deployment_package.sh
  filename         = ".build/bm_send_status.zip"
  source_code_hash = filesha256(".build/bm_send_status.zip")

  depends_on = [null_resource.deployment_package]
}
Hi @Heiko-san,
If you can make your shell script produce JSON output then you could execute it using the external data source instead of a local-exec provisioner, and then you can make the program itself produce the hash of the zip file it creates, so you can then use it in the same way you would with archive_file.
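A minimal sketch of that approach, assuming a script named build_package.sh and output keys filename and hash (both names are invented for illustration, not from this thread): the external data source passes its query as a JSON object on the program’s stdin and expects a JSON object of string values on stdout, so the script can build the package and report the artifact’s path and checksum in one step.

```hcl
# Hypothetical sketch: build_package.sh must read a JSON object from stdin
# and print a JSON object of strings on stdout, e.g.
#   {"filename": ".build/package.zip", "hash": "<base64sha256 of the zip>"}
data "external" "deployment_package" {
  program = ["${path.module}/code/build_package.sh"] # assumed script name

  query = {
    source_dir = "${path.module}/code" # handed to the script as JSON on stdin
  }
}

resource "aws_lambda_function" "lambda" {
  # ...

  filename         = data.external.deployment_package.result.filename
  source_code_hash = data.external.deployment_package.result.hash
}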
Please note that Terraform is not intended as a build orchestration tool, and so it’s more common to build artifacts like this .zip file as a separate step, e.g. using a CI/CD system, and then use Terraform only for the deployment. A common way to achieve that is to have the CI/CD process upload the generated .zip file into an S3 bucket and then use the s3_bucket, s3_key, and s3_object_version arguments to aws_lambda_function to deploy directly from the S3 bucket, rather than uploading a new file from Terraform.
This approach allows you to use software better suited to creating versioned build artifacts – for example, it can retain old build artifacts in case you need to refer to them for debugging or roll back for some reason – while using Terraform only for the deployment part, which it is better suited to.
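Sketched in Terraform, that S3-based deployment could look like the following. The bucket name, key layout, variables, and IAM role reference are all invented for illustration; only the argument names come from the provider.

```hcl
# The CI/CD pipeline is assumed to have already uploaded the artifact, e.g. to
# s3://my-artifacts-bucket/lambda/bm_send_status/<build-id>.zip
resource "aws_lambda_function" "lambda" {
  function_name = "bm_send_status"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.9"
  role          = aws_iam_role.lambda.arn # assumed role defined elsewhere

  # Deploy straight from the bucket instead of uploading a local file
  s3_bucket         = "my-artifacts-bucket"
  s3_key            = "lambda/bm_send_status/${var.build_id}.zip"
  s3_object_version = var.artifact_version # optional; needs bucket versioning
}
```

With this layout Terraform only needs to know which object to deploy; building and versioning the artifact is entirely the CI/CD system’s job.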