I am trying to create a Lambda function which builds from a Python script kept in S3 (specific code below). When executing the Terraform code, my pipeline keeps returning the error:
```
Call to function "filebase64sha256" failed: no file exists at test.py.zip.
```
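For reference, the relevant part of my configuration looks roughly like this (the bucket, role, and handler names are stand-ins here):

```hcl
resource "aws_lambda_function" "test" {
  function_name = "test"
  handler       = "test.lambda_handler"          # stand-in handler
  runtime       = "python3.8"
  role          = aws_iam_role.lambda_exec.arn   # stand-in role

  s3_bucket = "my-lambda-bucket"                 # stand-in bucket name
  s3_key    = "test.py.zip"

  # This is the line the error points at:
  source_code_hash = filebase64sha256("test.py.zip")
}
```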
The bucket policy and the file itself both seem fine, but for some reason this error keeps returning. I have tested creating the exact same Lambda using the same code with `filename`, storing the zip locally within the repo instead of sourcing it from S3, and it works as expected.
Does anyone have any idea what could be causing this error? Have I got some format error in the `source_code_hash` parameter?
All of Terraform's `file...` family of functions look in the current working directory by default, unless you specify another directory. Because of that, one common reason for this error is that the file is in a different directory, such as the current module directory. If it's in the current module directory then you can specify the path like this:
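```hcl
# path.module is the directory containing the current module's configuration
source_code_hash = filebase64sha256("${path.module}/test.py.zip")
```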
Thanks for your response. But if the file is in an S3 bucket rather than on local disk, how does that work? I had previously tried setting the path string to point at the S3 object, but to no avail.
The `filebase64sha256` function only works with files on local disk, distributed as part of your configuration. If you are trying to deploy a file that was uploaded into S3 by some other process, then you'll need to arrange for that process to also save the hash somewhere that Terraform can read it, and then insert that result into `source_code_hash`.
The AWS provider (in its implementation of `aws_lambda_function`) uses `source_code_hash` to determine whether it needs to update the function code, because otherwise it doesn't have access to the source code to compare with: when deploying a Lambda function from S3, it's the Lambda service that accesses S3, not the Terraform AWS provider.
I don't have a ready answer to suggest that would just "drop in" to your configuration without introducing something else into your process. A possible answer which would avoid introducing any new infrastructure components would be to have your build process also write the hash into the S3 bucket, as a separate object alongside the zip file, and then you could read it using the `aws_s3_bucket_object` data source.
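A minimal sketch of that arrangement (the bucket name and the surrounding `aws_lambda_function` arguments here are assumptions, not taken from your configuration):

```hcl
data "aws_s3_bucket_object" "lambda_hash" {
  bucket = "my-lambda-bucket"   # assumed bucket name
  key    = "test.py.zip.hash"   # hash object written by the build process
}

resource "aws_lambda_function" "test" {
  function_name = "test"
  handler       = "test.lambda_handler"          # assumed handler
  runtime       = "python3.8"
  role          = aws_iam_role.lambda_exec.arn   # assumed role

  s3_bucket = "my-lambda-bucket"
  s3_key    = "test.py.zip"

  # Use the hash the build process computed, rather than hashing a local
  # file. chomp trims a trailing newline, in case the build wrote one.
  source_code_hash = chomp(data.aws_s3_bucket_object.lambda_hash.body)
}
```

Note that the data source's `body` attribute is only populated for objects with a human-readable `Content-Type`, such as `text/plain`, so the build process should set that when it uploads the hash object. The hash value itself would be the base64-encoded SHA-256 of the zip, which the build can produce with something like `openssl dgst -sha256 -binary test.py.zip | base64`.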
The above assumes that your S3 bucket would contain both `test.py.zip` and another object `test.py.zip.hash`, where the second one contains a suitable value for `source_code_hash` which your build process has written.
```
[user@2bb86b96541c root_module]$ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0] will be created
  + resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
      + arn                         = (known after apply)
      + compatible_runtimes         = [
          + "python3.8",
        ]
      + created_date                = (known after apply)
      + filename                    = "pymysql_layer.zip"
      + id                          = (known after apply)
      + layer_arn                   = (known after apply)
      + layer_name                  = "pymysql_layer"
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      + skip_destroy                = false
      + source_code_hash            = "c9OnFVBU7Yko/jhEEL7R3P/cvkVx/k5tLEUuGw5lMq0="
      + source_code_size            = (known after apply)
      + version                     = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0]: Creating...
module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0]: Creation complete after 5s [id=arn:aws:lambda:us-east-1:xxxxxxxxxxxx:layer:pymysql_layer:1]

[user@2bb86b96541c root_module]$ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0] will be created
  + resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
      + arn                         = (known after apply)
      + compatible_runtimes         = [
          + "python3.8",
        ]
      + created_date                = (known after apply)
      + filename                    = "pymysql_layer.zip"
      + id                          = (known after apply)
      + layer_arn                   = (known after apply)
      + layer_name                  = "pymysql_layer"
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      + skip_destroy                = false
      + source_code_hash            = "c9OnFVBU7Yko/jhEEL7R3P/cvkVx/k5tLEUuGw5lMq0="
      + source_code_size            = (known after apply)
      + version                     = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0]: Creating...
╷
│ Error: Unable to load "pymysql_layer.zip": open pymysql_layer.zip: no such file or directory
│
│   with module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0],
│   on ../child_module/main.tf line 17, in resource "aws_lambda_layer_version" "pymysql_lambda_layer":
│   17: resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
│
╵
```