Lambda function error - Call to function "filebase64sha256" failed: no file exists

Hi there,

I am trying to create an aws_lambda_function resource whose code is a zipped Python script stored in S3 (specific code below). When the Terraform code executes, my pipeline keeps returning this error:

Call to function "filebase64sha256" failed: no file exists at test.py.zip.

The bucket policy and file all seem to be fine, but for some reason this error keeps returning. I have tested creating the exact same Lambda from the same code using "filename" and storing the zip locally within the repo instead of pulling it from S3, and it works as expected.

Does anyone have any idea what could be causing this error? Have I got some format error in the "source_code_hash" parameter?

Any advice is welcomed. Thanks a lot.

resource "aws_lambda_function" "test_function" {
  function_name                  = "test-lambda"
  s3_bucket                      = var.s3_bucket["scripts_bucket"]
  s3_key                         = "test.py.zip"
  description                    = "test lambda 1"
  role                           = var.role_arn
  runtime                        = var.runtime["py37"]
  layers                         = [aws_lambda_layer_version.txa_lambda_layer.arn]
  publish                        = var.publish
  handler                        = var.handler["test"]
  memory_size                    = var.memory_size["small"]
  reserved_concurrent_executions = var.concurrency["small"]
  timeout                        = var.lambda_timeout["medium"]
  source_code_hash               = filebase64sha256("test.py.zip")

  tags = {
    Name        = "test-lambda"
    Project     = var.project
    Environment = var.environment
    Department  = var.department
    Team        = var.team
  }
}

Could you double-check the value of var.s3_bucket?

Hi tbugfinder,

I am using var.s3_bucket as a map and calling the scripts_bucket value:

variable "s3_bucket" {
  type    = map
  default = {}
}

# value supplied via tfvars
s3_bucket = {
  data_bucket    = "dev.s3.txa.data"
  model_bucket   = "dev.s3.txa.model"
  scripts_bucket = "dev.s3.txa.scripts"
}

I should also mention I have tried using the s3:// path as well as the bucket arn, with no success.

Hi @CatoW13,

All of Terraform’s file... family of functions will look in the current working directory by default, unless you specify another directory. Because of that, one common reason for encountering this error is if the file is in a different directory, such as in the current module directory. If it’s in the current module directory then you can specify the path like this:

  source_code_hash = filebase64sha256("${path.module}/test.py.zip")

Hi @apparentlymart,

Thanks for your response. But if the file lives in an S3 bucket, how does that work? I had previously tried setting the string like this, but to no avail:

source_code_hash = filebase64sha256("${var.s3_bucket["scripts_bucket"]}/test.py.zip")

Do you happen to know how I could point the module at the actual path of my object in S3?

Any advice greatly welcomed.

Hi @CatoW13,

The filebase64sha256 function only works with files on local disk, distributed as part of your configuration. If you are trying to deploy a file that was uploaded into S3 by some other process then you’ll need to arrange for that process to also save the hash somewhere that Terraform can read it, and then insert that result into source_code_hash.

The AWS provider (in its implementation of aws_lambda_function) uses source_code_hash to determine whether it needs to update the function code, because otherwise it doesn't have access to the source code to compare with: when deploying a Lambda function from S3, it's the Lambda service that accesses S3, not the Terraform AWS provider.

I don’t have a ready answer to suggest that would just “drop in” to your configuration without introducing something else into your process. A possible answer which would avoid introducing any new infrastructure components would be to have your build process also write the hash into the S3 bucket, as a separate object alongside the zip file, and then you could read it using the aws_s3_bucket_object data source.

locals {
  s3_object = "test.py.zip"
}

data "aws_s3_bucket_object" "hash" {
  bucket = var.s3_bucket["scripts_bucket"]
  key    = "${local.s3_object}.hash"
}

resource "aws_lambda_function" "test_function" {
  function_name                  = "test-lambda"
  s3_bucket                      = var.s3_bucket["scripts_bucket"]
  s3_key                         = local.s3_object
  description                    = "test lambda 1"
  role                           = var.role_arn
  runtime                        = var.runtime["py37"]
  layers                         = [aws_lambda_layer_version.txa_lambda_layer.arn]
  publish                        = var.publish
  handler                        = var.handler["test"]
  memory_size                    = var.memory_size["small"]
  reserved_concurrent_executions = var.concurrency["small"]
  timeout                        = var.lambda_timeout["medium"]
  source_code_hash               = data.aws_s3_bucket_object.hash.body

  tags = {
    Name        = "test-lambda"
    Project     = var.project
    Environment = var.environment
    Department  = var.department
    Team        = var.team
  }
}

The above assumes that your S3 bucket would contain both test.py.zip and another object test.py.zip.hash, where the second one contains a suitable value for source_code_hash which your build process has written.
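For the build side, Terraform's filebase64sha256 returns the base64 encoding of the raw SHA-256 digest of the file (not the hex digest), so the build process needs to compute exactly that value. A minimal Python sketch of such a build step (the function name and file names are illustrative, not part of the original configuration):

```python
import base64
import hashlib

def source_code_hash(path):
    """Return the base64-encoded raw SHA-256 digest of a file --
    the same value Terraform's filebase64sha256() produces for a
    local file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")

# The build process would write source_code_hash("test.py.zip") into
# test.py.zip.hash and upload both objects to the scripts bucket.
```

One detail worth watching: the .hash object should contain only the base64 string, with no trailing newline, since the provider compares source_code_hash exactly and any extra whitespace in data.aws_s3_bucket_object.hash.body could cause spurious updates on every apply.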

Ah amazing. That answers my question perfectly. Thanks a lot for your help @apparentlymart.