Lambda function error - Call to function "filebase64sha256" failed: no file exists

Hi there,

I am trying to create a Lambda function that deploys from a Python script package kept in S3 (specific code below). When executing the Terraform code, my pipeline keeps returning this error:

Call to function "filebase64sha256" failed: no file exists at test.py.zip.

The bucket policy and the file itself all seem fine, but this error keeps coming back. I have tested creating the exact same Lambda with the same code using "filename" and storing the zip locally within the repo instead of pulling it from S3, and it works as expected.
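
For reference, the working local variant looks roughly like this (a sketch showing only the arguments that differ; the rest match the resource below):

resource "aws_lambda_function" "test_function" {
  function_name    = "test-lambda"
  # Package stored locally in the repo instead of S3
  filename         = "test.py.zip"
  source_code_hash = filebase64sha256("test.py.zip")
  # ... remaining arguments unchanged ...
}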

Does anyone have any idea what could be causing this error? Have I got some formatting error in the "source_code_hash" argument?

Any advice is welcomed. Thanks a lot.

resource "aws_lambda_function" "test_function" {
  function_name                  = "test-lambda"
  s3_bucket			             = var.s3_bucket["scripts_bucket"]
  s3_key			             = "test.py.zip"
  description                    = "test lambda 1"
  role                           = var.role_arn
  runtime                        = var.runtime["py37"]
  layers                         = [aws_lambda_layer_version.txa_lambda_layer.arn]
  publish                        = var.publish
  handler                        = var.handler["test"]
  memory_size                    = var.memory_size["small"]
  reserved_concurrent_executions = var.concurrency["small"]
  timeout                        = var.lambda_timeout["medium"]
  source_code_hash               = filebase64sha256("test.py.zip")

  tags         = {
    Name        = "test-lambda"
    Project     = var.project
    Environment = var.environment
    Department  = var.department
    Team        = var.team

  }
}

How is var.s3_bucket defined? Could you double-check that?

Hi tbugfinder,

I am using var.s3_bucket as a map and referencing the scripts_bucket value:

variable "s3_bucket" {
  type = map
  default = {
  }
}
s3_bucket = {
  data_bucket      = "dev.s3.txa.data"
  model_bucket     = "dev.s3.txa.model"
  scripts_bucket   = "dev.s3.txa.scripts"
}

I should also mention that I have tried using the s3:// path as well as the bucket ARN, with no success.

Hi @CatoW13,

All of Terraform's file... family of functions look in the current working directory by default, unless you specify another directory. Because of that, a common cause of this error is a file that lives somewhere else, such as in the current module's directory. If that's the case, you can specify the path like this:

  source_code_hash = filebase64sha256("${path.module}/test.py.zip")

Hi @apparentlymart,

Thanks for your response. But if the file lives in an S3 bucket, how does that work? I had previously tried setting the string like this, but to no avail:

source_code_hash = filebase64sha256("${var.s3_bucket["scripts_bucket"]}/test.py.zip")

Do you happen to know how I could point the function at the actual path of my S3 object?

Any advice greatly welcomed.

Hi @CatoW13,

The filebase64sha256 function only works with files on local disk, distributed as part of your configuration. If you are trying to deploy a file that was uploaded into S3 by some other process then you’ll need to arrange for that process to also save the hash somewhere that Terraform can read it, and then insert that result into source_code_hash.

The AWS provider (in its implementation of aws_lambda_function) uses source_code_hash to determine whether it needs to update the function code, because otherwise it doesn't have access to the source code to compare against: when deploying a Lambda function from S3, it's the Lambda service that accesses S3, not the Terraform AWS provider.

I don’t have a ready answer to suggest that would just “drop in” to your configuration without introducing something else into your process. A possible answer which would avoid introducing any new infrastructure components would be to have your build process also write the hash into the S3 bucket, as a separate object alongside the zip file, and then you could read it using the aws_s3_bucket_object data source.

locals {
  s3_object = "test.py.zip"
}

data "aws_s3_bucket_object" "hash" {
  bucket = var.s3_bucket["scripts_bucket"]
  key    = "${local.s3_object}.hash"
}

resource "aws_lambda_function" "test_function" {
  function_name                  = "test-lambda"
  s3_bucket                      = var.s3_bucket["scripts_bucket"]
  s3_key                         = local.s3_object
  description                    = "test lambda 1"
  role                           = var.role_arn
  runtime                        = var.runtime["py37"]
  layers                         = [aws_lambda_layer_version.txa_lambda_layer.arn]
  publish                        = var.publish
  handler                        = var.handler["test"]
  memory_size                    = var.memory_size["small"]
  reserved_concurrent_executions = var.concurrency["small"]
  timeout                        = var.lambda_timeout["medium"]
  source_code_hash               = data.aws_s3_bucket_object.hash.body

  tags = {
    Name        = "test-lambda"
    Project     = var.project
    Environment = var.environment
    Department  = var.department
    Team        = var.team
  }
}

The above assumes that your S3 bucket would contain both test.py.zip and another object test.py.zip.hash, where the second one contains a suitable value for source_code_hash which your build process has written.
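
For example, the build process could produce and upload that hash object with something like the following. This is only a sketch: it assumes the openssl and AWS CLI tools are available, and the bucket name is illustrative.

# Compute the base64-encoded SHA-256 digest that source_code_hash expects;
# strip the trailing newline so the data source's body matches exactly
openssl dgst -sha256 -binary test.py.zip | base64 | tr -d '\n' > test.py.zip.hash

# Upload the package and its hash side by side; the hash object needs a
# human-readable content type for the data source to expose its body
aws s3 cp test.py.zip s3://dev.s3.txa.scripts/test.py.zip
aws s3 cp test.py.zip.hash s3://dev.s3.txa.scripts/test.py.zip.hash --content-type text/plain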

Ah amazing. That answers my question perfectly. Thanks a lot for your help @apparentlymart.

@apparentlymart I am facing the same issue too and could not fix it in any way. Could you please help me with this?

Using Terraform v1.0.11

Working Code:

[user@2bb86b96541c test]$ tree
.
├── child_module
│   └── main.tf
└── root_module
    ├── archive.tf
    └── pymysql_layer.zip

2 directories, 3 files
[user@2bb86b96541c test]$ cat child_module/main.tf
variable "archive_type" {
}

variable "runtime" {
}

variable "lambda_layer_name" {
}

variable "add_layers" {
}

##############################################
# Create Lambda Layers or Deployment Package #
##############################################

resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
  count               = var.add_layers ? 1 : 0
  filename            = "${var.lambda_layer_name}.${var.archive_type}"
  layer_name          = var.lambda_layer_name
  source_code_hash    = filebase64sha256("pymysql_layer.zip")
  compatible_runtimes = var.runtime
}


[user@2bb86b96541c test]$ cat root_module/archive.tf
variable "archive_type" {
  default = "zip"
}

variable "runtime" {
  default = ["python3.8"]
}

variable "lambda_layer_name" {
  default = "pymysql_layer"
}

variable "add_layers" {
  default = true
}


module "test_layers" {
  source            = "../child_module"
  runtime           = var.runtime
  add_layers        = var.add_layers
  archive_type      = var.archive_type
  lambda_layer_name = var.lambda_layer_name
}

Result:

[user@2bb86b96541c root_module]$ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0] will be created
  + resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
      + arn                         = (known after apply)
      + compatible_runtimes         = [
          + "python3.8",
        ]
      + created_date                = (known after apply)
      + filename                    = "pymysql_layer.zip"
      + id                          = (known after apply)
      + layer_arn                   = (known after apply)
      + layer_name                  = "pymysql_layer"
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      + skip_destroy                = false
      + source_code_hash            = "c9OnFVBU7Yko/jhEEL7R3P/cvkVx/k5tLEUuGw5lMq0="
      + source_code_size            = (known after apply)
      + version                     = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0]: Creating...
module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0]: Creation complete after 5s [id=arn:aws:lambda:us-east-1:xxxxxxxxxxxx:layer:pymysql_layer:1]

Non-working code:

[user@2bb86b96541c test]$ tree
.
├── child_module
│   ├── lamda_layer
│   │   └── pymysql_layer.zip
│   └── main.tf
└── root_module
    └── archive.tf

3 directories, 3 files
[user@2bb86b96541c test]$ cat root_module/archive.tf
variable "archive_type" {
  default = "zip"
}

variable "runtime" {
  default = ["python3.8"]
}

variable "lambda_layer_name" {
  default = "pymysql_layer"
}

variable "add_layers" {
  default = true
}


module "test_layers" {
  source            = "../child_module"
  runtime           = var.runtime
  add_layers        = var.add_layers
  archive_type      = var.archive_type
  lambda_layer_name = var.lambda_layer_name
}

[user@2bb86b96541c test]$ cat child_module/main.tf
variable "archive_type" {
}

variable "runtime" {
}

variable "lambda_layer_name" {
}

variable "add_layers" {
}

##############################################
# Create Lambda Layers or Deployment Package #
##############################################

resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
  count               = var.add_layers ? 1 : 0
  filename            = "${var.lambda_layer_name}.${var.archive_type}"
  layer_name          = var.lambda_layer_name
  source_code_hash    = filebase64sha256("${path.module}/lamda_layer/pymysql_layer.zip")
  compatible_runtimes = var.runtime
}

Result:

[user@2bb86b96541c root_module]$ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0] will be created
  + resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
      + arn                         = (known after apply)
      + compatible_runtimes         = [
          + "python3.8",
        ]
      + created_date                = (known after apply)
      + filename                    = "pymysql_layer.zip"
      + id                          = (known after apply)
      + layer_arn                   = (known after apply)
      + layer_name                  = "pymysql_layer"
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      + skip_destroy                = false
      + source_code_hash            = "c9OnFVBU7Yko/jhEEL7R3P/cvkVx/k5tLEUuGw5lMq0="
      + source_code_size            = (known after apply)
      + version                     = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0]: Creating...
╷
│ Error: Unable to load "pymysql_layer.zip": open pymysql_layer.zip: no such file or directory
│
│   with module.test_layers.aws_lambda_layer_version.pymysql_lambda_layer[0],
│   on ../child_module/main.tf line 17, in resource "aws_lambda_layer_version" "pymysql_lambda_layer":
│   17: resource "aws_lambda_layer_version" "pymysql_lambda_layer" {
│
╵

Looks like you'd also need ${path.module} here: the filename argument is still being resolved relative to the working directory rather than the module directory.
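
Something like this in the child module, reusing your existing variables (a sketch):

  filename = "${path.module}/lamda_layer/${var.lambda_layer_name}.${var.archive_type}"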

Thanks a lot. This works fine; I had missed that.

${path.module} works great.

filename         = "${path.module}/mo_usr_1.zip"
source_code_hash = filebase64sha256("${path.module}/mo_usr_1.zip")