`filebase64sha256` Function Running At Wrong Time

I have a local module which has depends_on set:

module "api_lambdas" {
  source     = "./modules/api_lambdas/"
  depends_on = [module.dynamodb_table, module.sqs_queues]
  for_each   = local.lambdas

  name             = each.key
  application_name = var.application_name
  specification    = each.value
  tags             = module.tags
}

Inside that module (./modules/api_lambdas), I am using the filebase64sha256 function for the Lambda function's source_code_hash:

resource "aws_lambda_function" "this" {
  depends_on = [data.archive_file.this]

  # Taking out details for clarity
  ...
  filebase64sha256("${path.module}/${var.name}.zip")
  ...
}

Running a plan gives me:

│   on ../../modules/api_lambdas/main.tf line 22, in resource "aws_lambda_function" "this":
│   22:   source_code_hash               = filebase64sha256("${path.module}/${var.name}.zip")
│     ├────────────────
│     │ while calling filebase64sha256(path)
│     │ path.module is "../../modules/api_lambdas"
│     │ var.name is "with-dynamo"
│ 
│ Call to function "filebase64sha256" failed: open ../../modules/api_lambdas/with-dynamo.zip: no such file or directory.

If I don’t use the filebase64sha256 function, I do not get this error.

If I remove the depends_on, run a plan, then put it back in, it also works.

It almost feels like depends_on is preventing module.api_lambdas from running, except for the function call, which runs regardless (and then errors because data.archive_file.this, which the resource depends on, hasn’t run yet).

Hi @afrazo,

Like all functions in Terraform, this function runs immediately when Terraform is evaluating the configuration, and so the dependency graph does not constrain it.

Terraform has a number of functions that read files from disk as a pragmatic way to help use files that are distributed as part of the Terraform module. But if you are trying to create the file dynamically during either the plan or apply phase then those functions are not appropriate to use.

For side-effects that need to be scheduled in relation to other side-effects you must always use resources, since resources are the primitive in Terraform’s model that represents externally-visible side effects.

Although I would typically recommend against having a Terraform module create files dynamically at runtime – it’s usually better to do that as a separate “build” step outside of Terraform and then pass the file in as an external dependency – you can do it as a last resort using the features of the hashicorp/local provider, which treats the local filesystem as if it were a remote API supporting CRUD operations.

https://registry.terraform.io/providers/hashicorp/local/latest/docs

In particular, you can use its local_file data source to read a file from disk in a data block.

For a non-text file like a zip archive the read content will be useless (the data source exposes it as a Unicode string) but you can use the content_base64sha256 attribute to get the same kind of hash that the similarly-named function would return.
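Here’s a minimal sketch of what that might look like in your module, assuming the same data.archive_file.this from your example (the data source label "lambda_zip" is just illustrative):

data "local_file" "lambda_zip" {
  # Because this is a data source, the read is scheduled in the dependency
  # graph, so it can wait until the zip file has actually been created.
  depends_on = [data.archive_file.this]
  filename   = "${path.module}/${var.name}.zip"
}

resource "aws_lambda_function" "this" {
  # ... other arguments elided ...
  source_code_hash = data.local_file.lambda_zip.content_base64sha256
}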

Thanks! That makes sense.

Good to know about local_file, I wasn’t aware of that one.

I noticed that my aws_lambda_function.this dependency (data.archive_file) has an output called output_base64sha256, which seems to do the job too. Haven’t fully tested it yet, but it doesn’t throw errors 🙂

Oh yes, that’s also a good point: the hashicorp/archive provider’s archive_file data source was originally designed specifically for packaging up code for systems like AWS Lambda, so it has some extra features like the output_base64sha256 to help with using it in that way.
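Sketched out, that might look something like the following; the source_dir here is just an assumption about where your function’s source files live:

data "archive_file" "this" {
  type        = "zip"
  source_dir  = "${path.module}/src" # hypothetical source location
  output_path = "${path.module}/${var.name}.zip"
}

resource "aws_lambda_function" "this" {
  # ... other arguments elided ...
  filename         = data.archive_file.this.output_path
  source_code_hash = data.archive_file.this.output_base64sha256
}

Because the hash comes from the data source’s own result, the explicit depends_on and the filebase64sha256 call are no longer needed.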

Creating local files on disk is still not really something Terraform is designed for though, so although it’s possible to make this work, it is slightly outside of Terraform’s intended scope and you might find the workflow a little quirky, such as small differences appearing when your coworkers use the same configuration on different machines with different filesystem layouts. Terraform’s model assumes that everyone working with a particular configuration has an equivalent view of all of the objects being managed, which is typically true when the objects are in a remote network service but not typically true for local filesystems, unless you are using a network filesystem like NFS (which has its own caveats).

If you can arrange for building your zip file as a separate build step (with Terraform then acting only as the “deploy” step for the pre-built artifact) then I’d recommend that as a more “Terraform-flavored” approach.
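As an illustration of that approach (the variable name lambda_package is hypothetical), the function you started with works fine once the file is guaranteed to exist before Terraform evaluates the configuration:

variable "lambda_package" {
  description = "Path to a zip file produced by a build step that runs before Terraform"
  type        = string
}

resource "aws_lambda_function" "this" {
  # ... other arguments elided ...
  filename         = var.lambda_package
  source_code_hash = filebase64sha256(var.lambda_package)
}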