Data "external" program fails when using templatefile()

I am using an external program to run an AWS CLI query. It populates an aws_lambda_function data source, because we deploy the Lambda functions with dotnet-lambda deploy rather than Terraform. I'm using Terraform only to build the CI/CD pieces, alarms, dashboards, and SNS: basically everything except the Lambda functions themselves.

Anyway, whenever I try to render the external bash script with templatefile() so I can pass in variables, it fails. If I hard-code the variables, it works.

Here's the relevant configuration:

get-lambdas.sh.tftpl:

#!/bin/bash
echo $(aws lambda list-functions --region ${REGION} --query 'Functions[?starts_with(FunctionName, `${ENVIRONMENT}`) == `true`].FunctionName' --output json | jq '[ range(0; length) as $r | { ( .[$r] | split("-")[2]) : .[$r] } ] | add')
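For reference, the external data source requires the program to print a single JSON object whose values are all strings. With function names like DEV-orders-Processor and DEV-orders-Notifier (made-up examples), the jq pipeline turns the CLI's list of names into a map keyed on the third hyphen-separated segment, something like:

```json
{
  "Processor": "DEV-orders-Processor",
  "Notifier": "DEV-orders-Notifier"
}
```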

*.tf:

data "external" "get_lambdas" {
  program = ["bash", templatefile("${path.cwd}/Templates/get-lambdas.sh.tftpl", { ENVIRONMENT = upper(local.environment), REGION = var.region })]
}

data "aws_lambda_function" "lambda" {
  for_each = data.external.get_lambdas.result

  function_name = each.value
}

var.tf:

locals {
  environment = terraform.workspace
}

variable "region" {
  default = "us-east-2"
}

Here is the error. It appears the template is rendering correctly, but I am getting exit code 127.

│ Error: External Program Execution Failed
│
│   with data.external.get_lambdas,
│   on lambda.tf line 2, in data "external" "get_lambdas":
│    2:   program = ["bash", templatefile("${path.cwd}/Templates/get-lambdas.sh.tftpl", { ENVIRONMENT = upper(local.environment), REGION = var.region })]
│
│ The data source received an unexpected error while attempting to execute the program.
│
│ Program: /bin/bash
│ Error Message: bash: #!/bin/bash
│ echo $(aws lambda list-functions --region us-east-2 --query 'Functions[?starts_with(FunctionName, `DEV`) == `true`].FunctionName' --output json | jq '[ range(0; length) as $r | { ( .[$r] |
│ split("-")[2]) : .[$r] } ] | add'): No such file or directory
│
│ State: exit status 127

Hi @wblakecannon,

The configuration you wrote here is asking the external data source, in effect, to behave as if you’d written the following at a normal shell prompt:

bash '#!/bin/bash
echo $(aws lambda list-functions --region us-east-2 --query '\''Functions[?starts_with(FunctionName, `DEV`) == `true`].FunctionName'\'' --output json | jq '\''[ range(0; length) as $r | { ( .[$r] | split("-")[2]) : .[$r] } ] | add'\'')'

(I manually added the quoting and escaping there, so I might not have got that quite right, but this is just for illustration purposes.)

Bash expects its first argument to be the name of a script file which Bash itself will read from disk and execute. We can see from the error message that indeed Bash tried to use this long string as a filename, leading to the error “No such file or directory” because you have no file with that name.
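To make the distinction concrete, here is a minimal reproduction you can run at a shell prompt, using echo hello as a stand-in for the rendered template:

```shell
# Bash's first positional argument is taken as a script *filename*,
# so passing script text this way fails with "No such file or
# directory" and exit status 127.
bash 'echo hello' 2>/dev/null
echo "without -c: exit status $?"   # without -c: exit status 127

# With -c, the same string is executed as script source.
bash -c 'echo hello'                # hello
echo "with -c: exit status $?"      # with -c: exit status 0
```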

However, Bash does have a -c option that tells it to treat the next argument as script text to execute directly, so I think it might work to pass the template result as the argument to -c, asking Bash to treat it as source code rather than as a filename:

data "external" "get_lambdas" {
  program = [
    "bash",
    "-c", templatefile("${path.cwd}/Templates/get-lambdas.sh.tftpl", {
      ENVIRONMENT = upper(local.environment),
      REGION = var.region
    })
  ]
}

I added some extra newlines to make the expression easier to read, but that's optional; the extra "-c" element in the list of arguments is the important addition here.

Note that since you’re running Bash explicitly here the #!/bin/bash line at the start of your template result isn’t being interpreted in any special way, and so you could just omit it. That sort of interpreter specification line is only needed when you’re directly executing a script, so that the system knows to run it through an interpreter rather than trying to treat it as a native executable. Keeping it there won’t do any harm, but removing it might help clarify to a future reader what’s going on here.
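As a small illustration of that point (the path /tmp/shebang-demo.sh is just a made-up example): the #! line is an ordinary comment when you name the interpreter yourself, and only comes into play when the file is executed directly.

```shell
# Write a throwaway script with a shebang line.
cat > /tmp/shebang-demo.sh <<'EOF'
#!/bin/bash
echo "interpreted by bash"
EOF

# Interpreter named explicitly: the #! line is just a comment.
bash /tmp/shebang-demo.sh

# Direct execution: the kernel reads #! to choose the interpreter,
# so here the line (plus execute permission) is actually required.
chmod +x /tmp/shebang-demo.sh
/tmp/shebang-demo.sh
```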


If you haven’t already, you may wish to upvote the following AWS provider issue to record interest in a first-class data source for fetching multiple Lambda functions:

Thank you for your very, very thorough answer.

Yes, I found that provider issue when I was searching for how to bring in all the Lambdas; that's why I had to write the funky AWS CLI script.

Everything is working now in DEV. Thanks.