Running CDKTF on AWS Lambda

Hi,

I’m attempting to run cdktf synth on an AWS Lambda function, targeting the writable tmp folder for the output, as all other folders on the Lambda are read-only. However, running the command throws an error where cdktf seemingly attempts to write something to the home folder: “Error: EROFS: read-only file system, mkdir ‘/home/sbx_user1051’”.

Is this expected behavior?

The full command: cdktf synth --output ~/tmp/cdktf/cdktf.out

Hi @nbaju1 :wave:

This is probably the caching of the version check. It writes to the CDKTF_HOME directory (which defaults to ~/.cdktf, if I remember correctly). You can either point that directory at a writable location or try disabling the version check via the env var DISABLE_VERSION_CHECK=true.
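
For example, if you’re shelling out to cdktf from your handler, something like this sketch should work. The /tmp paths are just examples, and it assumes you run it from the directory containing your cdktf.json:

import os
import subprocess

# Give CDKTF a writable home and skip the version check
# (everything outside /tmp is read-only in the Lambda sandbox).
env = dict(os.environ, CDKTF_HOME="/tmp/.cdktf", DISABLE_VERSION_CHECK="true")

subprocess.run(
    ["cdktf", "synth", "--output", "/tmp/cdktf.out"],  # example output path
    env=env, check=True, capture_output=True,
)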

– Ansgar

Thanks, @ansgarm!

I ended up disabling the plugin cache (via CDKTF_DISABLE_PLUGIN_CACHE_ENV) and setting CDKTF_HOME to the writable tmp directory on the Lambda, which did the trick.

Hey @nbaju1. Are you able to share more details of your setup? I’m having issues running cdktf deploy in an AWS Lambda based on an Amazon Linux 2023 (Node 20) image. The error I’m getting is

Error: forkpty(3) failed.

I’ve narrowed it down to the apply stage.

Thanks

The Lambda image is custom-built from public.ecr.aws/docker/library/python:3.10-slim-bookworm, where I install the various dependencies. I’m not sure what else would be relevant other than that.

Thank you for your reply.

Are you running cdktf deploy from a Python script? If so, could you please share it with me?

I’ve used an sh-based bootstrap (which came with the base Docker image) and an sh script to trigger cdktf deploy.

Many thanks!

I’m running cdktf synth to generate the config file, then standard terraform commands (init, plan, apply) using the subprocess module.

Example:
# apply the previously generated plan file
command = ["terraform", "apply", "-input=false", "/tmp/plan.tfplan"]
result = subprocess.run(command, check=False, capture_output=True)
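
Roughly, the full sequence looks something like this. The stack directory under cdktf.out and the plan path are just illustrative:

import subprocess

# The synthesized Terraform config ends up under <output>/stacks/<stack-name>.
workdir = "/tmp/cdktf/cdktf.out/stacks/my-stack"  # illustrative stack name

for command in (
    ["terraform", "init", "-input=false"],
    ["terraform", "plan", "-input=false", "-out=/tmp/plan.tfplan"],
    ["terraform", "apply", "-input=false", "/tmp/plan.tfplan"],
):
    # check=True raises if any step fails
    result = subprocess.run(command, cwd=workdir, check=True, capture_output=True)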


Many thanks! Coming back to this now, I got it to work as per your suggestion.
