Utilize hashicorp/terraform image as CLI

I'm trying to use hashicorp/terraform:light as my CLI, via docker-compose, instead of installing everything locally on my computer.

The following configuration is what I had initially:

# docker-compose.yml
version: "3.7"

services:
  terraform:
    image: hashicorp/terraform:light

When I run the following command from the official documentation:

docker-compose run terraform plan main.tf

I get:

stat main.tf: no such file or directory

Since I didn't copy my templates into the image, the command cannot find the main.tf template.

I read https://hub.docker.com/r/hashicorp/terraform

and didn't find any documentation about this use case, so I did the following:

# docker-compose.yml
version: "3.7"

services:
  terraform:
    image: hashicorp/terraform:light
    volumes:
    - ./terraform:/terraform

Notice that I mounted ./terraform as /terraform inside the container.

Now you can run the command again with the file path relative to /terraform.

docker-compose run terraform plan /terraform/main.tf

I am not sure whether /terraform is the folder I should be using for this, since there is no documentation about it, but it has worked so far.
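One possible refinement I haven't verified with this image: docker-compose supports a working_dir option, which would let the container start inside the mounted folder so you don't need to pass absolute paths at all. A sketch:

```yaml
# docker-compose.yml — sketch, assuming the working_dir option; untested with this image
version: "3.7"

services:
  terraform:
    image: hashicorp/terraform:light
    working_dir: /terraform          # start the container inside the mounted folder
    volumes:
      - ./terraform:/terraform
```

With this, `docker-compose run terraform plan` should pick up the configuration from the working directory without an explicit path.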

I hope you find this useful, since I couldn't find any docs on this at the time of writing.

The remaining problem: using local module references as follows:

module "digitalocean" {
  source = "../digitalocean"
}

causes the following issue:

This module is not yet installed. Run "terraform init" to install all modules
required by this configuration.

I am stuck at this step and it would be great if I could get some help with it.

Cheers.

Hi @alchemist_ubi,

While it is technically possible to use the docker images in this way, it's often quite complicated to do so, because the natural isolation created by Docker is somewhat at odds with convenient CLI usage, which tends to depend on access to local files on the host.

In your case it seems like the problem is that you've exposed the root module into the container, but that module expects to find a sibling directory digitalocean which is not exposed into the container. To make this work, you'll need to arrange for your entire configuration tree to be available in the container, and then use the working_dir option to ensure that Terraform is running in whatever directory contains your root module.
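For example, a compose file along these lines would expose both the root module and its sibling. This is a sketch: "project" is a placeholder for whatever directory your root module lives in.

```yaml
# docker-compose.yml — sketch; "project" and the layout are placeholder assumptions
version: "3.7"

services:
  terraform:
    image: hashicorp/terraform:light
    volumes:
      - ..:/workspace                 # mount the whole configuration tree, not just the root module
    working_dir: /workspace/project   # run Terraform where the root module lives
```

With the whole tree mounted, `docker-compose run terraform init` can resolve source = "../digitalocean" because the sibling directory exists inside the container at /workspace/digitalocean.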

With all of that said, Terraform is distributed just as a single static executable, and so it’s often easier to just download and extract that executable onto your system than to fuss with docker. The docker images are there primarily to help with automation scenarios built around docker, rather than for interactive use. As a result, they will always require extra arguments to customize them for whatever automation environment they are running in.

The images themselves are literally just Alpine Linux images with the terraform executable added at /bin/terraform, a custom entrypoint referring to it, and git and ssh installed from the Alpine package manager. Any further customization and configuration is left to the user, because what is needed depends a lot on how the image is being integrated into a broader system.
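In other words, the image is roughly equivalent to a Dockerfile like this. This is an approximation for illustration, not the official build file, and the COPY source path is a placeholder:

```dockerfile
# Sketch of what the published image contains — not the official Dockerfile
FROM alpine:latest

# git and ssh support fetching modules from git-based sources
RUN apk add --no-cache git openssh

# the terraform binary ends up at /bin/terraform ("./terraform" here is a placeholder path)
COPY terraform /bin/terraform

# the entrypoint invokes terraform, so container arguments become subcommands
ENTRYPOINT ["/bin/terraform"]
```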

Yeah, honestly I gave up on the idea; I wanted to keep adding the AWS CLI and other tools.

I got it working, but SSH agent forwarding was still an issue for Docker, so downloading private modules was a pain.

Thanks a lot for the help