Packer and Terraform chicken/egg problem

I want Terraform to create my GCP project from scratch and deploy an image created with Packer. Is there any way to have Terraform create the project, then run the Packer build, then deploy the resulting image?

Perhaps use the external data source to run a wrapper that executes the Packer build and grabs the image ID from the output (or maybe just from a cache file if nothing has changed for its inputs).
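
Roughly, that could look something like the sketch below. The `build-image.sh` wrapper is a hypothetical name: it would need to run `packer build` (or return a cached result when nothing has changed) and print a single JSON object of strings on stdout, which is what the external data source expects.

```hcl
# Hypothetical wrapper: runs `packer build` (or reads a cache) and prints
# something like {"image_id": "service-20240101123045"} on stdout.
data "external" "packer_image" {
  program = ["${path.module}/build-image.sh"]
}

resource "google_compute_instance" "app" {
  name         = "app"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      # The image built by the wrapper above.
      image = data.external.packer_image.result.image_id
    }
  }

  network_interface {
    network = "default"
  }
}
```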

Hi @larsenqec,

Generally speaking, I would suggest thinking of running packer build as a “build” step in your pipeline, distinct from the “deploy” step which you are implementing using Terraform.

I’m guessing that the subtext of your question is that for GCP in particular you need to already have a “project” created before you can run Packer – because everything in GCP lives inside a project – and so you’re hoping to use Terraform to manage that project too.

I don’t have direct experience with using GCP in production so I’m not able to give GCP-specific advice here, but I have encountered similar situations with other cloud computing vendors in the past so I’m going to describe a general answer which you can hopefully adapt to your GCP setup.

In some systems I used to help maintain (before I started working on Terraform itself at HashiCorp) we had a setup very similar to what you are describing, where certain services were represented as machine images constructed with Packer and then deployed with Terraform. This included one Packer configuration and one or more Terraform configurations per service, plus one separate Terraform configuration to manage shared infrastructure.

The shared configuration was responsible for producing the baseline set of infrastructure that was required to do anything else. In AWS that meant the virtual network and IAM setup, for example. If we’d been using GCP, I would’ve included the GCP project in this configuration too. The objects described in this configuration are expected to live indefinitely and change very infrequently. Our first step in system bringup was therefore to apply this configuration and get all of those shared objects in place for the subsequent steps.
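
I can’t vouch for this from production experience on GCP, but roughly the shared configuration might look like the following there, assuming you’re using the Google provider. The project ID, org ID, and billing account are placeholders you’d replace with your own values.

```hcl
# Long-lived, rarely-changing objects live in this shared configuration.
resource "google_project" "main" {
  name            = "example"
  project_id      = "example-project-123456"  # placeholder
  org_id          = "123456789012"            # placeholder
  billing_account = "AAAAAA-BBBBBB-CCCCCC"    # placeholder
}

# Enable the Compute Engine API before Packer or the per-service
# configurations try to use it.
resource "google_project_service" "compute" {
  project = google_project.main.project_id
  service = "compute.googleapis.com"
}

# A network set aside for build-related work, as described in the
# next paragraph.
resource "google_compute_network" "build" {
  project                 = google_project.main.project_id
  name                    = "packer-build"
  auto_create_subnetworks = true

  depends_on = [google_project_service.compute]
}
```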

The Packer configurations would then be set up to work within a portion of the shared infrastructure set aside for build-related work. Again using AWS as an example, we had a separate virtual network dedicated to builds so that the temporary VMs launched by Packer would be segregated from the VMs running the real production workloads. The result was a machine image belonging to the shared account/project/etc. The automated pipeline step running Packer would then take the id of that generated machine image and publish it somewhere to be consumed by downstream steps. (In our case we were using Consul as a key/value store for this sort of thing, but it can be anything Terraform has a data source to read from.)
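
On GCP I believe the Packer side of this could look roughly like the following, using the googlecompute builder. The project ID, image family, and network name are placeholders, and the manifest post-processor just records the resulting image details in a file so a later pipeline step can publish them.

```hcl
locals {
  # e.g. "20240101123045", so each image gets a unique, sortable name
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "googlecompute" "service" {
  project_id          = "example-project-123456"  # the shared project
  source_image_family = "debian-12"
  zone                = "us-central1-a"
  network             = "packer-build"            # the build-only network
  image_name          = "service-${local.timestamp}"
  ssh_username        = "packer"
}

build {
  sources = ["source.googlecompute.service"]

  # ...provisioners that install the service go here...

  # Record the generated image so the next pipeline step can read its name
  # and publish it (to Consul, in our case).
  post-processor "manifest" {
    output = "packer-manifest.json"
  }
}
```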

The service-specific Terraform configuration(s) would then read the current image id from the location where the previous step wrote it and use it as part of the configuration for the application’s main virtual machine scale set.
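
Continuing that sketch, the service configuration might read the published image name with the Consul provider’s consul_keys data source (or whatever data source matches wherever your pipeline publishes it) and feed it into a managed instance group. The paths and sizes here are again just placeholders.

```hcl
# Read the image name that the Packer pipeline step published.
data "consul_keys" "service" {
  key {
    name = "image"
    path = "deploy/app/image-name"  # placeholder path
  }
}

resource "google_compute_instance_template" "app" {
  name_prefix  = "app-"
  machine_type = "e2-medium"

  disk {
    source_image = data.consul_keys.service.var.image
    boot         = true
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    # Instance templates are immutable, so create the replacement before
    # destroying the old one when the image changes.
    create_before_destroy = true
  }
}

resource "google_compute_region_instance_group_manager" "app" {
  name               = "app"
  region             = "us-central1"
  base_instance_name = "app"
  target_size        = 3

  version {
    instance_template = google_compute_instance_template.app.self_link
  }
}
```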

In the above I’ve used “service” in the sense of “thing that can be separately built and deployed”, not in the end-user-facing sense of the term. For example, I’m talking about service in the sense of “database server” or “main application web server”, not in the sense of all of those things working together to produce something for end-users. If your overall system is relatively simple then you may consider both of those meanings of “service” to be synonymous, in which case your version of the above could consist of exactly two Terraform configurations: the one for the long-lived infrastructure (which changes infrequently) and the one for the versioned parts that will be replaced each time you produce a new image with Packer. The key here is that the first one is applied before you use Packer, while the second one is applied each time you use Packer (assuming it succeeded and produced a new artifact).