Terraform, artifacts, and an incremental build pool

Some sites I’ve been at use static/persistent build servers and incremental builds to speed up build times (or sometimes as a cost alternative to per-minute billing, if they have spare bare metal lying around). In these scenarios I’m finding it very difficult to split planning from applying in Terraform, because I can’t guarantee identical absolute paths within the build pool per the recommendation here:

Running Terraform in Automation

Before running apply, obtain the archive created in the previous step and extract it at the same absolute path.

Basically, as jobs build up over time and the servers don’t clean up between builds, the specific path a given job runs in effectively becomes random.

That said, when looking at the zipping process, I’m merely grabbing the plan file and all the plugins downloaded by init. As long as the Terraform configuration pins the versions of its required modules, does anyone know if you can get away with simply backing up the plan file and “re-initing” before apply? So essentially, on a build/test vs. deploy split:


Build/test:

  • terraform init
  • terraform plan -out=tfplan
  • send tfplan to artifact repo

Deploy:

  • terraform init
  • download tfplan artifact
  • terraform apply tfplan
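A rough sketch of what those two stages could look like as scripts. `upload_artifact` and `download_artifact` are hypothetical stand-ins for whatever CLI your artifact repo actually provides:

```shell
# Sketch of the two stages above. upload_artifact/download_artifact are
# hypothetical stand-ins for your artifact repository's CLI.

# Naming the artifact after the commit lets the deploy stage confirm it is
# applying the plan that matches the code it checked out.
plan_artifact_name() {
  printf 'tfplan-%s' "$1"
}

build_stage() {
  sha=$(git rev-parse HEAD)
  terraform init -input=false
  terraform plan -input=false -out="$(plan_artifact_name "$sha")"
  upload_artifact "$(plan_artifact_name "$sha")"
}

deploy_stage() {
  sha=$(git rev-parse HEAD)
  terraform init -input=false   # re-install providers at this job's own path
  download_artifact "$(plan_artifact_name "$sha")"
  terraform apply -input=false "$(plan_artifact_name "$sha")"
}
```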

The idea here is that I avoid any pathing issues a plugin may have by simply re-initializing/installing those plugins, so the paths are valid on whatever server picks up the job. I’d need to do a bit of tracking on the commit to ensure the pulled code got rolled up/back to the matching version, but it should be doable.
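That commit tracking can be as small as recording the SHA at plan time and refusing to apply against anything else. A sketch of just the guard; how the recorded SHA is stored and retrieved (artifact metadata, a sidecar file, etc.) is an assumption left to your artifact repo:

```shell
# Minimal version of the commit tracking described above: compare the SHA
# recorded when the plan was made against the SHA the deploy job checked out,
# and refuse to proceed on a mismatch.
check_commit_matches() {
  plan_sha=$1
  head_sha=$2
  if [ "$plan_sha" != "$head_sha" ]; then
    echo "refusing to apply: plan was made at $plan_sha but HEAD is $head_sha" >&2
    return 1
  fi
}
```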

Am I getting too clever here? My other option is to make one task sequence that runs init/plan/apply together and uses a condition to “break” the process as needed (skip the deploy during a PR, or if the plan comes back with no changes). It would generate no artifacts, so in order to roll a deploy back I’d need to literally roll it back in git … but that might not be a bad thing.
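For that single-sequence option, `terraform plan -detailed-exitcode` gives you the “no changes” condition for free: plan exits 0 when there are no changes, 2 when there are changes, and 1 on error. A sketch, where `IS_PR` is a hypothetical variable your CI would set on pull-request builds:

```shell
# Single-sequence pipeline sketch: plan, then either apply or skip.
# -detailed-exitcode: 0 = no changes, 2 = changes present, 1 = error.
run_pipeline() {
  terraform init -input=false || return 1
  terraform plan -input=false -detailed-exitcode -out=tfplan
  case $? in
    0) echo "no changes; skipping apply" ;;
    2)
      if [ "${IS_PR:-false}" = "true" ]; then
        echo "PR build; skipping apply"
      else
        terraform apply -input=false tfplan
      fi
      ;;
    *) echo "plan failed" >&2; return 1 ;;
  esac
}
```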

Just asking around to see how others have done this in a CI/CD scenario.

Hi @Justin-DynamicD!

The main reason for that cautionary note in the “Running Terraform in Automation” guide is that a configuration containing references to local files can end up creating a plan file that includes paths to those files on the system that created the plan. When those are later re-evaluated on the system handling the apply, if Terraform sees a different result it will halt the apply, because the action during apply would be inconsistent with the action during planning.

Armed with that context, you can potentially avoid this restriction by being careful about how the configuration is written to ensure that no expressions contain absolute paths that make sense only on the system running terraform plan.

This is a little easier in Terraform 0.12 vs. prior versions because path.module is now a path relative to the current working directory rather than an absolute path, and so if you can ensure that everything within the configuration appears at the same paths relative to the current working directory then things should be able to work as desired.

It’s important to avoid using either the abspath function or path.cwd because both of those will introduce absolute paths into the plan.
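One way to enforce that in a pipeline is to render the saved plan to JSON (`terraform show -json tfplan > tfplan.json`) and fail the build if the planning machine’s absolute working directory shows up anywhere in it. A sketch of just the check, run over the rendered JSON file:

```shell
# Guard against the leak described above: fail if the planning machine's
# absolute working directory appears anywhere in the rendered plan JSON.
# Produce the input first with: terraform show -json tfplan > tfplan.json
contains_absolute_cwd() {
  grep -q -F "$(pwd)" "$1"
}
```

This is a blunt heuristic (it only catches paths under the current working directory), but it is cheap enough to run on every plan.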

If you can ensure that all configurations in your environment treat the filesystem as immutable during planning and avoid using absolute paths then you should be able to use the source code as present in your version control to apply the saved plan.

When it comes to rolling back, indeed in Terraform a “roll back” is normally instead a “roll forward” to a new state that is functional. Consider a change that introduces both a new provider block and a resource block using that provider: rolling back to the previous commit after instances have been created from that resource would not work then, because Terraform will need the provider block in order to destroy the instances it created.

Thank you for this information. It helps tremendously in planning how to leverage Terraform in a CI/CD scenario. In general, avoiding absolute paths in modules is something I’d file under “good idea” anyway :slight_smile: