Some sites I’ve been at use static/persistent build servers and incremental builds to speed up build times (or sometimes as a cost alternative to per-minute billing, if they have spare bare metal lying around). In these scenarios I’m finding it very difficult to split planning from applying in Terraform, because I can’t guarantee identical absolute paths across the build pool, per the recommendation here:
> Before running apply, obtain the archive created in the previous step and extract it at the same absolute path.
Basically, as jobs accumulate over time and the servers don’t clean up between builds, the path a given job runs in is effectively random.
That said, when I look at the zipping process, it’s really just grabbing the plan file plus all the plugins downloaded by init. As long as the Terraform configuration pins its required module and provider versions, does anyone know if you can get away with backing up just the plan file and re-running init before apply? So essentially, split into a build/test stage and a deploy stage:
Build/test:
- terraform init
- terraform plan -out=tfplan
- send tfplan to artifact repo

Deploy:
- terraform init
- download tfplan artifact
- terraform apply tfplan
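Concretely, the two stages I have in mind would look something like this. The `curl` calls and the `ARTIFACT_URL`/`BUILD_ID` variables are placeholders for whatever your artifact repo actually needs, and I’m assuming provider versions are pinned (e.g. via a committed dependency lock file) so the second init installs the same plugin versions:

```shell
#!/bin/sh
set -eu

# Build/test stage: plan and ship the plan file.
plan_stage() {
  terraform init -input=false
  terraform plan -input=false -out=tfplan
  # Placeholder upload -- substitute your artifact repo's CLI/API.
  curl -fsS -T tfplan "$ARTIFACT_URL/tfplan-$BUILD_ID"
}

# Deploy stage: fresh workspace, fresh init, same plan file.
apply_stage() {
  terraform init -input=false    # reinstall plugins at this workspace's paths
  curl -fsS -o tfplan "$ARTIFACT_URL/tfplan-$BUILD_ID"
  terraform apply -input=false tfplan
}
```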
The idea is to avoid any pathing issues a plugin may have by simply reinstalling the plugins via init, so the paths are valid for whatever workspace the apply lands in. I’d need to do a bit of commit tracking to make sure the pulled code gets rolled forward/back to the version that matches the plan, but that should be doable.
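For the commit tracking, I’m picturing just shipping the commit SHA alongside the plan file (the `tfplan.commit` file name is arbitrary):

```shell
#!/bin/sh
set -eu

# Build stage: record the commit the plan was created from, and ship
# this file alongside tfplan in the artifact repo.
record_plan_commit() {
  git rev-parse HEAD > tfplan.commit
}

# Deploy stage: before re-running init, check out the exact same
# configuration the plan was made against.
restore_plan_commit() {
  git checkout --detach "$(cat tfplan.commit)"
}
```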
Am I getting too clever here? My other option is a single task sequence that runs init/plan/apply end to end, with a condition to “break” the process as needed (skip the deploy during a PR, or when the plan comes back with no changes). That would generate no artifacts, so to roll a deploy back I’d need to literally roll it back in git … but that might not be a bad thing.
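If I went the single-sequence route, `terraform plan -detailed-exitcode` would give me the “no changes” signal (exit 0 = no changes, 1 = error, 2 = changes present). The `is_pr` flag is a stand-in for whatever PR indicator your CI system exposes:

```shell
#!/bin/sh
set -eu

# Decide whether the apply step should run, given the exit code from
# `terraform plan -detailed-exitcode` and whether this build is a PR.
decide_apply() {
  rc=$1
  is_pr=$2
  if [ "$rc" -eq 0 ]; then echo skip-no-changes; return 0; fi
  if [ "$rc" -ne 2 ]; then echo plan-failed; return "$rc"; fi
  if [ "$is_pr" = "true" ]; then echo skip-pr; return 0; fi
  echo apply
}

# In the actual job (|| rc=$? so set -e doesn't kill the script on exit 2):
#   rc=0
#   terraform plan -detailed-exitcode -out=tfplan || rc=$?
#   [ "$(decide_apply "$rc" "$IS_PR")" = apply ] && terraform apply tfplan
```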
Just asking around to see how others have handled this in a CI/CD scenario.