Terraform multi-project scenario good practices?

Hi @robert-ingeniousio,

The details of this tend to vary depending on what you’re using to orchestrate your pipeline, but the general pattern I’ve seen many times is for early pipeline steps to publish their results somewhere that later steps can retrieve them from, creating the necessary dataflow through the pipeline.

Some automation systems have an explicit way to attach metadata or files to a job and retrieve them in downstream jobs. If yours does, I’d suggest starting with that, because it gives the best visibility in that system’s UI into how the data is flowing.

For systems that don’t have such an explicit mechanism, you can often approximate it by, for example, having the build job publish the latest image location to some well-known place (e.g. a key/value store under a known key) and having the deploy job fetch that value. If your deploy step is running Terraform, then it will likely pass the value it retrieved into the Terraform configuration as an input variable, using one of the various mechanisms for setting variables, as in the sketch below.
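Here’s a minimal sketch of that pattern, assuming (purely for illustration) AWS SSM Parameter Store as the key/value store, a made-up parameter name and image address, and a Terraform configuration that declares a matching `variable "image_location"` block:

```sh
# Build step: after pushing the image, record its address under a
# well-known key. (Parameter name and image address are hypothetical.)
aws ssm put-parameter \
  --name /example/app/latest-image \
  --value "registry.example.com/app:v1.2.3" \
  --type String \
  --overwrite

# Deploy step: fetch the current value and pass it into the Terraform
# configuration as an input variable.
IMAGE="$(aws ssm get-parameter \
  --name /example/app/latest-image \
  --query Parameter.Value \
  --output text)"

terraform apply -auto-approve -var="image_location=${IMAGE}"
```

You could equally set the value via a `TF_VAR_image_location` environment variable, or have Terraform read the parameter directly with an `aws_ssm_parameter` data source; the right choice mostly depends on where you want the coupling between the pipeline and the configuration to live.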

Another thing to consider here is what the “rollback” approach will be, if any. If you’ve taken the approach of publishing the latest image location somewhere, then one way to roll back would be to reset that value to the old image and then run only the “deploy” step, as sketched below. Some automation systems have more prescriptive answers to this, particularly if they have an explicit idea of build vs. deploy rather than just treating all pipeline steps as generic scripts to run.
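Continuing the hypothetical example above, that sort of rollback might look like this:

```sh
# Rollback: point the well-known key back at the previous known-good
# image, then trigger only the pipeline's "deploy" step again so that
# Terraform picks up the older value.
aws ssm put-parameter \
  --name /example/app/latest-image \
  --value "registry.example.com/app:v1.2.2" \
  --type String \
  --overwrite
```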

With all of that said, Terraform itself doesn’t include a solution to this because it’s outside of Terraform’s scope. Generally we expect that pipeline/automation tools are the better layer to handle this sort of connectivity, and so Terraform is intended to be a thing that the automation runs rather than the automation itself.