Yes, in your case it seems like the requirement would be for each of the build steps to publish its results somewhere, and then have your deploy step, prior to running Terraform, fetch the values from that same location and pass them to Terraform as one or more input variables.
The key result of that design is that the external data store “remembers” the most recent result from each of the jobs. You can then run the deployment job at any time and, if nothing changed upstream, it will just pass the same values to Terraform again; ideally the relevant provider will notice that nothing changed and Terraform will propose no changes. If you rebuild one particular image then the Terraform plan should include only the changes related to that one image, because Terraform can see that all of the other images are the same as recorded in the previous state snapshot.
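For example, the Terraform side of this could be as simple as one input variable that the deploy step populates; the variable name here is hypothetical, but the shape would be something like:

```hcl
# Hypothetical variable: one entry per component image that the
# build pipelines publish.
variable "image_refs" {
  type        = map(string)
  description = "Map from component name to the image reference to deploy."
}
```

Since GitLab CI/CD variables surface as environment variables, the deploy job could populate this without any wrapper scripting by exporting it with Terraform’s `TF_VAR_` prefix, e.g. `TF_VAR_image_refs='{app="registry.example.com/app:1.2.3"}'`.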
I’m not familiar with GitLab CI/CD in particular, but I did quickly refer to its documentation and it seems to use terminology I’m familiar with, so hopefully I’ve understood how it works well enough that the following makes sense:
I think I would try to model each of your build steps as a separate “pipeline” in GitLab, and then represent the single multi-image deployment as its own separate “pipeline”. I think that means you could use Multi-project pipelines to configure it so that if any of the build pipelines run they will each trigger a run of the same downstream deployment pipeline.
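With the caveat that I’m going only from the documentation here, I believe the `trigger` keyword is how a job in one project starts a pipeline in another project, so each build pipeline might end with a job along these lines (the stage names, project path, and image name are all made up):

```yaml
# Sketch only: stage names, project paths, and image names are hypothetical.
stages:
  - build
  - deploy

build-app-image:
  stage: build
  script:
    - docker build -t registry.example.com/myteam/app:latest .
    - docker push registry.example.com/myteam/app:latest

trigger-deployment:
  stage: deploy
  trigger:
    project: myteam/deployment   # the separate project containing the deploy pipeline
    branch: main
```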
I think in your comment you were referring to the fact that when one pipeline triggers another it can pass variables down to the downstream pipeline, but it can only pass its own data, so as you say there would be no way by that strategy for the deployment project to find the images from the other pipelines. I would try to address that by taking a pull rather than a push strategy: you mentioned that you’re pushing images to a registry, which suggests that you could design your deployment pipeline to access the registry directly to find the current/latest image for each component and then pass those IDs to Terraform as one or more variables.
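One way to implement the “pull” side in Terraform itself, if these are Docker-style images, would be the `docker_registry_image` data source from the kreuzwerker/docker provider, which resolves a tag to the digest it currently points at. The registry address here is a placeholder, and a private registry would also need a `registry_auth` block in the provider configuration, omitted for brevity:

```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

# Ask the registry what the "latest" tag currently refers to.
data "docker_registry_image" "app" {
  name = "registry.example.com/myteam/app:latest"
}

locals {
  # Pin downstream resources to the exact digest, so Terraform proposes
  # changes only when the registry content has actually changed.
  app_image = "registry.example.com/myteam/app@${data.docker_registry_image.app.sha256_digest}"
}
```

A nice side-effect of doing the lookup inside Terraform is that the deployment pipeline then doesn’t need to pass the image IDs in at all; re-running it always deploys whatever the registry currently says is current.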
This is, therefore, using the package registry as the “external data store” in what I described: the build steps write to it and the deploy step reads from it. If we’re talking about Docker container images, I’d think about using a specific mutable tag like `latest` to represent the “current version” of each image. If you then find you need to roll back to an earlier version of an image, you can use some other process to re-point `latest` at an existing image and then re-run the deployment pipeline without re-running any of the build pipelines.
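That rollback process could itself be a manual job in the deployment project; everything named below is hypothetical, but the idea is just to re-point `latest` at a known-good image and then re-run the deployment:

```yaml
# Hypothetical manual job: re-points the mutable "latest" tag at an
# earlier image, with no rebuilding involved.
rollback-app-image:
  stage: deploy
  when: manual
  variables:
    GOOD_TAG: ""   # set to the known-good version when running the job
  script:
    - docker pull registry.example.com/myteam/app:$GOOD_TAG
    - docker tag registry.example.com/myteam/app:$GOOD_TAG registry.example.com/myteam/app:latest
    - docker push registry.example.com/myteam/app:latest
```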
I hope that’s useful! I don’t think I’d be able to go into any more detail on this because I’m already well past my limit of knowledge about GitLab, but I hope I at least got enough of the terminology right here that you can see what I’m talking about and think about how to adapt it into a real solution using the GitLab building blocks.