Do you have best practices for automating the creation of remote state resources for the numerous, multivariant Terraform projects at your (big-ish) company?
Do you have best practices for sharing variables like I propose to do in a `config` module, trying to stay DRY and create a single source of truth across your Terraform projects?
I’ve got a basic Terraform root module/project that I ultimately want to back with remote S3 state and DynamoDB locking. There will be numerous other tf projects following suit, and numerous teams/environments using them, so I want to automate not only the creation of the state resources but also make it easy and automatic for tf projects to point at the right S3 backend (using bucket/table naming conventions)…
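For concreteness, the end state I want per project is a backend block along these lines (the bucket/table names here are hypothetical, following a team/project convention):

```hcl
terraform {
  backend "s3" {
    # Hypothetical names following a <team>-<project> convention:
    bucket         = "payments-billing-api-tfstate"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "payments-billing-api-tflock"
    encrypt        = true
  }
}
```

One wrinkle that matters for everything below: backend blocks can’t interpolate variables or locals, so the naming convention has to be applied some other way (more on that at the end).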
I implemented a separate module to take care of creating and configuring the S3 bucket and DynamoDB resources. The root module project invokes this module to create the state resources.
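A sketch of what that module does (variable and resource names here are illustrative, not my exact code):

```hcl
# modules/s3-backend/main.tf

variable "team" {
  type = string
}

variable "project" {
  type = string
}

locals {
  prefix = "${var.team}-${var.project}"
}

resource "aws_s3_bucket" "tfstate" {
  bucket = "${local.prefix}-tfstate"
}

# Versioning so state history is recoverable.
resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id
  versioning_configuration {
    status = "Enabled"
  }
}

# The S3 backend requires a lock table with a "LockID" string hash key.
resource "aws_dynamodb_table" "tflock" {
  name         = "${local.prefix}-tflock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

output "state_bucket" {
  value = aws_s3_bucket.tfstate.bucket
}

output "lock_table" {
  value = aws_dynamodb_table.tflock.name
}
```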
Is there a way to have Terraform try to use the remote backend, and fall back to local state if that fails?
Obviously I have a chicken/egg issue, unless I can figure out a way to make Terraform use local state long enough to run the project that creates the remote state resources, then henceforth use the remote backend.
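(As far as I can tell there’s no built-in fallback, but the standard bootstrap dance for this chicken/egg problem seems to be: start on local state, create the backend resources, then add the backend block and migrate. Roughly:)

```hcl
# Step 1: with no backend block present, `terraform init` defaults to
#         local state, and `terraform apply` creates the bucket/table.
#
# Step 2: add this block and run `terraform init -migrate-state` to
#         copy the local state into the newly created bucket.

terraform {
  backend "s3" {
    bucket         = "payments-billing-api-tfstate" # created in step 1
    key            = "s3-backend/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "payments-billing-api-tflock"
    encrypt        = true
  }
}
```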
The purpose of this approach was to provide an `s3-backend` module to standardize and automate how teams at my company create Terraform state resources. This module also served as the single source of truth for the names of the S3 bucket/DynamoDB table backend resources, which follow a convention based on team/project/etc. input variables. To my knowledge, Terraform doesn’t have anything like global or shared variables aside from module outputs, so it becomes hard to stay DRY; the answer is always “create a module”.
I had thought that I could invoke the `s3-backend` module from the various main tf projects (the ones creating the actual infra targets), but after thinking this through some more, I see that any inclusion of these S3 backend resources (via my `s3-backend` module) in a main tf project will put them in that project’s plan/state, and therefore potentially on the chopping block at destroy time. A `prevent_destroy = true` lifecycle clause would prevent that, but it would also make `terraform destroy` fail for everything else in the project, which is problematic. Looking here, I see that what I’m trying to do can’t be done with Terraform: by design, the only way to create a resource with Terraform and then refer to it from Terraform while fully protecting it from being destroyed is to put it in a completely separate tf project.
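For reference, the clause I mean is this; note that `prevent_destroy` doesn’t just skip the protected resource, it makes the whole destroy operation error out:

```hcl
resource "aws_s3_bucket" "tfstate" {
  bucket = "${local.prefix}-tfstate"

  lifecycle {
    # Any plan that would destroy this resource -- including a full
    # `terraform destroy` of the project -- fails with an error.
    prevent_destroy = true
  }
}
```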
I’m but a veteran Software Engineer turned DevOps Engineer, trying to wrap my mind around the Terraform way of doing things, and cringing a bit here.
I obviously could create the state resources manually and move on with my life. The only reason I’m trying to use Terraform to create Terraform’s own state resources is that this will be done many, many times by many, many teams at this company, and I wanted some automation in place so this isn’t something a human has to do every time.
The other primary motivation here is to provide a mechanism that lets other modules fetch the values of some common variables, which derive from the account/team/project/etc. input variables. I had planned on doing this with my `s3-backend` module, which not only calculates the output variables in question but also contains the resources themselves. That latter part is the problem: as soon as a main module uses this module, the S3 backend resources are potentially on the chopping block at destroy time.
I can see now that the best way forward is to create a completely separate “S3 backend” tf project to automate creating the S3 backend resources (by invoking my `s3-backend` module), plus a new `config` module whose sole purpose in life is to calculate output values from account/team/project inputs. This `config` module is used by the aforementioned standalone S3 backend project, as well as by the main tf project(s) responsible for creating infra.
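Sketched out (and assuming I rework the `s3-backend` module to take the computed names as inputs rather than deriving them itself):

```hcl
# backend-project/main.tf -- standalone, applied once per team/project
module "config" {
  source  = "../modules/config"
  team    = var.team
  project = var.project
}

module "s3_backend" {
  source     = "../modules/s3-backend"
  bucket     = module.config.state_bucket
  lock_table = module.config.lock_table
}
```

```hcl
# infra-project/main.tf -- gets the same names from the same config
# module, but never instantiates the backend resources themselves.
module "config" {
  source  = "../modules/config"
  team    = var.team
  project = var.project
}
```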
In the end, the documentation for teams will say (and their CI/CD pipelines will do): apply the “S3 backend” project up front, once; then apply your other tf infra project(s), which will be automatically configured to use the created remote state resources, to create your infra.
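The “automatically configured” part can lean on Terraform’s partial backend configuration: each infra project declares an empty `s3` backend block, and the pipeline supplies the convention-derived settings at init time. A sketch (values hypothetical):

```hcl
# infra-project/main.tf
terraform {
  backend "s3" {} # settings supplied at init time
}
```

```hcl
# backend.hcl -- generated by the pipeline from the team/project inputs
bucket         = "payments-billing-api-tfstate"
key            = "infra/terraform.tfstate"
region         = "us-east-1"
dynamodb_table = "payments-billing-api-tflock"
encrypt        = true
```

The pipeline then runs `terraform init -backend-config=backend.hcl` before plan/apply.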
Thanks!