The Terraform docs suggest using modules to describe separate infrastructure components that logically belong together – agreed. What I did to follow this approach in our AWS infrastructure:
- created a data module with subnet IDs, VPC IDs, and so on – things that are common to many other infrastructure parts
- created a Terraform config for one of many projects – project-A, for example
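A minimal sketch of such a shared data module might look like this (the VPC tag, filter names, and output names are my assumptions, not necessarily what the poster used):

```hcl
# Hypothetical shared "data" module: looks up common network
# resources so other projects can consume them as outputs.

data "aws_vpc" "main" {
  tags = {
    Name = "main" # assumed tag; adjust to your naming scheme
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }
}

output "vpc_id" {
  value = data.aws_vpc.main.id
}

output "subnet_ids" {
  value = data.aws_subnets.private.ids
}
```

Each project can then call this module and read `module.<name>.vpc_id` instead of hardcoding IDs.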
An important detail:
project-A uses the dev and prod workspaces, so it can exist almost identically in dev and prod – a common case. Its state file is stored in an S3 bucket with a key
I was confused and felt I had done something wrong, because when I tried to deploy another project, let's say project-B,
terraform plan suggested overwriting the resources defined by
project-A. And this is quite logical, because project-A and project-B both use the same workspace (env) – dev, since they both need to exist in the dev environment – and the same state file,
terraform.tfstate, in the same S3 bucket.
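To illustrate the collision: if both projects carry a backend block like the following (bucket name, key, and region are assumptions here), then with the same workspace selected they read and write the very same state object, so each one plans to destroy the other's resources:

```hcl
# Present in BOTH project-A and project-B – this is the problem.
terraform {
  backend "s3" {
    bucket = "my-terraform-state" # assumed bucket name
    key    = "terraform.tfstate"  # identical key in both projects
    region = "eu-west-1"          # assumed region
  }
}
```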
Terraform somehow didn't figure out that these projects (modules) are parts of one root module and should be merged together. Is it possible to define all infrastructure for dev in one state file, and update it on demand with the ability to update only one module in it?
Or is that the wrong way, and is it better to use unique state files per project? Or unique workspaces per project with the same state file names?
How do you guys cope with that – I mean, supporting different projects? Totally confused; maybe it's a lack of knowledge on my part.
Thanks in advance.
I'm a little bit confused about exactly what you have, but I think you are describing two root modules (the directories from within which you run
terraform) but with the same bucket & key in S3.
Each root module has to have a unique bucket/key for the backend storage.
If instead you actually want the two projects managed together, you could implement this with a single root module that has a child module for project A and one for project B. Neither child module would contain any backend details; those always live in the root module.
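A sketch of that single root module, assuming hypothetical paths and names (the backend block appears only here, never in the child modules):

```hcl
# Root module: the only place the backend is configured.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"    # assumed bucket name
    key    = "dev/terraform.tfstate" # assumed key
    region = "eu-west-1"             # assumed region
  }
}

# Child modules for each project; their directories contain
# only resources, variables, and outputs – no backend block.
module "project_a" {
  source = "./modules/project-a"
}

module "project_b" {
  source = "./modules/project-b"
}
```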
And the dir structure would be like this, right?
Where the S3 backend storage is defined only in the root module.
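Something like this, I assume (paths and file names are hypothetical):

```
root/
├── main.tf          # backend "s3" config + module blocks
└── modules/
    ├── project-a/
    │   └── main.tf  # no backend block
    └── project-b/
        └── main.tf  # no backend block
```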
But given that, if I want to deploy only module-A, I can do it with
terraform apply -target=module.module-A, not by running apply from the module-A directory? Terraform wouldn't see a backend config there and can't track the root module from its children.
I feel it may be a bad idea to keep all projects in one state file, because they are completely separate. If projectA breaks the state file during a deploy, projectB also can't be deployed until projectA fixes its part…
Maybe it's better to organise each module as a root module? Then the question: separate them by different state files or by different workspaces? I saw in Adobe's class that they use workspaces with unique names including env and project, like:
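For illustration, a workspace-per-project-per-env naming scheme could look something like this (the exact names are my assumptions, not Adobe's actual convention):

```
terraform workspace new project-a-dev
terraform workspace new project-a-prod
terraform workspace new project-b-dev
terraform workspace new project-b-prod
```

With the S3 backend, each workspace then gets its own state object under the `env:/` prefix, so the projects no longer collide even with identical key names.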
That structure would be right, yes.
While you can use
-target to apply only part of the code at a time, it isn't good practice to do that as a matter of course. I would suggest seeing a root module as a “unit of deployment” where you apply all changes at the same time. If you have resources which you want to manage separately, you would have a different root module (and therefore a different backend location for the state file).
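For completeness, targeting a child module from the root directory looks like this (module name assumed; Terraform itself warns that this is for exceptional use):

```
# Run from the ROOT module directory, not from modules/project-a/
terraform apply -target=module.project_a
```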
How many root modules to have is really up to you, although I would consider things like ownership, change cadence, risk and refresh speed when deciding how to split things. Over time that is likely to change, so you might need to move things between state files at various points. Using modules for code sharing, as well as the remote state data source and/or data sources in general can be really useful for keeping things loosely coupled while retaining code reuse.
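The remote state data source mentioned above lets one root module read another's outputs without sharing a state file. A minimal sketch, assuming hypothetical bucket/key names pointing at the other project's backend:

```hcl
# Read the outputs of a separately-deployed "network" root module.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"        # assumed bucket name
    key    = "network/terraform.tfstate" # assumed key of the other state
    region = "eu-west-1"                 # assumed region
  }
}

# Then reference an output exported by that state, e.g.:
# subnet_ids = data.terraform_remote_state.network.outputs.subnet_ids
```

This keeps the projects loosely coupled: each has its own state, but can still consume shared values.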
Thanks a lot, Stuart. You've brought more order to my ideas and our team's.
Yes, a root module as a “unit of deployment” is what we need, and it's enough decoupling for our infrastructure.
Thanks, will move forward with it then.