Share Resources Between Modules

Hello,

Right now, I’m breaking up our monolithic repo into small modules. Currently, there is a “resources” folder hosting all resources shared by all modules. Example code:
values = [
  file("${path.module}/resources/providers/aws.yaml")
]

If I break the modules out into different repos, duplicating this “resources” folder in every new one doesn’t seem like a good idea.

Any words of wisdom?

Thanks,

Hi,

There is no single way to approach this refactoring. Here are some thoughts and personal experience. Not sure it’s wisdom; I’ll let you be the judge of that.

1/ Try to make as few references as possible to elements in the local deployment environment. Your code example looks like it’s reading a YAML file from the local module path. I don’t know what’s in there, but if you could use a data statement backed by a web API, that would make it easier to transition later to a continuous deployment system, if that’s the plan.
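For illustration, a minimal sketch of what that could look like, assuming the YAML were served over HTTP and using the hashicorp/http provider (the URL is a placeholder, not from the original post):

data "http" "providers_config" {
  # Assumed endpoint serving the shared YAML; replace with your own.
  url = "https://config.example.com/providers/aws.yaml"
}

locals {
  # Decode the fetched YAML so it can be passed to modules as a value.
  # response_body requires hashicorp/http >= 3.0; older versions expose body.
  providers_config = yamldecode(data.http.providers_config.response_body)
}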

2/ Consider different options for repository and in-repository code layout. There is no one-size-fits-all in my experience, and things change (teams hopefully grow in size and skill).

3/ Think a lot about collaboration and deployment methods. You might have a CLI-driven workflow today and, as the tooling matures, a VCS- or API-driven workflow could actually be your future goal. Don’t try to solve a problem you don’t have, but take it into consideration if you know you will have to solve it later.

For a CLI-driven workflow I have been using a rigid structure that helped team collaboration a lot. Here’s what I did. I’m not saying it’s perfect, but I find it more important to have a common understanding among collaborators than a perhaps technically better solution that is misunderstood or too open to interpretation.

1/ Standardize repository names

Deployment repository: terraform-<provider>-<team>

Examples:

  • terraform-aws-infrastructure
  • terraform-pagerduty-infrastructure
  • terraform-aws-analytics
  • terraform-github-shared

Module repository: terraform-module-<provider>-<feature>

Examples:

  • terraform-module-aws-vpc
  • terraform-module-azure-vnet

Pros: anyone who knows this rule can find the repository if they know which provider it is deployed on and which team owns it (tagging resources on the provider can help).
Cons: multi-provider modules and stacks are forced into separate repos.

I am not a big fan of one stack per deployment repository; that is too much git’ing around for my taste. A repo per module works fine for me, though it is not a hard requirement in Terraform tooling. It seems to work well with pushing modules to registries and testing them.

2/ Have template repositories

A template repository can be cloned/forked to include things like .gitignore, .editorconfig, and a template README.md.

Bonus: there might be a Terraform provider for your VCS that makes this rule easy to enforce and live with.
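As a sketch, assuming GitHub as the VCS and an existing template repository named terraform-module-template (both assumptions), the GitHub provider can create new module repos from it:

resource "github_repository" "vpc_module" {
  name        = "terraform-module-aws-vpc"
  description = "Terraform module for AWS VPC"

  # Create the repo from an assumed template repository in the org.
  template {
    owner      = "myorg"                     # assumed organization
    repository = "terraform-module-template" # assumed template repo
  }
}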

3/ Standardize directory layout in repositories

A deployment repository has a provider-dependent layout.

For AWS, in the terraform-aws-<team> repo, there is a four-level hierarchy: <component or app>/<environment>/<region>/<stack>

Examples:

  • myapp/prod/eu-west-1/vpc
  • myapp/test/eu-west-1/vpc
  • shared/prod/eu-west-1/vpc

For GitHub, in the terraform-github-<team> repo, there is a two-level hierarchy: <owner>/<stack>

Examples:

  • myteam/repositories
  • myorg/users

4/ Standardize file layout

For a stack (aka root module), there should be:

  • main.tf => terraform and provider blocks
  • variables.tf => variable statements
  • outputs.tf => output statements
  • modules.tf => module statements
  • resources.tf or r_<service>.tf => resource statements
  • data.tf or d_<service>.tf => data statements

A stack should be a collection of modules or of resources. When it would be mixed, put the stack-local resources into a submodule whose name unambiguously refers to the stack, so that at any level you have either modules or resources, never both.

Declare data sources and resources in a single file (resources.tf or data.tf), or break them down by service or type when the file gets too big. For example, split data.tf into d_iam_policy_document.tf and d_remote_state.tf.
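For illustration, a d_iam_policy_document.tf split out this way would contain only data blocks of that type; the statement below is a made-up placeholder:

# d_iam_policy_document.tf
data "aws_iam_policy_document" "read_shared_config" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-shared-config/*"] # assumed bucket ARN
  }
}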

These things seem easy and obvious, but they help a lot with finding your way around, delivering more, and adding tooling. One of the things this leads to is how to lay out your state files on a remote backend. If your remote state backend is S3, mirror the layout of the stacks in the deployment repository on your team’s backend. If it’s Terraform Cloud, just name your workspaces following the layout.
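As a sketch, for the myapp/prod/eu-west-1/vpc stack above, mirroring the layout in an S3 backend could look like this (the bucket name is an assumption):

terraform {
  backend "s3" {
    bucket = "myteam-terraform-state" # assumed bucket name
    # The key mirrors the stack path in the deployment repository.
    key    = "myapp/prod/eu-west-1/vpc/terraform.tfstate"
    region = "eu-west-1"
  }
}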

I really appreciate your reply. Indeed, there is a lot of wisdom here!

We are moving from a CLI-driven workflow to VCS and the Terraform Enterprise module registry. That’s the reason I need to break up the current monolithic repo into modules.

Regarding the resource file, it’s hard to describe, but it is impossible to use a data call. Let’s assume it is a database configuration file used by all modules.

One way I can think of is to create a repo for the resource files and publish it as a module. Then terraform init will download the source to the local drive (the .terraform folder), and I can use file() to open these config files.
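A minimal sketch of that idea, assuming a hypothetical shared-config module in a private registry (the source address is an assumption): rather than reaching into .terraform with file(), the module can expose the file contents through an output.

module "shared_config" {
  # Assumed private registry address; replace with your own module source.
  source  = "app.terraform.io/myorg/shared-config/generic"
  version = "~> 1.0"
}

# Inside the shared-config module, a single output does the file() call:
# output "aws_providers_yaml" {
#   value = file("${path.module}/resources/providers/aws.yaml")
# }
#
# Callers then use module.shared_config.aws_providers_yaml directly.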

I appreciate your help!

That would add versioning to it too, which could be a good idea depending on your use case. Otherwise, you can store it on S3 (be sure you’re OK with the limitation noted in the docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/s3_bucket_object) or in Systems Manager Parameter Store; DynamoDB could also back some kind of data source. Are they just parameters, or do they contain sensitive values too?
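For illustration, a Parameter Store variant could look like the sketch below; the parameter name is an assumption, and the document has to fit within the SSM parameter size limits:

# Assumed parameter name holding the shared YAML document.
data "aws_ssm_parameter" "db_config" {
  name = "/shared/config/database"
}

locals {
  # Decode the stored YAML so it can be passed to modules as a value.
  db_config = yamldecode(data.aws_ssm_parameter.db_config.value)
}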