Supporting gigantic stacks

Hi.
I am trying to design development-to-production Terraform pipelines for a portfolio of suites of applications. The portfolio has 3 major groups of applications, each with many microservices, database clusters, … supported by 100-200 developers (probably a 20 two-pizza-team effort).

We need to break the Terraform stacks up and have one stack reference resources created by other stacks. I think I can use “output” to make the values available to other running stacks.

If in stack 1 I have:

output "app1_alb_arn" {
  value = aws_alb.app1_alb.arn
}

In stack 2, how can I do something like this to reference it?

variable "app1_alb_arn" {
  default = stack1.app1_alb_arn
}

You can use the remote state data source to get access to the outputs from other state files: The terraform_remote_state Data Source | Terraform by HashiCorp
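
For example, a minimal sketch, assuming stack 1’s state lives in an S3 bucket (the bucket name and key here are hypothetical):

# Read stack 1's state file from the S3 backend.
data "terraform_remote_state" "stack1" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"        # hypothetical bucket
    key    = "stack1/terraform.tfstate"  # hypothetical key
    region = "us-east-1"
  }
}

# Stack 1's outputs are exposed under the "outputs" attribute:
locals {
  app1_alb_arn = data.terraform_remote_state.stack1.outputs.app1_alb_arn
}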

OK… just to wrap my mind around it. I am a beginner, so sorry for the beginner questions. Stack Overflow and Server Fault want more concrete questions/answers.

We are using S3 buckets for remote state. Currently each app is broken up into separate chunks of work with state files in different locations for each AWS account.

What you are saying is: use the same state file for all of the chunks? That would imply 20-100 Jenkins pipelines trying to modify the same state, with certainly at least 2 executing at any given time…

We have a team that is creating template modules. The analytical team uses the Aurora module to make Aurora clusters in all of dev, qa, and prod, and they define a module “analytics” that has a DB in it that is built from module.blueprint.aurora, and they publish the writer endpoint with an output statement:

output "aurora_writer_endpoint"

The inventory team could then get the writer endpoint by using “module.analytics.aurora_writer_endpoint”.
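
Something like this, roughly (the module paths and the blueprint’s output name are approximations):

# Inside the analytics module (modules/analytics/main.tf):
module "aurora" {
  source = "../blueprint/aurora"  # the shared template module
  # ... cluster inputs ...
}

output "aurora_writer_endpoint" {
  value = module.aurora.writer_endpoint  # assuming the blueprint exposes this
}

# In a root module that includes both teams' modules:
module "analytics" {
  source = "./modules/analytics"
}

module "inventory" {
  source          = "./modules/inventory"
  writer_endpoint = module.analytics.aurora_writer_endpoint  # assumes inventory declares this variable
}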

No, you have to have a different state file for each root module (which could be in a different S3 bucket, or just under a different key within the same bucket).

You were asking about how those totally independent root modules can then interoperate, as you likely have some IDs, etc. that need to be shared (for example a VPC ID or a cluster ARN).

Three ways this could be done:

  1. Create a module that contains those values (just as fixed strings/maps/lists) and then include that module wherever needed (see the first sketch after this list).

Advantages: Things are decoupled, so you don’t need any access to other state files or AWS accounts (which can be very useful for security), and you don’t need to coordinate output names, etc.
Disadvantages: If anything changes you need to update the central module, otherwise the wrong details will keep being used.

  2. Use the remote state data source, so one root module exposes outputs that another root module can then consume (as in the terraform_remote_state example above)

Advantages: The data is always up to date as you are looking directly at the other module’s state file, while not needing access to the AWS account that the resources actually live in
Disadvantages: You need access to the state file and you need to coordinate to ensure the right outputs are included (as well as handling changes)

  3. Use other data sources to find the IDs you want (could be used in conjunction with 1 & 2; see the second sketch after this list)

Advantages: Gives you access to loads of information about a resource (not just an ID), which is always up to date, without needing access to state files from other root modules.
Disadvantages: Needs access to the AWS accounts containing the resources, plus a way to find what you need - you will need to agree on a mechanism, such as tagging, and then also deal with the possibility of finding more than one resource, or none (e.g. if a tag changes or there is a permission issue)
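
For option 1, a minimal sketch (every name and value here is hypothetical):

# modules/shared-values/outputs.tf -- nothing but fixed values,
# updated by hand whenever the real resources change:
output "vpc_id" {
  value = "vpc-0a1b2c3d4e5f67890"
}

output "app1_alb_arn" {
  value = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/app1/0123456789abcdef"
}

Any root module can then include it and read the values:

module "shared" {
  source = "./modules/shared-values"
}

# ...then reference module.shared.app1_alb_arn wherever needed.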
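
And for option 3, a sketch of a tag-based lookup (the tag value is an assumption; the teams would need to agree on a tagging convention):

# Find the load balancer directly in the AWS account, by tag:
data "aws_lb" "app1" {
  tags = {
    Name = "app1-alb"
  }
}

# data.aws_lb.app1.arn, data.aws_lb.app1.dns_name, etc. are then available.

Note that this lookup fails if the tag matches zero or more than one load balancer, which is exactly the coordination risk described above.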