Recommended approach for multi-env, multi-stack layout

Hey all,

I’ve looked through the documentation and examples, but I can’t seem to find a recommended pattern for infrastructure with the following layout:

Environments:

  • dev
  • dev-
  • prod

And each of the environments has:

  • vpc
  • apps
    • api
    • web
  • db
    • RDS
  • DynamoDB

I have some assumptions, but I’m not sure if I’m thinking in the right direction. Can someone share what the most current recommendation is?

Thanks!

We have example projects that sound very similar to what you’re looking for; you can take a look at the TypeScript version here. The same project has also been re-implemented in Java and Python.

Even if you’re not using Java or Python, I’d recommend taking a look at either one, as they both have far more descriptive READMEs regarding the design patterns of the project than the TypeScript implementation.

Thanks, Mark!

Yes, I saw those, and I can see the inclination is to have a flat list of stacks:

  • dev-vpc
  • dev-apps
  • dev-db
  • prod-vpc
  • prod-apps
  • prod-db

It makes sense, but then it’s unclear what the best way is to handle differences across environments.

For example, a bucket might need a different lifecycle policy, or the number of availability zones in a VPC might differ.

The only way I can think of is using conditionals. Is that the recommended approach?
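To make the question concrete, here is the kind of conditional I have in mind, as a plain TypeScript sketch (the helper name and zone counts are made up for illustration, not from the example project):

```typescript
// Hypothetical helper: derive an environment-specific value from the env name.
type Env = "dev" | "prod";

function vpcZoneCount(env: Env): number {
  // Illustrative assumption: prod spans more availability zones than dev.
  return env === "prod" ? 3 : 1;
}

// Inside a shared stack definition you would then branch on this value,
// e.g. vpcZoneCount("prod") yields 3 and vpcZoneCount("dev") yields 1.
```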

In addition, the approach of using user suffixes while keeping common stack names seems prone to unexpected resource deletions in a real CI/CD pipeline: when, for example, a non-user dev environment (the dev-posts stack in the Python example) is applied, the sls-posts-automationd DynamoDB table would be considered for deletion, since it would be missing from the synthesized dev-posts stack.

If I’m understanding it correctly, one solution would be to add a user component to the stack id here.
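Something like the following sketch of stack-id composition is what I mean (a hypothetical helper, not code from the example project):

```typescript
// Hypothetical: build the stack id from env, component, and an optional user,
// so per-user stacks get distinct state from the shared env stacks.
function stackId(env: string, component: string, user?: string): string {
  return user ? `${env}-${user}-${component}` : `${env}-${component}`;
}

// stackId("dev", "posts")          → "dev-posts"
// stackId("dev", "posts", "alice") → "dev-alice-posts"
```

With distinct ids, applying the shared dev stack would no longer see a user’s resources as candidates for deletion.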

To reiterate, I’m simply trying to understand the assumed design concepts and confirm my assumptions.

Conditionals could be an option, though I can envision some flakiness if it’s done directly through the passed environment string. If going this route, I’d personally use an enum to add a level of strictness on what’s allowable for the environment. That could help make future changes, and any subsequent problems they might create, more easily visible prior to an apply.
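As a minimal sketch of the enum idea (the enum and the lifecycle values are illustrative assumptions, not from any example project):

```typescript
// An enum restricts the environment to known values, so a typo like "prdo"
// fails at compile time rather than silently taking the dev branch.
enum Environment {
  Dev = "dev",
  Prod = "prod",
}

function bucketLifecycleDays(env: Environment): number {
  switch (env) {
    case Environment.Dev:
      return 7;   // hypothetical: short retention in dev
    case Environment.Prod:
      return 365; // hypothetical: long retention in prod
  }
}
```

The switch is exhaustive over the enum, so adding a new environment later forces you to handle it everywhere before the code compiles.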

Another way it could be done is by passing a pre-determined options object (something like devOptions and prodOptions) into the respective stacks in the main file, then accessing that object wherever the environments differ. This has its own downsides, though. Depending on the complexity of your project, or rather the complexity of the differences between environments, having configuration details that live outside the space they configure could prove awkward and/or difficult to work with.
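A rough sketch of that options-object pattern, with all names and values invented for illustration:

```typescript
// One options interface, one instance per environment. The stacks receive the
// object and never branch on an environment string themselves.
interface EnvOptions {
  zoneCount: number;          // e.g. VPC availability zones
  bucketLifecycleDays: number; // e.g. S3 lifecycle expiration
}

const devOptions: EnvOptions = { zoneCount: 1, bucketLifecycleDays: 7 };
const prodOptions: EnvOptions = { zoneCount: 3, bucketLifecycleDays: 365 };

// In the main file you would then pass the matching object into each stack:
// new VpcStack(app, "dev-vpc", devOptions);   // hypothetical stack class
// new VpcStack(app, "prod-vpc", prodOptions);
```

The upside is that all environment differences sit in one place; the downside, as noted above, is that the configuration lives away from the resources it configures.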

In total, it’s a trade-off between the added complexity of applying based on conditionals and configuration living separately from what it configures.

As for using a user suffix in CI/CD, I foresee the same potential issue as you. To my understanding, including the user in the stack id, as you suggest, would mitigate such problems.

Thanks for all your input. It makes sense, although I’ll probably need to build it and feel it out to fully understand the pros and cons.

I have another alternative in mind for environment structuring: would it make sense to split cdktf directories/projects at the environment level? Say, a directory for dev (with user-based environments in it) and a directory for prod? Or would that be a severe antipattern?

Or maybe split cdktf.json into two different directories/files with different context config sections? Let’s say:

  1. cdktf.dev.json contains information about the “dev” super environment with its own state backend
  2. cdktf.prod.json contains information about the “prod” super environment with its own state backend
    I guess the feature, and the downside, here would be that it would completely lose continuity from dev to prod (say, during a provider/module upgrade)

I can imagine you’d run into some issues having a split config for the same project, though with issues there are usually workarounds to be found. The main thing, as you already pointed out, is the general discontinuity between environments. Additionally, naming that differs from cdktf.json would likely cause issues with the CLI and, more generally, the cdktf package. As it stands, it’s hard to recommend this approach, since you’d be working against the current intentions of our design, but I’ll leave it to you to evaluate the trade-offs between the approaches. Happy to be proven wrong!

If you have the chance, I’d be interested to hear about your experience building out a solution for your use case, either in the cdk.dev Slack or in a GitHub issue if you have any ideas on improving support for your use case. Hands-on experience from practitioners is always appreciated :grin:

An example of a structured setup can be found at GitHub - vsuzdaltsev/arranger