Hi.
There isn’t much (or anything) in the documentation about strategies for deploying to different cloud environments such as dev, staging, and prod.
Since these environments are mostly the same, it makes sense to share all the code by default and only use conditionals where they differ.
However, I don’t want to deploy the changes I’ve made to every environment at the same time – I want to be able to deploy to dev or staging only, to verify things work before I deploy to prod. So there needs to be a way to define which environment gets deployed (or diff’d or synth’d).
Am I supposed to use different stacks? Or should I just use environment variables (export CLOUD_ENV=prod and then cdktf deploy) and write if-conditions based on those variables?
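To illustrate what I mean, roughly something like this – the provider import path and all the resource names are just placeholders:

import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
// Generated Google provider bindings – the exact path depends on your cdktf.json setup
import { GoogleProvider, StorageBucket } from "./.gen/providers/google";

// Pick the target environment from an environment variable
const env = process.env.CLOUD_ENV ?? "dev";

class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    new GoogleProvider(this, "google", { project: "my-project" });

    // Shared resource, parameterized only by the environment prefix
    new StorageBucket(this, "app-bucket", {
      name: `${env}-app-bucket`,
      location: "EU",
    });

    // ...if-conditions for env-specific resources would go here
  }
}

const app = new App();
new MyStack(app, `myapp-${env}`);
app.synth();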
Thanks for the ideas!
I think there are several ways to accomplish multiple environments. Currently there isn’t anything built into cdktf for this, but it is certainly possible.
Here @cmclaughlin describes using environment variables and templated names to reuse the same stack code in multiple environments. That could certainly be expanded to if conditions for resources that vary between environments.
If you have existing pipelines set up for handling this with Terraform, it should be possible to replicate that functionality, though you’ll likely need to use escape hatches since not everything is natively supported.
An issue with my approach of using env variables and env-prefixed resources is that now that I’m deploying to staging, Terraform also wants to remove all of the dev resources:
Diff: 30 to create, 0 to update, 30 to delete.
As soon as I use ENV=staging and create the staging-*** resources, Terraform wants to create all the staging resources but also remove all of the dev ones. This happens because the synth’d code now has staging-x instead of dev-x for all the resources.
If I update my code to create resources for every environment in a loop, each with its own env prefix, then it will indeed create all the staging resources and keep the dev resources. However, if I now make changes to the dev resources and apply them, they will also be applied to staging and prod! I’m trying to come up with a way to selectively deploy to only one environment rather than all of them at once.
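Simplified, the loop approach looks roughly like this (inside the same stack constructor as the sketch above, names just examples) – since everything lives in one stack and one state, any plan or apply touches every environment at once:

const environments = ["dev", "staging", "prod"];

for (const env of environments) {
  // One copy of each resource per environment, distinguished only by the prefix
  new StorageBucket(this, `${env}-app-bucket`, {
    name: `${env}-app-bucket`,
    location: "EU",
  });
}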
Any ideas on this?
You want to make sure your state storage is also parameterized with the environment. You can do that with local state, but I’d recommend using a remote one. Take a look here for more information.
@kaisellgren
Since these environments are mostly the same, it makes sense to share all the code by default and only use conditionals where they differ.
Out of curiosity, do you have a case where you have differences in terms of resources for the different stages? So, something where pure parameterization with variables wouldn’t cut it? I’m asking, since I’m putting together an RFC for https://github.com/hashicorp/terraform-cdk/issues/35
Is it not enough to do:
new GcsBackend(this, {
  bucket: 'myproject-terraform-state',
  prefix: process.env.ENV,
})
This correctly creates a folder structure like:
When I do cdktf deploy with the environment variable ENV=staging, it wants to both create the staging environment resources and destroy the dev environment resources that I had created earlier. Even if I use a different stack name.
I get that the code ends up synthesizing only staging resources, so the dev resources are no longer in the output folder, but why would that be any different from manually created resources? If I create a resource manually, cdktf won’t delete it upon deployment, as you’d expect. The state files are clearly separate from what I can tell? Is it not enough to specify the GcsBackend & GoogleProvider?
Edit: This is confusing to me. I’m using a completely different bucket for GcsBackend, and when I run terraform plan in the cdktf.out folder, Terraform wants to delete the previously created resources that my state file shouldn’t even be aware of. Here’s the relevant part of the synth’d JSON:
"terraform": {
"required_providers": {
"google": "=3.30.0"
},
"backend": {
"gcs": {
"bucket": "randomtest-xxxx.fi",
"prefix": "staging"
}
}
},
Somehow it’s looking at another state file that I haven’t mentioned anywhere in the code, decides the resources are no longer in cdk.tf.json, and wants to delete them.
Out of curiosity, do you have a case where you have differences in terms of resources for the different stages? So, something where pure parameterization with variables wouldn’t cut it?
I see at least these needs:
Global resources:
- DNS Record Sets may be shared among all environments, but each env adds its own A/CNAME records there.
- Some records may be global, like domain-verification TXT records.
- Some IAM resources can be global as well, rather than per-environment.
Partially shared resources:
- Some resources like secrets may be shared among multiple environments, but not all of them. For example, a secret for a 3rd-party service may be shared among dev, test, and staging, while prod has its own secret. I suppose you could just duplicate the same secret for every environment.
Conditional resources:
- Some resources like Google Cloud Armor can cost a lot: it’s $5 per security policy per month, so several policies add up. That may be fine, and even a necessity, for the production environment depending on your needs/goals, but having that WAF operating in test and dev environments is rather pointless. If you want to ensure the WAF doesn’t cause any problems with your product, you could deploy it to staging as well, but there’s little reason to have it in test and dev (see the sketch below the list).
However, most resources are shared across environments with differences in resource names only and maybe their parameters.
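To illustrate the conditional case, something like this is roughly what I’d want to write – the ComputeSecurityPolicy class name assumes the generated Google provider bindings, and the names are just examples:

const env = process.env.CLOUD_ENV ?? "dev";

// Only pay for the Cloud Armor policy where it actually matters
if (env === "prod" || env === "staging") {
  // ComputeSecurityPolicy would come from the generated Google provider bindings
  new ComputeSecurityPolicy(this, "waf-policy", {
    name: `${env}-waf-policy`,
  });
}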
A project generally starts out with a local terraform.tfstate file. I wonder if that is somehow still being used.
Maybe rename the extension and see what happens.
I removed the .terraform/terraform.tfstate file and it did the trick! It seems it was still being used despite the GcsBackend configuration.
I have to say that the strategy of using env variables works quite well indeed, but I noticed that I need to run rm -rf cdktf.out every time before a diff/deploy/etc., or it ends up mixing different environments and their resources. This happens even though I have configured a GcsBackend with a remote state file. I wonder if this is a bug or a feature. I’m using the latest version as of today, 0.0.17.
Seems like a bug. Could you please file it here?
Sure. I filed it: https://github.com/hashicorp/terraform-cdk/issues/384
Will add more info if needed. For now I’ll stick to rm -rf cdktf.out.