Hi all,
I have a problem where I have multiple Terraform modules that are included in a Terraform file with a for loop over a config file, so the resulting Terraform file has 100+ modules. In each module, I'm calling a Get API on AWS Lambda which is limited to 100 calls per second. Quite often, I hit this limit because Terraform tries to apply all the modules at the same time.
Is there a way I can tell Terraform, even if it's a global setting, to only try applying say 50 modules at a time?
Hi @omar.el.said,
The usual approach for this would be to split your configuration into smaller parts that each manage a more reasonable number of objects (per the remote API’s definition of “reasonable”) and then you can work with each one separately.
Because you are dynamically generating a large configuration from an external source this will be a little more complicated, but I think there are still ways to do it. For example, if you could annotate the external data with an extra property that allows you to partition the dataset into smaller parts then you could use a `for` expression with an `if` clause to discard items that are not in whatever group the current configuration belongs to.
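To make that concrete, here's a rough sketch. The `group` annotation, the `config.json` filename, and the module names are all illustrative assumptions, not taken from your actual configuration:

```hcl
variable "group" {
  type = string
}

locals {
  # Hypothetical external data source, decoded into a map of objects,
  # where each object has been annotated with a "group" attribute.
  all_items = jsondecode(file("${path.module}/config.json"))

  # Keep only the items that belong to the group this
  # configuration is responsible for.
  items = {
    for k, v in local.all_items : k => v
    if v.group == var.group
  }
}

module "example" {
  source   = "./modules/example"
  for_each = local.items

  name = each.value.name
}
```

You'd then run each group separately, e.g. `terraform apply -var="group=a"`, so that any single apply only touches that group's subset of objects.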
If you intend to use exactly the same configuration source code for each of these "groups" then an alternative would be to use multiple workspaces (via the `terraform workspace ...` subcommands), where the workspace name controls which subset of objects is active, using `terraform.workspace` in your root module to determine the current workspace name.
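A sketch of that variation, again with assumed names (`config.json`, the `group` attribute, the module paths), where the workspace name itself serves as the group key:

```hcl
locals {
  # Hypothetical external data source; each item is annotated with
  # a "group" attribute matching one of the workspace names.
  all_items = jsondecode(file("${path.module}/config.json"))

  # Select only the items belonging to the current workspace.
  items = {
    for k, v in local.all_items : k => v
    if v.group == terraform.workspace
  }
}

module "example" {
  source   = "./modules/example"
  for_each = local.items

  name = each.value.name
}
```

With this approach you'd create one workspace per group (`terraform workspace new group-a`, etc.) and switch between them with `terraform workspace select` before each apply.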
However, because this isn't exactly what workspaces were intended for, this approach won't work so well in Terraform Cloud; it can work with local Terraform CLI only because the non-Cloud subset of the workspaces functionality is relatively unopinionated and so is easier to use for unintended purposes like this.