We have developed a Terraform provider for an internal API that has recently introduced aggressive rate limiting. Setting parallelism to 1 (`terraform apply -parallelism=1`) avoids hitting the rate limit as it is currently implemented.
Is it possible to disable parallelism per provider via Terraform configuration?
Is it possible to modify a Terraform provider to work serially?
Terraform does not offer per-provider concurrency controls like that, but if you are developing the provider yourself, you can implement the rate limiting within the provider itself. This won't account for multiple instances of the provider, but given a single provider configuration, all calls will be made through the same provider process.
To expand further on what @jbardin mentions, here are two common solutions in this space:
- Implementing known rate limiting responses within the API (e.g. a 429 Too Many Requests status code or a Retry-After header) and ensuring the API client used in the provider implements a backoff/retry algorithm based on those rate limiting details.
- Implementing mutex/semaphore based execution within each of the pertinent resource operations. This can generally be achieved with Go standard library functionality (e.g. `sync.Mutex`) or community Go modules.