ReadWrite Lock in a Provider?

In the API client that I’m using for a new provider, there is a single endpoint that returns one large JSON document containing many configuration settings (effectively a map[string]map[string]any). I would like to break each group of configuration settings into a separate Terraform resource. It looks like this:

  "data": {
    "attributes": {
      "config": {
        "service_1_settings": { // this would be a Terraform resource
          "setting_1_1": false,
          "setting_1_2": false,
          "setting_1_3": 6,
        "service_2_settings": { // this would be another Terraform resource
          "setting_2_1": ["string", "string"],
          "setting_2_2": true,
          "setting_2_3": true,
          "setting_2_4": false
        "service_3_settings": { // again, another Terraform resource
          "setting_3_1": false,
          "setting_3_2": null,
          "setting_3_3": false

But, as I’m thinking through this, I am wondering how I can prevent race conditions (multiple resources representing their own configuration setting trying to update the endpoint). There is no PATCH on this endpoint, only PUT. So each resource would essentially read the JSON, update the targeted configuration setting, and then PUT the entire JSON back to the endpoint.

resource "service_1_setting" "default" {
    setting_1_1 = true
    setting_1_2 = true

resource "service_2_setting" "default" {
    setting_2_3 = true
    setting_2_4 = false

Is there a way to prevent these kinds of race conditions in the provider (like a RWMutex)? If not, assuming that you cannot change the API client, how is this generally approached?

Hi @gene.sexton,

A mutex would work just fine in a situation like this.
A single provider configuration in Terraform creates a single instance of that provider. So as long as you don’t have multiple provider configurations working on the same objects (which would be very unusual), the mutex can serialize access to the API.


hi @jbardin,

as long as you don’t have multiple providers working on the same objects

I’ve been considering using a provider-managed sync.Mutex for reasons similar to the ones described by @gene.sexton.

Clearly all bets are off if the user has multiple Terraform projects working against the same API objects at any given moment…

I think there’s still a risk within a single project when modules declare their own provider configurations, because those can run as separate provider instances, each with its own sync.Mutex.

Am I right to be concerned about this, and is there a good pattern for mitigating that risk?

Best I’ve thought of is to use a filesystem-based mutex (yuck)


Multiple modules don’t necessarily mean there are multiple provider instances. Providers should be configured within the root module and are usually shared between modules. Most of the time when there are different provider configurations they don’t overlap in the services they manage, but of course there are always exceptions.

Some services may also offer a distributed, consistent data store which can be used to coordinate actions between multiple provider instances.


Thanks for confirming. I have yet to do profiling on the change, but it looks like it is working well in tests.

I was under the impression that a module might ship with a baked-in provider requirement or version constraint. If so, the number of provider instances (and therefore mutexes) would be in the end user’s hands.

Maybe I imagined this? It was enough to make me nervous about counting on mutexes, anyway.

Concretely, Terraform starts one provider plugin process per provider block, and then uses it to handle all of the resources associated with that provider configuration. Therefore they can all work with the same mutex.

If a module includes its own provider block (which is a deprecated pattern, but still supported) then it would have a separate plugin process, and so would not have access to the same mutex.

With all that said then: if a particular provider relies on a mutex to synchronize conflicting operations on different resource instances then it should probably mention in its docs that it’s safe only if all of the resource instances are associated with the same provider block.

Thank you for the additional detail, @apparentlymart.

Given that the per-module provider block is a deprecated pattern, that I’m considering a mutex for a new provider, and that my conscience is cleared via documentation, I’m more comfortable with this approach than I was before.

This helped.