Enforce Resource Replacement in Read (Plugin Framework)

Hello,
I am developing a new provider with the Plugin Framework.
My provider has a resource that creates two instances on Azure, both of which are required for the resource to work.
The resource only requires two attributes, topic_name and endpoint_name, to create both of them.
Now, in the Read function when I detect that one of those instances is missing, I want to enforce a replacement of the resource. But as the existence of the instances themselves is not tracked in the state, I cannot tell Terraform to replace the resource.

I tried to track the status of the resource with a computed status attribute carrying the RequiresReplace plan modifier: set it to “ok” on creation, and set it to “error” in Read if something is missing. But if no other attribute changed in the state, the computed attribute is ignored and Terraform says nothing has to be done.
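For reference, the attribute from that attempt looked roughly like this (a minimal sketch of the relevant part of the schema, the rest omitted):

"status": schema.StringAttribute{
	Computed: true,
	PlanModifiers: []planmodifier.String{
		stringplanmodifier.RequiresReplace(),
	},
},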

Currently my only idea to solve this is to delete and clean up everything on Azure in the Read function and to invalidate the state with resp.State.RemoveResource(ctx) if one instance is missing.
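Roughly like this, as a sketch only (QueueExists and DeleteEndpoint are illustrative helper names, not the actual client methods, and endpointModel stands in for my state model):

func (r *endpointResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
	var state endpointModel
	resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
	if resp.Diagnostics.HasError() {
		return
	}

	// Illustrative check; the real lookup would go through the Azure client.
	exists, err := r.client.QueueExists(ctx, state.EndpointName.ValueString())
	if err != nil {
		resp.Diagnostics.AddError("checking queue", err.Error())
		return
	}

	if !exists {
		// Clean up the surviving instance on Azure (DeleteEndpoint is
		// illustrative too), then drop the resource from state so the
		// next plan re-creates everything.
		_ = r.client.DeleteEndpoint(ctx, state.EndpointName.ValueString())
		resp.State.RemoveResource(ctx)
		return
	}
	/* ... */
}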

I found this question on SO regarding the same problem, which inspired the approach with the computed status attribute, but I am not sure if it could work with the Plugin Framework.

Does anyone have a better idea how to approach this?

Hi @MacGurk :wave:

Are you able to share the Terraform configuration (*.tf), the provider resource schema and the relevant CRUD functions to provide further context?

Can you expand on your description of having a resource that creates 2 instances? Perhaps this will become clearer if you can supply the TF configuration, schema and CRUD functions, but at first glance it sounds a little odd that the resource is creating instances while the “existence of the instances themselves is not tracked in the state”.

Hey @bendbennett

Thanks for your response!

I could make something work with computed bool attributes that get modified in Read when something doesn’t exist anymore, and two plan modifiers that set an attribute telling the Update function to re-create the instance on Azure if it doesn’t exist.

I can share the repository of the provider with you.
In the resource schema you can see the computed attributes queue_exists, endpoint_exists, should_create_queue and should_create_endpoint, and the plan modifiers below them.
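In rough terms, each of those plan modifiers does something like this (a paraphrased sketch of the approach, not the exact code from the repo; assumed imports are context and the path, planmodifier and types packages from terraform-plugin-framework):

type setShouldCreateQueue struct{}

func (m setShouldCreateQueue) Description(ctx context.Context) string {
	return "Plans should_create_queue as true when Read found the queue missing."
}

func (m setShouldCreateQueue) MarkdownDescription(ctx context.Context) string {
	return m.Description(ctx)
}

func (m setShouldCreateQueue) PlanModifyBool(ctx context.Context, req planmodifier.BoolRequest, resp *planmodifier.BoolResponse) {
	var queueExists types.Bool
	resp.Diagnostics.Append(req.State.GetAttribute(ctx, path.Root("queue_exists"), &queueExists)...)
	if resp.Diagnostics.HasError() {
		return
	}
	// If Read discovered that the queue is gone, plan should_create_queue =
	// true so that Update knows to re-create it on Azure.
	if !queueExists.IsNull() && !queueExists.ValueBool() {
		resp.PlanValue = types.BoolValue(true)
	}
}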

I also have an example main.tf file for how the configuration should look.

Mind that the code is still a bit messy and a work in progress, but currently it works, in a probably hacky way…

I am aware that the resource does not follow the best practices of a Terraform provider, but the requirement was that the provider abstracts, in one resource, the whole Azure construct for a framework that works on top of that construct.

Hey @MacGurk :wave:

Thank you for sharing the links to the schema and CRUD functions along with an example for main.tf. Great to hear that you got this working.

I have a couple of questions:

  • You mentioned that “in the Read function when I detect that one of those instances is missing”. What is causing the instances that have been created to be removed/deleted? Is this removal/deletion of the queue and/or subscriptions taking place outside of Terraform? Ordinarily, the Delete function, which makes up one of the CRUD functions associated with a resource, would be used to manage the lifecycle of the resource.
  • I’m guessing that you’ve already evaluated using the Terraform modules for Azure Service Bus Queue and have determined that you have additional/other requirements?
  • As an aside, I’ve pulled the repo to take a look through the implementation. Within the Create function it appears that you have the following calls:
	err := r.client.CreateEndpointQueue(ctx, model.EndpointName, model.QueueOptions)
	/* ... */
	for _, queue := range model.AdditionalQueues {
		err := r.client.CreateEndpointQueue(ctx, queue, model.QueueOptions)
		/* ... */
	}
	err = r.client.CreateEndpointWithDefaultRule(ctx, model)
	/* ... */
	for i := 0; i < len(plan.Subscriptions); i++ {
		err := r.client.CreateEndpointSubscription(ctx, model, plan.Subscriptions[i])
		/* ... */
	}

Each of these calls itself makes a call on the Azure client to perform the following actions:

  • r.client.CreateEndpointQueue() => client.CreateQueue()
  • for _, queue := range model.AdditionalQueues { ... } => client.CreateQueue()
  • r.client.CreateEndpointWithDefaultRule() => client.CreateSubscription()
  • for i := 0; i < len(plan.Subscriptions); i++ { ... } => client.CreateRule()

As each of the “Create” operations called on the Azure client represents a managed resource, each would ordinarily be handled individually within Terraform so that the lifecycle of a “Queue”, “Subscription” or “Rule” could be tracked and managed more simply. I believe this represents the recommended approach.
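To illustrate, splitting these out would mean the provider registers one resource type per Azure object, along these lines (the provider type and constructor names here are hypothetical):

func (p *dgServiceBusProvider) Resources(ctx context.Context) []func() resource.Resource {
	return []func() resource.Resource{
		NewQueueResource,        // manages a single queue
		NewSubscriptionResource, // manages a single subscription
		NewRuleResource,         // manages a single rule
	}
}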

However, I also note that you stated:

I am aware that the resource does not follow the best practices of a Terraform provider, but it was the requirement that the provider abstract in one resource the construct on Azure for a framework that works on top of that construction.

Given this restriction, I’m wondering whether it’s worth considering handling the Terraform configuration and associated schema slightly differently. For instance, in the case of creating queues, you appear to be using the following in config:

resource "dgservicebus_endpoint" "dev" {
  endpoint_name = "dg-nservicebus-test-endpoint"
  additional_queues = [
    "another_queue",
    "a_further_queue",
  ]
}

With the associated schema:

			"endpoint_name": schema.StringAttribute{
				Required:    true,
				Description: "",
				PlanModifiers: []planmodifier.String{
					stringplanmodifier.RequiresReplace(),
				},
			},
			/* ... */
			"additional_queues": schema.ListAttribute{
				Optional:    true,
				ElementType: types.StringType,
				Description: "",
			},

As each of these entries represents a queue resource in its own right, you could consider using a ListNestedAttribute, or something along those lines, so that you had something like the following in the configuration:

"dgservicebus_endpoint" "dev" {
  queues = [
    {
      name = "queue_1"
      /* ... */
    },
    {
      name = "queue_2"
      /* ... */
    }
  ]
}

The schema might then look something like:

			"queues": schema.ListNestedAttribute{
				Required: true,
				NestedObject: schema.NestedAttributeObject{
					Attributes: map[string]schema.Attribute{
						"name": schema.StringAttribute{
							Required: true,
						},
						/* ... */
					},
				},
				/* ... */
			},

I think the main question is really around the management of the resources (i.e., queue, subscription, rule): what is causing these to be removed/deleted, and is this removal/deletion happening outside of Terraform?