Computed attributes and plan modifiers

It looks like I initially posted the question to the wrong category :frowning:

So the question…

Is a computed attribute always unknown in the plan if the user doesn’t specify a value for it (when the attribute is also optional)? It looks like it is, which means a provider has to “guess” a correct value for the attribute in its plan modifier; otherwise Terraform outputs a non-empty plan even when there is no change in the configuration (due to the unknown value of the computed attribute). And if the guess is wrong, Terraform reports an error in the provider (Error: Provider produced inconsistent result after apply).

Am I missing something?

Kind regards,
Dmitry

Correction. I think the plan modifier doesn’t always have to predict the correct value, but it should at least produce a correct value when there are no changes to the config (that should be enough, though more precise behavior is welcome).

So the question is - does the Framework provide a means to determine that the configuration has not changed (taking into account computed attributes whose values aren’t specified in the configuration)?

I guess a possible implementation is to walk through the plan and compare it to the state recursively, ignoring computed attributes whose values are not specified in the configuration (so they are unknown in the plan).

I’m sorry if I’m missing something and the Framework already provides something similar.

Hi @dimuon :wave: Thank you for asking about this.

Unknown (“known after apply”) values during plans should only occur in these situations for a provider written with the framework:

  • The resource is not being destroyed and there are configuration changes, in which case the framework marks all Computed: true schema attributes without a known configuration value as unknown (“known after apply”)
  • Provider logic explicitly sets the value to unknown (sketched below)
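
For the second case, a minimal sketch of an attribute plan modifier that explicitly marks its (string) attribute as unknown could look like the following. The type name is made up, the context, tfsdk, and types imports are assumed, and the exact interface and value constructors depend on the framework version in use:

// exampleUnknownModifier is a hypothetical tfsdk.AttributePlanModifier that
// forces the planned value of its string attribute to be unknown.
type exampleUnknownModifier struct{}

func (m exampleUnknownModifier) Description(ctx context.Context) string {
	return "Marks the planned attribute value as unknown (known after apply)."
}

func (m exampleUnknownModifier) MarkdownDescription(ctx context.Context) string {
	return m.Description(ctx)
}

func (m exampleUnknownModifier) Modify(ctx context.Context, req tfsdk.ModifyAttributePlanRequest, resp *tfsdk.ModifyAttributePlanResponse) {
	// Explicitly set the planned value to unknown.
	resp.AttributePlan = types.StringUnknown()
}

It would be attached to an attribute via its PlanModifiers: tfsdk.AttributePlanModifiers{exampleUnknownModifier{}} field, just like the built-in modifiers.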

To provide some additional context, the current implementation for resource planning (PlanResourceChange RPC) in the framework is:

  • Copy Terraform’s proposed new state from the request into the response planned state
  • If the resource is not being destroyed and there are any configuration changes, mark all Computed: true schema attributes without a known configuration value as unknown (“known after apply”)
  • If the resource is not being destroyed, run any provider-defined attribute plan modifiers
  • If the resource defines a ModifyPlan method, run it

This logic should ensure that unknown (“known after apply”) attribute marking occurs only when the configuration is being changed. If that is not happening correctly, there could be an edge case in that logic or in the framework’s data handling, and it would be great to get a bug report for it. We’ve also seen the opposite ask, which is running attribute plan modifiers and ModifyPlan before the unknown marking for consistency. That is being tracked here: Plan Modifiers Not Executed Before Plan Computed-Only Unknown Marking · Issue #183 · hashicorp/terraform-plugin-framework · GitHub

That issue goes into a little more detail about why the unknown marking is there at all, if that’s helpful.

To more directly answer your last question though, the current way to check whether an entire resource plan contains no updates is to check that the resource plan value is equal to the resource state value:

// Prevent further action when the plan has no changes
if req.Plan.Raw.Equal(req.State.Raw) {
  return
}
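
For instance, that check typically lives in a resource-level ModifyPlan method; here is a minimal sketch, assuming the resource implements the framework’s resource.ResourceWithModifyPlan interface (the ExampleResource type is illustrative):

func (r *ExampleResource) ModifyPlan(ctx context.Context, req resource.ModifyPlanRequest, resp *resource.ModifyPlanResponse) {
	// Prevent further action when the plan has no changes.
	if req.Plan.Raw.Equal(req.State.Raw) {
		return
	}

	// Otherwise adjust resp.Plan as needed, e.g. via resp.Plan.SetAttribute.
}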

Attribute plan modifiers can also potentially use the same logic, although that may imply that other resource or attribute logic is not correctly implemented. For completeness though, attribute plan modifiers can check their associated attribute’s plan/state values via:

// Prevent further action when the attribute has no changes
if req.AttributePlan.Equal(req.AttributeState) {
  return
}

Hope this helps and please reach out if you have any further questions.

Hi @bflad , thank you for the details - it helped me to debug the issue.

I think I managed to figure out why I observed a lot of changes in terraform plan for a configuration without changes.

We use nested attributes in our schema, which looks something like the snippet below.

tfsdk.Schema{
	Attributes: map[string]tfsdk.Attribute{
		"id": {
			Type:     types.StringType,
			Computed: true,
		},
		// ...
		"nested_data1": {
			Optional: true,
			Attributes: tfsdk.ListNestedAttributes(map[string]tfsdk.Attribute{
				"nested_data2": {
					Optional: true,
					// SingleNestedAttributes is used here for illustration; the
					// Attributes field needs one of the nested attributes helpers.
					Attributes: tfsdk.SingleNestedAttributes(map[string]tfsdk.Attribute{
						// ...
					}),
				},
				// other attributes
				// ...
			}),
		},
	},
}

The problem is that the Plan function (github.com/hashicorp/terraform-plugin-framework@v0.14.0/internal/fromproto6/plan.go) gets empty values for all fields in nested_data2 that are not specified in the configuration. The provider’s Read function does return values for these fields.

So resp.PlannedState.Raw.Equal(req.PriorState.Raw) in the PlanResourceChange RPC indeed returns false, and all the diffs are the fields in the nested_data2 structure (which should be filled with data from the Read function). It looks like the proto6 MsgPack struct contains nil instead of the needed data.

Once I replaced the nested attributes with blocks, the diff became zero as expected. However, I’m not sure how to deal with blocks that are optional, e.g. if nested_data1 is converted to a block and is not specified in the configuration, terraform apply complains about an unexpected new value, as the block count changed from 0 to 1 after apply.

@bflad, I really appreciate your help.

Kind Regards,
Dmitry

:slightly_frowning_face: That sounds like a bug somewhere, but I’m having a hard time reproducing the behavior you are seeing. I created a resource with the following GetSchema and Create methods, which just reflects Terraform’s plan data into the created resource state:

func (r *ExampleResource) GetSchema(ctx context.Context) (tfsdk.Schema, diag.Diagnostics) {
	return tfsdk.Schema{
		// This description is used by the documentation generator and the language server.
		MarkdownDescription: "Example resource",

		Attributes: map[string]tfsdk.Attribute{
			"list_nested_attribute": {
				Attributes: tfsdk.ListNestedAttributes(map[string]tfsdk.Attribute{
					"nested_list_nested_attribute": {
						Attributes: tfsdk.ListNestedAttributes(map[string]tfsdk.Attribute{
							"double_nested_bool_attribute": {
								Optional: true,
								Type:     types.BoolType,
							},
							"double_nested_string_attribute": {
								Optional: true,
								Type:     types.StringType,
							},
						}),
						Optional: true,
					},
				}),
				MarkdownDescription: "List nested attribute",
				Optional:            true,
			},
			"id": {
				Computed:            true,
				MarkdownDescription: "Example identifier",
				PlanModifiers: tfsdk.AttributePlanModifiers{
					resource.UseStateForUnknown(),
				},
				Type: types.StringType,
			},
		},
	}, nil
}

func (r *ExampleResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
	var data *ExampleResourceModel

	// Read Terraform plan data into the model
	resp.Diagnostics.Append(req.Plan.Get(ctx, &data)...)

	if resp.Diagnostics.HasError() {
		return
	}

	// If applicable, this is a great opportunity to initialize any necessary
	// provider client data and make a call using it.
	// httpResp, err := d.client.Do(httpReq)
	// if err != nil {
	//     resp.Diagnostics.AddError("Client Error", fmt.Sprintf("Unable to create example, got error: %s", err))
	//     return
	// }

	// For the purposes of this example code, hardcoding a response value to
	// save into the Terraform state.
	data.Id = types.StringValue("example-id")

	// Write logs using the tflog package
	// Documentation: https://terraform.io/plugin/log
	tflog.Trace(ctx, "created a resource")

	// Save data into Terraform state
	resp.Diagnostics.Append(resp.State.Set(ctx, &data)...)
}

Then applied an example configuration:

resource "doublenested_example" "example" {
  list_nested_attribute = [
    {
      nested_list_nested_attribute = [
        {
          double_nested_bool_attribute = true
        }
      ]
    },
  ]
}

Terraform showed a correct plan:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # doublenested_example.example will be created
  + resource "doublenested_example" "example" {
      + id                    = (known after apply)
      + list_nested_attribute = [
          + {
              + nested_list_nested_attribute = [
                  + {
                      + double_nested_bool_attribute = true
                    },
                ]
            },
        ]
    }

Plan: 1 to add, 0 to change, 0 to destroy.

And the applied Terraform state looks correct (double_nested_string_attribute is null):

{
  "version": 4,
  "terraform_version": "1.3.3",
  "serial": 2,
  "lineage": "e16c9270-a21a-5e9b-b23e-688d9f69727d",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "doublenested_example",
      "name": "example",
      "provider": "provider[\"registry.terraform.io/hashicorp/doublenested\"]",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "id": "example-id",
            "list_nested_attribute": [
              {
                "nested_list_nested_attribute": [
                  {
                    "double_nested_bool_attribute": true,
                    "double_nested_string_attribute": null
                  }
                ]
              }
            ]
          },
          "sensitive_attributes": []
        }
      ]
    }
  ],
  "check_results": []
}

Changing the double_nested_bool_attribute value to false to trigger a plan does what I’d expect in terms of the plan and applied state.

You’re mentioning your Read method occasionally here, but it is difficult to discern if that might be influencing the errant behavior you are seeing without seeing the logic that saves that particular attribute.

In general, nested attributes should be preferred over blocks with protocol version 6, so it would be nice to figure out what is going on!

@bflad , thank you for the response. It looks like we cannot proceed with blocks - they cannot be computed, which is a blocker for us.

Anyway, we still experience the described error with nested attributes when they reside inside other nested attributes. It can be a bit challenging to create a contrived example that reproduces the error, but we managed to reproduce it in one of our unit tests. The test fails after apply due to a non-empty plan. However, the resource and its implementation are quite big, so it can take some time to look into.

@bflad , I managed to reproduce the problem with nested attributes on the hashicups example with some modifications to the resource schema.
The problem happens when a nested attribute is computed.

This reproduces the problem (hashicups docker-compose should be up):

TF_ACC=1 go test -count=1 -v ./hashicups

Thanks so much for the followup, @dimuon :+1: I’ll try to take a look at this later today.

@bflad, I looked at the issue again - I tend to think that there is a defect either in the Terraform client or, maybe, in the decoding of MsgPack values in some cases.

Is it possible to switch to JSON instead of MsgPack in the communication between the client and the server (provider), just to check the hypothesis?

It looks like the issue is a blocker for us - we have a lot of nested computed and optional attributes. We are really short on time; may I kindly ask you to give it some priority?

Real quick, the short answer here is not at the moment. Terraform core sends MsgPack data on the protocol in almost all cases and we reciprocate on our side. You can forcibly dump the MsgPack data, though, by setting the TF_LOG_SDK_PROTO_DATA_DIR environment variable and using tooling (such as fq) to inspect that binary data.
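
For example (the directory path here is arbitrary), running Terraform with that environment variable set makes the provider write the raw protocol messages as files into the given directory, which can then be inspected with a MsgPack-aware tool such as fq:

TF_LOG_SDK_PROTO_DATA_DIR=/tmp/tf-proto-data terraform plan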

I have reached out to the product manager. You may want to escalate via your normal partnership contacts in the meantime.

@bflad , there is one more question if you don’t mind.

Do I understand correctly that if at least one computed and optional attribute is modified by the user, Terraform marks all computed attributes as “known after apply”, even if a plan modifier for the ‘culprit’ attribute sets some value for it, so terraform plan omits output for the culprit attribute but shows “known after apply” for all other computed attributes? If so, is there any way to tell Terraform that there is no change in the configuration besides providing plan modifiers for all computed attributes?

If the framework detects a change between the planned state and prior state when the PlanResourceChange RPC is received, it will automatically mark Computed attributes with a null configuration value as unknown (known after apply) in the plan response. This is to prevent Terraform “inconsistent result after apply” errors, which practitioners cannot avoid should any applied values differ from the plan after a resource update. Instead, the plan output is a little “noisy” until providers implement the UseStateForUnknown() plan modifier for attributes which are known to never change after update, or customized plan modifiers for situations such as default values.
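
For completeness, here is a rough sketch of the default value case mentioned above, written as an attribute plan modifier for a string attribute. The type name and defaultValue field are made up, the context, tfsdk, and types imports are assumed, and the exact interface depends on the framework version:

// stringDefaultModifier is a hypothetical tfsdk.AttributePlanModifier that fills
// in a default value when the practitioner leaves the attribute unconfigured.
type stringDefaultModifier struct {
	defaultValue string
}

func (m stringDefaultModifier) Description(ctx context.Context) string {
	return "Sets a default value when the attribute is not configured."
}

func (m stringDefaultModifier) MarkdownDescription(ctx context.Context) string {
	return m.Description(ctx)
}

func (m stringDefaultModifier) Modify(ctx context.Context, req tfsdk.ModifyAttributePlanRequest, resp *tfsdk.ModifyAttributePlanResponse) {
	// Do nothing unless the configuration value is null (left out by the practitioner).
	if !req.AttributeConfig.IsNull() {
		return
	}

	// Replace the unknown (“known after apply”) planned value with the default.
	resp.AttributePlan = types.StringValue(m.defaultValue)
}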

This behavior differs from terraform-plugin-sdk because Terraform enforces this class of inconsistent data error for non-terraform-plugin-sdk SDKs, and provider developers often ran into situations where they didn’t want the value preservation behavior, so this was the tradeoff in the framework design.
