Computed attributes and plan modifiers

It looks like I initially posted the question to the wrong category :frowning:

So the question…

Is a computed attribute always unknown in the plan if the user doesn’t specify a value for it (assuming the attribute is also optional)? It looks like it is, which means a provider has to “guess” the correct value for the attribute in its plan modifier; otherwise Terraform outputs a non-empty plan even when there is no change in the configuration (due to the unknown value of the computed attribute). And if the guess is wrong, Terraform reports an error in the provider (Error: Provider produced inconsistent result after apply).

Am I missing something?

Kind regards,
Dmitry

Correction. I think the plan modifier doesn’t have to predict the correct value in every case, but it should at least produce the correct value when there are no changes to the configuration (that should be enough, though more precise behavior would be welcome).

So the question is: does the Framework provide a means to detect that the configuration has not changed (taking into account computed attributes whose values aren’t specified in the configuration)?

I guess a possible implementation is to walk through the plan and compare it to the state recursively, ignoring computed attributes whose values are not specified in the configuration (so they are unknown in the plan).

I’m sorry if I’m missing something and the Framework already provides something similar.

Hi @dimuon :wave: Thank you for asking about this.

Unknown (“known after apply”) values during plans should only occur in these ways from a provider written in the framework:

  • If the resource is not being destroyed and there are any configuration changes, the framework marks all Computed: true schema attributes without a known configuration value as unknown (“known after apply”)
  • Provider logic explicitly sets the value to unknown

To provide some additional context, the current implementation for resource planning (PlanResourceChange RPC) in the framework is:

  • Copy Terraform’s proposed new state (“plan”) into the planned state (“state”) response
  • If the resource is not being destroyed and there are any configuration changes, mark all Computed: true schema attributes without a known configuration value as unknown (“known after apply”)
  • If the resource is not being destroyed, run any provider-defined attribute plan modifiers
  • If the resource defines a ModifyPlan method, run it

This logic should ensure that unknown (“known after apply”) attribute marking occurs only when the configuration is being changed. If that is not happening correctly, there could be an edge case in that logic or in the framework’s data handling, and it would be great to get a bug report. We’ve also seen the opposite ask, which is running attribute plan modifiers and ModifyPlan before the unknown marking for consistency. That is being tracked here: Plan Modifiers Not Executed Before Plan Computed-Only Unknown Marking · Issue #183 · hashicorp/terraform-plugin-framework · GitHub

That issue goes into a little more detail about why the unknown marking is there at all, if that’s helpful.

To more directly answer your last question though, the current way to check for an entire resource plan without updates is to check whether the resource plan value equals the resource state value:

// Prevent further action when the plan has no changes
if req.Plan.Raw.Equal(req.State.Raw) {
  return
}

Attribute plan modifiers can also potentially use the same logic, although that may imply that other resource or attribute logic is not correctly implemented. For completeness though, attribute plan modifiers can check their associated attribute’s plan/state values via:

// Prevent further action when the attribute has no changes
if req.AttributePlan.Equal(req.AttributeState) {
  return
}

Hope this helps and please reach out if you have any further questions.

Hi @bflad, thank you for the details; they helped me debug the issue.

I think I managed to figure out why I observed a lot of changes in terraform plan for a configuration without changes.

We used nested attributes in our schema, which looks something like the snippet below.

tfsdk.Schema{
	Attributes: map[string]tfsdk.Attribute{
		"id": {
			Type:     types.StringType,
			Computed: true,
		},
		// ...
		"nested_data1": {
			Optional: true,
			Attributes: tfsdk.ListNestedAttributes(map[string]tfsdk.Attribute{
				"nested_data2": {
					Optional: true,
					// assuming a SingleNestedAttributes wrapper here; nested
					// attributes require one of the tfsdk.*NestedAttributes helpers
					Attributes: tfsdk.SingleNestedAttributes(map[string]tfsdk.Attribute{
						// ...
					}),
				},
				// other attributes
				// ...
			}),
		},
	},
}

The problem is that the Plan function (github.com/hashicorp/terraform-plugin-framework@v0.14.0/internal/fromproto6/plan.go) gets empty values for all fields in nested_data2 that are not specified in the configuration. The provider’s Read function does return values for these fields.

So resp.PlannedState.Raw.Equal(req.PriorState.Raw) in the PlanResourceChange RPC indeed returns false, and all the diffs are in the fields of the nested_data2 structure (which should be filled with data from the Read function). It looks like the proto6 MsgPack struct contains nil instead of the needed data.

Once I replaced nested attributes with blocks, the diff became zero as expected. However, I’m not sure how to deal with blocks that are optional; e.g., if nested_data1 is converted to a Block and is not specified in the configuration, terraform apply complains about an unexpected new value because the block count changed from 0 to 1 after apply.

@bflad, I really appreciate your help.

Kind Regards,
Dmitry

:slightly_frowning_face: That sounds like a bug somewhere, but I’m having a hard time reproducing the behavior you are seeing. I created a resource with the following GetSchema and Create methods, which just reflect Terraform’s plan data into the created resource state:

func (r *ExampleResource) GetSchema(ctx context.Context) (tfsdk.Schema, diag.Diagnostics) {
	return tfsdk.Schema{
		// This description is used by the documentation generator and the language server.
		MarkdownDescription: "Example resource",

		Attributes: map[string]tfsdk.Attribute{
			"list_nested_attribute": {
				Attributes: tfsdk.ListNestedAttributes(map[string]tfsdk.Attribute{
					"nested_list_nested_attribute": {
						Attributes: tfsdk.ListNestedAttributes(map[string]tfsdk.Attribute{
							"double_nested_bool_attribute": {
								Optional: true,
								Type:     types.BoolType,
							},
							"double_nested_string_attribute": {
								Optional: true,
								Type:     types.StringType,
							},
						}),
						Optional: true,
					},
				}),
				MarkdownDescription: "List nested attribute",
				Optional:            true,
			},
			"id": {
				Computed:            true,
				MarkdownDescription: "Example identifier",
				PlanModifiers: tfsdk.AttributePlanModifiers{
					resource.UseStateForUnknown(),
				},
				Type: types.StringType,
			},
		},
	}, nil
}

func (r *ExampleResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
	var data *ExampleResourceModel

	// Read Terraform plan data into the model
	resp.Diagnostics.Append(req.Plan.Get(ctx, &data)...)

	if resp.Diagnostics.HasError() {
		return
	}

	// If applicable, this is a great opportunity to initialize any necessary
	// provider client data and make a call using it.
	// httpResp, err := d.client.Do(httpReq)
	// if err != nil {
	//     resp.Diagnostics.AddError("Client Error", fmt.Sprintf("Unable to create example, got error: %s", err))
	//     return
	// }

	// For the purposes of this example code, hardcoding a response value to
	// save into the Terraform state.
	data.Id = types.StringValue("example-id")

	// Write logs using the tflog package
	// Documentation: https://terraform.io/plugin/log
	tflog.Trace(ctx, "created a resource")

	// Save data into Terraform state
	resp.Diagnostics.Append(resp.State.Set(ctx, &data)...)
}

Then applied an example configuration:

resource "doublenested_example" "example" {
  list_nested_attribute = [
    {
      nested_list_nested_attribute = [
        {
          double_nested_bool_attribute = true
        }
      ]
    },
  ]
}

Terraform showed a correct plan:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # doublenested_example.example will be created
  + resource "doublenested_example" "example" {
      + id                    = (known after apply)
      + list_nested_attribute = [
          + {
              + nested_list_nested_attribute = [
                  + {
                      + double_nested_bool_attribute = true
                    },
                ]
            },
        ]
    }

Plan: 1 to add, 0 to change, 0 to destroy.

And the applied Terraform state looks correct (double_nested_string_attribute is null):

{
  "version": 4,
  "terraform_version": "1.3.3",
  "serial": 2,
  "lineage": "e16c9270-a21a-5e9b-b23e-688d9f69727d",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "doublenested_example",
      "name": "example",
      "provider": "provider[\"registry.terraform.io/hashicorp/doublenested\"]",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "id": "example-id",
            "list_nested_attribute": [
              {
                "nested_list_nested_attribute": [
                  {
                    "double_nested_bool_attribute": true,
                    "double_nested_string_attribute": null
                  }
                ]
              }
            ]
          },
          "sensitive_attributes": []
        }
      ]
    }
  ],
  "check_results": []
}

Changing the double_nested_bool_attribute value to false to trigger a plan does what I’d expect in terms of the plan and applied state.

You mention your Read method occasionally here, but it is difficult to tell whether it might be influencing the errant behavior you are seeing without seeing the logic that saves that particular attribute.

In general, nested attributes should be preferred over blocks with protocol version 6, so it would be nice to figure out what is going on!

@bflad, thank you for the response. It looks like we cannot proceed with blocks: they cannot be computed, which is a blocker for us.

Anyway, we still experience the described error with nested attributes when they reside inside other nested attributes. It can be a bit challenging to create a contrived example that reproduces the error, but we managed to reproduce it in our unit test. The test fails after apply due to a non-empty plan. However, the resource implementation is quite big, so it can take some time to look into.

@bflad, I managed to reproduce the problem with nested attributes in the hashicups example with some modifications to the resource schema.
The problem happens when a nested attribute is computed.

This reproduces the problem (hashicups docker-compose should be up):

TF_ACC=1 go test -count=1 -v ./hashicups

Thanks so much for the followup, @dimuon :+1: I’ll try to take a look at this later today.

@bflad, I looked at the issue again. I tend to think that there is a defect either in the Terraform client or, maybe, in how MsgPack values are decoded in some cases.

Is it possible to switch to JSON instead of MsgPack in communication between client and server (provider) just to check the hypothesis?

It looks like the issue is a blocker for us: we have a lot of nested computed and optional attributes. We are really short on time; may I kindly ask you to give it priority?

Real quick, the short answer here is: not at the moment. Terraform core sends MsgPack data on the protocol in almost all cases and we reciprocate on our side. You can forcibly dump the MsgPack data, though, by setting the TF_LOG_SDK_PROTO_DATA_DIR environment variable and using tooling (such as fq) to inspect that binary data.

I have reached out to the product manager. You may want to escalate via your normal partnership contacts in the meantime.

@bflad , there is one more question if you don’t mind.

Do I understand correctly that if at least one computed and optional attribute is modified by the user, Terraform marks all computed attributes as “known after apply”, even if a plan modifier for the ‘culprit’ attribute sets some value for it? In that case terraform plan omits output for the culprit attribute but shows “known after apply” for all other computed attributes. If so, is there any way to tell Terraform that there is no change in the configuration besides providing plan modifiers for all computed attributes?

If the framework detects a change between the planned state and prior state when the PlanResourceChange RPC is received, it will automatically mark Computed attributes with a null configuration value as unknown (known after apply) in the plan response. This prevents Terraform inconsistent result after apply errors, which practitioners cannot avoid, should any applied values differ from the plan after a resource update. The tradeoff is that the plan output is a little “noisy” until providers implement the UseStateForUnknown() plan modifier for attributes that are known to never change after update, or customized plan modifiers for situations such as default values.

This behavior differs from terraform-plugin-sdk: Terraform enforces this class of data consistency error for SDKs other than terraform-plugin-sdk, and provider developers often ran into situations where they didn’t want the automatic value preservation behavior, so this was the tradeoff made in the framework design.

Hi @bflad
Are there any updates on this issue? We are also running into it for a SetNestedAttribute (Computed and Optional) with a NestedAttributeObject, where no changes should actually be detected.

What is the recommended way to fix this during migration from SDKv2?

 ~ api_keys  = [
          - {
              - api_key_id = "abcdef...." -> null
              - role_names = [
                  - "GROUP_OWNER",
                ] -> null
            },
        ] -> (known after apply)

Could you expand on what you are looking for? If you have a bug report, please open an issue with a reproducing configuration in the GitHub repository. There have been numerous changes to the framework’s code since this topic was raised.

It is difficult to assist without seeing your schema, the prior configuration of the attribute, the prior state of the attribute, and the current configuration. The framework will automatically mark Computed attributes with null configuration as unknown (known after apply) during the plan, which is intentional to prevent Terraform data consistency errors. Using the UseStateForUnknown plan modifier is typically the schema implementation required if you want to preserve the prior SDK’s automatic Computed behavior of copying the prior state value when the configuration is null. You should be able to get additional information about the framework’s planning decisions in this regard by enabling logging with the TF_LOG environment variable, e.g. TF_LOG=trace.

@bflad I understand. Here’s the schema for this attribute (root level):
Framework:

"api_keys": schema.SetNestedAttribute{
	Optional: true,
	Computed: true,
	NestedObject: schema.NestedAttributeObject{
		Attributes: map[string]schema.Attribute{
			"api_key_id": schema.StringAttribute{
				Required: true,
			},
			"role_names": schema.SetAttribute{
				Required:    true,
				ElementType: types.StringType,
			},
		},
	},
},

SDKv2:

"api_keys": {
	Type:     schema.TypeSet,
	Optional: true,
	Computed: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"api_key_id": {
				Type:     schema.TypeString,
				Required: true,
			},
			"role_names": {
				Type:     schema.TypeSet,
				Required: true,
				Elem: &schema.Schema{
					Type: schema.TypeString,
				},
			},
		},
	},
},

The prior/current state contains:

 "api_keys": [
              {
                "api_key_id": "....",
                "role_names": [
                  "GROUP_OWNER"
                ]
              }
            ]

The issue is that after migration, the api_keys SetNestedAttribute (Computed and Optional) with a NestedAttributeObject is detected as changed (known after apply) and its nested attributes are marked null, even though it has not been updated:

~ api_keys  = [
          - {
              - api_key_id = "abcdef...." -> null
              - role_names = [
                  - "GROUP_OWNER",
                ] -> null
            },
        ] -> (known after apply)

Is UseStateForUnknown plan modifier still the way to go for this SetNestedAttribute?
Are there any examples you could share?

If the api_keys attribute is not being configured, the backing API computes a value for it when it’s not given, and the value is not expected to change, then adding the UseStateForUnknown plan modifier should be what you are looking for here, e.g.

"api_keys": schema.SetNestedAttribute{
	Optional: true,
	Computed: true,
	NestedObject: schema.NestedAttributeObject{
		Attributes: map[string]schema.Attribute{
			"api_key_id": schema.StringAttribute{
				Required: true,
			},
			"role_names": schema.SetAttribute{
				Required:    true,
				ElementType: types.StringType,
			},
		},
	},
	PlanModifiers: []planmodifier.Set{
		setplanmodifier.UseStateForUnknown(),
	},
},

The Plugin Development - Framework: Plan Modification | Terraform | HashiCorp Developer page discusses more about the framework’s handling of the Terraform planning process and ways that providers can influence the plan before it is displayed by Terraform.

Thanks @bflad

But I am now running into a Value conversion error during Create(). I came across several issues related to this error, but I haven’t been able to find any example that clearly explains how to convert NestedAttributes to/from Terraform types and Go types.

Here are the structs:

type model struct {
    ApiKeys []apiKey `tfsdk:"api_keys"`
}

type apiKey struct {
	ApiKeyID  types.String `tfsdk:"api_key_id"`
	RoleNames types.Set    `tfsdk:"role_names"`
}

From the error:

Received unknown value, however the target type cannot handle unknown values.
Use the corresponding `types` package type or a custom type that handles
unknown values.

Path: api_keys
Target Type: []provider.apiKey
Suggested Type: basetypes.SetValue

  1. I have tried changing the ApiKeys type to types.Set, but I am unable to figure out a decent way to convert that to/from the schema types and iterate over them. I am a bit new to Go, so I would appreciate some guidance here.

  2. I assume I am running into a similar issue for data sources as well. In order to migrate data sources to the framework without introducing any breaking changes, what is the best way to migrate schema.TypeList with nested object elements? Do we migrate those existing nested lists/sets to ListNestedAttribute/SetNestedAttribute, or use Blocks? I’m unable to find this in the documentation.

Hey @maastha :wave:

  1. You’re correct in your assumption that you should be using types.Set for:
type model struct {
    ApiKeys types.Set `tfsdk:"api_keys"`
}

In terms of iterating over them: once you have called req.Plan.Get():

	var data model

	diags := req.Plan.Get(ctx, &data)
	resp.Diagnostics.Append(diags...)

	if resp.Diagnostics.HasError() {
		return
	}

I believe you should then be able to call ElementsAs() to obtain a slice of apiKey:

var apiKeys []apiKey

diags = data.ApiKeys.ElementsAs(ctx, &apiKeys, false)

resp.Diagnostics.Append(diags...)

if resp.Diagnostics.HasError() {
	return
}

Structs in Go don’t natively have the concept of an Unknown value, so while your code may work when a value is configured, once it encounters an Unknown value you will always receive this error. Here are the rules for converting from Framework types to Go types that mention this: Plugin Development - Framework: Conversion Rules | Terraform | HashiCorp Developer

The tls provider contains examples of the usage of ElementsAs().

  1. Can you provide some details on how you are defining schema.TypeList in the SDK version of your provider? There is some information in the migration guide regarding migrating blocks, but without further information it’s difficult to say whether this covers your use case.

Thank you @bflad

I already tried using ElementsAs() to convert the Terraform model to an object slice (in the Create handler); however, could you please share an example of how to convert the object slice back to a Terraform types.Set to write into the state during Read?

I understand there are methods like types.SetValueFrom() etc., but I don’t completely understand their usage, so an example would be great.
Maybe looking at this function can help you get a better idea of how we want to handle this, but I’m open to suggestions.

  1. Our SDKv2 implementations consist of schema.TypeList with nested AND computed object elements. I assume that since they are computed (in a data source), we should migrate them as ListNestedAttribute for no user impact; please confirm:
SDKv2 schema list attribute:

"api_keys": {
	Type:     schema.TypeList,
	Computed: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"api_key_id": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"role_names": {
				Type:     schema.TypeList,
				Computed: true,
				Elem: &schema.Schema{
					Type: schema.TypeString,
				},
			},
		},
	},
},

In terms of using types.SetValueFrom(), a rudimentary example could look something like:

type apiKey struct {
	ApiKeyID types.String `tfsdk:"api_key_id"`
}

s, diags := types.SetValueFrom(ctx, types.ObjectType{
	AttrTypes: map[string]attr.Type{
		"api_key_id": types.StringType,
	},
}, []apiKey{
	{
		ApiKeyID: types.StringValue("1"),
	},
	{
		ApiKeyID: types.StringValue("2"),
	},
})

There are further examples in some of the HashiCorp providers, for instance terraform-provider-aws.

In relation to your second point, I believe that the section on migrating Blocks with Computed Fields in the migration guide indicates the approach to take, depending on whether you’re using protocol version 5 (no nested attributes) or protocol version 6 (nested attributes available).

As an aside, you should also check that data.ApiKeys is not null or unknown before calling ElementsAs() above.

If there are further questions, could you start a new discussion topic, as this one is starting to veer away from the original question.