Is it possible to have StateFunc-like behavior with the Plugin Framework?

I have in the past used the old Plugin SDK’s StateFunc to store hashes of potentially large content in state instead of copying the content from the configuration. But I can’t find a way to emulate this behavior with the Plugin Framework.

I have tried to substitute the configured value inside the resource’s Create method:

    data.Contents = Base64String.Value(
        fmt.Sprintf("%x", sha256.Sum256(contents)),
    )
But Terraform stores the configuration value in state anyway. Note that one needs to specify a CustomType (Base64String in my example) in order to write its semantic equality rules and avoid consistency errors. When I force an error in the equality comparison, the intended (hashed) value gets stored. But that is far from practical.
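For context, the substitution I'm attempting is just the hex-encoded SHA-256 digest of the contents; a minimal, standalone sketch (hashContents is a hypothetical helper name, not part of my provider):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashContents returns the hex-encoded SHA-256 digest of the given
// contents; this is the value I'd like to store in state instead of
// the (potentially large) raw contents.
func hashContents(contents []byte) string {
	return fmt.Sprintf("%x", sha256.Sum256(contents))
}

func main() {
	fmt.Println(hashContents([]byte("hello")))
	// → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
}
```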

I have published some sample code for this, in case anyone wants to have a look.

Has anyone managed to reproduce StateFunc using the newer Plugin Framework? Maybe someone can point me to an open-source provider that does it.

Hey there @thiagoarrais :wave:! Thanks for providing that example.

So you’re running into Terraform’s strict data consistency rules, which the old Plugin SDK previously allowed providers to break. We have a dedicated page in the SDKv2 documentation that describes the how/what/why of that, which I’ll link below, but I’ll also pull out some excerpts that are specific to your example.

So, as of Terraform 0.12, it’s required that:

  • Resources should never set or change an attribute value without the schema Computed flag.
  • Resources should always set an attribute state value to the exact configuration value or prior state value, if not null.

Both of these rules are violated by that usage of StateFunc, which is the main reason similar functionality cannot be achieved with the Plugin Framework. If you try to change a configured value in the Plugin Framework, you’ll get an error like:

examplecloud_thing.this: Modifying...
│ Error: Provider produced inconsistent result after apply
│ When applying changes to examplecloud_thing.this, provider "provider[\"TYPE\"]" produced an unexpected
│ new value: .word: was cty.StringVal("value"), but now cty.StringVal("VALUE").
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

The only real solution to this problem is to follow Terraform’s data consistency rules. To migrate that type of schema from SDKv2 to the Framework, you would need to create a new Computed attribute that stores the SHA-256 hash of the contents coming from the config. There is a similar example of this in the local_file resource:

With your schema:

func (*FileResource) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) {
	resp.Schema = schema.Schema{
		Attributes: map[string]schema.Attribute{
			"id": schema.StringAttribute{
				Computed: true,
				// Plan modifiers were elided in the original excerpt;
				// UseStateForUnknown/RequiresReplace shown as typical choices.
				PlanModifiers: []planmodifier.String{
					stringplanmodifier.UseStateForUnknown(),
				},
			},
			"path": schema.StringAttribute{
				Required: true,
				PlanModifiers: []planmodifier.String{
					stringplanmodifier.RequiresReplace(),
				},
			},
			"contents": schema.StringAttribute{
				Required:   true,
				CustomType: Base64String,
			},
			"contents_sha256": schema.StringAttribute{
				Computed: true,
			},
		},
	}
}
Unfortunately, solving it this way doesn’t address your original goal, as you are still required to store anything from config in state, byte-for-byte. That’s a Terraform core design decision and isn’t something the Plugin Framework can step around.

I’d be interested in an example of this, as that sounds like a potential bug either in Plugin Framework or Terraform core.

When I add an error to the diags returned by the custom type’s StringSemanticEquals method:

    diags.AddError(
        "This is a forced error",
        "Nothing is wrong, but we're returning an error anyway",
    )

Terraform obviously displays it when applied:

$ terraform apply
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

sha256sum_file_v6.tf: Creating...
│ Error: This is a forced error
│   with sha256sum_file_v6.tf,
│   on line 10, in resource "sha256sum_file_v6" "tf":
│   10: resource "sha256sum_file_v6" "tf" {
│ Nothing is wrong, but we're returning an error anyway

After that the state file contains the computed hash:

$ cat terraform.tfstate
      "provider": "provider[\"\"]",
      "instances": [
          "status": "tainted",
          "schema_version": 0,
          "attributes": {
            "contents": "d36ab9e63f8264a50e8ef446aa4de667fc890cbd63bf28a14ae34258e1fe2a35",

I can’t say whether that is necessarily a bug, because the resource is marked in the state as tainted and, in a subsequent apply, Terraform will try to replace the resource.

The modified code with the forced error is available in the “force-error” branch.

Yeah! You’re totally right. Storing an extra hash will only get us further from the goal of minimizing storage.

I see. But can’t we relax that for values that aren’t exactly the same as found in the configuration but are semantically equal? As I see it, a whitespace-only change to a (custom-typed) JSON value in the configuration, for instance, won’t get written to state because it won’t be detected as a semantically significant change. I think I need to understand why (and if) that is supported for changes made by the practitioner but not by the provider.
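For concreteness, the kind of JSON comparison I mean is something like this standalone sketch (jsonSemanticallyEqual is a hypothetical helper, not the framework API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// jsonSemanticallyEqual reports whether two JSON documents encode the
// same value, ignoring whitespace and key order. Parse failures are
// treated as "not equal" for simplicity.
func jsonSemanticallyEqual(a, b string) bool {
	var va, vb interface{}
	if err := json.Unmarshal([]byte(a), &va); err != nil {
		return false
	}
	if err := json.Unmarshal([]byte(b), &vb); err != nil {
		return false
	}
	return reflect.DeepEqual(va, vb)
}

func main() {
	fmt.Println(jsonSemanticallyEqual(`{"a": 1}`, `{ "a":1 }`)) // true: whitespace-only difference
	fmt.Println(jsonSemanticallyEqual(`{"a": 1}`, `{"a": 2}`))  // false: different value
}
```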

Ah, I misunderstood your original post about the error. The behavior of the resource being tainted on error makes sense, and the data consistency assertions made by Terraform core don’t run as it’s already being marked as tainted.

Terraform core needs providers to be consistent with values remaining constant between configuration, planning, and the applied value in a remote system. A value provided via configuration by a practitioner is explicitly stating the intended value/behavior. Since all Terraform configuration and planned values are accessible to other resources, relaxing the design of enforcing this consistency would make it difficult for Terraform to guarantee practitioners their intentions, as providers could change it.

It mostly comes down to the tradeoff of Terraform core’s design favoring accuracy and consistency over reducing storage size.

From this I understand that I was mistaken when I assumed the semantically equivalent JSON value would not get copied to state. Maybe Terraform updates the state silently when it finds that kind of difference? I think I need to do some testing…

Thanks for the clarifications!

Hi @austin.valle @bflad

I’m also trying to migrate an attribute with StateFunc to the Plugin Framework. I have tried the recommended solution mentioned here and implemented a custom type with Semantic equality.

As per the example in the migration documentation, “the semantic equality implementation below would resolve the resource drift and error”. However, drift detection is still happening for us for existing configurations:

Configuration value:

    db_major_version         = "6"

Value in state:

    db_major_version         = "6.0"

terraform plan:

  ~ db_major_version         = "6.0" -> "6"

Could you please help us figure out what could be wrong? Do we also have to implement a plan modifier to handle this? This isn’t clear from the documentation.
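For reference, the comparison our custom type’s StringSemanticEquals performs boils down to something like this (versionsSemanticallyEqual is a simplified, hypothetical stand-in for our implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// versionsSemanticallyEqual treats "6" and "6.0" as the same version by
// padding the shorter version with zero components before comparing.
// A real implementation might use a proper version-parsing library.
func versionsSemanticallyEqual(a, b string) bool {
	pa, pb := strings.Split(a, "."), strings.Split(b, ".")
	for len(pa) < len(pb) {
		pa = append(pa, "0")
	}
	for len(pb) < len(pa) {
		pb = append(pb, "0")
	}
	for i := range pa {
		if pa[i] != pb[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(versionsSemanticallyEqual("6", "6.0")) // true
	fmt.Println(versionsSemanticallyEqual("6", "6.1")) // false
}
```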

Thank you!

Hey there @maastha,

The StringSemanticEquals method is used for the Plugin Framework’s own computation; if it returns true, it essentially discards the value from the provider in favor of the prior value (which should be the config or prior state).

Semantic equality doesn’t allow providers to suppress Terraform core’s data consistency rules, which will still plan to store config values in state. The specific rule is:

Resources should always set an attribute state value to the exact configuration value or prior state value, if not null.

I believe you may have seen this documentation, but for future readers, this doc describes the type of problem space of what you’re running into: Resources - Data Consistency Errors | Terraform | HashiCorp Developer

If a resource was originally written in SDKv2 and stored a value in state that doesn’t match the configuration (which, as of Terraform 0.12, is considered invalid), then when you convert that resource to the Plugin Framework, Terraform core will still plan for the value in state to be updated to match the config (and, as you noticed, will show drift).

You could attempt to use a plan modifier to prevent this and keep the prior state value, as that still satisfies Terraform core’s data consistency rules. However, it would propagate the confusing behavior that Terraform’s design is attempting to prevent (the config value not matching the state value).
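The decision such a plan modifier would make can be sketched independently of the framework API (planValue is a made-up helper for illustration, not a real framework function):

```go
package main

import (
	"fmt"
	"strings"
)

// planValue mimics what a semantic-equality-aware plan modifier would do:
// when the configured value is semantically equal to the prior state,
// plan the prior state value (which still satisfies the consistency
// rules); otherwise, plan the configured value.
func planValue(config, state string, equal func(a, b string) bool) string {
	if equal(config, state) {
		return state
	}
	return config
}

func main() {
	// Toy equality for the example above: treat "6" and "6.0" as the same.
	numericEq := func(a, b string) bool {
		return strings.TrimSuffix(a, ".0") == strings.TrimSuffix(b, ".0")
	}
	fmt.Println(planValue("6", "6.0", numericEq)) // "6.0": prior state kept, no drift shown
	fmt.Println(planValue("7", "6.0", numericEq)) // "7": real change, config value planned
}
```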