Terraform WARN: Provider ... produced an invalid plan ... legacy plugin SDK

Hi guys,

Recently, when I turned on TF_LOG to debug another Terraform problem, I saw quite a few [WARN] messages like the following. When I turned TF_LOG off, the messages disappeared.

I googled the complete definitions of the Terraform resources. The items in the list of "confusing errors from downstream operations" are in fact attributes/fields of the Terraform resources that are not explicitly defined in my .tf code (and are supposed to take their default values).

The thing that concerns me the most is "...an invalid plan ... legacy plugin SDK". Even after I downloaded the latest 5.0.1 AWS provider, the warning still persists.

Any recommendations and suggestions are more than welcome. Thanks,


2023-05-29T16:20:20.017Z [WARN] Provider "Terraform Registry" produced an invalid plan for aws_kms_key.kms_key["sre"], but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .custom_key_store_id: planned value cty.StringVal("") for a non-computed attribute
- .bypass_policy_lockout_safety_check: planned value cty.False for a non-computed attribute
- .is_enabled: planned value cty.True for a non-computed attribute
- .customer_master_key_spec: planned value cty.StringVal("SYMMETRIC_DEFAULT") for a non-computed attribute
- .tags: planned value cty.MapValEmpty(cty.String) for a non-computed attribute
- .enable_key_rotation: planned value cty.False for a non-computed attribute
- .key_usage: planned value cty.StringVal("ENCRYPT_DECRYPT") for a non-computed attribute
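For reference, every attribute named in that warning is a regular argument of aws_kms_key that takes a default when omitted. A sketch of what writing those defaults out explicitly would look like (the values shown are exactly the planned values from the warning above; omitting them is equivalent):

```hcl
# Sketch only: spelling out the defaults the provider plans for the
# attributes named in the [WARN] message. Leaving them out of the
# configuration produces the same plan.
resource "aws_kms_key" "example" {
  description                        = "example key"
  key_usage                          = "ENCRYPT_DECRYPT"   # default
  customer_master_key_spec           = "SYMMETRIC_DEFAULT" # default
  is_enabled                         = true                # default
  enable_key_rotation                = false               # default
  bypass_policy_lockout_safety_check = false               # default
}
```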


After googling the issue and related pages, I ended up at this link on the new/old Terraform Plugin SDK:

Based on the above page, the AWS provider, even at the latest version 5.0.1, has yet to conform to the new HashiCorp plugin framework, right? If so, that might explain the [WARN] message in the original post.

Also, does it mean that we, as Terraform and AWS users, can only wait and do nothing until the AWS provider upgrades to the new HashiCorp plugin framework? Thanks,

Terraform has changed a lot since the early days.

Some behaviours which are nowadays considered requirements aren’t followed by existing providers.

In order to not break them all, which would be hugely disruptive, providers written with terraform-plugin-sdk get a special waiver to ignore some of the new rules.

For users of Terraform, this largely does not matter.

(I say largely, because I did come across an actual user-facing bug that I think was caused by this, once - Regression in 3.10.0: Error: Provider produced inconsistent final plan · Issue #1690 · hashicorp/terraform-provider-vault · GitHub)

The log messages you found are really only aimed at people doing provider development, and can generally be safely ignored.

Currently, and in general, migrating existing providers to terraform-plugin-framework is assumed to be the eventual way forward. Although, having looked at the very specific error messages you quote:

I do wonder why the existing terraform-plugin-sdk doesn’t just automatically declare any attribute with a default as computed… that feels like a simple and obvious fix… but maybe I’m missing some detail.

Hi maxb, the main Terraform code is pretty straightforward; all the attributes reported are ones not explicitly assigned a value in the code (so they get their default values).

variable "kms_keys" {
  type    = list(string)
  default = [] # the original default value was not shown in the post
}

resource "aws_kms_key" "kms_key" {
  for_each                = toset(var.kms_keys)
  description             = each.key
  deletion_window_in_days = 30
  policy = try(
    templatefile("${path.module}/policy/${each.key}.json",
      {
        aws_account_id = var.aws_account_id,
        aws_partition  = var.aws_partition
      }
    ),
    templatefile("${path.module}/policy/default.json",
      {
        aws_account_id = var.aws_account_id,
        aws_partition  = var.aws_partition
      }
    )
  )
}

resource "aws_kms_alias" "kms_alias" {
  for_each      = toset(var.kms_keys)
  name          = "alias/${each.key}"
  target_key_id = aws_kms_key.kms_key[each.key].key_id
}

Indeed, and in general anything you see when you set the TF_LOG environment variable is intended for developers of Terraform rather than users of Terraform… the information intended for end-users is in the UI by default.

As @maxb noted, this warning is here so that when someone opens a bug report against Terraform or one of its providers (and therefore includes their trace logs as requested in the bug report template) the developer trying to explain the bug will see this clue that the provider is not correctly implementing the plugin protocol, and then they can decide (using context Terraform itself does not know) whether the problem described in the warning is a possible cause of the bug or not.

The hashicorp/aws provider is a very large provider where most of it was written a very long time ago, and so I don’t imagine any time in the foreseeable future where it will be entirely using the modern plugin framework. The provider development team will opportunistically update to the new framework when they are making significant changes to existing resource types, but there is relatively little benefit (and significant risk) to just unilaterally porting existing functionality that is stable and working.

Unless you are seeing a “confusing error” from a downstream resource then you can just ignore this warning. If you do find a bug that is caused by one of these historical small misbehaviors and you open a bug report about it then the provider developers will use this information to help diagnose and fix the problem.


For what it’s worth, errors with the summary “Provider produced inconsistent final plan” are often traced back to the kinds of misbehavior this warning is talking about.

The reason for this is that if you have two resources A and B, and one of B’s arguments includes a value from A, during planning Terraform assumes that any known values returned by A are final and plans B using those values.

If A suddenly changes one of those values during its final plan or apply steps then it has violated the protocol rules. For a provider written with the modern plugin framework Terraform Core will correctly blame provider A for violating the rules.

On the other hand, if A is written with the legacy plugin SDK then it will tend to violate the rules constantly – usually in ways that don’t matter – and so Terraform must proceed optimistically. However, in this situation the problem with A is significant, because it will now change the final plan for B, causing the "Provider produced inconsistent final plan" error to be incorrectly attributed to B, which is an example of the sort of "confusing error from a downstream operation" the warning message is talking about.
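The A/B situation described above is exactly the shape of the original poster's own configuration, so a minimal sketch (using those same resource types) looks like:

```hcl
# Sketch of the A -> B dependency described above. During planning,
# Terraform treats any known value returned by resource A as final
# and uses it to plan resource B.
resource "aws_kms_key" "a" {
  description = "key A"
}

# B's argument includes a value from A. If A's provider changes
# key_id between plan and apply, the resulting
# "Provider produced inconsistent final plan" error is reported
# against B, even though A is the resource that misbehaved.
resource "aws_kms_alias" "b" {
  name          = "alias/example"
  target_key_id = aws_kms_key.a.key_id
}
```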

Looking into the code… couldn’t we just change the code here:

to report back to Terraform Core that an attribute is Computed if the SDK has a Default or a DefaultFunc defined for that attribute?

I’m sure there are many other issues with the SDK, but it seems this would fix the specific rather common case for the log messages reported in this topic.

Unfortunately nothing in the legacy SDK can be safely done by “just changing the code”.

In developing the SDK shims and behavior of Terraform for v0.12, to maintain compatibility with existing providers and configuration we needed to not only be compatible at the overall protocol level, but be compatible at the bug level too. So much of the existing behavior of providers was based on their own inconsistencies and the internal inconsistencies of the legacy SDK that changing any single piece of code almost always resulted in regressions elsewhere.

Considerable effort was made at the time to find a compromise of maximum compatibility and utility, and once a usable compromise was found, it had to essentially be frozen to ensure consistency moving forward. Perhaps that schema change would have been better in hindsight, or maybe it was forced due to other constraints, but the risk of changing behavior for legacy providers means it’s not something we can really consider either way.


Crikey. Well, thank you for the explanation anyway, I didn’t realise it had been quite that fraught a migration!