Is there a Schema config for "ComputedOnce"?

I’m writing a provider that has a computed attribute “path”. The attribute is Computed: true, and its value is determined by the upstream API at Create time. It doesn’t change until a destroy + recreate.

Is there a way to communicate this to Terraform?
Currently, if my resource has to update, then downstream dependencies see “path” as changing ((known after apply)) and so report that they need to update too.

If it’s not possible, then a workaround I have for now is defining an additional data source which takes the original id (which doesn’t change until recreation) and outputs the path, but this adds an extra layer, which seems messy. Or do you think this is an OK workaround?
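A rough sketch of what I mean by that workaround (the resource and data source names here are invented for illustration, not a real provider):

```hcl
# Hypothetical: a data source keyed on the stable id re-exposes the
# path, so downstream config references this instead of referencing
# the resource's computed attribute directly.
data "myprovider_path" "stable" {
  resource_id = myprovider_thing.main.id
}

# downstream usage:
# other_attr = data.myprovider_path.stable.path
```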

Or of course, am I just doing something wrong!

Hi @saltxwater,

I’m not sure if I’m fully understanding the situation you are describing so I want to first restate what I think you are seeing, so you can correct me if I’m wrong, and then I’ll try to answer it but of course you can ignore my answer if I misunderstood the question! :upside_down_face:

You’ve described a situation where your path attribute is appearing as (known after apply) in the plan, even though you have system-specific knowledge that allows you to know what that value will be and that it actually shouldn’t be changing.

Unless a provider developer writes something special in the CustomizeDiff function, a (known after apply) value should appear only for a “create” operation, so I think what you are seeing is that you’ve changed the configuration for an argument that has ForceNew set, which is then causing the SDK to tell Terraform that the provider plans to replace the object, rather than to update the object.

Consequently, all of the “computed” attributes return to being (known after apply) again, because a “replace” destroys the existing object and creates a new one. You can recognize the situation I’m describing by looking for the -/+ or +/- symbols against the action on the resource you’re looking at, both of which represent “replace” actions, as opposed to ~ which represents an “update” action.


If the above matches what you are seeing then I think there are two possible paths forward here, and you might possibly be able to do both of them if you can meet the requirements for them.

The first and generally easier path is to look for opportunities to remove ForceNew from some or all of your arguments and thus handle more changes as Update rather than as Delete+Create. Computed attributes are preserved in the state across an Update (unless the provider logic intentionally changes them), so your path should therefore not show as (known after apply), but this is appropriate only if the remote system itself can also handle these changes as updates, because it would be confusing and unsafe to report to the user that the provider will do an update but then actually secretly delete the original object, assuming that there’s a user-visible difference between updating and recreating in this system.

The other path, which is a little more complex, is to think about whether you can use the documented behavior of the remote system to have your provider predict a value for path during the plan phase. The typical situation where this is possible is when the computed attribute is systematically constructed from other arguments set in the configuration, such as if the remote system constructs path by transforming a user-provided name argument in a way that the provider code can replicate.

For the second path you’ll need to add a CustomizeDiff function to your resource type implementation if you don’t already have one, and then include in it some logic to first check whether all of the required arguments are known (since they might themselves be populated from (known after apply) values) and then, if so, set a known value for path, which will then appear in the plan for the resource instance as a concrete value instead of (known after apply). For example:

    CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
        // If the name is known then we can predict the path
        if d.NewValueKnown("name") {
            return d.SetNew("path", "/example/"+d.Get("name").(string))
        }
        // Otherwise mark path as (known after apply)
        return d.SetNewComputed("path")
    },

The Terraform SDK has a built-in default behavior which you’ve already observed which is, in effect, to call d.SetNewComputed on each computed attribute when planning a “create” action, and to preserve the prior value when planning an “update” action. The above overrides those defaults so that path can have a known value whenever name has a known value, and additionally that if the user changes name for an existing object the provider will also plan to change path to match it, allowing the new path value to propagate to downstream resources immediately during planning.
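To make that planning behavior concrete, here is a small self-contained Go sketch (this is a model, not real SDK code; the “unknown” flag stands in for (known after apply), and the “/example/” rule is an assumed stand-in for the remote system’s real transformation):

```go
package main

import "fmt"

// planned models a plan-time attribute value: either a concrete
// string, or unknown (rendered as "(known after apply)" in a plan).
type planned struct {
	value   string
	unknown bool
}

// planPath mimics the CustomizeDiff override described above: when
// name is known at plan time the provider can set a concrete path;
// otherwise path must stay unknown until apply.
func planPath(name planned) planned {
	if name.unknown {
		return planned{unknown: true}
	}
	return planned{value: "/example/" + name.value}
}

func main() {
	fmt.Println(planPath(planned{value: "web"}).value)    // /example/web
	fmt.Println(planPath(planned{unknown: true}).unknown) // true
}
```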

Hi @apparentlymart
Thank you very much for your response and for the detail you have given me!

a (known after apply) value should appear only for a “create” operation

This is the behaviour I was hoping for! But was not what I was observing! So my original request was to try and get this.

Now that you’ve told me this is the expected behaviour, I knew something must be wrong with my provider, so I created some test resources to check. Sure enough, when the parent resource only needs an update, the child doesn’t show any change (i.e. the computed value stays the same).

I was able to recreate my ‘problem’ by introducing a datasource in the middle:

    resource "saltxwater_test1" "foo" {
        force_new_attr = "Foo"
        update_attr    = "hello3"
        // outputs path = "FooFooFoo"
    }
    data "saltxwater_test2" "hello" {
        // sets id to <my_input>DS
        my_input = saltxwater_test1.foo.path
        // outputs my_output = "FooFooFooFooFooFoo"
    }
    resource "saltxwater_test1" "bar" {
        force_new_attr = "Bar"
        update_attr    = data.saltxwater_test2.hello.my_output
    }

Now when I change saltxwater_test1.foo.update_attr to a new value (e.g. from “hello2” to “hello3”) I get this plan:

      # data.saltxwater_test2.hello will be read during apply
      # (config refers to values not yet known)
     <= data "saltxwater_test2" "hello"  {
          ~ id        = "FooFooFooDS" -> (known after apply)
          ~ my_output = "FooFooFooFooFooFoo" -> (known after apply)
            # (1 unchanged attribute hidden)
        }

      # saltxwater_test1.bar will be updated in-place
      ~ resource "saltxwater_test1" "bar" {
            id             = "BarBar"
          ~ update_attr    = "FooFooFooFooFooFoo" -> (known after apply)
            # (2 unchanged attributes hidden)
        }

      # saltxwater_test1.foo will be updated in-place
      ~ resource "saltxwater_test1" "foo" {
            id             = "FooFoo"
          ~ update_attr    = "hello2" -> "hello3"
            # (2 unchanged attributes hidden)
        }

    Plan: 0 to add, 2 to change, 0 to destroy.

So it seems to be the presence of the data source that changes things!
Am I correct in thinking that if the parent has even just an update, any dependent data sources will treat any computed attributes of the parent as potentially changed?
Whereas dependent resources will assume the computed attribute of a parent stays the same?

I hope this makes sense.
My example is maybe not the easiest to follow. If it’s still causing confusion then perhaps I can clean it up and push an example provider to GitHub.

As for WHY my code has the datasource in the middle:
I’m writing a Terraform provider for Dremio (my first TF provider, so I’m sure I’ll be making mistakes!). I create a resource “SOURCE” (HTTP POST). Dremio then creates the source and automatically computes the various subdirectories that exist.
I was then using a data source that takes the SOURCE id and a relative path (manually configured in tf) to follow to get to a sub folder. It queries the HTTP API (helping to check that the sub folder exists) and outputs various attributes of the subfolder, which I’m then using in other resources.
Another approach would be for the downstream resource to take the upstream resource and relative path as inputs and do the navigation itself… I just thought it might be cleaner with the DS in the middle, but perhaps not!
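For illustration, the shape I have now looks roughly like this (all names are made up, not a real Dremio provider schema):

```hcl
resource "dremio_source" "main" {
  # Dremio creates the source and computes its subdirectories
  name = "my-source"
}

data "dremio_folder" "sub" {
  # hypothetical: follows relative_path under the source,
  # checking that the sub folder exists
  source_id     = dremio_source.main.id
  relative_path = "some/sub/folder"
}
```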

Thanks again for your help

Hi @saltxwater,

Thanks for the additional information!

I think the main question I have after seeing what you saw here is why Terraform concluded that it needed to wait until apply time to read that data source. Terraform normally does that only if at least one of the configuration arguments is unknown (that is, (known after apply)), but it seems that my_input is the only argument here and the plan shows it as unchanged, so I don’t really understand how it can be unknown. (Only the new value for an argument can be (known after apply), because the old value would’ve been decided by the previous apply.)

I don’t have a ready explanation for that behavior based on what I can see in your comment here. I think I’d need to see what each of these resource types does in its implementation to understand better what’s causing that data resource to be deferred until apply time.

The data source is being deferred there because of the resolution to Implicit Reference Ordering with Data Sources in 0.13 · Issue #25961 · hashicorp/terraform · GitHub. A direct reference to a managed resource from a data source is treated like depends_on. If that behavior is not desired, you can indirect the reference through a local value.
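Applied to the earlier example, that indirection would look something like this (untested sketch; the point is that the data source no longer references the managed resource directly):

```hcl
locals {
  foo_path = saltxwater_test1.foo.path
}

data "saltxwater_test2" "hello" {
  # referencing through a local value instead of the resource
  # directly avoids the implicit depends_on-like behavior
  my_input = local.foo_path
}
```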