Run triggers need to be discarded when there are no changes

I have set up run triggers on a workspace and it runs every time as expected. Fantastic. In the future I will have many workspaces with the prefix 'environment-' that share the same run trigger. I can see this becoming a problem and a lot of maintenance, because even when the plan finds no changes, the run sits there and waits for you to discard it. What I would like is: if no changes are found in the plan, discard the run.
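For what it's worth, a hedged sketch of one way to automate this: the Terraform Cloud Runs API exposes a `has-changes` attribute on each run and a discard action, so a small script could discard runs that are waiting on confirmation with nothing to apply. The endpoint path and attribute names below follow the Terraform Cloud API v2 as I understand it; treat this as an illustration, not a supported workflow, and the example run payload is hypothetical.

```python
import json
import urllib.request

TFC_API = "https://app.terraform.io/api/v2"  # Terraform Cloud API base URL

def should_discard(run: dict) -> bool:
    """Pure decision helper: discard a run that is sitting at the
    confirmation step but whose plan reported no changes."""
    attrs = run["attributes"]
    return attrs.get("status") == "planned" and not attrs.get("has-changes", True)

def discard_run(run_id: str, token: str) -> None:
    """POST the discard action for one run (Terraform Cloud Runs API)."""
    req = urllib.request.Request(
        f"{TFC_API}/runs/{run_id}/actions/discard",
        data=json.dumps({"comment": "auto-discarded: plan had no changes"}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)

# Usage (hypothetical run payload, trimmed to the fields used above):
# run = {"id": "run-abc123", "attributes": {"status": "planned", "has-changes": False}}
# if should_discard(run):
#     discard_run(run["id"], token="<your TFC API token>")
```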

I contacted support, but they said this works in 0.12 (and I am using 0.13 for this new project).

Is this a v13 bug in TF Cloud?

Hi @jurgenweber! It sounds like you’re seeing runs in the “Needs Confirmation” state even though they have no changes.

I’m not able to reproduce this on Terraform Cloud with 0.13.0-rc1. Any runs which have no changes transition to the “Planned” state automatically, and I don’t have the option to discard the run.

What does the output of the plan look like on Terraform Cloud?

Also, if you can give me the support ticket number I’ll try to follow up with our support team.

Hi

Interesting, the ticket is here: https://support.hashicorp.com/hc/requests/31608 . It has details of the exact workspaces, etc.

The run shows a lot of refreshes:

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
 <= read (data resources)

Terraform will perform the following actions:

  # data.aws_availability_zones.available will be read during apply
  # (config refers to values not yet known)
 <= data "aws_availability_zones" "available"  {
        group_names = [
            "ap-southeast-2",
        ]
      ~ id          = "2020-07-28 13:21:37.801693612 +0000 UTC" -> "2020-07-28 13:22:08.091318162 +0000 UTC"
        names       = [
            "ap-southeast-2a",
            "ap-southeast-2b",
            "ap-southeast-2c",
        ]
        state       = "available"
        zone_ids    = [
            "apse2-az3",
            "apse2-az1",
            "apse2-az2",
        ]
    }

Plan: 0 to add, 0 to change, 0 to destroy.

and then some outputs.

Hi @jurgenweber,

I think we previously saw a similar behavior with a different data source in the AWS provider here:

The original requester of that issue indicated that it was fixed in beta3. I’m not sure exactly what fixed it (there were a number of fixes in the same general area for beta3) but maybe using beta3 will improve things for you too. My guess would be that the behavior has changed so that a data source change alone is not sufficient to be considered a change to be applied, and instead as a compromise it would be included only if it causes a change to a resource or to an output value.


This data source is misbehaving a little when it comes to the expected contract for data sources, because it seems to be changing its id value on every read even if nothing significant has changed in the data you requested. Although we seem to have now changed Terraform to work around it, hopefully we can also get the AWS provider updated to only change the result of this data source (and probably some others) when the upstream data has changed in a meaningful way.
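To make that contract concrete, here is a small Python illustration (my own sketch, not provider code) of the difference between a timestamp-derived id, which churns on every read, and a content-derived id, which only changes when the underlying data does. The hashing scheme is an arbitrary choice for the example.

```python
import hashlib
import json
import time

def timestamp_id() -> str:
    # What the data source effectively did: a fresh id on every refresh,
    # even when the returned availability zones are identical.
    return str(time.time())

def content_id(zones: list) -> str:
    # Contract-friendly alternative: derive the id from the data itself,
    # so two reads of unchanged upstream data produce the same id.
    canonical = json.dumps(sorted(zones)).encode()
    return hashlib.sha256(canonical).hexdigest()
```

With a content-derived id, a second refresh of the same zone list yields an identical id, so Terraform would see no change to report.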

Well, I am on rc1… so that should include any fixes from beta3, I would like to think. :wink:

Yeah, I did wonder if it was that data output and something to do with the provider… is that then considered a change from TFC’s PoV, and hence I get asked whether to apply or not?

Indeed, although it’s actually Terraform itself deciding that there’s a change, and Terraform Cloud is trusting it. This would presumably be true when running terraform apply locally too: Terraform would prompt for you to type yes to confirm, rather than exiting immediately. (If you don’t see the same behavior running Terraform locally then that’d be interesting information to help understand what’s going on here!)
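As an aside, when local runs are possible, `terraform plan -detailed-exitcode` gives a scriptable way to distinguish "no changes" from "changes pending" without an interactive prompt: exit code 0 means an empty diff, 2 means a non-empty diff, and 1 means an error. A hedged Python wrapper (the invocation is illustrative and assumes `terraform` is on PATH):

```python
import subprocess

# Exit-code contract of `terraform plan -detailed-exitcode`:
#   0 -> succeeded, no changes; 1 -> error; 2 -> succeeded, changes present.
def interpret_plan_exit(code: int) -> str:
    return {0: "no changes", 1: "error", 2: "changes pending"}.get(code, "unknown")

def plan_has_changes(workdir: str = ".") -> bool:
    """Run a local plan and report whether it found a diff.
    Assumes the workspace permits local plans."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
    )
    if result.returncode == 1:
        raise RuntimeError("terraform plan failed")
    return result.returncode == 2
```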


Error: Apply not allowed for workspaces with a VCS connection

A workspace that is connected to a VCS requires the VCS-driven workflow to
ensure that the VCS remains the single source of truth.

Not sure what a VCS is, but it won’t let me apply locally. :slight_smile:

Hi @jurgenweber,

It seems that simultaneously to this discussion there was another report of this over in the AWS provider repository:

Some of my colleagues determined that this odd behavior was the result of a subtle bug in the implementation of this data source, where the provider’s behavior isn’t matching its declared schema. That seems to be causing a bit of “undefined behavior” in Terraform Core, which results in this confusing plan output because the plan renderer isn’t correctly understanding what’s going on.

The good news is that there’s a fix on the way for aws_availability_zones in particular, so there will hopefully be a new AWS provider release in the near future that avoids this problem.

The Terraform Core team is also now aware of this undefined-behavior situation, so hopefully we’ll be able to make it defined behavior in a future release and do something reasonable when this situation arises, in case there are similar bugs lurking in other data sources.


Nice one @apparentlymart, really appreciate your efforts in following this up.