Data source output forcing an in-place update in Terraform

Using a dynamic block with data source output results in an unexpected in-place update. The cause is that the data source output inserts a new value as the second item instead of at the end, and I haven't found a way to preserve the insertion order. I have tried both the map and list approaches and neither helped.

The data source block:

data "oci_logging_log_group" "log_group" {
  for_each = toset(local.log_group_ids)
  log_group_id = each.key
}

The dynamic block section:

      dynamic "log_sources" {
        for_each = data.oci_logging_log_group.log_group
        content {
          log_group_id = log_sources.value.id
          compartment_id = log_sources.value.compartment_id
        }
      }

Plan output example:

 # oci_sch_service_connector.vcn_service_connector will be updated in-place
  ~ resource "oci_sch_service_connector" "vcn_service_connector" {
        id             = "ocid1.serviceconnector.oc1.......o6yqe7xfa"
        # (8 unchanged attributes hidden)

      ~ source {
            # (1 unchanged attribute hidden)

          ~ log_sources {
              ~ compartment_id = "ocid1.compartment.oc1.......egll4dcja" -> "ocid1.compartment.oc1..aaaaa.......fdcdh6wrla"
              ~ log_group_id   = "ocid1.loggroup.oc1.....nzlpa" -> "ocid1.loggroup.oc1.phx......6wxl3dlq" 
            }
          + log_sources {
              + compartment_id = "ocid1.compartment.oc1..aaaaaa......egll4dcja"
              + log_group_id   = "ocid1.loggroup.oc1.phx.amaaaaaa.......xqnzlpa"
            }
            # (1 unchanged block hidden)
        }
    }

Because the new item is inserted in the middle, Terraform/the provider treats the existing item as changed and tries to replace it. The expectation is that only the newly found item from the data source gets added, leaving the rest untouched.

Can you please suggest a possible way to achieve this?

Hi @ulags.n,

If this oci_sch_service_connector resource type is actually sensitive to what order the log_sources are declared in then unfortunately I think this data source may just be incompatible with it: using an unordered (or differently-ordered) collection to generate an ordered sequence of blocks isn’t possible in general; you would need to either impose an order on the data resource result explicitly or select a different data source that can provide data in the order you need.
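
For example, if local.log_group_ids is already a list that you maintain in the order you want (appending new IDs at the end), one sketch of imposing an explicit order is to iterate over that list directly rather than over the data source map, whose iteration order is lexical by key:

      # Sketch only: the block order follows the list order maintained in
      # local.log_group_ids, instead of the map's lexical key order.
      dynamic "log_sources" {
        for_each = local.log_group_ids
        content {
          log_group_id   = data.oci_logging_log_group.log_group[log_sources.value].id
          compartment_id = data.oci_logging_log_group.log_group[log_sources.value].compartment_id
        }
      }

(If local.log_group_ids is itself derived from something that doesn't preserve order, this won't help, since the list would then be the unordered thing.)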

However, I think it would be good to investigate whether the provider is actually sensitive to the order of these blocks, or if this is just the result of an inaccurate schema in the provider.

The provider protocol allows provider developers to choose between several different internal representations of nested blocks to accommodate different API designs, and it seems like the developers of this provider decided that Terraform should treat log_sources blocks as an ordered list, which in turn causes Terraform’s plan renderer to show an insertion into the middle of the list as a mix of in-place updates and additional items as you saw here.

The plan UI is only describing how the new value will differ from the old, and not describing exactly how the provider will implement that change. If the provider implements this change by two different API calls – one to modify the current log_sources object and another to add a new log_sources object – then this may indeed fail. But if instead the provider just sends this entire list to the remote API as part of updating the overall object then the modification and the addition will both happen atomically as a single action and so there won’t be any separate step where only one of those results is visible in the remote system.

I’m not familiar with this provider and so I can’t say which of these is true, but I would suggest trying it with a test configuration (targeting separate temporary infrastructure where a service interruption won’t matter) to see if it achieves the result you need.

Hi @apparentlymart,
Thank you for the reply. I have tried running this in a temporary infra and it fails. Does that mean the provider implements this change with two different API calls?

Even if the provider expects an ordered list for log_sources, is there a workaround where we can generate the order for the log_sources block based on creation time from the data source? It seems impossible to generate that order, though I have already tried a number of different approaches.
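
The kind of ordering I have in mind is roughly the sketch below (assuming the data source exposes a time_created attribute whose timestamps sort lexically as strings), although I haven't been able to make an approach like this work reliably:

locals {
  # Sketch: key each log group by "<time_created>|<id>" so that sorting the
  # keys orders the groups by creation time, with newly created groups
  # sorting to the end. Assumes time_created is exposed by the data source.
  log_groups_by_time = {
    for lg in data.oci_logging_log_group.log_group :
    "${lg.time_created}|${lg.id}" => lg
  }

  ordered_log_groups = [
    for k in sort(keys(local.log_groups_by_time)) :
    local.log_groups_by_time[k]
  ]
}

      dynamic "log_sources" {
        for_each = local.ordered_log_groups
        content {
          log_group_id   = log_sources.value.id
          compartment_id = log_sources.value.compartment_id
        }
      }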

Thanks