Managed disk changes after failback from a failover cause Terraform to destroy and recreate the disk

We set up an Azure VM with an attached managed data disk and configured it for site recovery to a secondary region. The full build was done with Terraform. As a DR test we failed the VM over to the secondary region and then failed it back.
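
Roughly, the relevant part of the configuration looks like this (a trimmed sketch, not our exact code; names, sizes and the VM/vault/fabric resources are placeholders or omitted):

```hcl
resource "azurerm_managed_disk" "data" {
  name                 = "vm1-datadisk-0"
  location             = azurerm_resource_group.primary.location
  resource_group_name  = azurerm_resource_group.primary.name
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"   # attribute Terraform now reports as changed
  disk_size_gb         = 128
}

resource "azurerm_virtual_machine_data_disk_attachment" "data" {
  managed_disk_id    = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.vm.id
  lun                = 0
  caching            = "ReadWrite"
}

# The disk is also referenced from the ASR replication resource, which is why
# the disk replacement cascades into the site recovery settings.
resource "azurerm_site_recovery_replicated_vm" "vm" {
  # ...other required ASR arguments omitted for brevity...
  managed_disk {
    disk_id                    = azurerm_managed_disk.data.id
    staging_storage_account_id = azurerm_storage_account.staging.id
    target_resource_group_id   = azurerm_resource_group.secondary.id
    target_disk_type           = "Premium_LRS"
    target_replica_disk_type   = "Premium_LRS"
  }
}
```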

Now Terraform detects changes to the data disk and wants to destroy and recreate it, along with the site recovery settings, because the disk's create_option and source_resource_id changed as a result of the failover/failback.

How do we account for these post-failover changes in Terraform?
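
For example, one option we are unsure about is ignoring the attributes that drift after a failback, along the lines of the sketch below. Is that the right pattern here, or does it just mask the problem?

```hcl
resource "azurerm_managed_disk" "data" {
  # ...same arguments as above...

  lifecycle {
    # Ignore the attributes that Azure rewrites during ASR failover/failback
    ignore_changes = [
      create_option,
      source_resource_id,
    ]
  }
}
```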