Docker container wants to be replaced on every run

Every time I deploy my container, Terraform wants to recreate my mounts and my ports. Very strange; is this intended somehow?
Every run ends with "Apply complete! Resources: 1 added, 0 changed, 1 destroyed."

The provider seems very confused when reading back the existing ports; it looks like it's sorting them differently when diffing.

Terraform v0.12.19

+ provider.docker v2.6.0

% terraform apply -var-file=dockerinfra03.tfvars

  # docker_container.consul must be replaced
-/+ resource "docker_container" "consul" {
    attach            = false
  + bridge            = (known after apply)
    command           = [
        "consul",
        "agent",
        "-server",
        "-ui",
        "-datacenter=hestcorp",
        "-node=consul-server03",
        "-config-file=/consul/config/encryption.json",
        "-config-file=/consul/config/tls_03.json",
        "-config-file=/consul/config/ui.json",
        "-data-dir=/consul/data",
        "-bootstrap-expect=3",
        "-client=0.0.0.0",
        "-client=0.0.0.0",
    ]
  + container_logs    = (known after apply)
  + exit_code         = (known after apply)
  ~ gateway           = "172.17.0.1" -> (known after apply)
  ~ id                = "0484eefe45c9441ec31fe9147953c8a49a6242609101d8b0ef6f8c35f19955ee" -> (known after apply)
    image             = "consul:1.6.2"
  ~ ip_address        = "172.17.0.2" -> (known after apply)
  ~ ip_prefix_length  = 16 -> (known after apply)
    log_driver        = "json-file"
    logs              = false
    must_run          = true
    name              = "consul-server03"
  ~ network_data      = [
      - {
          - gateway          = "172.17.0.1"
          - ip_address       = "172.17.0.2"
          - ip_prefix_length = 16
          - network_name     = "bridge"
        },
    ] -> (known after apply)
    publish_all_ports = false
    read_only         = false
    restart           = "always"
    rm                = false
    start             = true

  ~ ports {
      ~ external = 8300 -> 8301 # forces replacement
      ~ internal = 8300 -> 8301 # forces replacement
        ip       = "0.0.0.0"
      ~ protocol = "tcp" -> "udp" # forces replacement
    }
  ~ ports {
        external = 8301
        internal = 8301
        ip       = "0.0.0.0"
      ~ protocol = "udp" -> "tcp" # forces replacement
    }
  ~ ports {
      ~ external = 8301 -> 8302 # forces replacement
      ~ internal = 8301 -> 8302 # forces replacement
        ip       = "0.0.0.0"
      ~ protocol = "tcp" -> "udp" # forces replacement
    }
  ~ ports {
        external = 8302
        internal = 8302
        ip       = "0.0.0.0"
      ~ protocol = "udp" -> "tcp" # forces replacement
    }
  ~ ports {
      ~ external = 8302 -> 8501 # forces replacement
      ~ internal = 8302 -> 8501 # forces replacement
        ip       = "0.0.0.0"
        protocol = "tcp"
    }
  ~ ports {
      ~ external = 8501 -> 8600 # forces replacement
      ~ internal = 8501 -> 8600 # forces replacement
        ip       = "0.0.0.0"
      ~ protocol = "tcp" -> "udp" # forces replacement
    }
    ports {
        external = 8600
        internal = 8600
        ip       = "0.0.0.0"
        protocol = "tcp"
    }
  ~ ports {
      ~ external = 8600 -> 8300 # forces replacement
      ~ internal = 8600 -> 8300 # forces replacement
        ip       = "0.0.0.0"
      ~ protocol = "udp" -> "tcp" # forces replacement
    }

  - volumes {
      - container_path = "/consul/config" -> null
      - host_path      = "/var/dockerdata/consul/config" -> null
      - read_only      = false -> null
    }
  + volumes {
      + container_path = "/consul/config"
      + host_path      = "/var/dockerdata/consul/config"
    }
  - volumes {
      - container_path = "/consul/data" -> null
      - host_path      = "/var/dockerdata/consul/data" -> null
      - read_only      = false -> null
    }
  + volumes {
      + container_path = "/consul/data"
      + host_path      = "/var/dockerdata/consul/data"
    }
}

Plan: 1 to add, 0 to change, 1 to destroy.
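
Comparing the two sides of the diff, the port set itself is unchanged (8300/tcp, 8301/tcp+udp, 8302/tcp+udp, 8501/tcp, 8600/tcp+udp); only the order of the ports blocks differs, and since any change to ports forces replacement, the container gets recreated. One workaround I'm considering, though I haven't confirmed it against provider v2.6.0: declare the ports blocks in my config in the same order they appear on the left-hand (state) side of the diff above, so the positional comparison lines up. A sketch of that, with everything except name, image, and ports trimmed out:

# Sketch of a possible workaround: only name, image, and ports shown;
# the rest of my real config (command, volumes, etc.) is omitted.
# Ports are listed in the order the state side of the diff reports them.
resource "docker_container" "consul" {
  name  = "consul-server03"
  image = "consul:1.6.2"

  ports {
    internal = 8300
    external = 8300
    ip       = "0.0.0.0"
    protocol = "tcp"
  }
  ports {
    internal = 8301
    external = 8301
    ip       = "0.0.0.0"
    protocol = "udp"
  }
  ports {
    internal = 8301
    external = 8301
    ip       = "0.0.0.0"
    protocol = "tcp"
  }
  ports {
    internal = 8302
    external = 8302
    ip       = "0.0.0.0"
    protocol = "udp"
  }
  ports {
    internal = 8302
    external = 8302
    ip       = "0.0.0.0"
    protocol = "tcp"
  }
  ports {
    internal = 8501
    external = 8501
    ip       = "0.0.0.0"
    protocol = "tcp"
  }
  ports {
    internal = 8600
    external = 8600
    ip       = "0.0.0.0"
    protocol = "tcp"
  }
  ports {
    internal = 8600
    external = 8600
    ip       = "0.0.0.0"
    protocol = "udp"
  }
}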

This is probably the following bug, which should have been fixed by now but apparently isn't: https://github.com/terraform-providers/terraform-provider-docker/issues/110
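
Until that issue is fixed, a possible stopgap (assuming the port set itself never needs to change) is to have Terraform ignore diffs on ports entirely. This is core Terraform lifecycle behavior rather than anything provider-specific, so it should apply even with provider v2.6.0. Note the volumes blocks above carry no "# forces replacement" marker, so I'm assuming their churn is just a side effect of the replacement and will disappear once ports stops diffing:

resource "docker_container" "consul" {
  name  = "consul-server03"
  image = "consul:1.6.2"

  # ports and volumes blocks stay exactly as in the existing config

  lifecycle {
    # Stopgap: suppress the spurious ordering diff on ports. This also
    # hides genuine port changes, so it should be removed once the
    # provider bug above is fixed.
    ignore_changes = [ports]
  }
}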