[OVH][COUNT] Add another server without replacing the older ones

Hi,

Here is my problem today.
I have an OVH infrastructure deployed with Terraform.
I used `count` to deploy three identical instances, but when I want to scale out to a fourth one, Terraform wants to replace the three existing instances because some configuration has changed.
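To scale out, I just bump the count variable from 3 to 4. A sketch of the declaration (the exact contents of my `variables.tf` are reconstructed here):

```hcl
variable "ovh_nb_swarmnodes" {
  description = "Number of swarm node instances"
  type        = number
  default     = 3 # bumped to 4 to get int-swarmnode-004
}
```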

Here is the output of `terraform plan`:

```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # openstack_compute_instance_v2.swarmnode[0] must be replaced
-/+ resource "openstack_compute_instance_v2" "swarmnode" {
      ~ access_ip_v4        = "***" -> (known after apply)
      ~ access_ip_v6        = "[***]" -> (known after apply)
      ~ all_metadata        = {} -> (known after apply)
      ~ all_tags            = [] -> (known after apply)
      ~ availability_zone   = "nova" -> (known after apply)
      ~ flavor_id           = "***" -> (known after apply)
        flavor_name         = "b2-15"
        force_delete        = false
      ~ id                  = "***" -> (known after apply)
      ~ image_id            = "***" -> (known after apply)
      ~ image_name          = "Image not found" -> "Ubuntu 18.04" # forces replacement
        key_pair            = "key_pair"
        name                = "int-swarmnode-001"
        power_state         = "active"
      ~ region              = "GRA7" -> (known after apply)
        security_groups     = [
            "int-backend",
        ]
        stop_before_destroy = false
      - tags                = [] -> null
      ~ user_data           = "7a3c7f112e97010e2e5dc37f3b86ce15920df5e0" -> "9b9be33f6e07597350ed760b4e19b2bc4fd59da3" # forces replacement

      ~ network {
            access_network = false
          ~ fixed_ip_v4    = "***" -> (known after apply)
          ~ fixed_ip_v6    = "***" -> (known after apply)
          + floating_ip    = (known after apply)
          ~ mac            = "***" -> (known after apply)
            name           = "Ext-Net"
          + port           = (known after apply)
          ~ uuid           = "***" -> (known after apply)
        }
      ~ network {
            access_network = true
          ~ fixed_ip_v4    = "***" -> (known after apply)
          + fixed_ip_v6    = (known after apply)
          + floating_ip    = (known after apply)
          ~ mac            = "***" -> (known after apply)
            name           = "privnet_saas_int"
          + port           = (known after apply)
          ~ uuid           = "***" -> (known after apply)
        }
    }


  # openstack_compute_instance_v2.swarmnode[3] will be created
  + resource "openstack_compute_instance_v2" "swarmnode" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "b2-15"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = "Ubuntu 18.04"
      + key_pair            = "key_pair"
      + name                = "int-swarmnode-004"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = [
          + "int-backend",
        ]
      + stop_before_destroy = false
      + user_data           = "9b9be33f6e07597350ed760b4e19b2bc4fd59da3"

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + floating_ip    = (known after apply)
          + mac            = (known after apply)
          + name           = "Ext-Net"
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
      + network {
          + access_network = true
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + floating_ip    = (known after apply)
          + mac            = (known after apply)
          + name           = "privnet_saas_int"
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.ssh_keypair_*** must be replaced
-/+ resource "openstack_compute_keypair_v2" "ssh_keypair_***" {
      ~ fingerprint = "***" -> (known after apply)
      ~ id          = "***" -> (known after apply)
        name        = "***"
      + private_key = (known after apply)
      ~ public_key  = <<~EOT # forces replacement
            ***
        EOT
        region      = "GRA7"
    }

Plan: 5 to add, 0 to change, 4 to destroy.

Warning: Resource targeting is in effect

You are creating a plan with the -target option, which means that the result
of this plan may not represent all of the changes requested by the current
configuration.
    
The -target option is not for routine use, and is provided only for
exceptional situations such as recovering from errors or mistakes, or when
Terraform specifically suggests to use it as part of an error message.


------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

I don't know why the image is not set, because in my Terraform state (stored remotely in an AWS S3 bucket) it is set:

```json
{
      "mode": "managed",
      "type": "openstack_compute_instance_v2",
      "name": "swarmnode",
      "each": "list",
      "provider": "provider.openstack.ovh",
      "instances": [
        {
          "index_key": 0,
          "schema_version": 0,
          "attributes": {
            "access_ip_v4": "***",
            "access_ip_v6": "***",
            "admin_pass": null,
            "all_metadata": {},
            "all_tags": [],
            "availability_zone": "nova",
            "block_device": [],
            "config_drive": null,
            "flavor_id": "***",
            "flavor_name": "b2-15",
            "floating_ip": null,
            "force_delete": false,
            "id": "****",
            "image_id": "****",
            "image_name": "Ubuntu 18.04",
            "key_pair": "****",
            "metadata": null,
            "name": "int-swarmnode-001",
            "network": [
              {
                "access_network": false,
                "fixed_ip_v4": "****",
                "fixed_ip_v6": "***",
                "floating_ip": "",
                "mac": "***",
                "name": "Ext-Net",
                "port": "",
                "uuid": "***"
              },
              {
                "access_network": true,
                "fixed_ip_v4": "***",
                "fixed_ip_v6": "",
                "floating_ip": "",
                "mac": "***",
                "name": "privnet_saas_int",
                "port": "",
                "uuid": "***"
              }
              ...
```
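For what it's worth, the same attributes can also be inspected straight from the state with:

```
$ terraform state show 'openstack_compute_instance_v2.swarmnode[0]'
```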
The user_data hash also changed, but I don't know why.
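If I understand correctly, the user_data value shown in the plan is the SHA-1 hash of the rendered cloud-init template, so it changes whenever the template or its inputs change. One way to reproduce the hash in `terraform console` (a sketch; I'm assuming the `int` environment paths from my config below):

```
$ terraform console
> sha1(templatefile("cloud-init.swarmnode.conf", { ssh_public_key = file("vars/int/int.pub") }))
```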

I tried a `terraform import` to pull the latest values back in, but it doesn't change anything.
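Roughly like this, with the real instance UUID in place of the placeholder:

```
$ terraform import 'openstack_compute_instance_v2.swarmnode[0]' <instance-uuid>
```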

How can I resolve the conflict here without destroying the instances?
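I thought about ignoring these attributes with a `lifecycle` block, but I'm not sure it's the right fix (a sketch only, not something I have applied):

```hcl
resource "openstack_compute_instance_v2" "swarmnode" {
  # ... same arguments as in the EDIT below ...

  lifecycle {
    ignore_changes = [image_name, user_data]
  }
}
```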

EDIT:

Here is my Terraform code that deploys the swarmnode instances with `count`:

resource "openstack_compute_instance_v2" "swarmnode" {
  provider = openstack.ovh
  count = var.ovh_nb_swarmnodes
  name = "${var.ovh_env}-swarmnode-${format("%03d", count.index + 1)}"
  image_name = "Ubuntu 18.04"
  # flavor_name = "s1-2" # 1 vCores 2.4GHz / 2Go RAM / 10Go SSD
  flavor_name = var.ovh_swarm_instance_type
  key_pair = openstack_compute_keypair_v2.ssh_keypair.name
  user_data = templatefile("cloud-init.swarmnode.conf", {ssh_public_key = file("vars/${var.ovh_env}/${var.ovh_env}.pub")})

  security_groups = [
    "${openstack_compute_secgroup_v2.sg-backend.name}"
  ]

  network {
    name = "Ext-Net"
  }
  network {
    access_network = true
    name = ovh_cloud_network_private.network.name
  }
}
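For completeness, the referenced keypair is declared roughly like this (its real name is redacted in the plan above, so the one shown here is hypothetical):

```hcl
resource "openstack_compute_keypair_v2" "ssh_keypair" {
  provider   = openstack.ovh
  name       = "${var.ovh_env}-keypair" # hypothetical; the real name is redacted
  public_key = file("vars/${var.ovh_env}/${var.ovh_env}.pub")
  region     = "GRA7"
}
```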

Thanks for your help

Best regards,

BDT