Update one VM from a group

If I launch a group of virtual machines with the same hardware configuration, but later need to change the storage and memory size of just one of them, how do I do that without updating all of the virtual machines?
I want to set this up in Azure and AWS.
If this was answered elsewhere, please kindly point me in the right direction.
I tried searching for a couple of days, but maybe I am using the wrong keywords.
Thank you!

Assuming you have something like this currently:

resource "virtual_machine" "machines" {
  for_each = local.vm_details

  name    = each.key
  storage = 100
  memory  = 10
}

I would change it to something like:

resource "virtual_machine" "machines" {
  for_each = local.vm_details

  name    = each.key
  storage = each.value.storage
  memory  = each.value.memory
}

If most of the VMs have identical values, then to simplify the map I’d probably adjust the memory = each.value.memory style lines to something like memory = try(each.value.memory, 10), so the default is used whenever that attribute isn’t set for a given map entry.
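For example, here is a minimal sketch of what that could look like (the contents of vm_details and the default values are illustrative, reusing the placeholder virtual_machine resource type from above):

locals {
  # Only spell out the attributes that differ from the group-wide defaults.
  vm_details = {
    "vm-01" = {}                              # uses the defaults
    "vm-02" = {}                              # uses the defaults
    "vm-03" = { storage = 250, memory = 32 }  # the one machine that needs more
  }
}

resource "virtual_machine" "machines" {
  for_each = local.vm_details

  name    = each.key
  storage = try(each.value.storage, 100) # fall back to the shared default
  memory  = try(each.value.memory, 10)
}

With this shape, editing the entry for "vm-03" only produces a planned change for that one resource instance; the others keep their current values.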


Hello @dennis.pomadev, since you refer to a group: AFAIK, AWS ASGs and Azure VMSS both require all machines to be identical at deploy time. If you alter one, a subsequent Terraform plan will detect the drift and will want to replace the instance.
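To illustrate the point with a rough AWS-side sketch (the resource names, AMI, subnet, and sizes below are placeholders, not from your setup): the instance configuration for an ASG lives in a single launch template, so there is no per-instance setting to override.

resource "aws_launch_template" "workstation" {
  name_prefix   = "workstation-"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.large"              # applies to every instance the group launches
}

resource "aws_autoscaling_group" "workstations" {
  name                = "workstations"
  min_size            = 3
  max_size            = 3
  desired_capacity    = 3
  vpc_zone_identifier = ["subnet-0123456789abcdef0"] # placeholder subnet

  launch_template {
    id      = aws_launch_template.workstation.id
    version = "$Latest"
  }
}

Every instance the group creates comes from that one template, so a one-off storage or memory change for a single machine isn’t something this model expresses; per-machine differences fit better with individual resources and for_each, as shown earlier in the thread.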


This is what I was thinking, and I was not sure if it was even possible with Auto Scaling groups.

This is great, thank you.
I guess the biggest concern would be how to dynamically create new resources when new members are added to, for example, an AD group.
That is, if I have a Marketing group and they hire someone new, then once AD is updated we would have a trigger in a pipeline to execute Terraform, read through AD, and create a remote workstation for that new hire.

There are a lot of ways to skin that cat, and it partly depends on the size of the data set. Thinking for scale, I would imagine a pipeline orchestrator with privileged access to the list of users; that way you would be able to define the full list of machines required.

I would have a job dynamically populate one or more JSON lists that become locals, such that each unique user equates to a unique entry for a machine (if you are tying machines to users 1:1), with a Terraform config that turns that list into an HCL map. Then have your code iterate the map with for_each (and dynamic blocks where needed), deploying a machine per entry; see the sketch below. Check this as well.
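A rough sketch of that shape, assuming the pipeline job drops a users.json file next to the config (the file name, the JSON structure, and the reuse of the placeholder virtual_machine resource are all assumptions):

locals {
  # users.json is assumed to be generated by the pipeline, e.g.:
  # [
  #   { "username": "asmith", "group": "Marketing" },
  #   { "username": "bjones", "group": "Marketing" }
  # ]
  users = jsondecode(file("${path.module}/users.json"))

  # Key the map by username so for_each gets stable, unique keys per machine.
  workstations = { for u in local.users : u.username => u }
}

resource "virtual_machine" "workstations" {
  for_each = local.workstations

  name    = "ws-${each.key}"
  storage = 100
  memory  = 10
}

When the pipeline regenerates users.json after an AD change, the next plan only adds or removes the workstations whose usernames changed, because each machine is keyed by its user.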

I’d do the machine prep with HCP Packer so you have control over the build of the machines you allocate: write a hardened base image, and then a set of phoenix images as needed for the various groups.
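On the Terraform side, something like the hcp_packer_image data source can resolve the image from an HCP Packer channel (the bucket and channel names here are made up, and depending on your provider version a newer equivalent data source may apply):

data "hcp_packer_image" "hardened_base" {
  bucket_name    = "hardened-base" # assumed HCP Packer bucket name
  channel        = "production"
  cloud_provider = "aws"
  region         = "us-east-1"
}

# The resolved ID can then feed the machine resources, e.g.
# image_id = data.hcp_packer_image.hardened_base.cloud_image_id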