Automating KVM infrastructure deployment with the terraform-libvirt library

Hi All,

I’m trying to automate the deployment of guest machines on KVM. I’ve tried other open-source solutions, but Terraform is the one I really like. I’ve used a couple of other online resources to put together what I have now (https://titosoft.github.io/kvm/terraform-and-kvm/ and https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9). At the moment I can create an n-number of VMs as defined in the variables. I’m facing a few issues:

  1. All VMs created with this script get the correct name when listed by the virsh command, however they all have the same hostname (ubuntu) once I ssh into them. I’d like to have a line in my script to change it, but I’m still struggling to find a way. I’ve tried to populate it with user_data, as I read that’s the most elegant solution.
    This didn’t do it:
    user_data = "${data.template_file.user_data.rendered[count.index]}"

  2. Perhaps a stupid question, but I’d like to feed data (hostnames, for instance) from an external source (a CSV file, for example, or a project saved on GitLab).

  3. I’d like to understand how I can push more cloud-init values.

Does anyone have an idea on how this can be solved please?
Thank you for any help!

This is my (working) code now:

# Instantiate the provider

provider "libvirt" {
  uri = "qemu:///system"
}

variable "vm_machines" {
  description = "Create machines with these names"
  type        = list(string)
  default     = ["master01", "worker01", "worker02", "worker03"]
}

# We fetch the Ubuntu release image from the official mirrors

resource "libvirt_volume" "ubuntu" {
  count  = length(var.vm_machines)
  name   = "${var.vm_machines[count.index]}.qcow2"
  pool   = "guest_images"
  source = "http://cloud-images.ubuntu.com/releases/bionic/release-20191008/ubuntu-18.04-server-cloudimg-amd64.img"
  format = "qcow2"
}

# Create a network for our VMs

resource "libvirt_network" "vm_network" {
  name      = "vm_network"
  addresses = ["10.224.1.0/24"]
  dhcp {
    enabled = true
  }
}

# Use cloud-init to add our SSH key to the instance

resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  pool           = "guest_images"
  user_data      = "${data.template_file.user_data.rendered}"
  network_config = "${data.template_file.network_config.rendered}"
}

data "template_file" "user_data" {
  template = "${file("${path.module}/cloud_init.cfg")}"
}

data "template_file" "network_config" {
  template = "${file("${path.module}/network_config.cfg")}"
}

# Create the machine

resource "libvirt_domain" "ubuntu" {
  count  = length(var.vm_machines)
  name   = var.vm_machines[count.index]
  memory = "8196"
  vcpu   = 2

  cloudinit = "${libvirt_cloudinit_disk.commoninit.id}"

  network_interface {
    network_id   = "${libvirt_network.vm_network.id}"
    network_name = "vm_network"
  }

  # IMPORTANT
  # Ubuntu can hang if an isa-serial console is not present at boot time.
  # If you find the CPU at 100% and the VM never becomes available, this is why.
  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  disk {
    volume_id = libvirt_volume.ubuntu[count.index].id
  }

  graphics {
    type        = "vnc"
    listen_type = "address"
    autoport    = "true"
  }
}

I’m still learning Terraform, so this solution may not be the best, but after much “hair pulling” I managed to set the hostname of each VM by applying “count” to the cloudinit/commoninit definitions:

variable "vm_names" {
  description = "The names of the VMs to create"
  type = list(string)
  default = ["itp-dev-master01","itp-dev-worker01","itp-dev-worker02"]
}

# Use cloud-init to add users and their SSH public keys to the VM instance
resource "libvirt_cloudinit_disk" "commoninit" {
  name = "commoninit${count.index}.iso"
  user_data      = data.template_file.user_data[count.index].rendered

  count = length(var.vm_names)
}

data "template_file" "user_data" {
  template = "${file("${path.module}/cloud_init.cfg")}"

  vars = {
    HOSTNAME = var.vm_names[count.index]
  }

  count = length(var.vm_names)
}

# Virtual-Machine(s)
resource "libvirt_domain" "itp-dev-vm" {
  name   = var.vm_names[count.index]
  memory = "1024"
  vcpu   = 1
  autostart = false

  network_interface {
    network_id = libvirt_network.vm_network.id
    network_name = var.vm_network_name
    hostname = var.vm_names[count.index]
    wait_for_lease = true
  }

  cloudinit = libvirt_cloudinit_disk.commoninit[count.index].id

  disk {
    volume_id = element(libvirt_volume.vm-ubuntu-qcow2.*.id,count.index)
  }

  # IMPORTANT
  # Ubuntu can hang if an isa-serial console is not present at boot time.
  # If you find the CPU at 100% and the VM never becomes available, this is why
  console {
    type = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type = "spice"
    listen_type = "address"
    autoport = true
  }

  count = length(var.vm_names)
}

# IPs: Use "wait_for_lease true" on "network_interface" or alternatively, after creation, use "terraform refresh", "terraform show" 
#      or "virsh net-dhcp-leases vm_network" to display the IP addresses of the KVM domains
output "IPs" {
  value = libvirt_domain.itp-dev-vm.*.network_interface.0.addresses
}
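
On question 2 (feeding the hostnames from an external source): Terraform 0.12 and later ship a built-in csvdecode() function, so the list of machines could come from a CSV file instead of a hard-coded default. A minimal sketch, assuming a hypothetical machines.csv sitting next to the configuration:

```hcl
# Hypothetical machines.csv (first row is the header):
#   name,memory,vcpu
#   itp-dev-master01,2048,2
#   itp-dev-worker01,1024,1

locals {
  # csvdecode returns a list of objects, one per CSV row,
  # with attributes named after the header columns
  machines = csvdecode(file("${path.module}/machines.csv"))
}

resource "libvirt_domain" "itp-dev-vm" {
  count  = length(local.machines)
  name   = local.machines[count.index].name
  memory = local.machines[count.index].memory
  vcpu   = local.machines[count.index].vcpu
  # ... rest of the domain definition unchanged ...
}
```

For a file kept in GitLab, one option is simply cloning that repo alongside the Terraform code; the hashicorp/http data source can also fetch a raw file URL, though the details depend on the project’s visibility and authentication.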

and then using the following runcmd entry in my cloud-init.cfg file:

runcmd:
  # Set hostname
  - hostnamectl set-hostname ${HOSTNAME}
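
On question 3, the same mechanism extends to any other cloud-config key: each ${...} placeholder added to cloud_init.cfg just needs a matching entry in the template_file vars map. A sketch of a fuller cloud_init.cfg; the user name and SSH key below are placeholders, not values from the original post:

```yaml
#cloud-config
# ${HOSTNAME} is substituted by Terraform's template_file data source
hostname: ${HOSTNAME}
manage_etc_hosts: true

users:
  - name: deploy                      # placeholder user name
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... deploy    # placeholder public key

packages:
  - qemu-guest-agent

runcmd:
  - hostnamectl set-hostname ${HOSTNAME}
```

With the hostname: key set, the hostnamectl call in runcmd is arguably redundant, but it does no harm as a belt-and-braces measure.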

Steve