Building UEFI images with QEMU/KVM

Hi!

I’m trying to build qcow2 images for bare-metal deployments (converted to raw afterwards) using Packer and QEMU/KVM.

For BIOS-based hardware this already works fine, and I’m building images for several Linux and Windows distributions. But since our new Dell servers only support UEFI when using our NVMe disks, I also need UEFI images.

After hours of debugging I finally got KVM running with UEFI by using these qemuargs in Packer:

"qemuargs": [
        ["-m", "2048M"],
        ["-machine", "q35,accel=kvm"],
        ["-smp", "2"],
        ["-global", "driver=cfi.pflash01,property=secure,value=on"],
        ["-drive", "file=artifacts/qemu/{{user `name`}}/packer-{{user `name`}},if=virtio,cache=none,discard=unmap,format=qcow2"],
        ["-drive", "file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on"],
        ["-drive", "file=/var/lib/libvirt/qemu/nvram/f20-uefi_VARS.fd,if=pflash,format=raw,unit=1"],
        ["-boot", "order=c,once=d,menu=on,strict=on"]
      ],

So far, so good. In my example I’m building an Ubuntu 20.04 image. The unattended installer runs fine, and the Ubuntu installation then reboots for the final provisioning steps.

And here comes the issue: the UEFI firmware ignores QEMU’s -boot order and always uses the CD-ROM as the first boot device. So it never boots from the installed qcow2 disk, and the machine hangs in a CD-ROM boot loop forever.

Has anyone already found a solution for this? I’ve been experimenting with a bootindex option on the -drive parameters, but this doesn’t seem to be supported either:

2020/11/25 05:22:52 packer-builder-qemu plugin: Qemu stderr: qemu-kvm: -drive file=artifacts/qemu/ubuntu-20.04.1/packer-ubuntu-20.04.1,if=virtio,cache=none,discard=unmap,format=qcow2,bootindex=0: Block format 'qcow2' does not support the option 'bootindex'

Thank you in advance!

Alex

Oh man have I been down a rabbit hole this morning trying to solve this exact same problem!

Hopefully here’s a “teach a man to fish” moment…

The bootindex property only applies to -device definitions.

According to this comment (https://bugzilla.redhat.com/show_bug.cgi?id=1752838#c2), OVMF will respect bootindex.

Therefore we have to “override” Packer’s defaults via qemuargs and define the proper -device arguments:

    "builders": [
      {
        "type": "qemu",
        "qemuargs": [
          [ "-pflash", "OVMF-pure-efi.fd" ],
          [ "-device", "virtio-scsi-pci" ],
          [ "-device", "scsi-hd,drive=drive0,bootindex=0" ],
          [ "-device", "scsi-cd,drive=cdrom0,bootindex=1" ]
        ],
        "disk_interface": "virtio-scsi",
        "cdrom_interface": "virtio-scsi", 
        // snip
      }
    ],

Theoretically the above SHOULD work and let you use Packer’s magic for the actual -drive stanzas. However, there’s an issue with the way Packer generates the CD-ROM drive: it contains an index=0 property, and QEMU aborts because the device already exists.

qemu-system-x86_64: -drive file=packer_cache/<HASH>.iso,if=none,index=0,id=cdrom0,media=cdrom: drive with bus=0, unit=0 (index=0) exists

I tried everything I could think of to get it to work without redefining the -drive arguments, but so far I’ve been unsuccessful.


Hack-ish Workaround

If we redefine the -drive arguments exactly as Packer generates them, and simply remove the index=0 from the CD-ROM drive, everything works as expected! … at least on my machine 🙂

    "builders": [
      {
        "type": "qemu",
        "qemuargs": [
          [ "-pflash", "OVMF-pure-efi.fd" ],
          [ "-device", "virtio-scsi-pci" ],
          [ "-device", "scsi-hd,drive=drive0,bootindex=0" ],
          [ "-device", "scsi-cd,drive=cdrom0,bootindex=1" ],
          [ "-drive",  "if=none,file=output-qemu/packer-alpine,id=drive0,cache=writeback,discard=ignore,format=raw" ],
          [ "-drive",  "if=none,file=packer_cache/<HASH>.iso,id=cdrom0,media=cdrom" ]
        ],      
        "headless": true,
        "iso_url": "{{ user `mirror` }}/v{{ user `version` }}/releases/{{ user `arch` }}/alpine-{{ user `release` }}-{{ user `version` }}.{{ user `patch` }}-{{ user `arch` }}.iso",
        "iso_checksum": "file:{{ user `mirror` }}/v{{ user `version` }}/releases/{{ user `arch` }}/alpine-{{ user `release` }}-{{ user `version` }}.{{ user `patch` }}-{{ user `arch` }}.iso.sha256",
        "disk_interface": "virtio-scsi",
        "cdrom_interface": "virtio-scsi",
        "http_directory": "http",
        "vm_name": "packer-alpine",
        "disk_size": "10G",
        "format": "raw",
        "ssh_username": "root",
        "ssh_private_key_file": "~/.ssh/id_packer",
        "boot_wait": "1m",
        "boot_command": [
            // snip
        ],
        "shutdown_command": "poweroff",
        "shutdown_timeout": "1m"
      }
    ],

If I do manage to figure out how to keep the auto-generated -device arguments from Packer, I will report back!

Matthew

Hey, thanks for this post!
I tried to test it, but at startup it doesn’t see my disk. Maybe I need to inject drivers?
Thanks

Did anyone succeed with the image build?

I managed to do it; the posts here helped me, so here’s some feedback. Below is my Packer JSON file, used on Google Cloud to build a Rocky Linux server image.

rockylinux-8-from-iso.json
    {
      "builders": [
        {
          "type": "qemu",
          "name": "rockylinux-8-x86_64-84-image",
          "vm_name": "disk.raw",
          "output_directory": "output",
          "accelerator": "kvm",
          "disk_size": 20048,
          "format": "raw",
          "headless": true,
          "http_directory": "http",
          "iso_url": "https://download.rockylinux.org/pub/rocky/8/isos/x86_64/Rocky-8.4-x86_64-boot.iso",
          "iso_checksum": "sha256:53a62a72881b931bdad6b13bcece7c3a2d4ca9c4a2f1e1a8029d081dd25ea61f",
          "shutdown_command": "/sbin/shutdown -hP now",
          "ssh_username": "root",
          "ssh_password": "root_password_will_be_deleted",
          "ssh_port": 22,
          "ssh_wait_timeout": "10000s",
          "qemu_binary": "qemu-kvm",
          "qemuargs": [
            [ "-m", "1024m" ],
            [ "-smp", "2" ],
            [ "-vga", "virtio" ],
            [ "-device", "virtio-blk-pci,drive=drive0,bootindex=0" ],
            [ "-device", "virtio-blk-pci,drive=cdrom0,bootindex=1" ],
            [ "-drive", "if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.secboot.fd" ],
            [ "-drive", "if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd" ],
            [ "-drive", "if=none,file=output/disk.raw,cache=writeback,discard=ignore,format=raw,id=drive0" ],
            [ "-drive", "if=none,file=packer_cache/a4672833d0d89d9d9953d44436c44a32e50994ed.iso,media=cdrom,id=cdrom0" ]
          ],
          "machine_type": "q35",
          "boot_wait": "10s",
          "boot_command": [
            "e inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/rockylinux-8.ksx"
          ]
        }
      ],
      "provisioners": [
        {
          "type": "shell",
          "scripts": [
            "scripts/gce.sh"
          ],
          "execute_command": "sh '{{.Path}}'"
        },
        {
          "type": "shell",
          "scripts": [
            "scripts/linux-guest-environment.sh"
          ],
          "execute_command": "sh '{{.Path}}'"
        }
      ],
      "post-processors": [
        [
          {
            "type": "compress",
            "output": "output/packer-rockylinux.tar.gz"
          },
          {
            "bucket": "{{user `gcs_bucket`}}",
            "image_description": "Rocky Linux 8 Server",
            "image_family": "{{user `image_family`}}",
            "image_name": "jyl-rockylinux-8-server-{{timestamp}}",
            "image_guest_os_features": ["UEFI_COMPATIBLE"],
            "project_id": "{{user `project`}}",
            "type": "googlecompute-import"
          }
        ]
      ]
    }

Things to have in mind :

  • If you build with a RedHat-like distro, use version 8 so you can use the q35 machine type (a prerequisite for UEFI); version 7 doesn’t have it
  • boot_command needs to be changed, because in UEFI EL8 the bootloader is GRUB, not isolinux
  • For UEFI you need to import the image with the correct flag in Google (UEFI_COMPATIBLE)
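To illustrate the second point: an isolinux menu is edited by pressing Tab and appending options, while a GRUB menu is edited by pressing e and booted with Ctrl-X. A sketch of what the two boot_command variants might look like for an EL8 kickstart install (the key names are Packer boot_command special keys; the exact menu navigation keystrokes and the ks.cfg path are assumptions you’d adjust for your ISO):

```json
{
  "_comment_bios": "BIOS/isolinux: Tab opens the kernel line for editing",
  "boot_command": [
    "<tab> inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
  ],

  "_comment_uefi": "UEFI/GRUB: 'e' edits the entry, Ctrl-X boots it",
  "boot_command_uefi_variant": [
    "e<down><down><end> inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<leftCtrlOn>x<leftCtrlOff>"
  ]
}
```

(Only one boot_command key goes in a real template; both are shown side by side here for comparison.)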

JYL

Hey, all!

I ran into the same issue that led most of us here via Google. Luckily, I was able to figure out a workaround for my use case (generating Vagrant UEFI boxes and uploading them to Vagrant Cloud).

This repository contains the build scripts for CentOS 7, 8.4 and 8.5 with UEFI enabled: GitHub - r0x0d/vagrant-uefi-boxes: Collection of RHEL clones vagrant boxes with UEFI support.

What I did, instead of messing around with qemuargs, was use the firmware option available in Packer’s QEMU builder. You just need to set it to the path of your OVMF.fd file, and that should be enough for Packer.
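As a minimal sketch of that approach (the OVMF path is an assumption and varies by distro and package; the firmware option requires a reasonably recent version of the Packer QEMU builder):

```json
{
  "builders": [
    {
      "type": "qemu",
      "firmware": "/usr/share/OVMF/OVMF.fd",
      "machine_type": "q35",
      "disk_interface": "virtio",
      "format": "qcow2"
    }
  ]
}
```

With firmware set, Packer passes the UEFI image to QEMU itself, so no pflash-related qemuargs are needed.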

Another important thing, at least for my use case, is to run grub2-install at the end of the kickstart script, so that GRUB 2 actually installs the correct UEFI files. Without that step, all I got was a UEFI Shell screen every time I booted the VM; see an example here: vagrant-uefi-boxes/centos85-ks.cfg at main · r0x0d/vagrant-uefi-boxes · GitHub
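A sketch of what that end-of-kickstart step could look like (the EFI directory and config paths are assumptions for a typical EL8 layout; the linked kickstart is the authoritative version):

```shell
%post --erroronfail
# Reinstall GRUB into the EFI System Partition; without this the
# firmware finds no bootloader and drops to a UEFI Shell on first boot.
grub2-install --target=x86_64-efi --efi-directory=/boot/efi
grub2-mkconfig -o /boot/grub2/grub.cfg
%end
```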

If you want to check out those boxes in action, you can check them out here: rhel-conversions - Vagrant Cloud

Hope this helps anyone else who bumps into this issue like I did 🙂