Using qemu builder gives network ens3, after importing in virt-manager it becomes enp1s0

Hi,

I’m building my first image with packer/qemu. It’s a simple config for Ubuntu server 20.04, taken from the examples. The build process succeeds, an image is created.

I can import the image with virsh. The resulting VM boots OK, but its network doesn’t come up. The reason is that during the packer build the VM enumerated the NIC as ens3, but in the imported VM it is enumerated as enp1s0. The Ubuntu autoinstall wrote a netplan configuration with ens3 in it.

I can fix that manually, of course, but that defeats the point of an automated image-building process.

I’m quite puzzled about how this network enumeration process works. There is some documentation, but nowhere does it explain where the s (in ens3) comes from. Knowing how the system comes up with the s may be an important clue. Maybe I need to define PCI devices?

Does anyone recognize this problem?

BTW, there is a work-around: add net.ifnames=0 to boot_command. In that case both the builder and the final image fall back to the old-fashioned eth0.
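For anyone wondering where that parameter goes: a minimal sketch of a 20.04 autoinstall boot_command with it added. Everything except net.ifnames=0 is illustrative and depends on your ISO and HTTP seed setup:

```hcl
boot_command = [
  "<esc><esc><esc><f6><esc><wait>",
  # net.ifnames=0 disables predictable naming in BOTH the builder VM and
  # the installed image, so the generated netplan ends up with eth0 everywhere.
  " autoinstall net.ifnames=0 ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ ",
  "<enter>"
]
```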


The documentation you’re looking for is at PredictableNetworkInterfaceNames

In 2013, systemd and udev changed to default to predictable names for network interfaces. The lists below are from the linked page, which also describes why this solution was chosen and what problem it’s trying to solve.

Names are created using the following schemes.

  1. Names incorporating Firmware/BIOS provided index numbers for on-board devices (example: eno1 )
  2. Names incorporating Firmware/BIOS provided PCI Express hotplug slot index numbers (example: ens1 )
  3. Names incorporating physical/geographical location of the connector of the hardware (example: enp2s0 )
  4. Names incorporating the interface’s MAC address (example: enx78e7d1ea46da )
  5. Classic, unpredictable kernel-native ethX naming (example: eth0 )
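If you want to see which of these schemes produced a given name on a live system, udev’s net_id builtin prints every candidate name it computed. A quick check on the guest (use whatever interface name your guest currently has):

```shell
# Ask udev which predictable names it derived for the NIC.
# ID_NET_NAME_SLOT (e.g. ens3) is scheme 2: "s" + the firmware-reported
# PCI hotplug slot index. ID_NET_NAME_PATH (e.g. enp1s0) is scheme 3:
# "p" + PCI bus + "s" + PCI slot from the device's physical location.
udevadm test-builtin net_id /sys/class/net/ens3 2>/dev/null
```

So the s in ens3 is a slot number either way; the difference between ens3 and enp1s0 is whether a firmware slot index (scheme 2) or the PCI geography (scheme 3) won.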

If you don’t want this, there are three alternatives.

  1. You disable the assignment of fixed names, so that the unpredictable kernel names are used again. For this, simply mask udev’s .link file for the default policy: ln -s /dev/null /etc/systemd/network/99-default.link
  2. You create your own manual naming scheme, for example by naming your interfaces “internet0”, “dmz0” or “lan0”. For that create your own .link files in /etc/systemd/network/, that choose an explicit name or a better naming scheme for one, some, or all of your interfaces. See systemd.link(5) for more information.
  3. You pass net.ifnames=0 on the kernel command line
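As a sketch of option 2, a .link file pinning one interface to a chosen name by MAC address (the MAC and name here are made up; see systemd.link(5) for the full set of match keys):

```ini
# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=52:54:00:12:34:56

[Link]
Name=lan0
```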

Like you, I chose option 3 as my VMs overwhelmingly use a single network interface and this keeps it simple.


Don’t get me wrong, but unpredictable interface names are not recommended on most newer distributions, so IMHO this is a really bad solution.
Red Hat, for example, is really clear on this one:

  • No. Red Hat strongly recommend that the new RHEL7, RHEL8 and RHEL9 naming conventions are used.

I really like the new interface style and don’t want to think up names myself, so the only option left is to manipulate the images afterwards with other tools or scripts so they match the final infrastructure. That is kind of hacky, but it’s the best solution I have found so far.

I would really appreciate a better solution that works with packer, maybe a qemuargs entry to make the virtual network interface show up as enp1s0 or something like that. Anyhow, I don’t understand why packer makes it available as a PCIe interface.

Is there nothing to do about this?


One way to get the same predictable network interface name is to make sure the virtual hardware used by the NIC is identical. I would start by inspecting the virsh config and maybe running lspci to list the virtual hardware. Once you figure out which NIC you want the image to use after the build, customize the packer qemu args to use the same virtual NIC hardware.
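For the inspection step, something along these lines (the VM name is an example):

```shell
# On the KVM host: see how libvirt wired up the NIC (model, PCI address)
virsh dumpxml myvm | grep -A8 '<interface'

# Inside the guest: see which PCI bus/slot the NIC actually landed in
lspci | grep -i ethernet
```

The PCI bus and slot shown by lspci are exactly what the enpXsY name is derived from, so matching them between builder and target gives matching interface names.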

I was able to make this work automatically by creating an /etc/netplan/50-cloud-init.yaml configuration:

network:
  version: 2
  ethernets:
    en_interfaces:
      match:
        name: "en*"
      optional: true
      dhcp4: true
      dhcp6: false

And /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg:

network:
  config: disabled

With the provisioner:

  provisioner "file" {
    source      = "provision/50-cloud-init.yaml"
    destination = "/var/tmp/50-cloud-init.yaml"
  }

  provisioner "file" {
    source      = "provision/99-disable-network-config.cfg"
    destination = "/var/tmp/99-disable-network-config.cfg"
  }

  provisioner "shell" {
    inline = [
      "sudo rm -f /etc/netplan/50-cloud-init.yaml",
      "sudo cp /var/tmp/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml",
      "sudo chown root:root /etc/netplan/50-cloud-init.yaml",
      "sudo chmod 600 /etc/netplan/50-cloud-init.yaml",
      "sudo netplan apply",
      "sudo cp /var/tmp/99-disable-network-config.cfg /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg",
    ]
  }

Signed up to say thanks and share the fix. I hit the exact same problem when building with packer on a laptop and deploying to KVM on another server, right down to the device IDs. @trodemaster has the right idea for a fix. I managed to get an enp1s0 ethernet card like this:

Step 1: Change bus
machine_type = "pc-q35-7.2"

This gives me an enp device instead of ens, but with the wrong slot and bus IDs.

Step 2: Tweak the qemu args to “install” some PCI-e slots

   ["-device", <<-EOT
    {"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x7"}
    EOT
    ],

Step 3: “Install” the network card into the PCI-e slot

    ["-netdev", <<-EOT
    {"type":"user","id":"hostnet0"} 
    EOT
    ],
    ["-device", <<-EOT
    {"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"de:ad:be:ef:ca:fe","bus":"pci.1","addr":"0x0"}
    EOT
    ]

Step 4: (if required) adjust the bus addr value

You may get an ID clash on the bus with other random devices, like this:

2025/02/02 12:00:12 packer-plugin-qemu_v1.1.0_x5.0_linux_amd64 plugin: 2025/02/02 12:00:12 Qemu stderr: : PCI: slot 0 function 0 not available for pcie-root-port, in use by mch,id=(null)

In this case, I got things working by just trying new addr values at random until I found a free one at 0x7, just like setting IRQ 7 for a Sound Blaster in the good old days.
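Instead of guessing, you can also ask QEMU which slots are occupied: the HMP monitor command info pci lists every device with its bus and slot address. A sketch, assuming you started QEMU with a monitor socket at the example path shown:

```shell
# Requires something like -monitor unix:/tmp/qemu-mon.sock,server,nowait
# on the QEMU command line; then query the occupied PCI addresses:
echo "info pci" | socat - unix-connect:/tmp/qemu-mon.sock
```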

After spending way too long on this, I was rewarded with enp1s0 in my packer VM, the correct interface definition in Debian’s /etc/network/interfaces, and an IP address when booting the image on the KVM server. The other way to do this would have been cloud-init.

Complete example (UEFI + secure boot)

The NIC is defined directly in qemuargs here, so don’t also try to set it with the plugin’s net_device option.

source "qemu" "debian12" {
  qemuargs = [
    ["-bios", "/usr/share/OVMF/OVMF_CODE.fd"],
    ["-chardev", "stdio,id=char0,logfile=serial.log,signal=off"],
    ["-serial", "chardev:char0"],

    ["-device", <<-EOT
    {"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x7"}
    EOT
    ],
    ["-netdev", <<-EOT
    {"type":"user","id":"hostnet0"} 
    EOT
    ],
    ["-device", <<-EOT
    {"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"de:ad:be:ef:ca:fe","bus":"pci.1","addr":"0x0"}
    EOT
    ]

  ]
  communicator      = "none"
  cpus              = "2"
  memory            = "2048"
  iso_url           = "..."
  iso_checksum      = "sha512:..."
  output_directory  = "..."
  shutdown_timeout  = "1h"
  disk_size         = "20G"
  format            = "qcow2"
  accelerator       = "kvm"
  http_directory    = "http"
  vm_name           = "debian12"
  #net_device       = "virtio-net-pci"
  disk_interface    = "virtio"
  boot_wait         = "2s"
  machine_type      = "pc-q35-7.2"
  boot_command      = [ ... ]
}

Cheers,
Geoff