Stop a vSphere virtual machine from rebooting when provisioning with Packer

Is there any way I could configure a virtual machine in vSphere, created from an Ubuntu cloud image (OVA) (Ubuntu 22.04 LTS (Jammy Jellyfish) release [20230719]), so that it does not reboot after the first boot?

I’m creating the templates with Packer and then deploying them with Terraform. The biggest problem is on the Terraform side, because I need to connect to the VMs over SSH to retrieve some information. But even with Packer, cloud-init doesn’t always behave correctly: I can’t find an easy way to stop SSH at the right moment (“systemctl” and “service” don’t seem to be recognised when run at the ‘bootcmd’ stage), so SSH starts too early, Packer begins provisioning, and then the virtual machine reboots because of the VMware Tools (which run some perl scripts for the guest OS customisation). The Tools watch cloud-init, wait for it to finish, and then trigger a reboot.
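For reference, this is roughly the user-data I’ve been experimenting with. It is only a sketch: I’m assuming the ‘bootcmd’ failures are a PATH issue and that using the absolute path to systemctl works around them, and the service name (ssh vs sshd, or an ssh.socket unit) may differ per release:

```yaml
#cloud-config
bootcmd:
  # Stop SSH as early as possible so Packer cannot connect before
  # guest customisation has finished. Absolute path, because plain
  # "systemctl"/"service" were not found at the bootcmd stage for me.
  - [ /usr/bin/systemctl, stop, ssh.service ]
runcmd:
  # Bring SSH back up in cloud-init's final stage, once the earlier
  # boot stages (and hopefully the customisation) are done.
  - [ /usr/bin/systemctl, start, ssh.service ]
```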

I’ve tried setting disable_vmware_customization: false in cloud-init, and while the VM no longer reboots and the user-data does seem to be injected correctly, this appears to create a broken template where I can no longer provide the meta-data (network configuration) through Terraform:
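Concretely, I applied the setting by dropping a fragment into cloud.cfg.d during the Packer build (the filename is arbitrary):

```yaml
# /etc/cloud/cloud.cfg.d/99-vmware-customization.cfg
# Lets cloud-init handle the VMware guest customisation itself
# instead of the perl scripts shipped with the VMware Tools.
disable_vmware_customization: false
```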

  clone {
    template_uuid =
    customize {
      linux_options {
        host_name = "${each.key}"
        domain    = "company.internal"
      }
      network_interface {
        ipv4_address = "${each.value["ip_address"]}"
        ipv4_netmask = "24"
      }
      ipv4_gateway    = "${}"
      dns_server_list = "${var.dns}"
    }
  }
At least that’s how I interpret it. It leads to the VMs starting with disconnected network interfaces, and manually reconnecting them as quickly as possible (just for debugging) still doesn’t inject the correct configuration into netplan.

A solution on the Terraform side (the second stage, as it were) would also work here, if one exists. Any suggestions are appreciated.
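For completeness, one direction I’m considering but haven’t verified end to end: dropping the customize block entirely and passing the network configuration through guestinfo keys, which cloud-init’s VMware datasource can read. The guestinfo.* key names are the ones documented for that datasource; everything else (the resource body, local.metadata, local.userdata) is a hypothetical sketch:

```hcl
resource "vsphere_virtual_machine" "vm" {
  # ... name, resource_pool_id, datastore_id, disks, clone block, etc. ...

  # Instead of clone { customize { ... } }, hand the config to cloud-init.
  # local.metadata would hold a rendered network config document and
  # local.userdata the usual #cloud-config payload (both hypothetical here).
  extra_config = {
    "guestinfo.metadata"          = base64encode(local.metadata)
    "guestinfo.metadata.encoding" = "base64"
    "guestinfo.userdata"          = base64encode(local.userdata)
    "guestinfo.userdata.encoding" = "base64"
  }
}
```

This avoids the perl-based customisation (and its reboot) altogether, but it presumes the cloud image’s cloud-init is new enough to ship the VMware datasource.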