Hello,
I am trying to reboot the VM during provisioning. I am installing a newer kernel and want to reboot so I can remove the old one, which cannot be removed while it is running.
I have tried multiple approaches, but unfortunately none of them worked for me.
I am using the latest Debian Bullseye ISO to create our private cloud image, and up to the reboot everything works fine.
Here is the relevant HCL template:
provisioner "shell" {
environment_vars = [
"BUILD_NAME=${var.build_name}",
"IMAGE_DATE=${var.datestamp}.${var.build_id}"
]
execute_command = "echo 'vagrant' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
pause_before = "2s"
expect_disconnect = "true"
scripts = [
"provision_scripts/grub.sh",
"provision_scripts/apt.sh",
"provision_scripts/kernel.sh",
]
}
provisioner "shell" {
environment_vars = [
"BUILD_NAME=${var.build_name}",
"IMAGE_DATE=${var.datestamp}.${var.build_id}"
]
execute_command = "echo 'vagrant' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
pause_before = "60s"
scripts = [
"provision_scripts/install.sh",
"provision_scripts/configure.sh",
"provision_scripts/setup.sh",
"provision_scripts/cleanup.sh",
"provision_scripts/zerodisk.sh"
]
}
In kernel.sh:
...................
...................
...................
log "Reboot and use the new Kernel"
reboot
#sleep 60
exit 0
Unfortunately, Packer aborts after the reboot with a "Timeout during SSH handshake" error, without running the first script of the second provisioner. The VM is up and running, and sshd is listening on port 22 within 20 seconds after the reboot.
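I also wonder whether detaching the reboot, so that kernel.sh can return exit code 0 before sshd goes down, would make a difference. Roughly like this (only a sketch, I have not verified it yet):

log "Reboot and use the new Kernel"
# Schedule the reboot in a detached background shell so this script can
# exit cleanly and report success to Packer before the connection drops.
nohup bash -c 'sleep 2 && reboot' >/dev/null 2>&1 &
exit 0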
Any idea how to handle reboots between provisioning scripts?
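For example, is the intended pattern to isolate the reboot in its own provisioner and raise the SSH retry limits, roughly like this? (Only a sketch; the start_retry_timeout, ssh_timeout and ssh_handshake_attempts values are guesses on my part.)

# In the source block (guessed values):
#   ssh_timeout            = "10m"
#   ssh_handshake_attempts = 100

# A provisioner that only reboots and is allowed to drop the connection.
provisioner "shell" {
  execute_command   = "echo 'vagrant' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
  expect_disconnect = true
  inline            = ["reboot"]
}

# The next provisioner pauses and then retries until SSH is back.
provisioner "shell" {
  pause_before        = "60s"
  start_retry_timeout = "5m"
  execute_command     = "echo 'vagrant' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
  scripts             = ["provision_scripts/install.sh"]
}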
Thanks