Packer virtualbox-vm builder fails to export a working appliance

Hi Guys

I am attempting to use the virtualbox-vm builder to export an OVF appliance (let’s call it appliance_01) after a number of provisioners have produced a new snapshot (let’s call it snapshot_01) on my VirtualBox VM.

The entire process appears to work, with snapshot_01 being created and appliance_01 subsequently exported to the named output directory.

Unfortunately, appliance_01 cannot be opened and fails with NS_ERROR_INVALID_ARG (0x80070057).

I realize that this error is like a rash across the internet and seems to be VirtualBox’s “catch-all” error.

I have identified a possible cause of the problem but am unsure as to how to use Packer to solve it.

When Packer creates snapshot_01, the VM is left in a state where VirtualBox marks the VDI as “Inaccessible”, and it is in this state that the export is carried out, producing an appliance that cannot be opened.
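For anyone wanting to reproduce the check, I confirm the disk state with VBoxManage’s standard listing (no Packer involvement):

	# Lists every registered hard disk along with its State field
	VBoxManage list hdds

In the failing runs the State for the VM’s VDI is reported as inaccessible, matching what the GUI shows.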

If I manually restore the VM to snapshot_01 and then attempt a manual export, the resulting appliance works perfectly and all the provisioned changes are present.
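For reference, the manual steps that give me a working appliance look like this (the snapshot name matches target_snapshot in the template below; <release-version> stands in for the real version and the output path is just an example):

	# Roll the VM back to the snapshot Packer created
	VBoxManage snapshot "ubuntu-18-04-partitioned" restore "ubuntu-18.04-hardened-<release-version>"

	# Export the restored VM as an OVF appliance
	VBoxManage export "ubuntu-18-04-partitioned" --output /home/paradigmuser/build_vms/appliance_01.ovf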

I have been trying to use vboxmanage_post to perform the restore before the export, but vboxmanage_post seems to run prior to the creation of the snapshot, so it errors out with “snapshot not found”.
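For clarity, this is roughly what I tried inside the builder block (it fails as described, presumably because vboxmanage_post fires before target_snapshot exists):

	"vboxmanage_post": [
		["snapshot", "{{.Name}}", "restore", "ubuntu-18.04-hardened-{{user `paradigm_release_version`}}"]
	]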

My question is this:

Is there a way to either
a) produce snapshot_01 in a way that does not leave the VDI “Inaccessible”, or
b) restore snapshot_01 after it has been created but before the export? (A rough sketch of what I mean by (b) follows below.)
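To be concrete about (b): the only shape I can imagine at the moment is abandoning Packer’s own export, which feels like a workaround rather than a fix. An untested sketch, with skip_export (and presumably keep_registered) flipped to true in the template and the restore/export handled by a wrapper script (the template filename here is a stand-in):

	#!/usr/bin/env bash
	set -euo pipefail

	VM="ubuntu-18-04-partitioned"
	SNAP="ubuntu-18.04-hardened-${CURRENT_BUILD_VERSION}"
	OUT="/home/paradigmuser/build_vms/appliance_01.ovf"

	# Run the Packer build with the export disabled in the template
	packer build template.json

	# Roll back to the snapshot Packer just created, then export manually
	VBoxManage snapshot "$VM" restore "$SNAP"
	VBoxManage export "$VM" --output "$OUT"

I would much rather do this inside Packer itself, hence the question.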

Thanks in advance
regards
Ian Carson

Here’s my Packer template for reference:

{
"variables": {
	"paradigm_release_version": "{{env `CURRENT_BUILD_VERSION`}}",
	"ci_commit_sha": "{{env `CI_COMMIT_SHORT_SHA`}}",
	"dev_name": "{{ env `DEV_NAME` }}",
	"build_timestamp": "{{isotime \"20060102150405\"}}",
	"headless": "true",
	"template_name": "hardening-paradigm-base-partitioned-server",
	"home_directory": "/home/paradigmuser",
	"api_access": "{{env `API_ACCESS`}}",
	"vm_name": "ubuntu-18-04-partitioned"
},
"builders": [
	{
		"name": "virtualbox-{{user `template_name`}}",
		"type": "virtualbox-vm",
		"communicator": "ssh",
		"headless": "{{user `headless`}}",
		"ssh_username": "xxxxxxxxx",
		"ssh_password": "xxxxxxxx",
		"ssh_timeout": "1h",
		"guest_additions_mode": "disable",
		"shutdown_command": "sudo shutdown -h 2",
		"output_directory": "{{user `home_directory`}}/build_vms",
		"vm_name": "{{user `vm_name`}}",
		"boot_wait": "10s",
		"attach_snapshot": "Base_State",
		"target_snapshot": "ubuntu-18.04-hardened-{{ user `paradigm_release_version` }}",
		"force_delete_snapshot": "true",
		"keep_registered": "false",
		"skip_export": "false"
	}
],
"provisioners": [
	{
		"type": "file",
		"only": ["virtualbox-{{user `template_name`}}"],
		"source": "{{template_dir}}/hardening/templates",
		"destination": "{{user `home_directory`}}",
		"pause_before": "5s"
	},
	{
		"type": "shell",
		"only": ["virtualbox-{{user `template_name`}}"],
		"execute_command": "sudo -S -E bash -eu '{{.Path}}'",
		"scripts": ["./base/hardening/hardening.sh"],
		"pause_before": "15s"
	}
]
}

Here’s some more information which might help your thinking.

The problem is intermittent - roughly 1 run in 10 produces a valid snapshot that leaves the VDI in an accessible state.

Could all this be related to shutdown timing/delay? My shutdown_command (sudo shutdown -h 2) only schedules the halt two minutes out, and there must have been a good reason to make the various shutdown settings available.
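If timing is the culprit, I wondered whether builder settings along these lines might change the behaviour (untested on my side; shutdown_timeout and post_shutdown_delay are documented options for the VirtualBox builders):

	"shutdown_command": "sudo shutdown -h now",
	"shutdown_timeout": "10m",
	"post_shutdown_delay": "30s"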