Building macOS 12.x VMs with packer and Fusion

Starting a new thread for discussion on building macOS 12.x virtual machines with packer and VMware Fusion 12.0. macOS 12.0.1 Monterey will be released shortly. My packer template for building these complicated VMs has been updated on GitHub.

For general questions, please use this thread. If you have a bug or feature request, use GitHub Issues.

Thanks,
Blake


The previous thread does have some good info, so linking it here.


Just want to say a big thank you for this repo. I was able to get a Vagrant box out of the configuration and replace macinbox with macOS Monterey.

Just a note to say that I saw some issues starting the box in VMware on a Mac mini after building on a MacBook Pro. I think there may be a bug (or host dependency) in the packer VMware plugin, as cores per socket is being set to a value that causes issues (it appears to be set to the same value as the number of CPUs/cores configured). I also had to update the model ID and board ID. So if you see boot failures, with VMware refusing to start because of misconfigured CPUs, check the CPU, socket, and board ID options in the vmx file. I fixed it by manually editing the vmx file.
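For reference, a minimal sketch of the kind of manual vmx edit described above. The key names (`numvcpus`, `cpuid.coresPerSocket`, `hw.model`, `board-id`) are standard vmx settings, but the sample file and the specific model/board values below are placeholders for illustration, not taken from the repo:

```shell
#!/bin/sh
set -e
# Sketch: patch CPU topology and Mac model/board IDs in a .vmx file so they
# match the new host. The sample file and values are fabricated; substitute
# the IDs for your own hardware.
VMX=/tmp/macos12.vmx

cat > "$VMX" <<'EOF'
numvcpus = "4"
cpuid.coresPerSocket = "4"
hw.model = "MacBookPro16,1"
board-id = "Mac-E1008331FDC96864"
EOF

# Drop cores-per-socket to 2 and switch model/board to placeholder values
# for the new host (a temp-file rename keeps this portable across sed variants).
sed -e 's/^cpuid.coresPerSocket.*/cpuid.coresPerSocket = "2"/' \
    -e 's/^hw.model.*/hw.model = "Macmini8,1"/' \
    -e 's/^board-id.*/board-id = "Mac-7BA5B2DFE22DDD8C"/' \
    "$VMX" > "$VMX.tmp" && mv "$VMX.tmp" "$VMX"

grep coresPerSocket "$VMX"   # prints: cpuid.coresPerSocket = "2"
```

The same edits can of course be made by hand in the Fusion VM's .vmx file while the VM is powered off.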

I'm currently looking at replacing the packer user config with a direct Vagrant option, so this is a drop-in replacement for getting a Vagrant-compatible box.

Once I have this complete I may try to submit a PR from my fork.


Hi @trodemaster — Thanks for publishing the repo.

I notice it mentions this about the M1 series:

“These templates only support x86 platform as Apple has introduced breaking changes with the new Applesilicon platform.”

Do you know what was broken, and thus what needs to be fixed for it to work on M1? If so, would you mind elaborating?

Supporting the new Apple silicon architecture is up to the hypervisor vendors and Apple. It's looking like VMware Fusion won't support macOS guests, as Apple requires the use of its own virtualization framework, so the result is essentially not a VMware VM. I have no interest in supporting Parallels, and qemu support could show up in a year or two.

Board type and CPU settings are all parameterized. Depending on your host hardware, you will want to adjust those to match. The timing for the boot wait can also be set in the variables file. If the VM is doing random things in non-English, you need to extend the boot wait time.
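As a sketch of what that might look like, a local variables file passed to `packer build` with `-var-file`. The variable names below are assumptions for illustration; check the template's actual variable declarations for the real names:

```shell
#!/bin/sh
set -e
# Sketch: write a Packer variables file that extends boot timing.
# boot_wait and boot_key_interval are placeholder variable names, not
# necessarily what this template declares.
cat > /tmp/timing.pkrvars.hcl <<'EOF'
boot_wait         = "5m"    # wait longer before typing the boot command
boot_key_interval = "150ms" # slow down simulated keystrokes
EOF

# Then build with something like:
#   packer build -var-file=/tmp/timing.pkrvars.hcl macOS_12.pkr.hcl
grep boot_wait /tmp/timing.pkrvars.hcl
```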

Apologies if I wasn't clear. I adjusted through the mechanisms you provided, but when transferring the resulting image to another machine, VMware would not run it due to a mismatched parameter in the vmx file. More an FYI than a criticism. It's not clear how the plugin is setting that value, or whether it has some dependency on the machine the image was built on. I don't have the message to hand, but VMware was complaining about mismatched CPU and socket counts.

Thank you for elaborating.

I was going to ask if you knew whether VirtualBox supported the M1 yet, but decided to Google first and found a forum moderator claiming there are no plans to support non-x86 architectures.

Given that, I looked around and found UTM, which targets and supports macOS directly, although it will require an ARM version of Windows in order to run Windows. And of course Windows support is not yet mature.

So it looks like when I get time to switch to my new M1 Mac, I may have to write a builder plugin for UTM…

Hey, so UTM is an interesting project for sure. I found that they have a patched version of qemu posted at GitHub - utmapp/qemu: qemu with iOS host support, and I have been using that on Apple silicon for Linux VMs. As of now it will not boot macOS on Apple silicon. That version of qemu does work with packer directly, so there's no real need to write a new builder.

If qemu gains the ability to boot macOS on Apple silicon, I will look at supporting that with my packer template. Until then I have been using GitHub - KhaosT/MacVM: macOS VM for Apple Silicon using Virtualization API, and just running my config scripts by hand as needed.

A lot of people are just using physical systems on Apple silicon for testing now. With the Erase All Content and Settings feature, I expect more people to start doing that, as it provides snapshot-style reverting to a clean state on physical systems.


@trodemaster

Whoa!

Thanks for sharing github.com/KhaosT/MacVM; I will definitely be looking more into that.

Have you tried this with the latest 12.2 build? Is there a way to enable debugging of the initial installer? It looks like the user-installation and SSH-enablement packages are not being applied successfully, and I'm just interested in how you debug this.

I have been building 12.2 through all the betas and the release without issue. If you haven't had any successful builds, keep in mind the boot command process is very timing sensitive. Start by extending some of the timing values, and confirm the resource limits are appropriate for your host system.

https://github.com/trodemaster/packer-macOS-11#adjust-timing

Not clear this is a timing issue. I see the boot process go through without issue, and it executes the bootstrap.sh downloaded over HTTP. I see it go through startosinstall with the packages as parameters. However, when the node reboots, the packages have not been installed, the default machine-setup screen sequence runs, and a user needs to be registered manually. Once through this, the install user is not configured, nor is the SSH daemon. I'm just interested in whether there is a way to capture /var/log/install.log prior to the reboot to see what the installer has done. Right now the device restarts directly from startosinstall, so there is no opportunity I have found to review it; i.e., further commands in the shell script are not run.

Add an exit or sleep 9999 to this line:
https://github.com/trodemaster/packer-macOS-11/blob/fcfc553539f868d6e0fa7398e90ec3da56187a91/http/bootstrap.sh#L11

You can then run the following line by hand to debug. I can't remember if there is a way to stop the reboot:
https://github.com/trodemaster/packer-macOS-11/blob/fcfc553539f868d6e0fa7398e90ec3da56187a91/http/bootstrap.sh#L17

Keep in mind the installation of packages happens after the reboot. When macOS is in recovery, it's really just writing a bunch of stuff to the target disk before booting into it. Once it has booted from the VM disk is when the installer does the work.

Have you looked at the install.log on the target disk?
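A minimal sketch of what to look for. The log path is the standard macOS `/var/log/install.log`, but the sample entries and package names below are fabricated for illustration:

```shell
#!/bin/sh
set -e
# Sketch: grep the installer log on the booted target disk for evidence that
# startosinstall's --installpackage payloads actually ran.
LOG=/tmp/install.log   # on a real guest this would be /var/log/install.log

# Fabricated sample entries, standing in for real install.log content:
cat > "$LOG" <<'EOF'
2022-02-10 12:00:01 osinstallersetupd: Adding package /private/tmp/packer.pkg
2022-02-10 12:09:44 installd: PackageKit: Installed "packer.pkg"
EOF

# Package names here are placeholders; match on whatever your build passes in.
grep -iE 'installpackage|packer\.pkg|setupsshlogin' "$LOG"
```

If no such entries exist on the target disk, the packages were never handed off to the installer, which narrows the problem to the startosinstall invocation rather than the packages themselves.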

Thank you, I'll try this. I reviewed the install log on the built machine but didn't see the packages being installed. I'll retry to see if I accidentally missed it.

Thanks for your help. I've tried many things over the last few days without success. Some observations:

  • Using the technique you suggested, I can see startosinstall preload the OS and include the CLI packages in the UnwrapperPackages subdirectory.
  • I tried reverting to the original packages (packer and setupsshlogin) and also tried using signed copies of these, without any difference.
  • For some reason, when using buildprereqs.sh and selecting a prior release of the OS, the machine always builds to the latest OS version. I can create different ISOs with different shasums, but they always end up on the latest (it was 12.2 until today, when it switched to 12.2.1 as part of the upgrade).
  • I can see the installation log on the built machine (via “log show --info --debug”) and can see the multiple phases of the installation, but at no point do I see any reference to the passed-in packages.

The image always comes up fresh with the standard registration screen sequence (I created a differently named account to see whether the packer user was there in the background, but I'm not seeing any other account apart from the newly registered one).

All very odd. But again… thanks for the pointers.

If you have customized the packages and it's failing, then the packages are the issue for sure. They are just really touchy when installed via startosinstall, and as you have seen, it's a nightmare to debug. I have confirmed the packages I'm using are the same ones that are in git on the main branch. Most people who try to change those packages just want a different user/pass combo. An alternative is to add a script to the end of the build that sets up the user you want, via any number of community scripts out there. It's much more reliable than tweaking the packages. That's why the packer user is UID 502, leaving the normal first UID of 501 available.
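One hedged sketch of that alternative: a small script you could run as a final provisioner, using macOS's `sysadminctl` to create the real user instead of customizing the startosinstall packages. The account name and password below are placeholders:

```shell
#!/bin/sh
set -e
# Sketch: generate a post-build provisioner script that creates the desired
# user with sysadminctl. Username/password are placeholders; the script would
# run inside the guest as the packer user (UID 502), via sudo.
cat > /tmp/create_user.sh <<'EOF'
#!/bin/sh
set -e
sudo sysadminctl -addUser realuser -fullName "Real User" \
  -password 'changeme' -admin
# Keep remote login (SSH) enabled so the new user can connect:
sudo systemsetup -setremotelogin on
EOF
chmod +x /tmp/create_user.sh
grep sysadminctl /tmp/create_user.sh
```

Because this runs after the OS is fully installed, it avoids the fragile package-signing and startosinstall-payload path entirely.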

For building a version of the OS other than the latest beta, use the packer variable seeding_program = “none”.
If you don’t want any upgrades to happen, you can remove the 2 software upgrade scripts from the packer template.

Just putting this out here in case anyone else has issues. It seems to come down to two main problems:

  • The submodule used to build ISOs appears to cache some data, and I was getting corrupt installers (always the latest version rather than the version actually selected through the script). These seemed to work for the OS install, but always pulled a more recent version than expected. Creating the ISO directly from a downloaded installer (softwareupdate --fetch-full-installer --full-installer-version <version>) and then building the ISO was more reliable in ensuring a specific version.
  • The user-install package was also corrupt. I have not found the specific cause, but running the create-user script outside of a bash script produced a valid package, while running it inside a bash script resulted in a broken package. I debugged this by manually copying the package to a newly built guest, running it, and finding that the package failed. Rebuilding the package and testing the installation was required.
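For anyone following along, a sketch of the more reliable installer-to-ISO path from the first bullet. These are the standard macOS `softwareupdate`/`hdiutil`/`createinstallmedia` steps, but the version number, image size, and paths are placeholders, and the script must be run on a macOS host with admin rights:

```shell
#!/bin/sh
set -e
# Sketch: generate a helper that fetches a specific full installer and
# converts it to a boot ISO. Version and paths are placeholders.
cat > /tmp/make_iso.sh <<'EOF'
#!/bin/sh
set -e
VER=12.2.1   # placeholder: the exact macOS version you want

# Pin the installer version instead of taking whatever is latest:
softwareupdate --fetch-full-installer --full-installer-version "$VER"

# Build a writable image, load the install media into it, convert to ISO:
hdiutil create -o /tmp/Monterey -size 16g -volname Monterey \
  -layout SPUD -fs HFS+J
hdiutil attach /tmp/Monterey.dmg -noverify -mountpoint /Volumes/Monterey
sudo "/Applications/Install macOS Monterey.app/Contents/Resources/createinstallmedia" \
  --volume /Volumes/Monterey --nointeraction
hdiutil detach "/Volumes/Install macOS Monterey"
hdiutil convert /tmp/Monterey.dmg -format UDTO -o /tmp/Monterey.cdr
mv /tmp/Monterey.cdr /tmp/Monterey.iso
EOF
chmod +x /tmp/make_iso.sh
grep createinstallmedia /tmp/make_iso.sh
```

Verifying the shasum of the resulting ISO against a known-good build is a quick way to confirm you really got the version you asked for.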

This module is great, as I can now use it to build a Vagrant package for our automated build pipeline as a replacement for macinbox.

Thanks trodemaster!

The repo for this template has been updated with changes to support macOS 12.3 and a new tool for generating the needed boot ISO files.


@trodemaster, if you have an M1 available (I haven’t), you might have a look at this:

I found out about this because I was following up on Apple M1 support for GitHub Actions.