After using Packer ad hoc on a few occasions, I have now worked through all the tutorials (HashiCorp Learn, Getting Started).
I am working with smaller companies (VFX / Animation / XR / AR studios) who are looking to connect their on-prem resources (powerful DCC workstations, usually running Windows 10) to cloud infrastructure (burst rendering, virtual workstations, remote work).
In the Packer documentation, I keep looking for the "reference architecture way" of provisioning bare-metal machines with a configuration identical to cloud instances.
I understand that HashiCorp is probably more focused on the fully cloud-native workflow… and that to provision software, a concrete build machine actually needs to run, and bare metal is rather unpredictable compared to, for example, AWS EC2.
→ The current approach seems to be to use Packer with VirtualBox (or VMware), somehow create an ISO from the result, and push it to bare-metal machines manually via USB sticks or with PXE
→ Or to switch to a dedicated bare-metal provisioning tool (for example, OpenStack Ironic).
→ Possible builders for bare-metal targets might be OpenStack, VMware, or VirtualBox?
It looks like having some form of hypervisor helps to automate the process…
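To illustrate what I mean by "identical config": as far as I understand, a single Packer template can declare both a cloud builder and a local hypervisor builder and run the same provisioners against both, so the AMI and the local image (which could then be converted for bare metal) stay in sync. Here is a rough sketch in HCL2 — all names, paths, and the `install-dcc-tools.ps1` script are hypothetical placeholders, not a tested configuration:

```hcl
# Hypothetical sketch: one template, two targets, shared provisioning.
source "amazon-ebs" "windows" {
  region        = "us-east-1"          # placeholder region
  instance_type = "t3.large"           # placeholder instance type
  communicator  = "winrm"
  # source AMI filter, WinRM credentials, etc. omitted
}

source "virtualbox-iso" "windows" {
  iso_url      = "./win10.iso"         # placeholder ISO path
  iso_checksum = "sha256:..."          # placeholder checksum
  communicator = "winrm"
  # guest OS type, autounattend floppy, etc. omitted
}

build {
  sources = [
    "source.amazon-ebs.windows",
    "source.virtualbox-iso.windows",
  ]

  # The same scripts run on both targets, keeping the images in sync.
  provisioner "powershell" {
    scripts = ["scripts/install-dcc-tools.ps1"]  # hypothetical script
  }
}
```

With a layout like this, `packer build -only=virtualbox-iso.windows .` should build just the local image while the provisioning steps stay shared with the cloud build — but I would be glad to know whether this is actually the recommended pattern.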
I am rather new to these topics and not an experienced ops engineer…
I spent some time looking at Stefan Scherer's GitHub repository ( GitHub - StefanScherer/packer-windows: Windows Templates for Packer: Win10, Server 2016, 1709, 1803, 1809, 2019, 1903, 1909, 2004, Insider with Docker ) and was able to create AWS AMIs and VirtualBox images.
I believe all the building blocks are already there…
It would be nice to have a short section or paragraph in the Packer documentation describing a best-practice scenario and pointing people in the right direction.
Thanks and best regards, Martin