Thank you for your work on Packer and for making it open source.
Packer has helped us a lot and we appreciate the shared knowledge it encapsulates.
Nothing that follows should be taken to suggest something is broken in Packer or Vault.
We are relatively new to both Packer and Vault, which makes us suspect we are doing it wrong when it comes to using Vault data in Packer builds. So we have three questions, along with details of our approach for context.
Hopefully those details will help someone else too.
Questions
- Does anyone know of docs or blogs outlining Packer+Vault best practices?
- Is Vault integration considered finalized, or is it still "in progress"? We couldn't find an open issue in the Packer repo tracking Vault integration.
- Is our workaround an approach people think could be proposed upstream?
Context/Requirements:
- Build a Packer image on multiple cloud providers and locally, using build scripts that are unmodified and contain no conditional branching on the build environment (this keeps the scripts easily auditable).
- The Vault server cannot be publicly visible or accessible in any way - no publicly accessible proxies etc.
- Ease audits by keeping the set of Packer-specific secrets that needs auditing as small as possible and the same across Packer projects.
Pain Point:
When building an image on cloud vendor A, B, C, etc., our Vault secret access, which works locally, no longer works, because the Vault server is not reachable from the cloud vendor's infrastructure.
It seems we need to add to our Packer definitions something like this for every Vault data item:
{
  "variables": {
    "my_secret": "{{ vault `myapp/info/tain/ment` `my-value` }}"
  }
}
Then we also need to add every secret item to the sensitive-variables list:
"sensitive-variables": [
...,
"my_secret",
...
]
This felt all kinds of wrong.
In particular, we decided the only values allowed in the sensitive-variables list are variables related to a builder; these sensitive variables are environment variables on a developer's machine. This ensures the secrets that need to be audited are the same across all Packer projects.
For example:
"sensitive-variables": [
"digitalocean_token",
"linode_token",
"vault_token"
]
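
For what it's worth, here is a minimal sketch of how we wire those builder secrets in, assuming each token is exported as an environment variable on the operator's machine (the environment variable names below are just our convention):

{
  "variables": {
    "digitalocean_token": "{{ env `DIGITALOCEAN_TOKEN` }}",
    "linode_token": "{{ env `LINODE_TOKEN` }}",
    "vault_token": "{{ env `VAULT_TOKEN` }}"
  },
  "sensitive-variables": [
    "digitalocean_token",
    "linode_token",
    "vault_token"
  ]
}

Since the env template function is only evaluated in user variable defaults, the token values themselves never appear in the template.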
Workaround
Our current workaround is this:
1. Proxy the private Vault server to http://127.0.0.100:8200 on the Packer operator's machine.
2. All image-build Shell/Ansible/Chef/Puppet/Salt scripts use the Vault server address from step 1.
3. Packer sets up an SSH reverse tunnel making Vault available on http://127.0.0.100:8200 on the cloud vendor instance launched by Packer (sketched below).
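
To give a feel for the boilerplate, here is a rough sketch of step 3 using a shell-local provisioner to open the tunnel before the build scripts run. The key path, SSH user, script names, and the build template function (Packer 1.5+) are illustrative assumptions rather than a drop-in recipe, and binding the remote end to 127.0.0.100 may require GatewayPorts clientspecified in the instance's sshd_config:

"provisioners": [
  {
    "type": "shell-local",
    "inline": [
      "ssh -f -N -o ExitOnForwardFailure=yes -i ~/.ssh/packer_build_key -R 127.0.0.100:8200:127.0.0.100:8200 {{ build `User` }}@{{ build `Host` }}"
    ]
  },
  {
    "type": "shell",
    "environment_vars": ["VAULT_ADDR=http://127.0.0.100:8200"],
    "scripts": ["scripts/install.sh"]
  }
]

A matching shell-local step at the end of the provisioner list tears the tunnel down again; that setup/tear-down pair is the boilerplate we end up copying into every project.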
The upshot is we don't use the vault function anywhere in our Packer files. Yes, we could use Packer's vault function to access the Packer builder secrets; we don't for reasons of taste, nothing substantial.
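
For completeness, using the vault function for those builder secrets would look roughly like this (the secret path and key are hypothetical). It needs no tunnel because the function is evaluated on the operator's machine, using VAULT_ADDR and VAULT_TOKEN from the environment:

{
  "variables": {
    "digitalocean_token": "{{ vault `secret/data/packer/digitalocean` `token` }}"
  },
  "sensitive-variables": ["digitalocean_token"]
}

We prefer keeping the builder tokens as plain environment variables so the audited set stays identical across projects, as described above.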
The downside is that we have two scripts for SSH tunnel setup and tear-down that we have to copy into every Packer project, as well as some boilerplate Packer configuration we have to insert.
The fact that we carry so much boilerplate to every project makes us think this should either live upstream, or that people are doing something else we haven't thought of.
If anyone can see an exploit in this setup we'd appreciate a heads-up.
Otherwise, is this an approach people think could be proposed upstream?
Notes:
- The Vault server used for Packer builds is a special "bootstrap" Vault server. It holds only secrets related to image building. For example, some of its secrets are the address, port, and certificates for the Vault server used in production.
- We use HTTP to simplify the setup on the remote instance: because we use SSH tunnels, an SSL connection would not provide any security benefit that, in our opinion, outweighs the setup complications.
- We should say we really don't like introducing another use of SSH. We are working SSH out of our infrastructure, but because it is only used at the build phase we live with it. In case it helps anyone: a systemd unit removes SSH on shutdown, and we use WireGuard for access during the dev and test phases.