Packer integration with Vault: Best practices

Thank you for your efforts on Packer and for making them open source.
Packer has helped us a lot and we appreciate the shared knowledge it encapsulates.

Nothing that follows should be taken to suggest something is broken in Packer or Vault.

We are relatively new to both Packer and Vault. This makes us suspect we are doing it wrong when it comes to using Vault data in Packer builds. So we have three questions, and we have provided details of our approach for some context.
Hopefully our approach may help someone else.


  1. Does anyone know of docs or blogs outlining Packer+Vault best practices?
  2. Is Vault integration considered finalized, or is it still ‘in progress’? We couldn’t find an open issue in the Packer repo tracking Vault integration.
  3. Is our workaround an approach people think could be proposed upstream?


Goal:

Build a Packer image on multiple cloud providers and locally, using scripts that are unmodified and contain no conditional branching on the build environment (this keeps build scripts easily auditable).
The Vault server cannot be publicly visible or accessible in any way - no publicly accessible proxies, etc. To ease audits, the set of Packer-specific secrets that need to be audited should be the smallest possible set across Packer projects.

Pain Point:

When building an image on cloud vendors A, B, C, etc., our Vault secret access, which works locally, no longer works - because the Vault server is not reachable from the cloud vendor’s infrastructure.
It seems we need to add to our Packer definitions something like this for every Vault data item:

  "variables": {
    "my_secret": "{{ vault `myapp/info/tain/ment` `my-value` }}"
  }

Then we also need to add every secret item to the sensitive-variables:

 "sensitive-variables": ["my_secret"]

This felt all kinds of wrong.
In particular, we decided that the only values allowed in the sensitive-variables list are variables related to a builder; these sensitive variables are environment variables on a developer’s machine. This ensures the secrets that need to be audited are the same across all Packer projects.
For example:

  "sensitive-variables": ["vault_token", "linode_token"]


Our current workaround is this:

  1. Proxy the private Vault server onto the Packer operator’s machine.
  2. All image-build Shell/Ansible/Chef/Puppet/Salt scripts use the Vault server address from step 1.
  3. Packer sets up an SSH reverse tunnel, making Vault available on the cloud vendor instance launched by Packer.
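For concreteness, here is a minimal sketch of what steps 1 and 3 amount to. All hostnames, the bastion, and the port are illustrative assumptions, not our real infrastructure; the sketch only prints the two ssh commands rather than executing them:

```shell
#!/bin/sh
# Illustrative sketch only: hostnames, the bastion, and the port are
# assumptions. Commands are printed, not executed, so this is safe to run.
VAULT_HOST="vault.internal.example"        # hypothetical private Vault server
VAULT_PORT=8200
BUILD_HOST="${1:-build-instance.example}"  # instance launched by Packer

# Step 1: forward the private Vault server onto the operator's machine.
LOCAL_PROXY="ssh -f -N -L ${VAULT_PORT}:${VAULT_HOST}:${VAULT_PORT} bastion.example"

# Step 3: reverse tunnel so the build instance reaches Vault at 127.0.0.1:8200.
REMOTE_TUNNEL="ssh -f -N -R ${VAULT_PORT}:127.0.0.1:${VAULT_PORT} ${BUILD_HOST}"

echo "$LOCAL_PROXY"
echo "$REMOTE_TUNNEL"
```

The teardown script is the mirror image: kill the two ssh processes started here.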

The upshot is we don’t use the vault function anywhere in our Packer files.
Yes, we could use Packer’s vault function to access the Packer Builder secrets; we don’t, for reasons of taste rather than anything substantial.

The downside is we have two scripts for SSH tunnel setup and teardown that we have to copy into every Packer project, as well as some boilerplate Packer configuration we have to insert.

The fact that we carry so much boilerplate code into every project makes us think this should either be upstream, or that people are doing something else we haven’t thought of.

If anyone can see an exploit in this setup we’d appreciate a heads-up.

Otherwise, is this an approach people think could be proposed upstream?


  • The Vault server used for Packer builds is a special ‘bootstrap’ Vault server. It holds only secrets related to image building; for example, some of its secrets are the address, port, and certificates for the Vault server used in production.
  • We use HTTP to simplify the setup on the remote instance: because we use SSH tunnels, the SSL connection does not provide any security benefit that outweighs, in our opinion, the setup complications.
  • We should say we really don’t like introducing another use of SSH. We are working SSH out of our infrastructure, but because it is only used at the build phase we live with it. In case it helps anyone: a systemd unit removes SSH on shutdown, and we use WireGuard for access during the dev and test phases.

I’m not sure I understand what breaks when you try to use the vault function in Packer. Is it that you can’t connect to the Vault server?

You mention that you aren’t using the vault function at the moment. Does that mean you have shell scripts which run Packer and fetch secrets, exposing them as environment variables?

Could you explain a bit more about exactly what is broken for you? We use the vault function quite happily.

Apologies for the confusion - I’ll edit the OP to make this clearer.

We are not suggesting anything is broken.

The vault function would work as a general way of accessing secrets locally and passing them into the cloud.
We can’t use it because it increases the risk of secret disclosure. We acknowledge sensitive-variables is available, but we aren’t working in an org/domain where that list is an acceptable safeguard.

Apart from accidental secret disclosure, we didn’t like (a statement of taste) that every time a developer added data to Vault for an image-build script/recipe/artifact, that data had to bubble up into the Packer file as a vault function call.
We acknowledge this is a question of taste, and some may even consider this percolation a safeguard.
In addition, by using the vault function the Packer operator needs to know something about Vault. Currently they do not, and we’d like to keep it that way. Again, we’re not saying this is best practice, but we do like the fact that our Packer on-boarding involves only Packer.

We’d love to hear alternatives and what the costs/benefits are.

Not in general. We do this only for the specific secrets required to start an instance for a Packer Builder, for example vault_token, linode_token, etc. The vault_token naturally expires after an hour, and the others we can revoke easily enough. Currently there are only seven such variables. If we were to add another cloud vendor, the token required by its Packer Builder would be added.
Otherwise, this list of sensitive data never changes between Packer projects.

At the moment the only variables we allow in sensitive-variables are vault_token and the tokens/keys/secrets required by a Packer Builder; the safeguards required to ensure no additional variable names are added to source control do not pose an audit challenge.

To be clear, this is a post about best practices, given the constraints we face.

No worries. We are relative newcomers to Packer and Vault.
Are we correct in understanding that, without a public Vault server, if you were to add a Vault secret to an image configuration, say via Ansible, you would inject this secret into the Packer script via the vault function?

Any other workarounds you’ve come across? We didn’t think we could get WireGuard to work easily.
By easily we mean the setup and teardown scripts looked like they would be more complex than with SSH.

Our Vault server isn’t public, but is accessible via our internal AWS & VMware infrastructure. In general we don’t put many secrets into our images, instead injecting them at runtime (e.g. via cloud init). Only a minimal amount is “baked in”, but that does include some values which are fetched from Vault with the vault function (for example authentication details for our VMware build server).

Our build scripts are written in Ansible, which also calls Vault (using the lookup) for a few values too.

We don’t use any form of SSH/VPN software when using Packer (other than general office VPN if working remotely). Our CI system runs Packer which connects to our Vault server as needed. The images are built in both AWS and VMware.


Hi, some progress. It turns out our intuition (that this should either be upstream, or that we were missing something) was likely correct.
After some digging, it appears the ‘correct’ way of doing this is to use the SSH communicator options:

  1. ssh_local_tunnels
  2. ssh_remote_tunnels

As you’ll see from those links, the documentation is absent.
Some guidance can be gleaned from Packer issue #6976.
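Untested on our side, but a builder stanza might look roughly like this. The builder type is arbitrary, and we are assuming the tunnel spec format mirrors OpenSSH -L/-R syntax (port:host:hostport):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "communicator": "ssh",
      "_comment": "Assumed: reverse-tunnels the operator's local Vault proxy (127.0.0.1:8200) onto the build instance; spec format assumed to mirror OpenSSH -R.",
      "ssh_remote_tunnels": ["8200:127.0.0.1:8200"]
    }
  ]
}
```

If this works as conjectured, it would replace our hand-rolled setup/teardown scripts entirely.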

We haven’t yet had time to try out this conjecture.

Hope this helps someone in the meantime.

Some additional insight into the scope and limits of ssh_local_tunnels and ssh_remote_tunnels (and a use case) is described in packer-plugin-sdk issue #34.

Hope that helps.