Authenticating to Vault from a script inside your container

This is probably a dumb question, as I can’t find the answer anywhere, but I’m confused about HashiCorp Vault and how you actually retrieve the secrets.

Part of the idea of Vault is that you store your secrets there and don’t need to stick a bunch of sensitive values in environment variables, which are easy to inspect.

Assuming you have a Vault service running somewhere with some secrets in it, and a script inside your Docker container needs a token to authenticate against Vault and fetch those secrets, how does the token get into the container in the first place? If you pass it in as an environment variable, that surely defeats the point, since someone could just take the token. Do you mount a volume containing it? Or something more inventive?


There are a variety of ways to do this… the general problem is called secure introduction: how a service/server/container gets an initial identity it can use to authenticate to Vault and retrieve the secrets it needs.
AppRole is one way. Others include AWS auth (i.e. use IAM or EC2 instance metadata to trust a server), Vault Agent (set up the agent on an instance securely, then have the apps talk only to that local agent under a defined policy), etc.
Take a look here - https://learn.hashicorp.com/vault/identity-access-management/iam-secure-intro and see if that answers your question
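Whichever method delivers the credentials, the end state from the script’s point of view usually looks something like the sketch below. This is only a rough illustration over Vault’s HTTP API; the KV v2 mount at `secret/`, the secret path, and the files the RoleID/SecretID are read from are placeholders, not anything specific to your setup.

```python
# Rough sketch: AppRole login over Vault's HTTP API, then reading a KV v2 secret.
# VAULT_ADDR, the credential files, and the secret path are placeholders.
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")


def approle_login(role_id: str, secret_id: str) -> str:
    """Exchange a RoleID/SecretID pair for a client token."""
    resp = requests.post(
        f"{VAULT_ADDR}/v1/auth/approle/login",
        json={"role_id": role_id, "secret_id": secret_id},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["auth"]["client_token"]


def read_kv2_secret(token: str, path: str) -> dict:
    """Read a KV version 2 secret; the payload sits under data.data."""
    resp = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/{path}",
        headers={"X-Vault-Token": token},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["data"]["data"]


if __name__ == "__main__":
    # How role_id/secret_id reach the container is exactly the secure
    # introduction problem above; files are used here purely for illustration.
    role_id = open("/etc/vault/role_id").read().strip()
    secret_id = open("/etc/vault/secret_id").read().strip()
    token = approle_login(role_id, secret_id)
    print(read_kv2_secret(token, "myapp/config"))
```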

That odd occurrence when you ask a question on a forum and a Twitter acquaintance replies! Unless there are two @mikegreen’s at HashiCorp.

Thanks Mike, that link was very useful. I’ll dig into it and see what I can concoct.

Magicaltrout.

Oh, hey, Tom! Good to hear from you.
Only one of us mikegreens here, so far.
Funny timing, I just cleaned out a few forks of Saiku on GitHub over the weekend. Small world. There are still a few hundred thousand securities being priced every day by a can-and-string implementation of Mondrian + Saiku in a really big bank from back then :slight_smile:

Take a gander at https://blogs.uequations.com/2020/01/01/injecting-dynamic-database-credentials-into-k8s-pods-via-vault/ as well if you’re heading towards the Kubernetes world. The agent-injector has had a bit of traction lately, though, for good or bad, there are numerous ways to auth instances: at image build time, in the pipeline, post-deployment, etc.
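The nice property of the injector pattern is that the application never handles a Vault token at all; it just reads whatever file the agent renders. A trivial sketch, assuming the injector’s default /vault/secrets/ render path and a hypothetical db-creds template that renders JSON:

```python
# Sketch: consuming a secret rendered by the Kubernetes agent-injector.
# /vault/secrets/ is the injector's default render path; the filename
# "db-creds" and its JSON format are assumptions based on a hypothetical template.
import json

with open("/vault/secrets/db-creds") as f:
    creds = json.load(f)

print(creds["username"])
```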

Hi all! I guess I’m here for this topic’s anniversary :cake:
I have been trying to find the docs or topic for my specific challenge, and this is the closest so far, so I’m hoping whoever reads this will have some advice.

My challenge:

I have a Vault instance with AppRole authentication enabled.
I want to use AppRole auto-auth to run Vault Agent in the containers, in order to provide them with some templated configuration files and a renewable token.

Details

Some more details which may be pertinent:

  1. The containers are Jenkins build agents.
  2. I want Jenkins to know only about the AppRole Role ID and Secret ID - it can read them from its own credentials store.

Some constraints:

  1. I cannot store any sensitive information in the container images - they are publicly accessible, and must authenticate and retrieve secrets only at run time, using the Vault Agent.
  2. I want to inject the Role ID and Secret ID from environment variables.
  3. I don’t want to have to change the entrypoint, but I would accept that if it is the only way to do what I need to do.

The desired behaviour of the container should be as follows:

  • With VAULT_ROLE_ID and VAULT_SECRET_ID set in the environment, run a container from the image, which already includes the Vault Agent configuration.
  • Execute vault agent somehow to authenticate and write the sinks.

Now, I can accept piping the environment variables passed to the container through to the agent. But since I already have an entrypoint, I can’t think of the “right” way to start the Vault Agent.

Do I have to run it once-off with exit_after_auth? Do I pass it to an init like tini or supervisord or something?
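To make the once-off idea concrete, here is roughly the sort of wrapper entrypoint I have in mind - purely a sketch, with assumed paths for the agent config and for the RoleID/SecretID files that its auto_auth approle method would point at:

```python
#!/usr/bin/env python3
# Sketch of a wrapper entrypoint for the once-off exit_after_auth variant.
# The file locations below are assumptions; the baked-in agent config would
# point its auto_auth approle method at the same role_id/secret_id file paths.
import os
import subprocess
import sys

ROLE_ID_FILE = "/tmp/role_id"
SECRET_ID_FILE = "/tmp/secret_id"
AGENT_CONFIG = "/vault/agent.hcl"  # baked into the public image, contains no secrets


def write_credential(path: str, value: str) -> None:
    """Write an AppRole credential to the file the agent config expects."""
    with open(path, "w") as f:
        f.write(value)
    os.chmod(path, 0o600)


def main() -> None:
    # 1. Move the AppRole credentials from the environment onto disk, then
    #    drop them from the environment so the real workload never sees them.
    write_credential(ROLE_ID_FILE, os.environ.pop("VAULT_ROLE_ID"))
    write_credential(SECRET_ID_FILE, os.environ.pop("VAULT_SECRET_ID"))

    # 2. Run the agent once: authenticate, write the token sink and the
    #    templated config files, then exit.
    subprocess.run(
        ["vault", "agent", f"-config={AGENT_CONFIG}", "-exit-after-auth"],
        check=True,
    )

    # 3. Hand off to the original entrypoint/command unchanged.
    os.execvp(sys.argv[1], sys.argv[1:])


if __name__ == "__main__":
    main()
```

The obvious drawback of the once-off run is that nothing keeps renewing the token afterwards, which is why I’m wondering about tini/supervisord to keep the agent running alongside the build agent instead.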

What do you suggest?
