Client authentication with HashiCorp Vault

We are doing a POC on using HashiCorp Vault to store our secrets.

As part of the POC, we have an ETL application that runs on-prem and fetches the
secrets from Vault.

Following is the process we are looking into:

  • Step-1:

    Authenticate with Vault by logging in with a username and
    password using the userpass auth method.
  • Step-2:

    Use the token generated in Step-1 to fetch the role ID for the AppRole.
  • Step-3:

    Use the token generated in Step-1 to fetch a wrapped secret ID for the AppRole.
  • Step-4:

    Use the wrapping token from Step-3 to unwrap and obtain the secret ID.
  • Step-5:

    Log in with AppRole by providing the unwrapped secret ID
    and role ID, and fetch the token.
  • Step-6:

    Use this token to fetch the secrets.

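For reference, the six steps above map onto the Vault HTTP API roughly as follows. This is only a sketch: it assumes the default mount paths (`userpass/`, `approle/`, KV v2 at `secret/`), and the user name, role name, and secret path are made up for illustration.

```python
import json
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"   # assumption: local dev server
APPROLE = "etl-role"                   # assumption: AppRole name

def vault_request(path, token=None, payload=None, wrap_ttl=None):
    """Build a Vault API request; POST when a payload is given, else GET."""
    headers = {}
    if token:
        headers["X-Vault-Token"] = token
    if wrap_ttl:
        headers["X-Vault-Wrap-TTL"] = wrap_ttl
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(f"{VAULT_ADDR}/v1/{path}", data=data, headers=headers)

def call(req):
    """Execute a built request against a live Vault server."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Step 1: userpass login -> client token
login = vault_request("auth/userpass/login/etl-user", payload={"password": "..."})
# Step 2: read the static role-id (authenticated with the Step-1 token)
role_id_req = vault_request(f"auth/approle/role/{APPROLE}/role-id", token="<step-1 token>")
# Step 3: generate a response-wrapped secret-id
secret_id_req = vault_request(f"auth/approle/role/{APPROLE}/secret-id",
                              token="<step-1 token>", payload={}, wrap_ttl="60s")
# Step 4: unwrap -- note this uses the *wrapping* token, not the Step-1 token
unwrap_req = vault_request("sys/wrapping/unwrap", token="<wrapping token>", payload={})
# Step 5: AppRole login with role-id + secret-id -> application token
approle_login = vault_request("auth/approle/login",
                              payload={"role_id": "...", "secret_id": "..."})
# Step 6: read a secret with the application token (KV v2 path shown)
read_secret = vault_request("secret/data/etl/db", token="<step-5 token>")
```

Each of these would be passed to `call()` in order, threading the returned tokens through the placeholders.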
Can anyone please help me resolve the queries below?

  1. Is this the right approach, or is there a better way of fetching
     the secrets? This approach involves multiple API calls.
  2. In Step-1, I need a username and password to initially
     authenticate with Vault, which means I need to store these
     details on the ETL server in a file. Is this secure?
  3. Is there a better way to initially authenticate the client with Vault
     without a username and password?

You have to log in somehow initially. It’s the secret-zero / secure-introduction challenge.

You can use certs, IP ranges, instance metadata, or short-lived secrets to initially authenticate… It will depend on your environment and architecture (i.e., on-prem, cloud, static, etc.)…
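As one concrete example of the certs option: with Vault’s TLS certificate auth method, the client certificate itself is the credential, so no password sits in a file. A hedged sketch, assuming the auth method is enabled at its default `auth/cert` path and the address, file paths, and cert-role name are placeholders:

```python
import json
import ssl
import urllib.request

def build_cert_login_request(vault_addr, name=None):
    """POST to the TLS cert auth method; the body may name a specific cert role."""
    body = json.dumps({"name": name} if name else {}).encode()
    return urllib.request.Request(f"{vault_addr}/v1/auth/cert/login", data=body)

def cert_login(vault_addr, cert_file, key_file, ca_file, name=None):
    """Log in with a client certificate over mutual TLS; returns a Vault token."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    req = build_cert_login_request(vault_addr, name)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["auth"]["client_token"]
```

The cert and key files still live on the ETL server, but they can be rotated and constrained (CN, SANs, TTL) on the Vault side.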

You might find some ideas in Solving the “Secret Zero” Dilemma with HashiCorp Vault Response Wrapping.

Thanks Mike. This is helpful.

The issue of secret-zero will always be around; however, if your application is in a secure LAN or a Kubernetes namespace, it would make your life a lot easier.

A couple of notes:

  • It’s pointless to authenticate with userpass and then reauthenticate with AppRole.
  • The role-id in AppRole is static; it never changes and is not a secret. You can hardcode that in a config file.
  • Using ‘vault agent’ does help you mitigate some of the secret-zero issue.
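Putting those notes together, the six steps collapse to two API calls once the role-id lives in config and a wrapped secret-id is delivered out of band. A sketch only — the address and role-id are placeholders, and the response shapes assume the default AppRole mount:

```python
import json
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"  # assumption: dev server address
ROLE_ID = "<role-id from config>"     # static and non-secret, fine to hardcode

def post(path, token=None, payload=None):
    """Minimal POST helper against the Vault HTTP API."""
    headers = {"X-Vault-Token": token} if token else {}
    req = urllib.request.Request(f"{VAULT_ADDR}/v1/{path}",
                                 data=json.dumps(payload or {}).encode(),
                                 headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def login_payload(role_id, secret_id):
    """AppRole login needs exactly these two fields."""
    return {"role_id": role_id, "secret_id": secret_id}

def approle_token(wrapping_token):
    # Call 1: unwrap the one-time secret-id (the wrapping token authenticates this)
    secret_id = post("sys/wrapping/unwrap", token=wrapping_token)["data"]["secret_id"]
    # Call 2: AppRole login -> application token for fetching secrets
    auth = post("auth/approle/login", payload=login_payload(ROLE_ID, secret_id))
    return auth["auth"]["client_token"]
```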

My recommendation is to have your deployment automation tool always authenticated to Vault. Then, when you need to deploy your software, a dependency of the deployment is the ‘vault agent’, which can be activated with the secret from the deployment automation tool. The agent will auto-refresh its own token (if allowed by the role design) so that it’s always alive. You can use the agent either to have a token available for your application or to keep your secrets local. This does require that the app be in Kubernetes (agent = sidecar) or a Docker stack, or in a secure VM, as you don’t want the agent to be accessible by anything other than your application.
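With the agent in place, the application side stays tiny: the agent keeps a fresh token written to a file sink, and the app just reads it. The sink path below is an assumption — it is whatever your agent config’s file sink points at:

```python
def parse_token(raw):
    """Sink files typically end with a newline; the token is the stripped content."""
    return raw.strip()

def token_from_sink(path="/var/run/vault/.vault-token"):  # assumption: agent sink path
    """Read the token the Vault agent keeps refreshed in its file sink."""
    with open(path, encoding="utf-8") as f:
        return parse_token(f.read())
```

The token goes into the `X-Vault-Token` header as usual; rereading the file before each ETL batch picks up agent renewals automatically.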

Hi Aram,

Thanks for your inputs.

Actually, we are thinking of running Vault on a separate server. This server will be dedicated exclusively to Vault.

We have Python scripts that do the ETL process, and we want to use the Vault REST API to fetch the secrets in these Python scripts.

We are thinking of using userpass to authenticate with Vault. Or is it better to use AppRole-based authentication? Can you please provide your suggestion.

Also, we are thinking of installing Vault through Docker on the server. In this case, should we use the Vault image available on Docker Hub, or should we build our own custom image, as we are planning to use the Raft storage backend?

IMO, userpass should be the last-resort auth option. There is no mechanism for rotating your password easily, and if the secret is exposed, there is no easy recourse. AppRole does need an initial token to “start”, but then, using an agent, you can renew the token automatically, which means that even if the initial secret is exposed, it is automatically revoked in the short term (assuming you use a realistic TTL). So it’s up to your organization to decide which auth platform to go with. AppRole is a more complex setup, but it’s the preferred way to me.
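The shape of that renewal loop is small. A minimal sketch, assuming a renewable token and a reachable Vault address; `auth/token/renew-self` is the standard endpoint, and renewing at roughly 2/3 of the TTL is just a common safety margin, not a Vault requirement:

```python
import json
import urllib.request

def renew_self(vault_addr, token, increment="1h"):
    """POST auth/token/renew-self to extend the current token's TTL."""
    req = urllib.request.Request(
        f"{vault_addr}/v1/auth/token/renew-self",
        data=json.dumps({"increment": increment}).encode(),
        headers={"X-Vault-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["auth"]["lease_duration"]

def seconds_until_renewal(lease_duration):
    """Renew at ~2/3 of the TTL so a slow request never races token expiry."""
    return max(1, int(lease_duration * 2 / 3))
```

If a renewal ever fails (token revoked or max TTL reached), the app falls back to a full AppRole login; the agent automates exactly this loop for you.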

What are you trying to accomplish with Docker? Since Vault is a single executable, you’re not gaining anything by dockerizing it. There are no dependencies to worry about. Upgrading Vault is simply stopping the old process and starting the new one. So unless everything in your organization is containerized and it’s part of your system management, I would skip it.

Raft is built in; there is no need to “build your own”.

I do want to mention that you said “server” and not “servers”. With Raft, or really any multi-Vault setup, you want servers, not a server; otherwise there is no point in having multiple Vault instances. The whole point of Raft with integrated storage is to distribute the data so that if one server goes down, your Vault cluster is still available, as one of the standby nodes takes over. Putting them all on one machine (containerized or not) would gain you nothing.

With Raft it’s recommended to have 3 or 5 nodes on different servers/instances. I’d recommend 3, with a VIP that points to the leader node at all times.


Thanks for the info. This is very helpful.

The reason I mentioned “server” is that currently we are planning to use only one server for Vault.
We have a script that frequently checks whether the Vault server is up, and once it is down, we restart Vault.

Will using only one server decrease Vault’s performance?

The reason we chose Raft is that, going forward, if we want to increase the number of nodes, there is no need to change the backend.

It’s not the greatest setup, but it can work if you don’t care about 100% uptime. In my environment, even a short outage can cause a massive number of issues, so we do everything possible to lower the chances that Vault is unavailable. It’s up to you how you build your environment.

You’re not decreasing the performance of Vault, but you’re leaving performance on the table without multiple nodes. The standby nodes act as read-only endpoints; they can reply to simple queries without having to bother the leader node. Without any standby nodes, everything has to go to the leader node. If your environment is small enough, one large node should be able to handle everything.

Just a terminology note: Raft is the storage mechanism for Vault’s integrated storage. If you’re using something else for storage, then it isn’t Raft. (Consul as a backend also uses Raft to communicate between Consul nodes.)

Thanks Aram for the detailed explanation. It is very helpful.