Distributed App authorization using Vault

I’m probably missing something, as I cannot find a clear solution for the following:


  1. A RestAPI listens for requests and runs a job only if the connecting party (Client) is verified/authenticated
  2. Use HashiCorp Vault as the verification/auth mechanism, so the RestAPI runs the job based on some short-lived information the RestAPI and Clients exchange/share through Vault
  3. Preferably using an internal dynamic secrets engine (not an external one like db, aws, etc.)
  4. RestAPI and Clients authenticate to Vault using AppRoles and Cubbyhole
  5. Least privilege for the RestAPI and Clients on Vault
  6. No real, long-lived secrets exchanged between the RestAPI and Clients
  7. The RestAPI securely checks a Client’s request before running the job

I initially thought a simple solution could be reached without PKI or anything complex: allow Clients to create a token with permission on a secret/flag, wrap the token (with a limited TTL and use limit), and send the wrapped token to the RestAPI, so that the RestAPI can unwrap it, read the secret/flag, and run the job on behalf of the Client.
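As a sketch of that flow (the `job-flag` policy name and secret path are illustrative, not from the thread; this assumes a reachable Vault server and an authenticated client):

```shell
# Client side: create a short-lived token and have Vault wrap it.
# The wrapping token is single-use; only it travels over the wire.
wrapped=$(vault token create -policy=job-flag -ttl=60s -use-limit=2 \
          -wrap-ttl=30s -field=wrapping_token)

# ... Client sends $wrapped to the RestAPI over TLS ...

# RestAPI side: unwrap exactly once to obtain the inner token.
# A second unwrap attempt fails, which signals possible interception.
inner=$(vault unwrap -field=token "$wrapped")
VAULT_TOKEN="$inner" vault kv get secret/job-flag
```

The single-use property of the wrapping token is what gives tamper evidence here: if an attacker unwraps it first, the legitimate unwrap fails loudly.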

Is allowing token creation to a Client identity a good security practice?
Is there a Vault backend that allows sharing very short-lived tokens/secrets managed by Vault itself?
Is there a better way to do this?

Thank you

The OIDC auth engine might be what you’re looking for. From the docs:

OIDC provides an identity layer on top of OAuth 2.0 to address the shortcomings of using OAuth 2.0 for establishing identity. The OIDC auth method allows a user’s browser to be redirected to a configured identity provider, complete login, and then be routed back to Vault’s UI with a newly-created Vault token.

Just keep in mind that this auth method would create a unique entity for each app, so you would need a lot of licenses (if you’re using the Enterprise version).

Hi @aram535,
it would probably work, but it violates requirement 3. Also, the distributed apps can only authenticate using AppRole, per requirement 4.
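For reference, the AppRole login itself (requirement 4) is straightforward; a sketch, with the role name and policy being illustrative and a live Vault server assumed:

```shell
# One-time setup by an operator
vault auth enable approle
vault write auth/approle/role/app2 token_policies="app2-policy" \
      token_ttl=10m token_max_ttl=30m

# Each app fetches its RoleID (static) and a SecretID (short-lived),
# then logs in to obtain a Vault token with only its own policy
role_id=$(vault read -field=role_id auth/approle/role/app2/role-id)
secret_id=$(vault write -f -field=secret_id auth/approle/role/app2/secret-id)
vault write -field=token auth/approle/login \
      role_id="$role_id" secret_id="$secret_id"
```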

I found a possible solution using a one-time token and cubbyhole:

# create a one-time token setting a very short explicit-max-ttl
# and use-limit=2 (one use for App1 to write the secret, one for
# App2 to read it)
token=$(vault token create -field=token -policy=default -explicit-max-ttl=30s -use-limit=2)

# write a secret UpdateID based on some always-changing
# data on App1 (a hash of: token_accessor + date, ...)
VAULT_TOKEN=${token} vault write cubbyhole/allow_update UpdateID=$(echo -n $(date)${token_accessor or other} | md5sum | awk '{print $1}')

[App1] calls App2’s RestAPI, sending the token and the UpdateID
[App2] reads the token’s cubbyhole secret from Vault:

VAULT_TOKEN="${token}" vault read -field=UpdateID cubbyhole/allow_update

and compares that secret with the UpdateID it received.
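App2’s check can be a plain string comparison, assuming App1 also sends the exact changing input it hashed (a hypothetical helper; the function name is mine, not from the thread):

```shell
# Recompute the hash from the data App1 shared and compare it
# with the UpdateID read from cubbyhole/allow_update.
# Returns exit code 0 on match, non-zero on mismatch.
verify_update_id() {
  shared_input="$1"   # same input App1 hashed (e.g. date + token accessor)
  update_id="$2"      # secret read back from cubbyhole
  expected=$(echo -n "${shared_input}" | md5sum | awk '{print $1}')
  [ "$expected" = "$update_id" ]
}
```

App2 would run the job only when the helper exits 0, and raise the interception alert otherwise.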

This is a one-time token, it expires quickly, and it is sent over TLS, so I think it is sufficiently secure. App2 validates the request using the token and sends an alert on error, to warn of interception and unauthorized use of the one-time token.

A one-time KV secret (e.g. App1 can write, App2 can read, and Vault revokes the entry after the read) would probably avoid token creation, transmission, and the possibility of an arbitrary endpoint using the one-time token, raising security a little, but I can’t find such a feature in Vault.

But Vault features are so many that I’m probably missing it.
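Response wrapping can approximate that one-time KV: wrapping a KV read yields a single-use wrapping token that Vault invalidates after the first unwrap. A sketch, with the secret path illustrative and a live Vault server assumed:

```shell
# App1: read the secret with a wrap TTL; Vault stores the response
# in a single-use cubbyhole and returns only the wrapping token
wrap_token=$(vault kv get -wrap-ttl=30s -field=wrapping_token secret/allow_update)

# ... App1 sends $wrap_token to App2 over TLS ...

# App2: unwrap once to receive the secret data;
# any second unwrap attempt fails, revealing interception
vault unwrap "$wrap_token"
```

This way no real Vault token is ever transmitted, only the short-lived, single-use wrapping token.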