Hello all,
I am new to Vault, and to secrets management in general.
I work on an application that keeps some sensitive data encrypted in a database. The encryption key is stored in a config file on disk. I’m investigating how to avoid having that encryption key stored on disk, specifically to address the threat model of someone gaining access to the server (and therefore the config file) and using it to decrypt the sensitive data in the database.
At first glance it seems there are many ways Vault can help with this. For example, the simplest option I came across is the KV secrets engine, which could store the encryption key; the application would retrieve it from Vault instead of from disk. Alternatively, the Transit secrets engine could handle the encryption and decryption for me, so there is no encryption key for the application to store anywhere. Both of these sound great.
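For concreteness, here is roughly what I mean by those two options, using the hvac Python client rather than PHP. The URL, paths and key names are made-up placeholders, and the hard-coded token on the first line is exactly the part I don’t know how to handle, which is the whole question below:

```python
import base64

import hvac

# Placeholder address and token; providing this token safely is the problem
# I describe below.
client = hvac.Client(url="https://vault.example.com:8200", token="s.placeholder")

# Option 1: keep the data-encryption key in the KV (v2) secrets engine and
# read it at startup instead of reading a config file on disk.
kv = client.secrets.kv.v2.read_secret_version(path="myapp/encryption")
encryption_key = kv["data"]["data"]["key"]

# Option 2: let the Transit engine encrypt/decrypt for me, so the key itself
# never leaves Vault. Transit expects base64-encoded plaintext.
ciphertext = client.secrets.transit.encrypt_data(
    name="myapp-key",
    plaintext=base64.b64encode(b"sensitive value").decode("ascii"),
)["data"]["ciphertext"]

decrypted = base64.b64decode(
    client.secrets.transit.decrypt_data(
        name="myapp-key",
        ciphertext=ciphertext,
    )["data"]["plaintext"]
)
```

The Transit option is particularly appealing because the key never leaves Vault at all.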
But no matter which option I look at, I keep coming back to the same problem: how does my application authenticate itself to Vault, without running into the same problem I am trying to address?
- If I use tokens to authenticate, then I need a token saved on disk so my application can read it and ask Vault for the encryption key. This does not seem to be any improvement for the threat model I am trying to address: an attacker who gains access to the machine can find that token just as easily as they could find the encryption key now. They just use the token, fetch the key, and I’m back where I started. I don’t think adding this extra step makes any difference; any attacker skilled enough to have gained access to the machine won’t be slowed down by having to trace through my application and replicate what it does.
- If I set up the application to automatically generate and renew tokens, then an attacker on the machine can do the same thing and generate themselves a new token. And anyway the tokens need to be stored so the app can use them.
- If I use some form of AppRole, then my app needs a role ID and a secret ID to access Vault. So I need a role ID and secret ID saved to disk, which seems to defeat the purpose? (There is a small sketch of what I mean after this list.) As I understand it, AppRoles can have extra restrictions, like only being valid for particular hosts/users. Neat, but again, if an account on the machine is authorised to access Vault, then an attacker with access to the machine can do it too.
- All three community-built PHP SDKs listed on Vault’s libraries page simply have tokens hard-coded or saved on disk. How is this any better than storing the secrets in a file on disk and not using Vault at all?
- Consul Template needs a token from somewhere to access Vault, and then it automatically generates config files with the secrets on disk. Back to square one.
- Envconsul: same as Consul Template, but the secrets are exposed as environment variables instead. As I understand it, an attacker with access to the machine can read those env vars much the same as they can read config files on disk. No improvement.
- Vault Agent templates: much the same, though they seem to require AWS or Kubernetes, and I am using neither.
- Platform Integration: I am not using any of the supported cloud infrastructures (AWS, GCP, Azure …).
- Trusted Orchestrator: I am not using an orchestrator.
- Vault Agent: AFAICT this involves writing tokens to a sink, which is a file on disk. An attacker can access that too?
I feel like I am missing some basic principle, or misunderstanding something fundamental. This is all new to me so I may just not be grokking things properly yet.
What am I missing? How can Vault help in this scenario?