We have a setup where deployments are done through Jenkins, which generates the AppRole secret IDs. The deployments create services in a Docker swarm and save the secret IDs as docker secrets, which are mounted into the services.
These secret IDs have a 6-month TTL because some deployments do not happen very often and we do not want the secret IDs expiring between deployments.
But some deployments happen many times a day. We are currently not destroying old secret IDs when new ones are generated, so some AppRoles have a huge number of secret IDs. The token TTLs associated with these are only a couple of hours.
So my question, finally, is: will this cause a memory issue in the same way that a long token TTL would?
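For context, this is roughly how I have been checking how many secret IDs have piled up on a role, using the documented AppRole HTTP endpoints. It is only a sketch: the role name and Vault address are placeholders, and the pruning loop at the end is illustrative only (in practice you would keep whichever secret ID the running service is actually using):

```python
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.com:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]   # a token allowed to manage the role
ROLE_NAME = "my-app"                      # placeholder AppRole name

headers = {"X-Vault-Token": VAULT_TOKEN}

# List every outstanding secret ID accessor for the role.
# (GET with ?list=true is equivalent to the LIST verb in the Vault HTTP API;
# a 404 here means the role has no outstanding secret IDs.)
resp = requests.get(
    f"{VAULT_ADDR}/v1/auth/approle/role/{ROLE_NAME}/secret-id",
    headers=headers,
    params={"list": "true"},
)
resp.raise_for_status()
accessors = resp.json()["data"]["keys"]
print(f"{ROLE_NAME} has {len(accessors)} outstanding secret IDs")

# Illustrative cleanup: destroy stale accessors so they stop piling up
# until their 6-month TTL finally expires them.
for accessor in accessors:
    requests.post(
        f"{VAULT_ADDR}/v1/auth/approle/role/{ROLE_NAME}/secret-id-accessor/destroy",
        headers=headers,
        json={"secret_id_accessor": accessor},
    ).raise_for_status()
```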
You really don’t want long-lived tokens sitting in your lease store; they end up consuming more resources for very little gain.
IMHO what you’re proposing is an anti-pattern. The best-practice approach is to make the TTL as small as possible for each individual deployment; in fact, the deployment should revoke its own token as its last step. Why keep an exposure open for 6 months? Let your CI authenticate with a securely wrapped token, then use an AppRole secret ID per deployment that lasts only as long as the deployment needs to run (see the sketch below). Yes, there is a little bit of setup and orchestration, but you won’t need to touch it very often and it’s a lot more secure.
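Very roughly, this is the shape of what I mean, using the documented Vault endpoints. Treat it as a sketch: the role name, environment variables, and wrap TTL are placeholders you would tune to how quickly your pipeline hands the token over:

```python
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.com:8200")
ROLE_NAME = "deploy-role"        # placeholder AppRole with a short secret_id_ttl
ROLE_ID = os.environ["ROLE_ID"]  # role ID known to the pipeline

# 1. Jenkins (with its own Vault token) requests a *wrapped* secret ID that
#    only lives long enough to be handed to the deployment job.
jenkins_headers = {
    "X-Vault-Token": os.environ["JENKINS_VAULT_TOKEN"],
    "X-Vault-Wrap-TTL": "120s",
}
wrap = requests.post(
    f"{VAULT_ADDR}/v1/auth/approle/role/{ROLE_NAME}/secret-id",
    headers=jenkins_headers,
)
wrap.raise_for_status()
wrapping_token = wrap.json()["wrap_info"]["token"]

# 2. The deployment unwraps it (single use, so interception is detectable)...
unwrap = requests.post(
    f"{VAULT_ADDR}/v1/sys/wrapping/unwrap",
    headers={"X-Vault-Token": wrapping_token},
)
unwrap.raise_for_status()
secret_id = unwrap.json()["data"]["secret_id"]

# 3. ...logs in for a token that only needs to outlive the deployment...
login = requests.post(
    f"{VAULT_ADDR}/v1/auth/approle/login",
    json={"role_id": ROLE_ID, "secret_id": secret_id},
)
login.raise_for_status()
deploy_token = login.json()["auth"]["client_token"]

# ... fetch secrets / run the deployment with deploy_token ...

# 4. ...and revokes its own token as the last step.
requests.post(
    f"{VAULT_ADDR}/v1/auth/token/revoke-self",
    headers={"X-Vault-Token": deploy_token},
).raise_for_status()
```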
The secret IDs are used by the applications to fetch secrets as needed while running, not just at deployment time.
I was not involved in the initial setup, but the way it currently works is that the long-lived secret IDs are stored as docker secrets and the role IDs are built into the containers. They use these credentials to authenticate with Vault and get a short-lived token whenever they need to fetch secrets. Does this sound like an anti-pattern?
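To be concrete about what the containers do, it is more or less this (a rough sketch; the secret name, role ID source, and the KV v2 path are placeholders, not our actual values):

```python
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.com:8200")
ROLE_ID = os.environ["ROLE_ID"]  # role ID baked into the image

# Docker swarm mounts secrets as files under /run/secrets/
with open("/run/secrets/vault_secret_id") as f:  # the long-lived secret ID
    secret_id = f.read().strip()

# Authenticate with AppRole; the resulting token only lives a couple of hours.
login = requests.post(
    f"{VAULT_ADDR}/v1/auth/approle/login",
    json={"role_id": ROLE_ID, "secret_id": secret_id},
)
login.raise_for_status()
token = login.json()["auth"]["client_token"]

# Fetch whatever the app needs from KV v2 (example path).
secret = requests.get(
    f"{VAULT_ADDR}/v1/secret/data/my-app/config",
    headers={"X-Vault-Token": token},
)
secret.raise_for_status()
config = secret.json()["data"]["data"]
```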
Anyway, it is generating too many secret IDs, which looks like it is going to become a problem, so it sounds like we will have to go back to the drawing board!