Vault file backend using too many inodes

I am using Vault with the file storage backend on our automated testing system, where the home directory lives under /var. The partition has about 12 GB of space in total (I can't change this due to certain restrictions).
Around 4-5 times each day, Vault is installed and all the apps use their role_id to generate more tokens. As far as possible I revoke each secret_id/wrapped secret_id after it is used. This has worked without any problem for the past 3-4 months.
What's happening now is that the inode count on my /var partition is exhausted (786,000 inodes), and 95% of the files are under secret_id and accessor. The Vault deployment is failing since it is unable to create any new secret_id or token, and I am not able to unseal it.
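A quick way to confirm where the inodes are going looks like this (the data directory is an assumption; point `VAULT_DATA` at the `path` from your `storage "file"` stanza):

```shell
# VAULT_DATA is an assumption; use the `path` from your storage "file" stanza.
VAULT_DATA=${VAULT_DATA:-/var/vault/data}

df -i /var    # shows IUsed/IFree for the partition

# Rank subdirectories of the data dir by file count:
for d in "$VAULT_DATA"/*/; do
    printf '%8d  %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn
```

The directories that float to the top of that list are the ones eating the inodes.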
Does the file in the physical backend stay on disk even after a token is revoked, or does it get deleted? If it is not deleted, is there a way to revoke unused tokens/accessors/secret_ids?

This is an interesting issue. Generally speaking, accessors that no longer reference a secret should be tidied up.

There is currently an open issue around improving the tidy operation's lock handling, located here.

There are a couple of possibilities here:

  • The accessors are still referencing a token
  • Tokens are being created so quickly that they're tying up the locks for the tidy operation, and Vault is shutting down before it can run
  • The Vault version you’re on is older, and doesn’t have the tidy operation as it exists today

It may be helpful to enable trace-level logging in Vault (`log_level = "trace"` in the server configuration) for one day, then look for trace messages like this describing what's going on. I'm very curious what that will reveal, as well as which version of Vault you're on. If you don't want to wait a day, you could also force a tidy operation with this endpoint.
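For reference, forcing a tidy might look like the following sketch; it assumes `VAULT_ADDR` and `VAULT_TOKEN` are set, Vault is unsealed, and the AppRole method is mounted at the default path `approle`:

```shell
# Assumes VAULT_ADDR and VAULT_TOKEN are set and Vault is unsealed.

# Clean up dangling token accessors:
vault write -f auth/token/tidy

# Clean up AppRole secret IDs whose role or expiration is gone
# (mount path "approle" is the default; adjust if yours differs):
curl -s -X POST -H "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/auth/approle/tidy/secret-id"
```

Both calls queue the tidy rather than running it inline, so watch the server log for its progress.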

Well, also, I suppose that's only helpful if you can get Vault back up to begin with. If you haven't already, it should work to copy your Vault data to a larger disk that isn't out of inodes, point your config at the new location, and bring Vault back up. Then you should be able to get things going again.
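The recovery steps above can be sketched roughly as follows; all paths here are illustrative, so substitute your own:

```shell
# All paths are illustrative; substitute your own.
SRC=/var/vault/data          # current file backend path (inode-exhausted partition)
DST=/bigdisk/vault-data      # destination with free inodes

# 1. Stop the failing instance:  systemctl stop vault
# 2. Copy the data, preserving ownership and permissions:
[ -d "$SRC" ] && cp -a "$SRC" "$DST"
# 3. Update the config:  storage "file" { path = "/bigdisk/vault-data" }
# 4. Start Vault again and unseal:  vault operator unseal
```

The `cp -a` matters: the files must keep the same ownership and modes so the Vault process can still read them at the new location.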

Apologies we didn’t respond to this sooner; the entire team has been heads-down trying to get a beta out the door.