We are running Vault with the GCS storage backend and the GCP auth method enabled. For particular reasons, some of our software requests short-lived, single-use batch tokens from this auth method.
We are seeing 429 rate-limit errors from our GCS bucket due to excessive updates, and have isolated the problem to successive requests to /v1/auth/gcp/login from different machines (i.e. different JWTs/metadata).
The GCS object suffix is logical/<uuid>/packer/buckets/<id>, where the UUID and bucket ID differ between environments but are consistent across the requests to /v1/auth/gcp/login.
We are already using batch tokens to avoid storage-backend writes during these transient operations, but we are unsure what is being written and whether there is a way to avoid it. The error is:
failed to persist packed storage entry: 1 error occurred:
! * error closing connection: googleapi: Error 429: The rate of change requests to the object aaa/logical/iii/packer/buckets/ii exceeds the rate limit. Please reduce the rate of create, update, and delete requests., rateLimitExceeded
logical/<uuid>/ is the prefix under which Vault stores data for a particular secrets engine.
You can get the UUID for a particular secrets engine from the output of vault secrets list -detailed, but I recognize the /packer/buckets/ path well enough to tell you that this will be your identity secrets engine.
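To confirm the match, you can compare the UUID in the storage path against the mount list. A minimal sketch (mount names are the defaults; adjust for your setup, and the jq step assumes you have jq installed):

```shell
# List all secrets engines with their backing UUIDs; the UUID in the
# storage path logical/<uuid>/... should match the identity/ row.
vault secrets list -detailed

# The same information via the sys API, reduced to path + UUID pairs:
vault read -format=json sys/mounts \
  | jq -r '.data | to_entries[] | "\(.key) \(.value.uuid)"'
```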
One of the functions of the identity secrets engine is to store Vault’s user database.
When a new user logs in (or an existing user logs in whose identity has changed in some way, such that metadata about the user should be updated), Vault creates or updates the corresponding identity entity or entity-alias.
So even though you are avoiding a storage write for the token itself by using batch tokens, writes are still being generated to the identity store.
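You can observe this directly by comparing entity and alias counts before and after a batch of logins. A sketch using the standard identity API list paths (the jq step assumes jq is installed; the counts themselves depend on your environment):

```shell
# Count identity entities; if this grows with every login,
# each machine login is creating a fresh entity (and a storage write).
vault list -format=json identity/entity/id | jq 'length'

# Count entity-aliases; this shows how logins are being grouped
# under entities for the GCP auth mount.
vault list -format=json identity/entity-alias/id | jq 'length'
```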
Your next step is to look into the configuration of the GCP auth method, especially the iam_alias, iam_metadata, gce_alias, and gce_metadata options, which control how Vault maps logins to identity entity-aliases. Then inspect the identity entities being created by all these logins, to figure out whether new ones are being created each time.
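As a sketch of what that looks like, assuming a role named my-role (hypothetical) on the default gcp mount: these alias options are set per role, and if I recall the parameter values correctly, switching iam_alias from its default of unique_id to role_id makes every IAM login against that role map to a single entity-alias, which should stop the per-machine entity churn (at the cost of per-machine identity granularity):

```shell
# Inspect the role's current alias and metadata settings.
vault read auth/gcp/role/my-role

# Key all IAM logins for this role to a single alias based on the role,
# instead of one alias per service-account unique_id.
vault write auth/gcp/role/my-role iam_alias=role_id
```

The analogous knob for GCE logins is gce_alias, which defaults to instance_id; setting it to role_id has the same collapsing effect. Whether that trade-off is acceptable depends on whether you need per-machine identity tracking.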