Consequences of creating a large number of secrets engine mounts

I am working on a project where each customer's AWS resources live in a dedicated AWS account per customer. We need to be able to periodically create an AWS Secrets Engine iam_user credential for access to some customer resources.

For a variety of security concerns/policies, we cannot use a single AWS Secrets Engine mount point, because the account used by that one mount would need CreateUser permissions.

We are considering an AWS Secrets Engine mount point per customer account, which would mean creating several thousand mounts. Is this an OK idea or a bad one? What are the consequences of doing this?

These iam_users will not be created frequently, so these mounts will not be heavily utilized.
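To make the pattern concrete, here is roughly what we would be doing per customer. This is only a sketch; the mount path (aws-cust-0042), role name (deploy), and policy file are placeholders, not our real names:

```shell
# Enable a dedicated AWS secrets engine mount for one customer account
vault secrets enable -path=aws-cust-0042 aws

# Configure that mount with credentials scoped to that customer's account only
vault write aws-cust-0042/config/root \
    access_key="$CUST_0042_ACCESS_KEY" \
    secret_key="$CUST_0042_SECRET_KEY" \
    region=us-east-1

# Define a role that issues iam_user credentials
vault write aws-cust-0042/roles/deploy \
    credential_type=iam_user \
    policy_document=@customer-policy.json

# Periodically generate a credential when needed
vault read aws-cust-0042/creds/deploy
```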

Each mount uses some space in the mount table, which is currently stored as a single (compressed) storage entry. The Vault documentation page https://www.vaultproject.io/docs/internals/limits describes how this impacts the maximum number of mounts for the default storage entry sizes. In particular, with Consul’s default settings, you have a maximum of roughly 7000 enabled secrets engines, which may be lower if you start specifying mount options. (The actual configuration of each AWS secrets mount is stored separately, in its own storage entry.)

“Several thousand” might push you uncomfortably close to this limit; if you switch to a larger storage entry size, such as the 1 MiB that is the default with integrated storage, that will give you extra headroom for more secrets engine mounts.
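As a minimal sketch of where that knob lives with integrated storage (the config path, data path, and node_id below are placeholders): the max_entry_size parameter on the raft stanza controls the maximum size of a single storage entry, and 1048576 bytes (1 MiB) is its default, shown explicitly here just for illustration:

```shell
# Sketch: append a raft (integrated storage) stanza to the Vault server config.
# max_entry_size bounds the size of a single storage entry, which is what
# limits how large the mount table can grow.
cat >> /etc/vault.d/vault.hcl <<'EOF'
storage "raft" {
  path           = "/opt/vault/data"
  node_id        = "vault-1"
  max_entry_size = 1048576   # bytes; this is the default (1 MiB)
}
EOF
```

If you stay on Consul instead, the corresponding limit on the Consul side is, I believe, its kv_max_value_size setting; the limits page linked above covers the details.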


Is this one potential issue: is the mount table cached in each Vault instance, with storage not read every time, or is storage read each time a mount is accessed? I would assume it is cached in memory, but best to check. Certainly the more mount points there are, the larger the reads/writes to storage, but if these happen infrequently, it shouldn’t be a huge issue.

I have posted another topic about this: we are using the Google Cloud Storage backend, and I am trying to find out what the limits on mount points are for that backend.

The mount table is loaded into memory when Vault is unsealed, and does not need to be re-read later (except by replicas or standby nodes, to pick up changes).
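If you want to keep an eye on how many mounts you accumulate as you roll this out, one quick way to count enabled secrets engines from the CLI (assuming jq is available and your token can read sys/mounts) is:

```shell
# Count currently enabled secrets engine mounts; the listing includes the
# built-in mounts such as cubbyhole/, identity/, and sys/.
vault secrets list -format=json | jq 'length'
```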


Cool.

Are there any other issues we should be aware of, or is the mount limit the only real concern, and otherwise this is an OK usage?

@mgritter are there any other adverse consequences of having thousands of secrets mounts, if we are OK with being close to the limit? In reality, I think we would be about 2000 away from the Consul limit, and, after looking at the code for the GCS storage backend, I can’t see anything that looks like a limit on the size of the persisted mount table.

I don’t know of any that specifically affect this use case. As long as Vault is configured following the production hardening recommendations (such as raising the number of file descriptors), we know that some Vault customers successfully create large numbers of auth methods or secrets engines.
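As one concrete example of that hardening, you can check (and, for a systemd-managed install, raise) the file descriptor limit applied to the Vault process. The unit name and the 65536 value below are just illustrative:

```shell
# Check the open-file limit actually applied to the running Vault process
grep 'Max open files' /proc/$(pidof vault)/limits

# For a systemd-managed Vault, raise it with a unit override, e.g.:
#   [Service]
#   LimitNOFILE=65536
sudo systemctl edit vault
sudo systemctl restart vault
```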
