What paths are supported for aws secrets?

I’ve configured roles for use of aws as the vault engine (for jenkins-x running on k8s), but have not found a way to create secrets on the command line; the error returned is ‘unsupported path.’
What paths are supported for aws as a vault engine? Ideally, I would like to create distinct paths for secrets associated with specific apps/functions, but even if I have to settle for a ‘flat’ single path, I’d like to know what I can use.

It’d be best if you shared the secrets engine config and the CLI command you’re getting an error from… kinda hard to troubleshoot without knowing what you’re asking Vault to do.

Thanks, Mike.

I’ve configured aws as the backend for vault as per this page: AWS - Secrets Engines | Vault by HashiCorp

I’m using an AWS profile with keys for a VaultUser account with full access, and have the following environment variables set (VAULT_TOKEN value masked):


I have created a role to read secrets (details not provided here, don’t think they’re relevant).

Here is an example of just one of my failed attempts to create a secret:

vault write aws/ nothing=nada
Error writing data to aws: Error making API request.

Code: 404. Errors:

* 1 error occurred:
	* unsupported path

Any ideas?

Perhaps I’m misunderstanding what you’re trying to accomplish but I think you want to be using
vault read aws/creds/my_role_name
as documented here: AWS - Secrets Engines | Vault by HashiCorp
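For reference, the AWS secrets engine only accepts writes on the specific paths it defines (its config and roles endpoints), which is why an arbitrary path fails. A hedged sketch, with all names and values below being placeholders:

```shell
# The AWS secrets engine only responds on its defined paths.
# All key names and values here are placeholders.
vault write aws/config/root \
    access_key=AKIAEXAMPLE \
    secret_key=EXAMPLESECRET \
    region=us-east-1

# Roles describe the IAM credentials Vault may generate
vault write aws/roles/my-role \
    credential_type=iam_user \
    policy_document=- <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*" }
  ]
}
EOF

# A write to an arbitrary path such as "aws/" fails with "unsupported path"
```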

Thanks, Jeff. I have done that, but can’t see the connection between getting the lease and CREATING a new secret. Here is an example:

vault read aws/creds/vault-read-secrets-role
Key                Value
---                -----
lease_id           aws/creds/vault-read-secrets-role/VkJMCkLSeqpDDfmJbvYA8KHe
lease_duration     768h
lease_renewable    true
access_key         AKIA3M2WMDFPYILNVJPD
secret_key         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

What am I missing?

The access_key and secret_key values are the secret in this case. When you read aws/creds/vault-read-secrets-role, Vault reaches into AWS and generates a new access_key and secret_key. When your auth token or the lease TTL expires, the access_key and secret_key are revoked.

Apologies if you already know this, I’m still not entirely sure what you’re after.
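Incidentally, if you ever need to invalidate the generated keys before the TTL is up, the lease from a read like yours can be revoked by its ID:

```shell
# Revoke the dynamic credentials early using the lease_id
# returned by the earlier read
vault lease revoke aws/creds/vault-read-secrets-role/VkJMCkLSeqpDDfmJbvYA8KHe
```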

Thanks. What I thought vault would do, and what AWS SSM does, is to store a (static) secret such as a password, and require credentials in order to fetch that value.
It sounds like you’re saying that things like, say, the password for a mongodb account are ephemeral, and there is NO static value anywhere. I honestly don’t see how that works with applications that have no clue that their passwords are being managed by vault.
So, still not getting it.

I think you’re on the right track with your thinking. Your application would need to be made aware of Vault (and there are many ways to do so). Vault, in my experience, is generally not something that can simply be turned on and take over credential management for applications.

Vault does support static secrets via the KV engine, but dynamic secrets are preferred where possible.
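As a rough sketch of the KV route (the mount path and key names here are just examples):

```shell
# Enable a KV v2 mount (skip if one already exists; the dev server
# mounts kv-v2 at secret/ automatically)
vault secrets enable -path=secret kv-v2

# Write a static secret
vault kv put secret/myapp/config db_password=s3cr3t

# Read it back, or just one field
vault kv get secret/myapp/config
vault kv get -field=db_password secret/myapp/config
```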

In general you’ve got several options for app integration (note that this is only a very small subset of the available options):

  • Use a client library like Spring Vault to have your app interact with Vault directly
  • Use the Vault Agent to drop credentials in config files (your app may need to be notified of credential updates - Vault Agent can run arbitrary commands after credential updates to help out with this)
  • Use envconsul along with Vault Agent to put credentials in environment variables for your app to consume
  • Create a custom integration with the CLI or API
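For the last option, a minimal sketch against the HTTP API, assuming a KV v2 mount at secret/, jq installed, and placeholder path and field names:

```shell
# KV v2 reads go through the /v1/<mount>/data/<path> endpoint;
# the secret payload is nested under .data.data in the response
curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/secret/data/myapp/config" | jq -r '.data.data.db_password'
```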

Hopefully this helps.



Thanks. We’re migrating from a traditional ECS/Fargate infrastructure to Jenkins-x V3 on k8s, thus the large context switch. It turned out that jx3 doesn’t actually support aws as a vault back end directly (this was a rude awakening, as it sounded like it did), so we’ve been blazing our own trail, so to speak.
I was thinking of switching to kv to diminish the ‘culture shock’ to my Ops colleagues, that sounds like it might end up working better.
I appreciate your time!

Jeff, every time I get more information that I think will simplify my tasks, I find a new challenge. After our last exchange, I had figured that switching to kv (kv-v2) would meet our needs better than the role-based dynamic secret approach used by the aws back end.
Now it turns out that kv on aws, as opposed to on gcp, is yet again uncharted territory.

We use ASM currently for our EC2-based infrastructure, with static secrets. Is there ANY way to use vault with aws as a back end with STATIC rather than dynamic secrets?



What do you mean by AWS as a backend? Vault supports various different storage backend options, including S3 buckets, which work well.

The backend chosen doesn’t limit which secret engines you can use, so all dynamic and static options are available (e.g. AWS, GCP, or SSH for dynamic, kv/kv2 for static). The same is true for all the different auth options (AWS, Kubernetes, AppRole, etc.)
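To illustrate that independence, a sketch (the mount paths here are arbitrary):

```shell
# The storage backend (e.g. S3, Raft) is fixed in the server config;
# secret engines and auth methods are mounted independently at runtime
vault secrets enable aws               # dynamic AWS credentials
vault secrets enable -path=kv kv-v2    # static key/value secrets
vault auth enable aws                  # log in with AWS IAM identities
vault secrets list                     # show everything mounted
```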

Huh… when I tried to use the web UI to create a secret, all it offered was the ability to create a role. Is this just a limitation of the web UI? Where do I set the options from the command line?

Really appreciate the help, Stuart.

Revisiting all the available docs, here is part of the text returned from the ‘vault path-help aws’ command:

The AWS backend dynamically generates AWS access keys for a set of
IAM policies. The AWS access keys have a configurable lease set and
are automatically revoked at the end of the lease.

After mounting this backend, credentials to generate IAM keys must
be configured with the "root" path and policies must be written using
the "roles/" endpoints before any access keys can be generated.

That sounds pretty definitive about what can be done with the AWS backend… what am I missing?

That is the AWS secret engine and, as it says, it is for creating dynamic IAM users, etc.

If you are wanting to store static secrets you want to use the k/v (or k/v 2 to add version control) secret engine - by default it is mounted under secret/.

Maybe it would help to take a step back. What are you wanting to use Vault for?

What type of secrets are you wanting to handle? Static secrets? Dynamic secrets (short lived credentials for AWS, databases, SSH, etc.)?

Where are you wanting to store the Vault database? S3 buckets? Raft? RDS? Elsewhere?

How do you want to authenticate with Vault? AppRoles? AWS? Kubernetes?

What applications do you want to integrate Vault with (to make use of any dynamic & static secrets managed by Vault)? Off the shelf? Custom created apps?


I had indeed started the switch to kv, and in the jenkins-x documentation there is complete conflation of the aws ‘secrets engine’ and use of the kv secrets engine, which may USE aws to store the content.

We’d ideally like to use IAM to authenticate, which would smooth the transition for the Ops folk from current infrastructure to jenkins-x and k8s. Where the secrets are actually stored is not as critical, but I see that jenkins-x already sets up some s3 secret storage for its internal helm packages.

The primary tools we rely on which need secret storage are redis, mongodb-atlas (trying to use their AtlasOperator), and mysql (managed by RDS).

OK. So it sounds like you want to use AWS auth method to login to Vault. Then static secrets using the kv secret engine. For the other ones you mention you can either use static secrets via k/v or you can switch to using dynamic secrets (the MongoDB Atlas and MySQL database secret engines).
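For the RDS MySQL case, a hedged sketch of the database secrets engine setup (the connection name, role name, host, and credentials are all placeholders):

```shell
# Enable the database secrets engine
vault secrets enable database

# Point it at the RDS MySQL instance (placeholder host and credentials)
vault write database/config/my-rds-mysql \
    plugin_name=mysql-rds-database-plugin \
    connection_url="{{username}}:{{password}}@tcp(my-rds-host:3306)/" \
    allowed_roles="my-app-role" \
    username="vault-admin" \
    password="vault-admin-password"

# Define how per-app users are created and how long they live
vault write database/roles/my-app-role \
    db_name=my-rds-mysql \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl=1h \
    max_ttl=24h

# Apps then request short-lived credentials on demand
vault read database/creds/my-app-role
```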

The part that I’m not seeing is how I configure vault to use IAM authorization and storage in ASM, with kv as the selected back end.

I wish there were a definitive ‘Vault reference manual’ in print version, that would be sweet. I have a feeling that there is detailed documentation SOMEWHERE on the vault website, but I haven’t found it.



You can find the documentation for IAM authorisation here: AWS - Auth Methods | Vault by HashiCorp
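A minimal sketch of what that looks like on the CLI (the role name, account ID, ARN, and policy are placeholders):

```shell
# Enable the AWS auth method and bind a Vault role to an IAM principal
vault auth enable aws

vault write auth/aws/role/my-app \
    auth_type=iam \
    bound_iam_principal_arn=arn:aws:iam::123456789012:role/my-app-role \
    token_policies=my-app-policy \
    token_ttl=1h

# From an EC2 instance or pod running as that IAM identity:
vault login -method=aws role=my-app
```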

ASM isn’t relevant as you are using Vault for your secret storage - in effect it is a “competitor” service.