Vault policy for terraform plan

We are configuring everything (except the secrets themselves) in our Vault cluster using Terraform in a CI pipeline (GitLab).
In our workflow, developers/ops teams are allowed to run a terraform plan, and we (admins) merge and apply their changes if they meet certain requirements.
As they may have access to the Vault credentials used for terraform plan, we would like to start using a read-only Vault policy for that call.
Do any of you have a Vault policy that would be read-only for “normal” configuration paths like sys/, auth/, etc. and could be used for a terraform plan call?
Or a documented/maintained list of paths somewhere that would need to be allowed for a terraform plan to be able to configure Vault (without being able to read/generate secrets)?

Thanks,
Balazs

That’s going to depend on what you’re expecting your customers to be able to provision and whether you’re using namespaces or not.

I’ve been casually working on a document to help guide policy design: GitHub - jeffsanicola/vault-policy-guide: A brief guide to help illustrate some of the more nuanced aspects of HashiCorp Vault's policies.

I plan to eventually add some full policy examples.

I’d be interested in your feedback, if you’d be willing to give it a look over.

But in general, I like to get pretty granular with pathing in my policies. With auth method paths, for instance, something like the following would allow creating/maintaining roles without granting the rights to generate AppRole secrets:

path "auth/+/roles/" {
  capabilities = ["list"]
}

path "auth/+/role/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path = "auth/+/role/+/role-id" {
  capabilities = ["deny"]
}

path = "auth/+/role/+/secret-id" {
  capabilities = ["deny"]
}

path = "auth/+/role/+/custom-secret-id" {
  capabilities = ["deny"]
}

Note that if you have a legit need to generate approle secrets in your pipeline you’ll have to carve out a specific exception for that in here.
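For instance, since Vault gives exact-path rules priority over + wildcard rules, one way such a carve-out could look (a sketch only; the "approle" mount and "ci-runner" role name are placeholders for your own) is:

# Hypothetical exception: allow generating secret IDs for one specific
# role on one specific mount. Being an exact path, this rule outranks
# the wildcard deny on auth/+/role/+/secret-id above.
path "auth/approle/role/ci-runner/secret-id" {
  capabilities = ["create", "update"]
}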

The API Guide is always my go-to source for figuring out pathing and capabilities. I’d suggest using it as well, if you aren’t already, to build well-constructed policies.


There’s a significant security risk in the above policy.

If I possess any AppRole role-id and secret-id, I can now come along and write to auth/approle/role/role-name and modify whatever AppRole I do have access to so that it grants any policy that already exists in Vault - including ones with higher privileges.

Are you trying to do this with an Enterprise license or with OSS? With Enterprise, the idea of namespaces makes this a lot safer and easier.

I’d also say that there is no need for the user to run a plan themselves: the pipeline can automatically run the plan on commit to the branch (the user will have access to its output), then after approval it merges the branch and runs the actual apply.


This policy is intended to be attached to a “management” role (i.e., a role that creates, updates, and deletes the AppRole roles). I would find it reasonable that, if you need to customize the AppRole roles, the management process has the rights to do so.

If this policy were to be attached to any arbitrary AppRole role, then yes, I 100% agree with your statement and the policy should be adjusted to prevent that type of abuse.
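One possible adjustment along those lines (a sketch using Vault’s denied_parameters feature; the exact parameter names to block may vary by Vault version) would be to keep role management while blocking changes to which policies a role grants:

path "auth/+/role/*" {
  capabilities = ["create", "read", "update", "delete", "list"]

  # An empty list denies every value for the parameter, so the role
  # holder cannot attach different (e.g. higher-privilege) policies.
  denied_parameters = {
    "token_policies" = []
    "policies"       = []
  }
}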

It really doesn’t; namespaces are a solution to a different problem entirely. Namespaces help you safely delegate unrestricted control of a subtree of Vault paths. This question is about how to safely delegate restricted control: letting users propose changes and obtain a Terraform plan to validate the correctness of the proposal via self-service, but then holding application of the changes for admin review.

This misses the point - from a security perspective, the pipeline running a plan on unreviewed Terraform code written by the user is usually equivalent to giving the user access to the credentials the pipeline uses to run the plan, as there are many exploits a user who can submit arbitrary Terraform code to a pipeline can use to have it deliver up the credentials it holds.
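To illustrate (a sketch using the hashicorp/external provider; names are made up): terraform plan executes data sources, so attacker-submitted configuration can run arbitrary commands on the runner and surface its credentials in the plan output:

# Runs during "terraform plan" and emits the pipeline's Vault token
# as a data source result.
data "external" "exfiltrate" {
  program = ["sh", "-c", "printf '{\"token\":\"%s\"}' \"$VAULT_TOKEN\""]
}

# Output changes are printed in the plan, exposing the token to
# anyone who can read the pipeline logs.
output "loot" {
  value = data.external.exfiltrate.result
}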

I believe @Cajga is already on the right path, by looking to run the terraform plan on user submissions using separate low privilege credentials.

However, the request for a documented/maintained list of required paths cannot be simply answered, as what paths you need to allow depends on which secrets engines and auth methods you have mounted in Vault, and which features you want to configure using Terraform.

But the context of this question is a low-privilege, read-only policy, to be used for terraform plan, to sandbox the planning of untrusted Terraform input.


First of all, thank you all for the answers.

@jeffsanicola I checked your Vault policy guide and I do believe it is useful for people like me (just starting to use Vault policies). I especially like the example about identity metadata and templated policies. The KV v2 policy example was also useful for me.

@aram We are using the OSS version, as an Enterprise license is out of our budget. But I fully agree with @maxb that namespaces would not help here. @maxb is also totally right that if a user/developer can open a merge request to a repo and a merge pipeline kicks in on their change, that basically means they have access to all the information the MR pipeline has (even if the credentials are “MASKED” in the logs, they can mail the credentials to themselves). We want to review the terraform plan output of the MR (after static analysis of the Terraform code) to decide whether to merge it.

@maxb I must say we are just starting to use Vault extensively/properly, so I am not sure if this makes sense: we plan to configure everything with Terraform that does not deal with the static secrets themselves (so we do not have the secrets in Git) - policies, auth methods (LDAP and AppRole now, with OIDC probably joining a bit later), and secrets engine configuration (KV v2, PKI, transit, maybe database and AWS).

Now that I have learned more about the topic, I understand that unfortunately there is no simple solution for this, and I find the problem pretty under-documented. I think it would be nice to have a collection of “safe policies” for terraform plan (ones that only allow reading the config, not writing it or generating/reading secrets) - one for the generic config and one for each auth method/secrets engine - which would enable a least-privilege pattern for terraform plan.

Maybe we can contribute this to the HashiCorp documentation or @jeffsanicola’s GitHub page.

If you guys have any recommendations or hints for any part of the config then please let me/us know.

Declarative configuration makes sense - and Terraform is the only existing solution for declarative configuration of Vault that I know of.

That said, Terraform has performance problems when the size of a configuration grows. I have a configuration involving 3000+ Vault ACL policies, and even though the Terraform configuration is exceedingly simple - just a single resource "vault_policy" block with a for_each over a big map of policy name to policy text - Terraform still takes 5 minutes to do a terraform plan -refresh=false. That’s 5 minutes just to compare the current contents of the Git tree with the values previously saved to Terraform state, and work out which ones need to change; and do note this is with -refresh=false - there’s no communication with the actual Vault API during the plan!
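The shape of that configuration, roughly (a sketch only; the policies/ directory layout is assumed for illustration, not taken from my actual repo):

locals {
  # Build a map of policy name => policy text from HCL files on disk.
  policies = {
    for f in fileset(path.module, "policies/*.hcl") :
    trimsuffix(basename(f), ".hcl") => file("${path.module}/${f}")
  }
}

# One resource block manages every policy in the map.
resource "vault_policy" "all" {
  for_each = local.policies
  name     = each.key
  policy   = each.value
}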

In short: Terraform can be useful, but be prepared to migrate some of your configuration out of Terraform if you need to scale.

Also, terraform-provider-vault has some frustrating bugs. In a previous release, just changing the description field of a KV resulted in the entire KV and all its data being destroyed, and a new KV mounted in its place. That got fixed… but last week I ran into the same issue with an auth method instead!

I think Terraform still has value for configuring Vault, but you have to be prepared for it to be a learning experience with some rough edges.

The problem is, the exact policy would differ depending on which secret engines and auth methods you have mounted, using which names, and which features of Vault you are actually configuring. Therefore I believe the better way is to teach people how to write an appropriate policy for their environment.

My advice is to get an environment set up, with a strict policy applied to your terraform plans, and improve it via testing, trial and error. Run terraform plan. See where it fails with permission errors. Craft a suitable update to the policy.

Expand on the policy when your requirements change, and you start using more resource types.
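Purely as an illustrative starting point (every mount path here is an assumption about your environment, to be grown via the plan feedback loop above), the initial strict policy could look something like:

# Read and list ACL policies so the provider can diff them.
path "sys/policies/acl" {
  capabilities = ["read", "list"]
}

path "sys/policies/acl/*" {
  capabilities = ["read", "list"]
}

# Read the auth method and secrets engine listings.
path "sys/auth" {
  capabilities = ["read"]
}

path "sys/mounts" {
  capabilities = ["read"]
}

# Belt and braces: explicitly deny secret data on an assumed KV mount
# named "secret", even if the rules above grow over time.
path "secret/*" {
  capabilities = ["deny"]
}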

HTTP API | Vault | HashiCorp Developer will be vital, but in addition it’s helpful to know that Vault has a built-in Swagger UI, accessed by typing api into the Vault in-browser CLI or by browsing to /ui/vault/api-explorer. Note that this is adaptive: it only shows you endpoints in secrets engines and auth methods that you have mounted and have at least some permissions to.

If you use AWS, there was a subtle bug in how Vault autogenerated the Swagger spec which resulted in it displaying /aws/creds when the actual endpoint is /aws/creds/{name} - fixed in Vault 1.10; [Vault-4628] OpenAPI endpoint not expanding root alternations by VinnyHC · Pull Request #13487 · hashicorp/vault · GitHub

Essentially, don’t worry about getting the policy right first time, just make sure it’s not insecure, and rely on feedback in the form of permission denied errors to guide you to what you need to add.

Wow, this is really cool. I was not aware of this. Here they also show how to access it from the UI (I will definitely forget the URL itself :slight_smile: )

We aim to have only declarative config in our infra… We are heavy Terraform (and Puppet, Ansible, and ArgoCD for Kubernetes) users, and we exclusively use Terraform to configure our AWS accounts. So handling the Terraform part should be fine, as I think we will never scale above 100 policies. :slight_smile:
Although on the second day I ran into a provider bug (it could not delete an option from a KV v2 mount point but returned as if it had, so it always showed a diff), I managed to work around it (as the mount was still empty, I deleted and recreated it :smiley: ).

That is the plan now. The only problem I can see is that, without enough knowledge/experience, there is a risk that we will open up read access to some path that can be used to fetch sensitive information and never realize it… But well, we will do our best :slight_smile: