Vault on Kubernetes with auto-unseal and full self-recovery

Hi folks!

Before anything, I'd like to state that I have done a lot of Google research, including on this forum, and I still have a few questions left that I'd appreciate some help with.

So, I just "inherited" a Vault server configuration in some k8s clusters. However, the person who set it up didn't follow some good practices, so I'm working on making it more reliable (an HA configuration, for instance) and especially on reducing the manual intervention needed to recover it when it restarts (as it eventually will).

Basically, what we have is:

  1. A Vault installation using its Helm chart (a cloned repository, not added with helm repo add)
  2. MySQL as the storage backend
  3. Classic Shamir unseal keys
  4. Default standalone mode

With this configuration we have a few issues:

  1. No HA
  2. No auto-unseal
  3. After every restart we need to manually execute:
   vault write auth/kubernetes/config \
   token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
   kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
   kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
   disable_iss_validation=true

Otherwise the Kubernetes authentication won't work.

Having to run this manually is very frustrating, and it keeps me awake at night.

My plan is to do the following, in this order:

  1. Migrate from Shamir keys to auto-unseal with AWS KMS (any references would be highly appreciated; there is a sketch of what I have in mind right after this list)
  2. Enable HA
  3. Somehow automate that auth/kubernetes/config command
  4. Use the official Helm chart instead of the cloned copy
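
For item 1, from what I've read so far, the seal configuration would go into the chart's values roughly like this (a sketch with placeholder storage details, region, and KMS key; I still need to validate the exact layout against the chart version we use):

    # values.yaml fragment (sketch): existing listener/storage config plus the awskms seal stanza.
    # The Vault pods also need AWS credentials with kms:Encrypt, kms:Decrypt and kms:DescribeKey on the key.
    server:
      standalone:
        enabled: true
        config: |
          ui = true
          listener "tcp" {
            address     = "[::]:8200"
            tls_disable = "true"
          }
          storage "mysql" {
            address  = "mysql.example.internal:3306"   # placeholder
            username = "vault"
            password = "change-me"
            database = "vault"
          }
          seal "awskms" {
            region     = "us-east-1"                   # placeholder
            kms_key_id = "alias/vault-unseal"          # placeholder
          }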

So my questions are about:

  1. Any tips and tricks?

  2. Other than adding the HA flag to the MySQL storage configuration, what else should I pay attention to in order to enable HA?

  3. This is the million dollar question: how? I've seen some things about using the Helm postStart option but didn't fully get it; some specific references would be super appreciated.

  4. It seems simple, even if I have to uninstall it entirely first. Would it work? Any tips?

Thank you folks!!


Upgrade to 1.8.5 and use integrated storage. There is no gain in using an external MySQL server.
No idea why you have to redo the Kubernetes auth setup after every restart; that makes no sense, so look into it. Don't automate this, fix the underlying issue.

Are you confusing HA and DR? HA is just having enough nodes to form a Raft cluster (minimum 3, recommended 5). Add a load balancer in front as ingress and you have HA.
DR is an Enterprise feature where you set up a second, dark cluster of nodes that duplicates/streams every update from the primary, including leases and tokens, and can be "promoted" to primary if the primary cluster goes down.
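
Back on the HA side, the overrides you need in the chart are small. A sketch of the values.yaml fragment (keys per the hashicorp/vault Helm chart; verify against the chart version you deploy):

    # values.yaml fragment (sketch)
    server:
      ha:
        enabled: true
        replicas: 3        # 5 is the recommended production size
        raft:
          enabled: true    # integrated storage; the chart runs Vault as a StatefulSet with a volume per node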

There is no reason to override any of that. Leave the Helm chart itself alone; you only want to override the minimum of what you need with a values.yaml.

Depends on how much downtime you can afford. If you don't have Vault in a Terraform plan, that's the way to go up front; management becomes much easier once it's under a tfstate. If you're using integrated storage on 1.8, you can back up and restore the Vault data easily with snapshot save/restore.
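
As a sketch of what that looks like once you are on integrated storage (run against the active node with a suitably privileged token):

    # Take a snapshot of the entire cluster state
    vault operator raft snapshot save vault-backup.snap

    # Restore it later, e.g. into a rebuilt cluster
    vault operator raft snapshot restore vault-backup.snap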

Hi Aram, thank you so much for your response.
I have a few questions about what you have said:

Digging a bit more, I now see this ClusterRoleBinding being created:

I didn't create it; could it somehow be related to this issue? If not, what is it for?

Still about this ClusterRoleBinding: I saw here that it creates a ServiceAccount and then applies the ClusterRoleBinding to it. I'm confused; aren't those two different Kubernetes objects?

Is this related to the migration from Shamir keys to KMS I mentioned previously? Suppose I need to keep MySQL; would it still make sense to migrate to KMS?

I do mean HA, as in having multiple instances of my Vault server. For now it is a standalone installation, and I want to tackle that soon after the auto-unseal configuration with KMS. Does it make sense to pursue this path?

I agree!!! Upgrading with the cloned Helm charts is really painful.

I think it is safe to say I can afford some downtime. I have a few lower environments where I can test it as many times as I want.

Also agreed about Terraform. We just migrated most of the manual stuff to the Terraform helm_release resource and it is already saving us a lot of effort; migrating Vault will be the last mile now.

Thank you for your answers!!

My recommended steps, in order:

  1. Back up your data (take snapshots as well as a MySQL dump)
  2. Upgrade to 1.8 (steps 1, 2 and 3 can be combined into a single step)
  3. Switch to the proper Helm chart.
  4. Use values.yaml to set the node count to 3 (5 would be better).
  5. Migrate your storage to integrated storage (Raft); see the migration sketch after this list.
  6. Verify your ingress (SSL passthrough) setup.
  7. Fix your auth setup.
  8. Implement auto-unseal by migrating your key to KMS.
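
For step 5, the storage migration itself is an offline operation. A rough sketch, with placeholder MySQL and Raft details you would replace with your own:

    # migrate.hcl (sketch): source and destination for `vault operator migrate`
    storage_source "mysql" {
      address  = "mysql.example.internal:3306"
      username = "vault"
      password = "change-me"
      database = "vault"
    }

    storage_destination "raft" {
      path    = "/vault/data"
      node_id = "vault-0"
    }

    cluster_addr = "https://vault-0.vault-internal:8201"

Then, with Vault stopped:

    vault operator migrate -config=migrate.hcl

Afterwards start the first node, unseal it, and join the remaining nodes to the Raft cluster.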

Most likely you're binding to the individual node IP addresses rather than the ingress IP, which breaks every time your node gets a new IP address. The auth method config shouldn't ever need to change.

Two different things. You're using MySQL as your "data store". The data store is encrypted with a "master" key, and right now that master key is protected by Shamir key shares. You can use auto-unseal with KMS, which moves the key protection off-site and auto-unseals your cluster. It sounds like you have a small install; there is no reason to be using MySQL, and in fact it's adding latency and possibly timeout issues.
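
As for the mechanics of the seal migration itself, once the seal "awskms" stanza is in the config of your already-initialized server and Vault has been restarted, it is roughly (a sketch):

    # Repeat until the existing Shamir key threshold is met
    vault operator unseal -migrate

Once the threshold is reached, the old Shamir shares become recovery keys and Vault auto-unseals via KMS on every subsequent restart.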

You can do it either way, but I highly, HIGHLY recommend getting your cluster in order before adding complexity. Running a single node, especially in k8s, is just asking for corruption and trouble, AND you have external storage, which is even worse.

Hi Aram, thank you again for your answers, it is very valuable.

I just executed steps 1 to 3 safely; it was easier than I thought, although I did it in a sandbox first.

I’m still facing the authentication failure with this message:

{"@level":"error","@message":"login unauthorized due to: lookup failed: service account unauthorized; this could mean it has been deleted or recreated with a new token","@module":"auth.kubernetes.auth_kubernetes_a00d68c5","@timestamp":"2021-10-26T17:23:58.795572Z"}

Could it be related to that ClusterRoleBinding I didn't create?
I'd like to tackle this issue before proceeding; any suggestions?

Thank you!!

Sounds like your Kubernetes auth is not set up correctly. Sadly, I'm not the right person to answer that question as I'm just starting with k8s myself.

We figured out what was going on.
Basically, we didn't have the ServiceAccount and the related ClusterRoleBinding to system:auth-delegator from which we could get the reviewer token.

Looking back, I can see a reference that showed how to do it; however, I was overwhelmed by lots of official documentation, some of it telling me to get this token from a file mounted inside the Vault server, which changed on every restart, hence the need to run the command again.
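
For anyone finding this later, roughly what we ended up creating looks like this (names are illustrative, and this assumes a pre-1.24 cluster where a token Secret is still auto-created for the ServiceAccount):

    # ServiceAccount Vault uses to call the TokenReview API
    kubectl -n vault create serviceaccount vault-auth

    # Allow it to review tokens cluster-wide
    kubectl create clusterrolebinding vault-auth-delegator \
      --clusterrole=system:auth-delegator \
      --serviceaccount=vault:vault-auth

    # Extract its long-lived token and configure the auth method with it, once
    SECRET_NAME=$(kubectl -n vault get serviceaccount vault-auth -o jsonpath='{.secrets[0].name}')
    TOKEN_REVIEWER_JWT=$(kubectl -n vault get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 --decode)

    # Same host/CA arguments as the command at the top of the thread (run it where those resolve, e.g. inside the Vault pod)
    vault write auth/kubernetes/config \
      token_reviewer_jwt="$TOKEN_REVIEWER_JWT" \
      kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
      kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt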

In short, we did the following:

  1. Used the proper Helm install
  2. Kept MySQL
  3. Enabled auto-unseal with AWS KMS instead of Shamir keys
  4. Created the ServiceAccount and ClusterRoleBinding to configure Kubernetes authentication properly
  5. Created an ingress just for the Vault UI as a convenience, connected to our cert-manager so we have a secure connection

We are now experimenting with HA in the lower environments, and we plan to move it to prod soon to remove the single point of failure.

Anyway, we now have a MUCH more stable and easier-to-maintain setup.

Thank you!

Depending on your use case and the number of Vault clients, my suggestions are:
A) Switch to integrated storage or Consul. Vault is extremely sensitive to latency and IO, and using an RDBMS doesn't buy you anything but headaches under even a small amount of load.
B) Do not terminate your connections at the ingress; pass your SSL connections through to the nodes. It's a pain with the SANs and certs, but it's well worth it for troubleshooting.
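
For (B), with ingress-nginx that is roughly one annotation, provided the controller was started with --enable-ssl-passthrough (a sketch; host and service names are placeholders, and with passthrough the routing happens on SNI rather than paths):

    # ingress.yaml (sketch): TLS is passed through untouched to the Vault pods
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: vault
      namespace: vault
      annotations:
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    spec:
      ingressClassName: nginx
      rules:
        - host: vault.example.internal
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: vault
                    port:
                      number: 8200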
