Before anything, I’d like to state that I have done a lot of Google research, including on this forum, and I still have a few questions left that I’d appreciate some help with.
So, I just “inherited” a Vault server configuration in some k8s clusters. However, the person who set it up didn’t follow some good practices, so I’m working on making it more reliable (HA configuration, for instance) and especially on reducing the manual intervention needed to recover it when it restarts (as it eventually will).
Basically, what we have is:
A Vault installation using its helm chart (cloned repository, not added with helm repo add)
An auth/kubernetes/config command that has to be re-executed manually after every restart; otherwise the Kubernetes authentication won’t work.
It is very frustrating to have to execute it manually, and it keeps me awake at night.
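For reference, the command that has to be re-run is basically the standard one from the Kubernetes auth method docs, something along these lines (the paths are the defaults; mine may differ slightly):

```shell
# Re-run against the Vault pod after every restart
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```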
My plan is to do the following, in this order:
Migrate from Shamir keys to auto-unseal with AWS KMS (any references would be highly appreciated)
Enable HA
Somehow automate that auth/kubernetes/config command line
Use the official (non-cloned) Helm chart
So my questions are about:
Any tips and tricks?
Other than adding the HA flag to the MySQL database connection, what else should I pay attention to in order to enable HA?
This is the million-dollar question: how? I’ve seen some things about using the postStart Helm option but didn’t fully get it; some specific references would be super appreciated.
It seems simple, even if I have to uninstall it entirely first. Would it work? Any tips?
Upgrade to 1.8.5 and use integrated storage. There is no gain in using an external MySQL server.
No idea why you have to reset the Kubernetes auth setup; that makes no sense, so look into it. Don’t automate this, fix the underlying issue.
Are you confusing HA and DR? HA is just having enough nodes that they can form a raft cluster (min 3, recommended 5). Add a load balancer in front as ingress and you have HA.
DR is an Enterprise feature where you set up a second cluster of nodes (kept dark) that duplicates/streams any updates from the primary, including leases and tokens, and can be “promoted” to primary if the primary cluster goes down.
There is no reason to override any of that. Leave the chart’s defaults alone; you only want to override the minimum of what you need with a values.yaml.
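For example, with the official chart an HA raft setup is only a few lines in values.yaml (a rough sketch; beyond the replica count, the chart defaults are fine):

```yaml
# values.yaml -- minimal override for the official hashicorp/vault chart
server:
  ha:
    enabled: true
    replicas: 3      # 5 is better if you can afford it
    raft:
      enabled: true  # integrated (raft) storage instead of MySQL
```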
Depends on how much downtime you can afford. If you don’t have Vault in a Terraform plan, that’s the way to go up front; management becomes much easier once it’s under a tfstate. If you’re using integrated storage and 1.8, you can back up and restore the Vault data easily with snapshot save/restore.
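The snapshot part is just two commands once you’re on integrated storage (the file name is whatever you like):

```shell
# Take a snapshot of the integrated (raft) storage
vault operator raft snapshot save vault-backup.snap

# Restore it onto a cluster later if needed
vault operator raft snapshot restore vault-backup.snap
```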
Hi Aram, thank you so much for your response.
I have a few questions from what you have said:
Digging a bit more now, I saw this ClusterRoleBinding creation:
I didn’t create it; could it somehow be related to this issue? If not, what is it for?
Still about this ClusterRoleBinding: I saw here that it creates a ServiceAccount and then applies the ClusterRoleBinding to it. I’m confused; aren’t those two different Kubernetes objects?
Is it related to the Shamir keys migration to KMS I mentioned previously? Suppose I need to keep MySQL, would it still make sense to migrate to KMS?
I do mean HA as in having multiple instances of my Vault server. For now it is a standalone installation, and I want to tackle that issue soon after the auto-unseal configuration with KMS. Does it make sense to pursue this path?
I agree!!! Upgrading with the cloned Helm charts is really painful.
I think it is safe to say I can afford some downtime. I have a few lower environments where I can test it as many times as I want.
Also agree about Terraform. I just migrated most of the manual stuff to the helm_release Terraform resource, and it is already saving us a lot of effort; migrating Vault will be the last mile now.
1. Backup your data (use snapshots as well as a MySQL dump)
2. Upgrade to 1.8 (1 & 2 & 3 can be combined into a single step)
3. Implement the proper helm chart.
4. Use values.yaml to set the node count to 3 (5 would be better).
5. Migrate your storage to integrated storage (raft); see the sketch after this list.
6. Verify your ingress (SSL pass-through) setup.
7. Fix your auth setup.
8. Implement auto-unseal by migrating your master key to KMS.
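For step 5, a rough sketch: the migration is done offline with vault operator migrate and a small config file (all connection details below are placeholders):

```hcl
# migrate.hcl -- placeholders only, adjust to your environment
storage_source "mysql" {
  address  = "mysql.example.com:3306"
  username = "vault"
  password = "..."
  database = "vault"
}

storage_destination "raft" {
  path    = "/vault/data"
  node_id = "vault-0"
}

# required when the destination is raft
cluster_addr = "https://vault-0.vault-internal:8201"
```

Stop Vault, run vault operator migrate -config=migrate.hcl, then bring the nodes up against the raft configuration.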
Most likely you’re binding to the individual node IP addresses rather than the ingress IP, which breaks every time your node gets a new IP address. The auth method config shouldn’t ever need to change.
Two different things. You’re using MySQL as your “data store”. The data store is encrypted with a “master” key. That master key is currently protected with Shamir key shares. You can use auto-unseal with KMS, which will move the key protection off-site and auto-unseal your cluster. It sounds like you have a small install; there is no reason to be using MySQL, and in fact it’s adding latency and possibly timeout issues.
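The seal change itself is a config stanza plus a one-time migration; roughly (region and key id are placeholders):

```hcl
# added to the Vault server config (in the chart, the config block in values.yaml)
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal"
}
```

Restart with that stanza in place, then run vault operator unseal -migrate with your existing Shamir keys (up to the threshold) and the cluster switches over to auto-unseal.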
You can do it either way, but I highly, HIGHLY recommend getting your cluster in order before adding complexity. Running a single node, especially in k8s, is just asking for corruption and trouble, AND you have external storage, which is even worse.
Hi Aram, thank you again for your answers, it is very valuable.
I just executed steps 1 to 3 safely; it was easier than I thought, although I did it in a sandbox first.
I’m still facing the authentication failure with this message:
{"@level":"error","@message":"login unauthorized due to: lookup failed: service account unauthorized; this could mean it has been deleted or recreated with a new token","@module":"auth.kubernetes.auth_kubernetes_a00d68c5","@timestamp":"2021-10-26T17:23:58.795572Z"}
Can it be related to that ClusterRoleBinding I didn’t create?
I’d like to tackle this issue before proceeding; any suggestions?
We figured out what was going on.
Basically, we didn’t have the ServiceAccount and the related ClusterRoleBinding to system:auth-delegator from which we could get the token.
Looking back, I can see a reference that showed how to do it; however, I was overwhelmed by lots of official documentation, some of which told me to get this token from a file mounted inside the Vault server, which changed on every restart, hence the need to execute the command again.
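For anyone who lands here later, the objects we created look roughly like this (names and namespace are illustrative, not our real ones):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth          # illustrative name
  namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault-auth
    namespace: vault
```

The token of that ServiceAccount is what goes into token_reviewer_jwt, instead of the pod-mounted token that changes on every restart.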
In short, we did the following:
Used the proper Helm install
Kept MySQL
Auto-unseal with AWS KMS instead of Shamir keys
Created the ServiceAccount and ClusterRoleBinding to configure the Kubernetes authentication properly
Created an ingress just for the Vault UI as a convenience, connected with our cert-manager so we have a secure connection
We are now experimenting with HA in the lower environments, and we plan to move it to prod soon to avoid the single point of failure.
Anyway, now we have a MUCH more stable and easier to maintain setup.
Depending on your use case and the number of Vault clients, my suggestions are:
A) Switch to integrated storage or Consul. Vault is extremely sensitive to latency and I/O, and using an RDBMS doesn’t buy you anything but headaches under even a small bit of load.
B) Do not terminate your connections at the ingress; pass your SSL connections through to the nodes. It’s a pain with the SANs and certs, but it’s well worth it for troubleshooting.
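If you are on ingress-nginx, for instance, pass-through is an annotation plus a controller flag; a rough sketch (host and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault
  annotations:
    # the controller must run with --enable-ssl-passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: vault.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault
                port:
                  number: 8200
```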