I’ve been trial-and-erroring my way through spinning up a Vault/Consul cluster, and it’s about ready to go live with my user base.
The only remaining problem is that I’m having a very difficult time getting everything to work with the commercially issued TLS cert I’m trying to use.
If I use the (wildcard) cert as it is provided, I can get the frontend working with no cert errors in the browser.
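For reference, the TLS listener in my defaults.hcl is wired up roughly like this (shown as the command I used to add it; the cert/key paths are placeholders for wherever the commercial wildcard files actually live):

```sh
# Sketch of the TLS listener stanza appended to defaults.hcl; the file
# location and cert/key paths are placeholders, not my real layout.
sudo tee -a /etc/vault.d/defaults.hcl <<'EOF'
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault.d/tls/wildcard.pem"
  tls_key_file  = "/etc/vault.d/tls/wildcard-key.pem"
}
EOF
```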
However, the backend is a bit of a mess. Any CLI Vault command fails with an error stating there are no IP SANs in my cert for 127.0.0.1. I can get around this by exporting VAULT_ADDR to point at the FQDN and adding an entry to the server’s hosts file pairing the FQDN with 0.0.0.0. Since the exported variable only lasts for the shell session, it doesn’t persist after a reboot. I also tried switching any references to port 8200 in defaults.hcl over to the FQDN.
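Concretely, the workaround looks like this (vault.site.com stands in for my real domain, and I believe Vault wants the scheme included in VAULT_ADDR):

```sh
# Point the CLI at the name on the cert instead of the default
# https://127.0.0.1:8200:
export VAULT_ADDR='https://vault.site.com:8200'

# Pair the FQDN with 0.0.0.0 in the hosts file so it resolves locally:
echo '0.0.0.0 vault.site.com' | sudo tee -a /etc/hosts

# Presumably the export would survive reboots if dropped into a profile
# script, e.g.:
echo "export VAULT_ADDR='https://vault.site.com:8200'" | sudo tee /etc/profile.d/vault.sh
```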
So, that’s a bit of a kludge, but that resolves my CLI errors (at least until that machine is rebooted).
However, even after all of that, API access doesn’t work unless I give the API user my root CA cert and they pass it explicitly as part of their curl command. So far, that’s the only way I’ve managed to fix that problem.
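In other words, the only thing that works for API consumers right now is something like this (URL and cert path are stand-ins):

```sh
# Hand curl the root CA cert explicitly via --cacert; this is the only
# way API calls succeed for me at the moment:
curl --cacert ./vault-root-ca.pem \
  https://vault.site.com:8200/v1/sys/health
```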
All of this seems to trace back to the commercial cert. From what I’ve read, it seems like Vault wants you to use a self-signed cert with SANs for vault.service.consul and 127.0.0.1, and then distribute that cert to all users to install on their machines. Without installing the cert, users obviously get cert errors in the browser, which I need to avoid. I tried using a Terraform script to insert my own SANs into the existing cert, but as you might imagine, that essentially just turned it into a self-signed cert.
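For what it’s worth, my understanding of the self-signed route the docs seem to expect is something like this (needs OpenSSL 1.1.1+ for -addext; file names are arbitrary):

```sh
# Generate a self-signed cert whose SANs cover the Consul service name
# and the loopback address, which is what the CLI is checking for:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout vault-key.pem -out vault-cert.pem \
  -subj '/CN=vault.service.consul' \
  -addext 'subjectAltName=DNS:vault.service.consul,IP:127.0.0.1'
```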
Am I the only one who kind of needs to use a commercial cert? Is there a way to install one properly so that all methods of access work?