Hello, just wondering why the Terraform providers don’t cover end-to-end provisioning of Vault and Consul in HA clusters. Specifically, there’s no coverage of the APIs for bootstrapping Consul and setting agent tokens, or for initialising and unsealing Vault. These resources would be trivial to add to the providers. Are they omitted because they would pollute tfstate files with sensitive data?
Vault init is very secret-sensitive, since it outputs your keys. Normally you use Terraform to stand up your infrastructure, not to initialize the applications, especially secret-sensitive ones.
That initialisation produces the unseal keys and root token, which are very sensitive; I get that. But I can’t stand up all my infrastructure without seeding gossip encryption keys for Consul and populating both Vault and Consul with policies. What is the expectation? Should this be a multi-stage process: stand up the infrastructure, notify humans who unseal, seed the gossip keys and store them away, then continue provisioning policies from another Terraform profile? There are resources for manipulating policies in Vault and Consul, so there is everything you need to stand it up, then a gap for initialisation, then resources again for configuring policies and tokens. Tbh, I’d rather initialise in Terraform, deliver the keys to individuals’ password managers, unseal automatically, finish provisioning, and let the CD runners forget all the tokens. It seems like there are more provisions for this approach with cloud providers like GCP, Azure and AWS.
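To be concrete about the seeding step: the gossip key itself doesn’t even need Consul’s CLI on the runner. Assuming a current Consul (which uses 32-byte gossip keys), a minimal sketch of generating one:

```shell
# `consul keygen` just emits 32 random bytes, base64-encoded, so
# openssl produces an identically-shaped key without Consul installed.
GOSSIP_KEY=$(openssl rand -base64 32)

# 32 raw bytes -> 44 base64 characters.
echo "gossip key length: ${#GOSSIP_KEY}"
```

A pipeline could feed that value into the Consul agent config (or `consul keyring -install`) and then hand it off to whichever secret store the humans keep.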
How many times do you imagine you’ll be re-initializing your Vault cluster? This should be a once-in-a-lifetime operation, so there is no point in automating it. Yes, in every enterprise I’ve been a part of, you trigger an action for a human to do the init and secure the environment.
If you’re using a lab env, you can set your root key to whatever you want, set your unseal key count to 1 and throw it away; every restart is a new Vault.
We deploy our system in its entirety for many clients, each with their own instances of Vault and Consul. We do blue/green deployments too: if we’re rolling the Consul version forward, it’s done with a new deployment, and the old one is blown away. That leaves Vault with no data, back in the uninitialized state. There are some challenges with the whole nuke-and-pave approach, but it gives us advantages too: we don’t have to know the versions or states of any of the systems, and new deployments and updates are handled the same way. The way we work these systems, it would help us considerably if we could automate this. I’ve got most of it working in the Vault provider already on my own fork. I’m trying to gauge the reasoning behind its omission. I suspect I’m implementing something that was left out on purpose because it’s considered an insecure way of doing it? I’d like to know what those insecurities are and whether there’s a way to manage them.
This is a very good summary. Right or wrong, HashiCorp has designed Vault around keeping your keys fully safe. You can obviously just capture the output of the init and store it, mail it, etc., but that breaks the “circle of trust” paradigm they have designed. Can you do it? Sure… as long as you accept that you are breaking it and doing something unsafe.
One thought is to capture the PGP public keys of the clients and use them to encrypt the key shares for each individual user, rather than saving the keys in plain text. There is no easy way to do this as IaC without writing custom scripts, but it is doable.
There are facilities for this in the API and CLI; that was how I was looking at delivering the unseal keys. Essentially, I want to initialize Vault in a pipeline and deliver the keys to users.
I’m trying to understand the reasoning behind the omissions in a little more depth. I can get unseal keys delivered to our users encrypted with their public keys stored in Keybase; that’s not really the issue, since the initialize operation can do that lookup and encryption for you: operator init - Command | Vault | HashiCorp Developer.
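For reference, the invocation I have in mind looks roughly like this. The Keybase usernames are placeholders, and it’s written as a dry-run preview since the real call needs a live, uninitialised server:

```shell
# Build the init command a CD runner would execute. With -pgp-keys,
# Vault fetches each user's Keybase public key and returns each unseal
# share already encrypted to its owner; -root-token-pgp-key does the
# same for the root token, so plaintext secrets never hit CI logs.
set -- vault operator init \
  -key-shares=3 \
  -key-threshold=2 \
  -pgp-keys="keybase:alice,keybase:bob,keybase:carol" \
  -root-token-pgp-key="keybase:alice" \
  -format=json

# Preview only; a real pipeline would run "$@" against the new cluster.
echo "would run: $*"
```

Each user then decrypts their own share locally; the runner never holds a usable quorum in plaintext.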
I don’t see how making a CD pipeline deliver keys to a set of users to establish a circle of trust breaks the paradigm. I can see that the API returns sensitive information from the init operation: the root token and plaintext unseal keys. However, no matter how you do it, someone has to initialize Vault from somewhere. There’s always a computer that runs the init, and it’s always a single point that can be compromised and used to steal the keys to the kingdom; whether that’s a user’s computer or a CD runner, the risk profile is the same. If you can get at that machine, you can compromise Vault.
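To make that risk profile concrete, this is roughly the shape of the JSON an init call returns and that some machine, laptop or runner, has to parse (the values here are fake placeholders; assumes jq is available):

```shell
# Fake, placeholder-valued stand-in for a `vault operator init
# -format=json` response. On a real run, every field here is
# keys-to-the-kingdom material.
cat > /tmp/init.json <<'EOF'
{
  "unseal_keys_b64": ["fake-share-1", "fake-share-2", "fake-share-3"],
  "unseal_shares": 3,
  "unseal_threshold": 2,
  "root_token": "hvs.fake-root-token"
}
EOF

# Whatever parses this can unseal Vault and act as root, which is the
# single point of compromise regardless of where the parsing happens.
THRESHOLD=$(jq -r '.unseal_threshold' /tmp/init.json)
echo "shares needed to unseal: $THRESHOLD"
```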
I think these provisions should be given to users along with the correct information on the risk profiles involved, rather than being omitted on purpose, leaving anyone who wants to automate this to DIY and potentially make mistakes.
Also, this is already in the REST API, the CLI and the Go client library for Vault itself. This is not about what provisions Vault gives its users; this is specific to the Terraform provider. I’m asking why the provision is omitted here when it is made everywhere else.
That manual action you’re referring to is called `terraform apply`, not `vault operator init`. Are there other one-time manual recurring actions that you’d maybe like to add to that list? Such as, I don’t know, adding nodes to the cluster or thereabouts? You know, just a little manual push, 'cause you’re only going to do it once, after all!