Yes, you can make an HA cluster from a single node: just spin up a new node and join it to the first one (a sketch of the join is below).
You’ll want at least 3 nodes.
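A minimal sketch of that join, assuming the Raft (Integrated Storage) backend and borrowing the hostnames that appear later in this thread (adjust addresses and TLS to your setup):

```shell
# Run on the new node, pointing it at an existing node's API address.
vault operator raft join http://lonvaulttest1:8200

# Unseal the new node, then confirm it shows up as a peer.
vault operator unseal
vault operator raft list-peers
```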
Having a cluster isn’t a full DR solution on its own, so make sure you’re taking snapshots and have a tested restore plan for both infrastructure failure and logical corruption (bad writes, accidental deletes, etc.).
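If you’re on the Raft (Integrated Storage) backend, a snapshot can be as simple as the sketch below (the output path is just an example):

```shell
# Take a point-in-time snapshot of the cluster's data.
vault operator raft snapshot save /backups/vault-$(date +%F).snap

# Restore it later (against a running, unsealed cluster) with:
# vault operator raft snapshot restore /backups/vault-<date>.snap
```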
A single node is still considered a “cluster”, just not a healthy one. You can always add more nodes (you should keep an odd number: 1, 3, 5, 7).
I want to expand on @mikegreen’s answer. I don’t recommend a cluster with 3 nodes; it’s actually better to have 1 node than 3. 3 doesn’t buy you HA, because if one node fails you lose the ability to hold elections, the minimum for an election being 3 healthy nodes. You want to be able to restart a single node without the whole cluster going off the rails, during upgrades for example. Go with 5 and you’ll have a little peace of mind and actual HA, with up to 2 nodes failing without the cluster either sealing itself or becoming read-only.
Sorry to be so blunt, but absolutely not. That guarantees an entire cluster failure when you lose that single node.
If you lose a single node in a 3-node cluster, you still have quorum, so one of the two remaining nodes will become the active node. At that point, yes, 5 would be better than 3, but 3 is a fine HA setup with single-node fault tolerance (whereas 5 tolerates 2 failed nodes). The standard Raft quorum math is summarized below.
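For reference, a cluster of N voting Raft nodes needs floor(N/2) + 1 of them healthy to elect a leader and commit writes:

| Cluster size | Quorum | Failures tolerated |
|---|---|---|
| 1 | 1 | 0 |
| 3 | 2 | 1 |
| 5 | 3 | 2 |
| 7 | 4 | 3 |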
Depends on what you mean by “cluster failure”. To me, a cluster that is sealed or read-only is not worth keeping up; I’d rather it fail and restart completely into an available state as quickly as possible (fail-fast theory).
Depends on how and which node you lose … and I have experienced this twice now with customers, and once myself in our 5-node Kubernetes PR cluster.
If the node you lose is the leader of a three-node cluster (leader plus 2 standby nodes), the cluster is not going to be healthy. You’ll see that the two remaining nodes either seal the cluster or go into a read-only state and never recover.
The easiest way to reproduce this is to set up a 3-node Kubernetes cluster and, while doing a bunch of writes, kill the leader node; a rough sketch follows.
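A rough sketch of that experiment, assuming Vault runs in a `vault` namespace with a KV engine mounted at `secret/` (pod name, namespace, and paths are only examples):

```shell
# Find out which pod is currently the Raft leader.
vault operator raft list-peers

# In another shell, generate continuous write load.
while true; do vault kv put secret/loadtest value="$(date +%s)"; done

# Kill the pod that is currently the leader (name is illustrative).
kubectl delete pod vault-0 -n vault

# Watch whether the remaining nodes elect a new leader or get stuck.
vault status
```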
Thanks for the reply, but I’m struggling to understand the docs.
I have a test Vault server, lonvaulttest1, which is configured with this config:
storage "file" {
path = "/var/lib/vault"
}
ui = true
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = true
}
default_lease_ttl ="768h"
max_lease_ttl = "43800h"
And I have two Vault servers not configured yet, lonvaulttest2 and lonvaulttest3.
I have populated lonvaulttest1 with junk data, but I am unsure what I need to add to the config file of all three servers to make sure that data is replicated.
Just know that the minimum recommended Raft node count is 5 (the absolute minimum is 3, which means that at 3, if one node fails, you may end up with an unavailable cluster).
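On the config question above: the file storage backend doesn’t replicate data between nodes, so for HA you’d switch to a backend that does, such as Raft (Integrated Storage). A minimal per-node sketch, reusing the paths and hostnames from the question and keeping TLS disabled as in the original config (all values are illustrative):

```hcl
# Example for lonvaulttest1; repeat on lonvaulttest2/3 with their own
# node_id and addresses.
storage "raft" {
  path    = "/var/lib/vault"
  node_id = "lonvaulttest1"

  # Each node retries joining the cluster via this address on startup.
  retry_join {
    leader_api_addr = "http://lonvaulttest1:8200"
  }
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = true
}

# Addresses the other nodes use to reach this one.
api_addr     = "http://lonvaulttest1:8200"
cluster_addr = "http://lonvaulttest1:8201"

ui = true
```

New nodes can also be joined explicitly with `vault operator raft join http://lonvaulttest1:8200`, and each node still has to be unsealed after joining.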