I am running an application (haproxy) that needs to always be available to end users. I typically have 3 instances of the application running per cluster/datacenter. I prefer to have each instance on a separate node/host, to protect against node problems affecting multiple instances. An abbreviated example of my setup looks like this:
job "loadbalancer" {
group "haproxy" {
count = 3
constraint { distinct_hosts = true }
constraint {
attribute = node.class
value = "haproxy"
}
}
task "haproxy" {
...
}
}
Now I want to add rolling updates into the mix, so that applying a change never takes the service offline. Let us say that I only have 3 nodes. If I add the update stanza to my job like this:
job "loadbalancer" {
update {
max_parallel = 1
auto_revert = true
auto_promote = true
canary = 1
}
group "haproxy" {
count = 3
constraint { distinct_hosts = true }
constraint {
attribute = node.class
value = "haproxy"
}
}
task "haproxy" {
...
}
}
… I will obviously have irreconcilable constraints that make the job unplaceable: the canary is a fourth allocation during the deployment, and distinct_hosts then requires a fourth haproxy-class node that does not exist. The scheduler reports:
Class "haproxy" filtered 3 nodes
Constraint "${node.class} = haproxy" filtered 1 node
Constraint "distinct_hosts" filtered 3 nodes
If I have one extra node in the datacenter that can run this application, then I think the problem goes away. However, I am not sure I can guarantee that extra node will always be there, and I would like to have a contingency plan.
I am looking at the scaling stanza to see if I can leverage that. I am also considering changing the distinct_hosts constraint to use the affinity stanza instead.
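Roughly what I have in mind is the untested sketch below. As far as I can tell, affinity expresses a preference for particular node values rather than "keep these allocations apart", so the sketch uses the spread stanza over ${node.unique.hostname} as a soft stand-in for distinct_hosts; please correct me if affinity is the better fit here.

job "loadbalancer" {
  update {
    max_parallel = 1
    auto_revert  = true
    auto_promote = true
    canary       = 1
  }

  group "haproxy" {
    count = 3

    constraint {
      attribute = "${node.class}"
      value     = "haproxy"
    }

    # Soft preference instead of a hard rule: the scheduler scores
    # placements so allocations spread across hostnames, but a canary
    # may temporarily share a host when no free node is available.
    spread {
      attribute = "${node.unique.hostname}"
      weight    = 100
    }

    task "haproxy" {
      ...
    }
  }
}

The obvious trade-off is that a scoring preference cannot guarantee separation, so during a deployment (or after a node failure) two instances could end up on the same host.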
Any suggestions would be helpful.