We have been operating clusters on Amazon Linux 2 for a couple of years, with one head node per cluster. Data loss hasn't been a concern because an EFS share is mounted on all nodes. However, we finally hit a failed head node, so I am exploring running more than one.
With the original setup, I had a shell script that populated the management token and client tokens for all users other than myself under /home. In order, this script (roughly as in the sketch after the list):
- Sets the management token
- Applies an anonymous read policy
- Establishes client tokens
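
For reference, here is a trimmed-down sketch of that script, not our exact version. The policy name, the `anonymous.hcl` rules file, the token drop location, and the `awk` parsing of the CLI's table output are all illustrative:

```bash
#!/usr/bin/env bash
# Assumes the nomad CLI is on PATH and NOMAD_ADDR points at a server.
set -euo pipefail

# 1. Bootstrap the ACL system and capture the management token.
#    The CLI prints a "Secret ID    = <token>" line; awk pulls out field 4.
MGMT_TOKEN=$(nomad acl bootstrap | awk '/^Secret ID/ {print $4}')
export NOMAD_TOKEN="$MGMT_TOKEN"

# 2. Apply an anonymous read-only policy (rules assumed to live in anonymous.hcl).
nomad acl policy apply \
  -description "read-only access for anonymous requests" \
  anonymous anonymous.hcl

# 3. Create a client token per user under /home (skipping my own account)
#    and drop it where that user can read it. "read-only" is a placeholder
#    for whatever policy the user tokens actually get.
for dir in /home/*/; do
  user=$(basename "$dir")
  [ "$user" = "$USER" ] && continue
  nomad acl token create -name="$user" -type=client -policy=read-only \
    | awk '/^Secret ID/ {print $4}' > "${dir}.nomad-token"
done
```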
As far as I can tell, this needs to happen on the server that is currently the leader. When I bootstrap the ACLs on a non-leader server, the resulting token doesn't work in the UI, and the other server doesn't recognize it either. Is that expected behavior, or am I missing something? For automated testing I need at least the management token set dynamically, but I'm not sure how to do that reliably without knowing which Nomad server is the leader a priori. Any help is greatly appreciated.
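
To make the question concrete, this is roughly the kind of leader discovery I'm imagining, assuming I can query the `/v1/status/leader` endpoint on any reachable server (hostnames and default ports 4646/4647 are placeholders for our setup):

```bash
# Candidate servers; any reachable one should be able to answer /v1/status/leader.
SERVERS="head-a.internal head-b.internal"

for s in $SERVERS; do
  # Returns the leader's RPC address as a quoted string, e.g. "10.0.0.5:4647".
  leader_rpc=$(curl -sf "http://${s}:4646/v1/status/leader" | tr -d '"') && break
done

# Swap the RPC port (4647) for the HTTP port (4646) and aim the CLI at the leader.
export NOMAD_ADDR="http://${leader_rpc%:*}:4646"
nomad acl bootstrap
```

Does pointing the bootstrap at the leader this way make sense, or is there a better-supported approach?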