How Nomad handles multiple Vault hosts

Hello.
I have a HA Vault cluster and I use vault.service.consul as Vault address in the nomad client config. vault.service.consul resolves into 3 IPs and looks like a nomad job gets a random Vault IP out of the 3 and uses it even if there is no working Vault instance on the IP. Is there a way to switch to another Vault instance if the current one failed ?

Hi @Laboltus,

Instead of using vault.service.consul, I would recommend using active.vault.service.consul, which will always point to the active Vault server. Further information regarding Vault’s Consul registration can be found here.
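
The change amounts to swapping the address in the `vault` block (a sketch, assuming the same port and TLS scheme as the original config):

```hcl
# Hypothetical Nomad agent config fragment using the active-only
# Consul DNS name, so the address never resolves to a standby or
# sealed instance.
vault {
  enabled = true
  address = "https://active.vault.service.consul:8200"
}
```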

Thanks,
jrasell and the Nomad team

Yes, I thought about it, but that doesn’t change anything if Nomad only resolves active.vault.service.consul once when starting the job.

Hi @Laboltus,

I spelunked into the code yesterday and traced the path that the configuration parameter takes. Nomad itself does not do any DNS resolution on the passed address value and hands it straight off to Vault’s NewClient API function as part of the config object. Vault sets the HTTP client to DefaultPooledClient, where I do not see any DNS caching or resolution of the address performed.

I’ll try to find some time to test this locally. If you’re able to provide any further configuration details that could help me with this, that would be appreciated, as would any logs that display the issue. If you believe this is a bug in either Nomad or Vault, please feel free to raise a bug report against the relevant repo.

Thanks,
jrasell and the Nomad team

All the logs have been removed by GC, but there was an error, something about a Vault failure, and the job did not try the other Vault instances in the cluster. In fact, at that moment the failed Vault instance was up but sealed.