How to use Nomad health checks when using bridge networking?

I’m trying to use Nomad to deploy an instance of Nextcloud with a Postgres DB.
I want Nextcloud to be reachable from external sources, but the DB should only be reachable by Nextcloud.

Reading about bridge networking I find that:

Tasks that bind to the loopback interface (localhost or 127.0.0.1) are accessible only from within the allocation.

So I tried it and, sure enough, it works as intended.
Find my jobspec here.

Now the problem is Nomad cannot reach the DB’s port to perform health checks, which causes the task to be flagged as unhealthy.

I noticed Consul reports the DB address and port as 127.0.0.1:5432.
I guess that’s what Nomad is trying to use for health checks, and obviously it fails.
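
For context, the relevant part of the group looks roughly like this (a simplified sketch, not the exact jobspec linked above; the image, names, and check are illustrative):

    group "nextcloud" {
      network {
        mode = "bridge"
        port "db" {
          to = 5432
        }
      }

      service {
        name = "nextcloud-db"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }

      task "db" {
        driver = "docker"

        config {
          image = "postgres:16"
          # Bind Postgres to loopback so only tasks in the same allocation can reach it
          args  = ["-c", "listen_addresses=127.0.0.1"]
        }

        env {
          POSTGRES_PASSWORD = "changeme"   # illustrative
        }
      }
    }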

How should I go about this?

Hi.

listen_addresses=127.0.0.1

The health check is performed by Consul running on the machine. So either you have to expose the port for Consul to check it, or you don’t expose it and Consul can’t check it.

How should I go about this?

Remove the service registration.

You can expose the port only on localhost of the machine it is running on, so it will be accessible only locally on that machine, i.e. add a localhost host network in the client block (client Block - Agent Configuration | Nomad | HashiCorp Developer) and select that host network for this port.
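
Something along these lines (a sketch; the CIDR and the port label are just examples):

    # Nomad client agent configuration
    client {
      enabled = true

      # Define a host network that only covers the loopback interface
      host_network "localhost" {
        cidr = "127.0.0.0/8"
      }
    }

    # In the job's group network block
    network {
      mode = "bridge"

      port "db" {
        to           = 5432
        host_network = "localhost"   # publish the host side of this port on 127.0.0.1 only
      }
    }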

driver_config = { options = [{

It looks odd to me that options is an array. Just options { ... } should do, with no = and no [ ]. Similarly, just driver_config { ... }.


Thank you.
I ended up using script checks instead.
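
For anyone finding this later, the check I ended up with looks roughly like this (a sketch; the service name, task name, and pg_isready arguments are illustrative):

    service {
      name = "nextcloud-db"
      port = "db"

      check {
        type     = "script"
        task     = "db"              # run the command inside the Postgres task
        command  = "pg_isready"
        args     = ["-h", "127.0.0.1", "-p", "5432"]
        interval = "10s"
        timeout  = "5s"
      }
    }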

Removing the = fails with: Failed to parse using HCL 2.
Removing the [...] fails with: failed to parse config: * Incorrect attribute value type: Inappropriate value for attribute "options": list of map of string required.
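
So the parser really does want the shape I started with, i.e. something like this (keys and values here are only placeholders):

    driver_config = {
      options = [
        {
          # placeholder entry; the parser requires a list of map of string here
          some_option = "some_value"
        }
      ]
    }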
