Hi, this is a newbie question. I'm trying to deploy a web app, which means deploying several services: a database, workers, Redis, the app itself, and a few others.
I don't know how to make them communicate with each other. I come from Kubernetes and docker-compose, where you could just connect to "db" and it was resolved automatically.
I think I have three options:
1. Create env files (or export env templates from Consul) and read them in the app.
2. Use a DNS server like dnsmasq where Nomad and Consul run.
3. Use Traefik?
I've tried 1 and 2. For 1, I don't think it's the best way. Do I need to reload the app if the job moves to another node?
I’m trying to figure out how to do it with (2). I have dnsmasq and I have configured this:
# Enable forward lookup of the 'consul' domain:
server=/consul/127.0.0.1#8600
On the node I can run dig db.service.consul and it resolves successfully, but inside the task (I'm using the Docker driver) it doesn't work. The DNS servers in resolv.conf aren't the ones expected.
And about (3), I don't know whether it's possible or whether it's an acceptable way of doing it.
It is a great question. 1 and 2 are possible, but you can also manage it much like in k8s or docker-compose. I am a beginner too, but it's now almost two months since I ran my first cluster with an application, so maybe you can reuse my example.
Here is a working example:
```hcl
job "cool-app" {
  datacenters = ["dc1"]
  type        = "service"

  group "deployment" {
    count = 1

    # https://www.nomadproject.io/docs/job-specification/network
    network {
      # this is IMPORTANT: bridge mode lets tasks in the group share localhost
      mode = "bridge"

      port "http" {
        to = 8080
      }
    }

    service {
      name = "${NOMAD_JOB_NAME}-http"
      tags = [
        "public",
        "traefik.enable=true",
        "traefik.http.routers.${NOMAD_JOB_NAME}-http.rule=Host(`${NOMAD_JOB_NAME}.mydomain.com`)",
        "traefik.http.routers.${NOMAD_JOB_NAME}-http.tls=true"
      ]
      port = "http"

      check {
        name     = "${NOMAD_JOB_NAME} - alive"
        type     = "http"
        path     = "/_health"
        interval = "1m"
        timeout  = "10s"
      }
    }

    task "app" {
      driver = "docker"

      config {
        image   = "my-cool-app:stable"
        command = "server"
        ports   = ["http"]
      }

      env {
        DB_HOST        = "127.0.0.1"
        MYSQL_DATABASE = "appdb"
        MYSQL_USER     = "app"
        MYSQL_PASSWORD = "secretPassword:-)"
      }

      kill_timeout = "30s"
    } # END app task

    task "mariadb" {
      driver       = "docker"
      kill_timeout = "30s"

      lifecycle {
        sidecar = true
      }

      # one way to restore your db from a backup;
      # the destination matches the init dir mounted below
      artifact {
        source      = "https://secure.backup-api.com/dump.sql.gz?key=THIS_SHOULD_BE_IN_VAULT"
        destination = "local/init/"
        options {
          filename = "db_dump.sql.gz"
          archive  = false
        }
      }

      config {
        image = "mariadb:10.5.5"
        volumes = [
          "local/init:/docker-entrypoint-initdb.d"
        ]
      }

      env {
        MYSQL_ROOT_PASSWORD = "rootPassword"
        MYSQL_DATABASE      = "appdb"
        MYSQL_USER          = "app"
        MYSQL_PASSWORD      = "secretPassword:-)"
      }
    } # END mariadb task
  }
}
```
This job deploys two containers that share a network namespace, so the app container will see MariaDB's port 3306 on its loopback address (127.0.0.1). You will have to install the CNI plugins on each client node before running this job.
The advantage of this solution is that the database's TCP port is not exposed to the outside world.
The example also includes a mechanism for exposing the app's port on the node, plus a Traefik configuration (maybe it helps you get started).
This only works inside a single group stanza, and a group is always placed on a single node. If you need cross-node connections, a sidecar proxy is the solution for you. I don't have experience with it, but this could be a great resource: https://learn.hashicorp.com/tutorials/consul/service-mesh-with-envoy-proxy
I only have experience with k8s and docker-compose. In k8s, service discovery is backed by etcd; in the HashiStack you can do the same with Consul. In k8s a networking layer spans the nodes, which allows you to connect services together, and you can of course do the same in a Nomad cluster with CNI plugins, but setting it up yourself is definitely not easy, in either k8s or Nomad.
As I said before, you basically have the following options.
You can connect services together via localhost (bridge mode), but that only works between tasks in the same group, running on the same node.
The other option is to make services able to communicate across the whole cluster by binding them to node ports. The simplest way to achieve this is to bind all services in the job to the node's ports and then use Consul for service discovery. Then you can consume the registered services via dnsmasq, as you are trying, or via Consul templates (have you already read https://www.nomadproject.io/docs/job-specification/template ?)
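A minimal sketch of that node-port approach (the service name "db" and the static port are my assumptions; adjust them to your setup):

```hcl
group "db" {
  network {
    # Bind MariaDB to a static port on the node so other nodes can reach it.
    port "db" {
      static = 3306
    }
  }

  # Register the port in Consul; other jobs can then resolve db.service.consul.
  service {
    name = "db"
    port = "db"
  }

  task "mariadb" {
    driver = "docker"
    config {
      image = "mariadb:10.5.5"
      ports = ["db"]
    }
  }
}
```

The trade-off is that the port is now open on the node, so you usually combine this with firewall rules.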
An underlay network like Calico is also a valid approach, and you can use it with Nomad too, but the service-mesh way with Consul Connect is a bit more modern and gives you more control. So I really recommend going that way: https://www.nomadproject.io/docs/integrations/consul-connect. It is not as complicated as it seems to be.
I think 1 would be a good start to get a better understanding of how things work in Nomad. As you've noticed, they are a bit different from how docker-compose and Kubernetes do things.
For 1, I don't think it's the best way. Do I need to reload the app if the job moves to another node?
It depends on what you are running. Some applications, like nginx, can reload their configuration without a restart. You can control what happens when your config template changes by using change_mode. Here’s an example where we use Consul to find the IP and port of a set of redis servers.
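A minimal sketch of such a template stanza (the service name "redis" and the file paths are assumptions; this renders the address of a Consul-registered service into environment variables for the task):

```hcl
task "app" {
  # ...
  template {
    # Query Consul for instances of the "redis" service and write
    # KEY=VALUE lines. With several instances, the last entry wins;
    # for multiple addresses you would render a config file instead.
    data        = <<EOF
{{ range service "redis" }}
REDIS_ADDR={{ .Address }}:{{ .Port }}
{{ end }}
EOF
    destination = "local/redis.env"
    env         = true
    # Restart the task when the rendered file changes; use
    # change_mode = "signal" for apps that can reload on a signal.
    change_mode = "restart"
  }
}
```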
2 should also work, but I think it would require a bit more setup. You can define the DNS configuration for your group within network -> dns.
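For instance, you could point the group's tasks at your dnsmasq resolver like this (the server address is an assumption; use the address your dnsmasq listens on):

```hcl
network {
  mode = "bridge"
  dns {
    # Assumed address of your dnsmasq/Consul resolver.
    servers  = ["10.0.0.1"]
    searches = ["service.consul"]
  }
}
```

With this, the task's resolv.conf will use your resolver instead of the Docker defaults.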
The DNS servers in resolv.conf aren't the ones expected.
Could you elaborate a bit more on this?
As @msirovy mentioned, you can also use Consul Connect for service mesh. Nomad is well integrated with Consul Connect, so it's actually quite easy to get started. The link posted will give you a quick introduction, but basically, when you add this to your service:
```hcl
connect {
  sidecar_service {}
}
```
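And if the task needs to reach other services through the mesh, you declare them as upstreams in the same block (the service name "redis" and the local port are hypothetical):

```hcl
connect {
  sidecar_service {
    proxy {
      upstreams {
        # Hypothetical upstream service registered in Consul.
        destination_name = "redis"
        # The app then connects to 127.0.0.1:6379 inside its group.
        local_bind_port  = 6379
      }
    }
  }
}
```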
Nomad will automatically deploy a sidecar proxy for you that uses Consul to find other services. Since the sidecar will be deployed in the same group, everything can be accessed via localhost. Behind the scenes, the Consul Connect proxy will automatically encrypt and send traffic across your nodes to other proxies, so this will work even if your jobs are running in different clients.