Hello,
Today's problem: missing drivers.
Nomad v1.5.3 (single node)
Ubuntu 22.04
Docker version 23.0.3, build 3e7cbfd
I switched Docker to a TCP listener with TLS.
cat /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --tlsverify --tlscacert=/etc/ssl/docker/docker-ca.pem --tlscert=/etc/ssl/docker/dc1-server-docker.pem --tlskey=/etc/ssl/docker/dc1-server-docker.key -H tcp://0.0.0.0:2376
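The override is applied the usual way (reload systemd, restart Docker); for completeness, this is roughly what I ran, plus a quick check that dockerd is listening (assuming `ss` from iproute2 is installed):

```shell
# Pick up the new ExecStart from the override file
sudo systemctl daemon-reload
sudo systemctl restart docker

# Confirm dockerd is now listening on the TLS port
ss -tlnp | grep 2376
```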
I configured docker.hcl:
plugin "docker" {
  config {
    endpoint = "tcp://127.0.0.1:2376"

    tls {
      cert = "/etc/ssl/docker/dc1-client-docker.pem"
      key  = "/etc/ssl/docker/dc1-client-docker.key"
      ca   = "/etc/ssl/docker/docker-ca.pem"
    }

    allow_privileged = false

    volumes {
      enabled = true
    }

    gc {
      image       = true
      image_delay = "1h"
      container   = true

      dangling_containers {
        enabled        = true
        dry_run        = false
        period         = "5m"
        creation_grace = "5m"
      }
    }
  }
}
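If it helps, this is how I inspect the driver fingerprint on the node (I can attach the full output if needed):

```shell
# Show this node's attributes and driver health,
# then keep only the Docker driver fingerprint lines
nomad node status -self -verbose | grep driver.docker
```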
And the relevant part of nomad.hcl:
bind_addr = "0.0.0.0"

advertise {
  http = "127.0.0.1:4646"
  rpc  = "127.0.0.1:4647"
  serf = "127.0.0.1:4648"
}

ports {
  http = 4646
  rpc  = 4647
  serf = 4648
}
When I run a job, I get:
Constraint missing drivers filtered 1 node
And the debug log:
2023-04-05T12:20:01.973Z [DEBUG] http: request complete: method=GET path=/v1/job/mosquitto duration=1.128492ms
2023-04-05T12:20:01.996Z [DEBUG] worker: dequeued evaluation: worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc eval_id=71cb85ad-975b-fe96-4cd3-26be41fa2c9e type=service namespace=default job_id=mosquitto node_id="" triggered_by=job-register
2023-04-05T12:20:01.997Z [DEBUG] http: request complete: method=GET path=/v1/job/mosquitto?index=18572 duration=33.846121442s
2023-04-05T12:20:01.997Z [DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=71cb85ad-975b-fe96-4cd3-26be41fa2c9e job_id=mosquitto namespace=default worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc
results=
| Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
| Created Deployment: "514994ee-bf40-1840-dde0-b8db69b0626d"
| Desired Changes for "mosquitto": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)
2023-04-05T12:20:01.997Z [DEBUG] http: request complete: method=GET path=/v1/job/mosquitto/evaluations?index=18572 duration=33.853779796s
2023-04-05T12:20:01.997Z [DEBUG] http: request complete: method=POST path=/v1/job/mosquitto duration=12.310982ms
2023-04-05T12:20:02.006Z [DEBUG] worker: created evaluation: worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc eval="<Eval \"1f6b20db-9f4e-3c66-50f7-b117074abdbc\" JobID: \"mosquitto\" Namespace: \"default\">"
2023-04-05T12:20:02.006Z [DEBUG] worker.service_sched: failed to place all allocations, blocked eval created: eval_id=71cb85ad-975b-fe96-4cd3-26be41fa2c9e job_id=mosquitto namespace=default worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc blocked_eval_id=1f6b20db-9f4e-3c66-50f7-b117074abdbc
2023-04-05T12:20:02.015Z [DEBUG] worker: submitted plan for evaluation: worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc eval_id=71cb85ad-975b-fe96-4cd3-26be41fa2c9e
2023-04-05T12:20:02.015Z [DEBUG] worker.service_sched: setting eval status: eval_id=71cb85ad-975b-fe96-4cd3-26be41fa2c9e job_id=mosquitto namespace=default worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc status=complete
2023-04-05T12:20:02.015Z [DEBUG] http: request complete: method=GET path=/v1/job/mosquitto/deployment?index=18571 duration=35.859597217s
2023-04-05T12:20:02.019Z [DEBUG] http: request complete: method=GET path=/v1/job/mosquitto/evaluations?index=18573 duration="954.093µs"
2023-04-05T12:20:02.024Z [DEBUG] worker: updated evaluation: worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc eval="<Eval \"71cb85ad-975b-fe96-4cd3-26be41fa2c9e\" JobID: \"mosquitto\" Namespace: \"default\">"
2023-04-05T12:20:02.024Z [DEBUG] worker: ack evaluation: worker_id=55dbf143-4263-c2ce-b37f-566c7e3fcecc eval_id=71cb85ad-975b-fe96-4cd3-26be41fa2c9e type=service namespace=default job_id=mosquitto node_id="" triggered_by=job-register
2023-04-05T12:20:02.024Z [DEBUG] http: request complete: method=GET path=/v1/job/mosquitto/summary?index=18572 duration=35.862729065s
2023-04-05T12:20:04.030Z [DEBUG] http: request complete: method=GET path=/v1/job/mosquitto/evaluations?index=18575 duration=1.699689ms
2023-04-05T12:20:07.187Z [DEBUG] nomad: memberlist: Stream connection from=127.0.0.1:48994
2023-04-05T12:20:07.606Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=3.099814ms
INFO log from Nomad startup (it is very slow to start):
tail -f /var/log/nomad/nomad.log
2023-04-05T16:00:13.495Z [WARN] nomad.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2023-04-05T16:00:13.495Z [INFO] nomad.raft: entering candidate state: node="Node at 127.0.0.1:4647 [Candidate]" term=3
2023-04-05T16:00:13.518Z [INFO] nomad.raft: election won: term=3 tally=1
2023-04-05T16:00:13.518Z [INFO] nomad.raft: entering leader state: leader="Node at 127.0.0.1:4647 [Leader]"
2023-04-05T16:00:13.518Z [INFO] nomad: cluster leadership acquired
2023-04-05T16:00:13.609Z [INFO] nomad: eval broker status modified: paused=false
2023-04-05T16:00:13.609Z [INFO] nomad: blocked evals status modified: paused=false
2023-04-05T16:00:21.936Z [INFO] client.plugin: starting plugin manager: plugin-type=csi
2023-04-05T16:00:21.936Z [INFO] client.plugin: starting plugin manager: plugin-type=driver
2023-04-05T16:00:21.936Z [INFO] client.plugin: starting plugin manager: plugin-type=device
2023-04-05T16:01:11.937Z [WARN] client.plugin: timeout waiting for plugin manager to be ready: plugin-type=driver
2023-04-05T16:01:11.938Z [INFO] client: started client: node_id=fbc570b3-1c00-6fcd-5c97-7e67621b5784
2023-04-05T16:01:11.955Z [INFO] client: node registration complete
2023-04-05T16:01:17.163Z [INFO] client: node registration complete
The key line seems to be: 2023-04-05T16:01:11.937Z [WARN] client.plugin: timeout waiting for plugin manager to be ready: plugin-type=driver
I can “talk” to the Docker daemon with the same certificates:
docker --tlsverify -H tcp://127.0.0.1:2376 --tlscacert /etc/ssl/docker/docker-ca.pem --tlscert /etc/ssl/docker/dc1-client-docker.pem --tlskey /etc/ssl/docker/dc1-client-docker.key ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
I have two bridge interfaces, br0 and br1. I don’t know if that causes any problem.
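In case the two interfaces matter, I could try pinning Nomad to one of them in the client stanza (not tested yet; br0 is just a guess at the right one):

```hcl
client {
  enabled = true
  # Force fingerprinting/networking onto a single bridge
  network_interface = "br0"
}
```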
I deploy the HashiStack with a personal role. It works without problems on my test VMs, but not on my production server.
Help!
Thanks!