Hey there!
I’ve built a small Dockerized Node.js application that consumes the event stream of my Nomad server and publishes the events to a Mercure hub.
This works fine on my local machine, and also when I run it on the server via `docker run` on the CLI. As soon as I have Nomad run it, however, the application becomes unstable and errors show up hinting at network issues.
Since the problem doesn’t seem to be the application itself, or the way it behaves when run via Docker, I came to the conclusion that I’m doing something wrong in my job specification:
job "monitor" {
  type = "service"

  group "monitor" {
    count = 1

    task "monitor" {
      driver = "docker"

      config {
        image = "<some image>"
      }

      service {
        check {
          name         = "healthcheck_tcp"
          type         = "tcp"
          port         = "${var.healthcheck_port}"
          interval     = "15s"
          timeout      = "5s"
          address_mode = "driver"
        }
      }
    }
  }
}

variable "healthcheck_port" {
  type    = number
  default = 9000
}
After running this job via the CLI, I switch over to viewing the logs in the UI. At varying intervals I see this output on stderr:
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:216:20) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read'
}
This error is triggered deep within Node.js (source: GitHub - nodejs/node) and I am not able to catch it.
If I run the very same container on my local machine, or directly on the server, this error does not show up in stderr, which makes figuring out the cause hard.
The Node.js app has code to handle certain connection errors, but none of these handlers are ever triggered. I also tested dropping the connection after a fixed time, and the error still occurs when reconnecting to the event stream endpoint after 5 seconds, so this does not seem to be a long-running-job issue.