We have Nomad jobs running with Envoy proxy as the Connect sidecar. The cluster currently runs Consul/Nomad over HTTP, and we are planning to switch to TLS using self-signed certificates.
Once TLS is enabled for Consul, the corresponding Nomad configs are updated, and Nomad is restarted, the existing jobs need to be restarted as well so they pick up the TLS context and pass the CA to their Connect configs.
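The manual restart step above can be scripted against the Nomad HTTP API (GET /v1/job/:job_id/allocations plus PUT /v1/client/allocation/:alloc_id/restart). A minimal sketch; the job name "event-adapter" and the environment variables are assumptions, not from the cluster:

```python
# Sketch: restart every running allocation of a job via the Nomad HTTP API
# so the Envoy sidecars pick up the new Consul TLS settings.
# Assumes NOMAD_ADDR and NOMAD_TOKEN are exported; job name is hypothetical.
import json
import os
import urllib.request


def restart_url(nomad_addr: str, alloc_id: str) -> str:
    """Build the allocation-restart endpoint URL."""
    return f"{nomad_addr.rstrip('/')}/v1/client/allocation/{alloc_id}/restart"


def restart_job_allocs(nomad_addr: str, token: str, job_id: str) -> None:
    """List the job's allocations and restart each running one."""
    headers = {"X-Nomad-Token": token}
    req = urllib.request.Request(
        f"{nomad_addr.rstrip('/')}/v1/job/{job_id}/allocations", headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        allocs = json.load(resp)
    for alloc in allocs:
        if alloc.get("ClientStatus") == "running":
            r = urllib.request.Request(
                restart_url(nomad_addr, alloc["ID"]),
                data=b"{}",
                headers=headers,
                method="PUT",
            )
            urllib.request.urlopen(r)


if __name__ == "__main__":
    restart_job_allocs(
        os.environ["NOMAD_ADDR"],
        os.environ.get("NOMAD_TOKEN", ""),
        "event-adapter",  # hypothetical job name
    )
```

This is still a restart per allocation, just automated; it does not remove the need for the restart itself.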
Here are the relevant configurations:
Consul stanza in the Nomad config:
consul {
  address   = "127.0.0.1:8501"
  ssl       = true
  share_ssl = true
  ca_file   = "/etc/ssl/certs/vault-pki-ca-chain.pem"
  token     = "xxxx"
}
Connect sidecar config in the Nomad job:
connect {
  sidecar_service {
    proxy {}
  }
  sidecar_task {
    resources {
      cpu    = [[ .event_adapter.resources.envoy.cpu ]]
      memory = [[ .event_adapter.resources.envoy.memory ]]
    }
  }
}
The error I see in the logs when the containers are not restarted:
nomadw[50685]: 2021-07-27T14:06:22.464Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=4d327e37-cdd8-5096-09cb-74a3537210fd error="read tcp 127.0.0.1:54528->127.0.0.1:8502: read: connection reset by peer" dest=127.0.0.1:8502 src_local=/opt/nomad/alloc/4d327e37-cdd8-5096-09cb-74a3537210fd/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
What is the best way to handle this manual step of restarting the containers after TLS is enabled?