Dear @Ranjandas,
I am sincerely grateful for your responses. They have helped me understand things much better than before.
Now I have made progress with the Nomad part. Below are the steps I have followed so far:
Installed the Consul client ONLY on the Nomad worker node (I didn't install a Consul client on the Nomad server, because workloads run on the Nomad worker node). Its configuration:
data_dir = "/opt/consul"
client_addr = "0.0.0.0"
bind_addr = "192.168.40.11"
server = false
advertise_addr = "192.168.40.11"
retry_join = ["192.168.60.10"]
log_level = "INFO
I set client_addr to 0.0.0.0 (to allow connections from any client), and set bind_addr and advertise_addr to 192.168.40.11 (the IP of the Consul client node, which is the same as the Nomad client node).
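For completeness, my understanding is that the Nomad client agent talks to this local Consul agent through its consul stanza; a minimal sketch, assuming the Consul agent listens on its default local address, would be:
consul {
  # Local Consul agent on the Nomad worker node (default HTTP address)
  address = "127.0.0.1:8500"
}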
Similarly, the Consul server has the following config:
data_dir = "/opt/consul"
bind_addr = "192.168.60.10"
client_addr = "0.0.0.0" #Allow connections from any client
ui_config{
enabled = true
}
server = true
advertise_addr = "192.168.60.10"
bootstrap_expect=1
retry_join = ["192.168.60.10"]
ports {
grpc = 8502
}
connect {
enabled = true
}
#log_level = "DEBUG"
config_entries {
bootstrap = [
{
Kind = "proxy-defaults"
Name = "global"
AccessLogs {
Enabled = true
}
Config {
protocol = "http"
}
}
]
}
Added an Nginx job in Nomad with the below service block:
service {
  name = "nginx-service" # same service name as used in K8s
  port = "http"          # reference the network port defined above
  tags = ["nginx", "nomad"]

  connect {
    sidecar_service {
      proxy {
        transparent_proxy {}
      }
    }
  }

  check {
    type     = "http"
    path     = "/"
    interval = "10s"
    timeout  = "2s"
  }
}
In the above service block I added the transparent_proxy block to avoid any DNS issues, because the documentation says:
"When transparent proxy is enabled traffic will automatically flow through the Envoy proxy. If the local Consul agent is serving DNS, Nomad will also set up the task's nameservers to use Consul. This lets your workload use the virtual IP DNS name from Consul, rather than configuring a template block that queries services."
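(For reference, my understanding of the template-block alternative mentioned there is roughly the sketch below; the destination file name is just a placeholder I picked.)
template {
  # Render the registered addresses of nginx-service into a local file
  destination = "local/nginx-upstream.txt"
  data        = "{{ range service \"nginx-service\" }}{{ .Address }}:{{ .Port }} {{ end }}"
}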
I can see that both the K8s and Nomad endpoints are available in the Consul UI under the same service name.
Now I am able to curl the service from K8s as below:
k exec -it multitool-pod -c multitool-container -- curl nginx-service.virtual.consul
Hello, I am running on Kubernetes!
However, it only works intermittently, and at times it fails with "Could not resolve host":
k exec -it multitool-pod -c multitool-container -- curl nginx-service.virtual.consul
Hello, I am running on Kubernetes!
ubuntu@ubuntu-desktop:~$ k exec -it multitool-pod -c multitool-container -- curl nginx-service.virtual.consul
curl: (6) Could not resolve host: nginx-service.virtual.consul
command terminated with exit code 6
ubuntu@ubuntu-desktop:~$ k exec -it multitool-pod -c multitool-container -- curl nginx-service.virtual.consul
curl: (6) Could not resolve host: nginx-service.virtual.consul
command terminated with exit code 6
ubuntu@ubuntu-desktop:~$ k exec -it multitool-pod -c multitool-container -- curl nginx-service.virtual.consul
Hello, I am running on Kubernetes!
I think it stops working when it tries to connect to the Nomad side, but I am not entirely sure. Below are some logs I captured from the multitool-pod's consul-dataplane container.
When nginx-service.virtual.consul succeeds:
2024-09-28T22:54:33.069Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=62
2024-09-28T22:54:33.069Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T22:54:33.069Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T22:54:33.069Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=62
2024-09-28T22:54:33.071Z+00:00 [debug] envoy.filter(24) original_dst: set destination to 240.0.0.3:80
2024-09-28T22:54:33.071Z+00:00 [debug] envoy.conn_handler(24) [Tags: "ConnectionId":"91"] new connection from 30.0.1.17:50132
2024-09-28T22:54:33.071Z+00:00 [debug] envoy.http(24) [Tags: "ConnectionId":"91"] new stream
2024-09-28T22:54:33.071Z+00:00 [debug] envoy.http(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] request headers complete (end_stream=true):
':authority', 'nginx-service.virtual.consul'
':path', '/'
':method', 'GET'
'user-agent', 'curl/7.79.1'
'accept', '*/*'
2024-09-28T22:54:33.071Z+00:00 [debug] envoy.http(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] request end stream
2024-09-28T22:54:33.071Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"91"] current connecting state: false
2024-09-28T22:54:33.071Z+00:00 [debug] envoy.router(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] cluster 'nginx-service.default.dc1.internal.1734b89d-6c9d-6e59-d27c-a722a90084da.consul' match for URL '/'
2024-09-28T22:54:33.072Z+00:00 [debug] envoy.router(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] router decoding headers:
':authority', 'nginx-service.virtual.consul'
':path', '/'
':method', 'GET'
':scheme', 'http'
'user-agent', 'curl/7.79.1'
'accept', '*/*'
'x-forwarded-proto', 'http'
'x-request-id', '81480dd3-358f-4252-9455-0617620c1666'
'x-envoy-expected-rq-timeout-ms', '15000'
2024-09-28T22:54:33.072Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"14"] using existing fully connected connection
2024-09-28T22:54:33.072Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"14"] creating stream
2024-09-28T22:54:33.072Z+00:00 [debug] envoy.router(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] pool ready
2024-09-28T22:54:33.072Z+00:00 [debug] envoy.client(24) [Tags: "ConnectionId":"14"] encode complete
2024-09-28T22:54:33.076Z+00:00 [debug] envoy.router(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] upstream headers complete: end_stream=false
2024-09-28T22:54:33.076Z+00:00 [debug] envoy.http(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] encoding headers via codec (end_stream=false):
':status', '200'
'server', 'envoy'
'date', 'Sat, 28 Sep 2024 22:54:33 GMT'
'content-type', 'text/html'
'content-length', '35'
'last-modified', 'Sat, 28 Sep 2024 17:20:50 GMT'
'etag', '"66f83af2-23"'
'accept-ranges', 'bytes'
'x-envoy-upstream-service-time', '3'
2024-09-28T22:54:33.076Z+00:00 [debug] envoy.client(24) [Tags: "ConnectionId":"14"] response complete
2024-09-28T22:54:33.076Z+00:00 [debug] envoy.http(24) [Tags: "ConnectionId":"91","StreamId":"16189153334393723254"] Codec completed encoding stream.
2024-09-28T22:54:33.076Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"14"] response complete
2024-09-28T22:54:33.076Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"14"] destroying stream: 0 remaining
2024-09-28T22:54:33.080Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"91"] remote close
2024-09-28T22:54:33.080Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"91"] closing socket: 0
2024-09-28T22:54:33.080Z+00:00 [debug] envoy.conn_handler(24) [Tags: "ConnectionId":"91"] adding to cleanup list
2024-09-28T22:54:35.641Z+00:00 [debug] envoy.main(14) flushing stats
When nginx-service.virtual.consul fails, only these entries are generated:
2024-09-28T22:57:46.104Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=72
2024-09-28T22:57:46.104Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=72
2024-09-28T22:57:46.104Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=64
2024-09-28T22:57:46.104Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=64
2024-09-28T22:57:46.104Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T22:57:46.104Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T22:57:46.105Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=62
2024-09-28T22:57:46.105Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=62
Logs for nginx-service.service.consul
2024-09-28T23:00:16.867Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T23:00:16.867Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T23:00:16.870Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=78
2024-09-28T23:00:16.870Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=96
2024-09-28T23:00:16.871Z+00:00 [debug] envoy.filter(23) original_dst: set destination to 30.0.1.225:80
2024-09-28T23:00:16.871Z+00:00 [debug] envoy.filter(23) [Tags: "ConnectionId":"127"] new tcp proxy session
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.filter(23) [Tags: "ConnectionId":"127"] Creating connection to cluster original-destination
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(23) transport socket match, socket default selected for host with address 30.0.1.225:80
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(23) Created host original-destination30.0.1.225:80 30.0.1.225:80.
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.misc(23) Allocating TCP conn pool
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.pool(23) trying to create new connection
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(14) addHost() adding original-destination30.0.1.225:80 30.0.1.225:80.
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.pool(23) creating a new connection (connecting=0)
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"128"] connecting to 30.0.1.225:80
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(14) membership update for TLS cluster original-destination added 1 removed 0
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(14) re-creating local LB for TLS cluster original-destination
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"128"] connection in progress
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.conn_handler(23) [Tags: "ConnectionId":"127"] new connection from 30.0.1.17:47258
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(23) membership update for TLS cluster original-destination added 1 removed 0
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(23) re-creating local LB for TLS cluster original-destination
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"128"] connected
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.pool(23) [Tags: "ConnectionId":"128"] attaching to next stream
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.pool(23) [Tags: "ConnectionId":"128"] creating stream
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.router(23) Attached upstream connection [C128] to downstream connection [C127]
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.filter(23) [Tags: "ConnectionId":"127"] TCP:onUpstreamEvent(), requestedServerName:
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(24) membership update for TLS cluster original-destination added 1 removed 0
2024-09-28T23:00:16.872Z+00:00 [debug] envoy.upstream(24) re-creating local LB for TLS cluster original-destination
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"127"] remote close
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"127"] closing socket: 0
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"128"] closing data_to_write=0 type=0
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"128"] closing socket: 1
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.pool(23) [Tags: "ConnectionId":"128"] client disconnected, failure reason:
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.pool(23) invoking 1 idle callback(s) - is_draining_for_deletion_=false
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.pool(23) [Tags: "ConnectionId":"128"] destroying stream: 0 remaining
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.pool(23) invoking 0 idle callback(s) - is_draining_for_deletion_=false
2024-09-28T23:00:16.875Z+00:00 [debug] envoy.conn_handler(23) [Tags: "ConnectionId":"127"] adding to cleanup list
2024-09-28T23:00:17.731Z+00:00 [debug] envoy.conn_handler(23) [Tags: "ConnectionId":"129"] new connection from 30.0.1.82:42944
2024-09-28T23:00:17.731Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"129"] closing socket: 0
2024-09-28T23:00:17.731Z+00:00 [debug] envoy.conn_handler(23) [Tags: "ConnectionId":"129"] adding to cleanup list
2024-09-28T23:00:20.720Z+00:00 [debug] envoy.main(14) flushing stats
2024-09-28T23:00:25.722Z+00:00 [debug] envoy.main(14) flushing stats
2024-09-28T23:00:25.895Z+00:00 [debug] envoy.upstream(24) membership update for TLS cluster original-destination added 0 removed 1
2024-09-28T23:00:25.895Z+00:00 [debug] envoy.upstream(24) re-creating local LB for TLS cluster original-destination
2024-09-28T23:00:25.895Z+00:00 [debug] envoy.upstream(14) membership update for TLS cluster original-destination added 0 removed 1
2024-09-28T23:00:25.895Z+00:00 [debug] envoy.upstream(14) re-creating local LB for TLS cluster original-destination
2024-09-28T23:00:25.895Z+00:00 [debug] envoy.upstream(23) membership update for TLS cluster original-destination added 0 removed 1
2024-09-28T23:00:25.896Z+00:00 [debug] envoy.upstream(23) re-creating local LB for TLS cluster original-destination
2024-09-28T23:00:25.896Z+00:00 [debug] envoy.upstream(23) removing hosts for TLS cluster original-destination removed 1
2024-09-28T23:00:25.896Z+00:00 [debug] envoy.upstream(24) removing hosts for TLS cluster original-destination removed 1
2024-09-28T23:00:25.896Z+00:00 [debug] envoy.upstream(14) removing hosts for TLS cluster original-destination removed 1
Logs for nginx-service.connect.consul
2024-09-28T23:03:21.819Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T23:03:21.819Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=60
2024-09-28T23:03:21.820Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=62
2024-09-28T23:03:21.820Z [DEBUG] consul-dataplane.dns-proxy.udp: dns messaged received from consul: length=96
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.filter(24) original_dst: set destination to 30.0.1.225:80
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.filter(24) [Tags: "ConnectionId":"148"] new tcp proxy session
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.filter(24) [Tags: "ConnectionId":"148"] Creating connection to cluster original-destination
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.upstream(24) transport socket match, socket default selected for host with address 30.0.1.225:80
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.upstream(24) Created host original-destination30.0.1.225:80 30.0.1.225:80.
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.misc(24) Allocating TCP conn pool
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.pool(24) trying to create new connection
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.pool(24) creating a new connection (connecting=0)
2024-09-28T23:03:21.820Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"149"] connecting to 30.0.1.225:80
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.upstream(14) addHost() adding original-destination30.0.1.225:80 30.0.1.225:80.
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"149"] connection in progress
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.conn_handler(24) [Tags: "ConnectionId":"148"] new connection from 30.0.1.17:58714
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"149"] connected
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"149"] attaching to next stream
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"149"] creating stream
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.upstream(14) membership update for TLS cluster original-destination added 1 removed 0
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.upstream(14) re-creating local LB for TLS cluster original-destination
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.router(24) Attached upstream connection [C149] to downstream connection [C148]
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.upstream(23) membership update for TLS cluster original-destination added 1 removed 0
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.upstream(23) re-creating local LB for TLS cluster original-destination
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.filter(24) [Tags: "ConnectionId":"148"] TCP:onUpstreamEvent(), requestedServerName:
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.upstream(24) membership update for TLS cluster original-destination added 1 removed 0
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.upstream(24) re-creating local LB for TLS cluster original-destination
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"148"] remote close
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"148"] closing socket: 0
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"149"] closing data_to_write=0 type=0
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"149"] closing socket: 1
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"149"] client disconnected, failure reason:
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.pool(24) invoking 1 idle callback(s) - is_draining_for_deletion_=false
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.pool(24) [Tags: "ConnectionId":"149"] destroying stream: 0 remaining
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.pool(24) invoking 0 idle callback(s) - is_draining_for_deletion_=false
2024-09-28T23:03:21.821Z+00:00 [debug] envoy.conn_handler(24) [Tags: "ConnectionId":"148"] adding to cleanup list
2024-09-28T23:03:25.772Z+00:00 [debug] envoy.main(14) flushing stats
However, I am unable to curl using nginx-service.service.consul (1) or nginx-service.connect.consul (2) at all.
(1) gives: curl: (56) Recv failure: Connection reset by peer (command terminated with exit code 56)
(2) gives: curl: (52) Empty reply from server (command terminated with exit code 52), or: curl: (6) Could not resolve host: nginx-service.connect.consul
Additionally, I see the logs continuously output the following messages:
2024-09-28T22:58:57.731Z+00:00 [debug] envoy.conn_handler(23) [Tags: "ConnectionId":"119"] new connection from 30.0.1.82:56396
2024-09-28T22:58:57.731Z+00:00 [debug] envoy.connection(23) [Tags: "ConnectionId":"119"] closing socket: 0
2024-09-28T22:58:57.731Z+00:00 [debug] envoy.conn_handler(23) [Tags: "ConnectionId":"119"] adding to cleanup list
2024-09-28T22:59:00.698Z+00:00 [debug] envoy.main(14) flushing stats
2024-09-28T22:59:05.701Z+00:00 [debug] envoy.main(14) flushing stats
2024-09-28T22:59:06.110Z [DEBUG] consul-dataplane.dns-proxy.udp: timeout waiting for read: error="read udp 127.0.0.1:8600: i/o timeout"
2024-09-28T22:59:07.731Z+00:00 [debug] envoy.conn_handler(24) [Tags: "ConnectionId":"120"] new connection from 30.0.1.82:34324
2024-09-28T22:59:07.731Z+00:00 [debug] envoy.connection(24) [Tags: "ConnectionId":"120"] closing socket: 0
2024-09-28T22:59:07.731Z+00:00 [debug] envoy.conn_handler(24) [Tags: "ConnectionId":"120"] adding to cleanup list
2024-09-28T22:59:10.704Z+00:00 [debug] envoy.main(14) flushing stats
2024-09-28T22:59:15.708Z+00:00 [debug] envoy.main(14) flushing stats
2024-09-28T22:59:16.111Z [DEBUG] consul-dataplane.dns-proxy.udp: timeout waiting for read: error="read udp 127.0.0.1:8600: i/o timeout"
I do not have any pod/service with IP 30.0.1.82.
Also, I see the following results when running dig inside both the Nomad and K8s nodes, the K8s pods, and the Nomad task:
dig @192.168.60.10 -p 8600 nginx-service.virtual.consul
; <<>> DiG 9.18.28-0ubuntu0.24.04.1-Ubuntu <<>> @192.168.60.10 -p 8600 nginx-service.virtual.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33461
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;nginx-service.virtual.consul. IN A
;; ANSWER SECTION:
nginx-service.virtual.consul. 0 IN A 240.0.0.3
;; Query time: 0 msec
;; SERVER: 192.168.60.10#8600(192.168.60.10) (UDP)
dig @192.168.60.10 -p 8600 nginx-service.service.consul
; <<>> DiG 9.18.28-0ubuntu0.24.04.1-Ubuntu <<>> @192.168.60.10 -p 8600 nginx-service.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27777
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;nginx-service.service.consul. IN A
;; ANSWER SECTION:
nginx-service.service.consul. 0 IN A 30.0.1.225
nginx-service.service.consul. 0 IN A 192.168.40.11
;; Query time: 1 msec
;; SERVER: 192.168.60.10#8600(192.168.60.10) (UDP)
dig @192.168.60.10 -p 8600 nginx-service.connect.consul
; <<>> DiG 9.18.28-0ubuntu0.22.04.1-Ubuntu <<>> @192.168.60.10 -p 8600 nginx-service.connect.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56585
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;nginx-service.connect.consul. IN A
;; ANSWER SECTION:
nginx-service.connect.consul. 0 IN A 30.0.1.225
;; Query time: 0 msec
;; SERVER: 192.168.60.10#8600(192.168.60.10) (UDP)
nginx-service.virtual.consul returns a virtual IP (240.0.0.3), nginx-service.service.consul returns both the K8s pod IP and the Nomad worker node's IP, and nginx-service.connect.consul returns only the K8s pod IP.
However, if I remove port 8600 from the dig command, it falls back to port 53 and the lookup stops working.
So I believe this could be a DNS issue. Do we have to use additional configuration to forward DNS to Consul when it is used like this? My understanding was that when transparent proxy is used, DNS is handled automatically.
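For context, what I imagine would be needed on the VM/Nomad side, if Consul DNS has to be wired into the system resolver, is a forwarding rule like the one below (a sketch of my understanding, assuming dnsmasq on the node and the local Consul agent serving DNS on 127.0.0.1:8600; I have not configured this yet):
# dnsmasq: forward all *.consul lookups to the local Consul agent's DNS port
server=/consul/127.0.0.1#8600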
I also came across DNS usage overview | Consul | HashiCorp Developer, which says:
"If you are using Consul for service mesh on VMs, you can use upstreams or DNS. We recommend using upstreams because you can query services and nodes without modifying the application code or environment variables. "
From the Nomad side, I can only access the service by using the node IP and port directly; I am unable to access it using any Consul service name at all. (The command sketch below shows roughly what I am trying from inside the allocation.)
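For reference, this is roughly what I run from inside the Nomad allocation (the task name and allocation ID are placeholders):
nomad alloc exec -task nginx <alloc-id> curl -s nginx-service.virtual.consul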
I am sorry this has become so long, but I wanted to include all the details of what I have found and done so far in order to get your kind advice.
Thank you!