Not able to connect a VM-based Consul client to the K8s-based Consul server

I have a Kubernetes-based Consul cluster running.
I'm trying to add a new virtual machine (which is not a k8s node) to the Consul cluster, but I'm not able to.

I followed the steps from the attached ChatGPT suggestion (see the attachment):
consul-client-chatgpt.txt (2.4 KB)

I also compared against the Consul client running on the k8s worker nodes and read more online content, but I'm still not able to connect the VM-based Consul client to the K8s-based Consul server.

Any insights on how to make this happen?

consul agent -node=pocnjv1mcom09 -advertise= -bind=0.0.0.0 -client=0.0.0.0 -node-meta=host-ip: -data-dir=/opt/consul/data -datacenter=dc1 -domain=consul
root@pocnjv1mcom09:/opt/consul#

root@pocnjv1mcom09:/opt/consul# sh start-agent
==> Starting Consul agent…
Version: '1.15.3'
Build Date: '2023-06-01 20:40:32 +0000 UTC'
Node ID: '700c18c0-fa35-11a0-65bd-0d0344e15b08'
Node name: 'pocnjv1mcom09'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, gRPC-TLS: -1, DNS: 8600)
Cluster Addr: 135.16.25.44 (LAN: 8301, WAN: 8302)
Gossip Encryption: false
Auto-Encrypt-TLS: false
HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2
gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2

==> Log data will now stream in as it occurs:

2023-06-23T19:19:57.602Z [WARN] agent.client.memberlist.lan: memberlist: Binding to public address without encryption!
2023-06-23T19:19:57.603Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: pocnjv1mcom09 135.16.25.44
2023-06-23T19:19:57.603Z [INFO] agent.router: Initializing LAN area manager
2023-06-23T19:19:57.603Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=udp
2023-06-23T19:19:57.603Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=tcp
2023-06-23T19:19:57.603Z [INFO] agent: Starting server: address=[::]:8500 network=tcp protocol=http
2023-06-23T19:19:57.604Z [INFO] agent: started state syncer
2023-06-23T19:19:57.604Z [INFO] agent: Consul agent running!
2023-06-23T19:19:57.604Z [WARN] agent.router.manager: No servers available
2023-06-23T19:19:57.604Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"

2023-06-23T19:20:15.598Z [WARN] agent.router.manager: No servers available
2023-06-23T19:20:15.598Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
2023-06-23T19:20:19.679Z [ERROR] agent: Failed to check for updates: error="Get \"https://checkpoint-api.hashicorp.com/v1/check/consul?arch=amd64&os=linux&signature=ab9b61a7-852b-61c6-ed0b-125ec4469f92&version=1.15.3\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
^C2023-06-23T19:20:28.583Z [INFO] agent: Caught: signal=interrupt
2023-06-23T19:20:28.583Z [INFO] agent: Gracefully shutting down agent…
2023-06-23T19:20:28.583Z [INFO] agent.client: client starting leave
2023-06-23T19:20:28.583Z [INFO] agent.client.serf.lan: serf: EventMemberLeave: pocnjv1mcom09 135.16.25.44

^C2023-06-23T19:20:31.087Z [INFO] agent: Caught second signal, Exiting: signal=interrupt
2023-06-23T19:20:31.087Z [INFO] agent: Requesting shutdown
2023-06-23T19:20:31.087Z [INFO] agent.client: shutting down client
2023-06-23T19:20:31.087Z [WARN] agent.client.serf.lan: serf: Shutdown without a Leave
2023-06-23T19:20:31.090Z [INFO] agent: consul client down
2023-06-23T19:20:31.090Z [INFO] agent: shutdown complete
2023-06-23T19:20:31.090Z [INFO] agent: Stopping server: protocol=DNS address=0.0.0.0:8600 network=tcp
2023-06-23T19:20:31.090Z [INFO] agent: Stopping server: protocol=DNS address=0.0.0.0:8600 network=udp
2023-06-23T19:20:31.090Z [INFO] agent: Stopping server: address=[::]:8500 network=tcp protocol=http
2023-06-23T19:20:31.090Z [INFO] agent: Waiting for endpoints to shut down
2023-06-23T19:20:31.090Z [INFO] agent: Endpoints down
2023-06-23T19:20:31.090Z [INFO] agent: Exit code: code=1

Why would you do that?!

You have no way of knowing whether you're getting sensible guidance or a meaningless mashup of contradictory or wrong information, assembled in a plausible-seeming way.

And then, after all that, you don’t seem to have even followed it, either.

You haven't told Consul where to find the cluster so that it can connect to it, which is done via the -retry-join CLI option or retry_join in the configuration file.
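
As a rough sketch (the addresses below are just placeholders for whatever server addresses are actually reachable from your VM), that is either

consul agent -retry-join=<consul-server-address> -config-dir=/opt/consul/config

or, in the configuration file:

"retry_join": ["<consul-server-address-1>", "<consul-server-address-2>", "<consul-server-address-3>"]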

I tried many options before turning to ChatGPT as a last resort. I agree that it can produce a useless mashup, but I thought I might get some ideas from it.

Attempt 1)

root@pocnjv1mcom09:/opt/consul/config# cat client.json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul/data",
  "retry_join": ["consul-cluster-node-1-ip", "consul-cluster-node-2-ip", "consul-cluster-node-3-ip"],
  "client_addr": "0.0.0.0",
  "bind_addr": "",
  "enable_central_service_config": true,
  "server": false,
  "ui": false,
  "enable_syslog": true,
  "log_level": "info"
}
root@pocnjv1mcom09:/opt/consul/config# consul agent -config-dir=/opt/consul/config
==> No private IPv4 address found
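
My reading of that error is that, with bind_addr left empty, Consul tries to auto-detect a private IPv4 address, and this VM only has 135.16.25.44 (the address shown in the earlier run). So I assume the config needs the address set explicitly, roughly like this (just my guess, not verified):

"bind_addr": "135.16.25.44",
"advertise_addr": "135.16.25.44",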

Attempt 2)

{
  "node": "pocnjv1mcom09",
  "advertise_addr": "",
  "bind_addr": "",
  "client_addr": "0.0.0.0",
  "ports": {
    "grpc": 8501,
    "grpc_tls": -1
  },
  "data_dir": "/opt/consul/data",
  "datacenter": "dc1",
  "retry_join": ["consul-consul-server-0.consul-consul-server.consul.svc:8301", "consul-consul-server-1.consul-consul-server.consul.svc:8301", "consul-consul-server-2.consul-consul-server.consul.svc:8301"],
  "domain": "consul",
  "server": false,
  "ui": false,
  "enable_syslog": true,
  "log_level": "info"
}

2023-06-23T20:43:18.262Z [WARN] agent.auto_config: The 'ui' field is deprecated. Use the 'ui_config.enabled' field instead.
2023-06-23T20:43:18.262Z [WARN] agent.auto_config: bootstrap_expect > 0: expecting 3 servers
2023-06-23T20:43:18.273Z [INFO] agent.server.raft: initial configuration: index=0 servers=
2023-06-23T20:43:18.274Z [INFO] agent.server.raft: entering follower state: follower="Node at 135.16.25.44:8300 [Follower]" leader-address= leader-id=
2023-06-23T20:43:18.274Z [WARN] agent.server.memberlist.wan: memberlist: Binding to public address without encryption!
2023-06-23T20:43:18.274Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: pocnjv1mcom09.dc1 135.16.25.44
2023-06-23T20:43:18.275Z [WARN] agent.server.memberlist.lan: memberlist: Binding to public address without encryption!
2023-06-23T20:43:18.275Z [INFO] agent.server.serf.lan: serf: EventMemberJoin: pocnjv1mcom09 135.16.25.44
2023-06-23T20:43:18.275Z [INFO] agent.router: Initializing LAN area manager
2023-06-23T20:43:18.275Z [INFO] agent.server.autopilot: reconciliation now disabled
2023-06-23T20:43:18.275Z [INFO] agent.server: Handled event for server in area: event=member-join server=pocnjv1mcom09.dc1 area=wan
2023-06-23T20:43:18.276Z [INFO] agent.server: Adding LAN server: server="pocnjv1mcom09 (Addr: tcp/135.16.25.44:8300) (DC: dc1)"
2023-06-23T20:43:18.278Z [INFO] agent.server.cert-manager: initialized server certificate management
2023-06-23T20:43:18.279Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=tcp
2023-06-23T20:43:18.279Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=udp
2023-06-23T20:43:18.280Z [INFO] agent: Starting server: address=[::]:8500 network=tcp protocol=http
2023-06-23T20:43:18.280Z [INFO] agent: Started gRPC listeners: port_name=grpc address=[::]:8502 network=tcp
2023-06-23T20:43:18.280Z [INFO] agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce hcp k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
2023-06-23T20:43:18.280Z [INFO] agent: Joining cluster...: cluster=LAN
2023-06-23T20:43:18.280Z [INFO] agent: (LAN) joining: lan_addresses=["135...", "135...", "135...*", "consul-consul-server-0.consul-consul-server.consul.svc:8301", "consul-consul-server-1.consul-consul-server.consul.svc:8301", "consul-consul-server-2.consul-consul-server.consul.svc:8301", "consul-consul-server.consul.svc:8301"]
2023-06-23T20:43:18.281Z [INFO] agent: started state syncer
2023-06-23T20:43:18.281Z [INFO] agent: Consul agent running!
2023-06-23T20:43:18.482Z [WARN] agent.server.memberlist.lan: memberlist: Failed to resolve consul-consul-server-0.consul-consul-server.consul.svc:8301: lookup consul-consul-server-0.consul-consul-server.consul.svc on 127.0.0.53:53: no such host
2023-06-23T20:43:18.661Z [WARN] agent.server.memberlist.lan: memberlist: Failed to resolve consul-consul-server-1.consul-consul-server.consul.svc:8301: lookup consul-consul-server-1.consul-consul-server.consul.svc on 127.0.0.53:53: no such host
2023-06-23T20:43:19.082Z [WARN] agent.server.memberlist.lan: memberlist: Failed to resolve consul-consul-server-2.consul-consul-server.consul.svc:8301: lookup consul-consul-server-2.consul-consul-server.consul.svc on 127.0.0.53:53: no such host
2023-06-23T20:43:19.368Z [WARN] agent.server.memberlist.lan: memberlist: Failed to resolve consul-consul-server.consul.svc:8301: lookup consul-consul-server.consul.svc on 127.0.0.53:53: no such host
2023-06-23T20:43:19.368Z [WARN] agent: (LAN) couldn’t join: number_of_nodes=0

I guess the issue is that the client cannot reach the Consul servers running on K8s. I tried exposing the Consul servers with a NodePort service, but the client is still not connecting through the exposed server ports (see the retry_join sketch after the service listing below).

root@pocnjv1mcom01:~# k get all -n consul
NAME READY STATUS RESTARTS AGE
pod/consul-consul-client-44zt5 1/1 Running 0 3d8h
pod/consul-consul-client-5bx95 1/1 Running 0 3d8h
pod/consul-consul-client-hrrgd 1/1 Running 0 3d8h
pod/consul-consul-client-vfrdz 1/1 Running 0 3d8h
pod/consul-consul-server-0 1/1 Running 0 6d21h
pod/consul-consul-server-1 1/1 Running 0 6d21h
pod/consul-consul-server-2 1/1 Running 0 6d21h
pod/consul-consul-sync-catalog-74fdc4b4df-qdr8g 1/1 Running 1 (6d21h ago) 6d21h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/consul-consul-dns ClusterIP 10.233.24.24 53/TCP,53/UDP 6d21h
service/consul-consul-expose-servers NodePort 10.233.35.46 8500:32002/TCP,8301:30521/TCP,8300:30483/TCP,8502:31059/TCP 128m
service/consul-consul-server ClusterIP None 8500/TCP,8502/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 6d21h
service/consul-consul-ui NodePort 10.233.12.244 80:32001/TCP
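
Given the NodePort mapping above (Serf LAN 8301 exposed as 30521), I assume the VM client's retry_join would have to point at a worker node IP plus that NodePort, roughly like this, where <k8s-node-ip> is a placeholder for any node IP reachable from the VM (I haven't confirmed this is the right approach):

"retry_join": ["<k8s-node-ip>:30521"]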

I also tried the catalog register API, but it is not registering the VM either.

root@pocnjv1mcom01:~/hashicorp# cat payload.json
{
  "Datacenter": "dc1",
  "Node": "",
  "Address": ""
}
root@pocnjv1mcom01:~/hashicorp# curl --request PUT --data @./payload.json http:///v1/catalog/register
trueroot
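
In case it matters, I assume the way to verify membership is to run consul members from one of the server pods, e.g.:

kubectl exec -n consul consul-consul-server-0 -- consul members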

Anybody able to help here?