Problems occur when client nodes outside the Kubernetes cluster join the cluster

Hello there! I have encountered some problems while following the documentation.

Steps:

  1. On the Kubernetes cluster, consul is deployed with helm.
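For reference, a minimal sketch of that deployment, assuming the official HashiCorp chart with its default values:

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul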
  2. Create a file called test.json, which defines a service named "test":
{
    "service": {
        "id": "test",
        "name": "test",
        "address": "192.168.137.120",
        "port": 80
    }
}
  3. Start the agent on a computer outside the cluster (hereinafter referred to as the “local computer”):
.\consul.exe agent -data-dir .\consul -config-file .\test.json -retry-join 'provider=k8s label_selector="app=consul,component=server"' -bind 192.168.137.120
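Once the agent is up, the local registration can be confirmed against the agent's HTTP API (assuming the default HTTP port 8500):

curl http://localhost:8500/v1/agent/services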
  4. Enter the shell of the consul server in Kubernetes and run the consul join command.
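For example, a minimal sketch, assuming the join target is the local computer's bind address from step 3:

consul join 192.168.137.120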

Problem:
Now I can see the local computer when I run consul members, and it is alive.
But I can't see the service.

On consul server inside Kubernetes cluster:
[screenshot]

On local computer:

Notice that all node addresses are internal addresses of the cluster, which cannot be pinged from the local computer; this may be why the cluster cannot communicate normally.
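For illustration, from the local computer (the 10.x address below is a hypothetical pod-network address; substitute one shown by consul members):

ping 10.42.0.5          # internal cluster address: unreachable from the local computer
ping 192.168.137.121    # node address exposed on the LAN: reachable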


Other attempts I have made:

curl -X PUT -d '{ "id": "test", "name": "test", "address": "192.168.137.120", "port": 4844 }' http://192.168.137.121:30301/v1/agent/service/register

This sends the request to the port exposed by the cluster, with the service address set to the local computer.
In this case, although the service registers successfully, the Node Name shown is a server inside the cluster, which leads to an inaccurate Serf health status (the service is not reported as failed when the local machine disconnects).
[screenshot]
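To inspect the resulting health status, the registration can be queried through the same exposed port (assuming the address and port from the curl above):

curl http://192.168.137.121:30301/v1/health/service/test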