K8s Installation with Helm doesn't work - connection errors

Hi there,

I’ve tried running a Consul server instance on K8s and connecting virtual-machine clients from outside the cluster.

To deploy the server instance, I’m using the official Helm chart.
Since I want to connect from outside the cluster, I expose the server service:

global: 
  enabled: false
  name: consul
server:
  enabled: true
  replicas: 1
  exposeService:
    enabled: true
  # exposeGossipAndRPCPorts: true
  # ports:
  #   serflan: 
  #     port: 9301
ui:
  enabled: true
  ingress:
    enabled: true
    hosts:
      - host: consul-hawkv6.stud.network.garden
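
For completeness, I deploy it roughly like this (release name and namespace inferred from the output below; the values file name is my own):

helm install consul hashicorp/consul \
  --namespace hawkv6-consul \
  --create-namespace \
  --values values.yaml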

As soon as I deploy it with Helm, the following services are deployed:

k get svc -n hawkv6-consul
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                            AGE
consul-connect-injector   ClusterIP      10.105.101.215   <none>        443/TCP                                                                            4s
consul-dns                ClusterIP      10.103.12.243    <none>        53/TCP,53/UDP                                                                      4s
consul-expose-servers     LoadBalancer   10.110.61.225    10.8.39.70    8500:32018/TCP,8301:30217/TCP,8300:32084/TCP,8502:31537/TCP                        4s
consul-server             ClusterIP      None             <none>        8500/TCP,8502/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   4s
consul-ui                 ClusterIP      10.96.31.30      <none>        80/TCP    

Furthermore, the pods are deployed:

k get pods -n hawkv6-consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-connect-injector-6b99dfbccf-6njf4       1/1     Running   0          84s
consul-server-0                                1/1     Running   0          84s
consul-webhook-cert-manager-546dbcc5c6-9njwx   1/1     Running   0          84s

So far, the UI is working, and the server started properly.

Now, when I start the agent on a VM, the client can join, but it has connection issues: the responses come not from the LoadBalancer IP but from the internal pod IP, which is not reachable from the virtual machines.
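
For reference, I start the client on the VM roughly like this (flags simplified; the join address is the LoadBalancer IP of consul-expose-servers):

# join via the consul-expose-servers LoadBalancer IP
consul agent -data-dir=/opt/consul -retry-join=10.8.39.70 -bind=<vm-ip>
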
See the attached logs for more details:
consul-server.log.txt
consul.log.txt

I have already tried using a NodePort instead of a LoadBalancer, and I also tried enabling exposeGossipAndRPCPorts and changing the serflan port. Nothing works. When I change the serflan port, joining fails entirely.

Is there even a way to connect virtual machines to a server running on K8s without a route to the internal pod IP?

Can somebody help?

Thanks a lot.

Cheers,
Severin

Hi @severin.dellsperger,

Welcome to the HashiCorp Forums! I hope you have seen the official documentation listing the networking requirements for this setup.

One of the following requirements must be met:

  • The pod IPs should be routable from the client VM IPs, and vice versa
  • Alternatively, if using host ports, the K8s worker node IPs should be routable from the VMs, and vice versa

You won’t be able to join client agents via a load balancer, as the Serf protocol used between agents requires full-mesh network connectivity.
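
For the host-port option, a rough sketch of the values could look like this (the key is the one you already have commented out in your values file):

server:
  enabled: true
  # Bind the server's Serf LAN gossip and RPC ports to the worker
  # node via hostPorts, so agents outside the cluster can reach
  # them on the node IP
  exposeGossipAndRPCPorts: true

With that, the VM agents would retry_join a worker node IP instead of the LoadBalancer IP.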

Hi @Ranjandas,

Thanks for your explanation.

Fortunately, I found the issue and resolved it by using Host Ports. However, I still have some questions:

If pod IPs are not routable, are host ports the only solution? While using host ports fixed the problem, it doesn’t seem like the optimal approach, especially regarding high availability and security.

Also, what is the purpose of exposing a Service with a LoadBalancer or NodePort if pod IPs are not routable? If it’s only for the API, wouldn’t it be more appropriate to use an Ingress?

Thanks,
Severin

I’m glad you resolved the issue using host ports. You are right; if pod IPs are not routable, host ports are the only option. This requirement comes from the underlying gossip protocol, Serf, which relies on every agent (client or server alike) being able to talk to every other agent, like a full mesh.

Also, what is the purpose of exposing a Service with a LoadBalancer or NodePort if pod IPs are not routable? If it’s only for the API, wouldn’t it be more appropriate to use an Ingress?

The LoadBalancer or NodePort service is not meant for client agents. In the new architecture, Consul on Kubernetes no longer has to run client agents at all, which removes the above requirement of routable pod IPs.
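
To illustrate, a minimal agentless setup could look roughly like this (a sketch; in recent chart versions there are no client agents to disable in the first place):

global:
  enabled: true
  name: consul
server:
  enabled: true
connectInject:
  enabled: true   # workloads reach the servers through injected proxies
client:
  enabled: false  # no node-level client agents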

One use case for this LoadBalancer or NodePort service is spanning a single Consul datacenter across multiple Kubernetes clusters, or using the enterprise admin partitions feature, where each Kubernetes cluster belongs to a separate partition. In these use cases, the primary cluster that runs the Consul servers exposes them via a LoadBalancer/NodePort service, allowing the components in the other Kubernetes clusters to reach Consul.
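
As a rough sketch, a secondary Kubernetes cluster could be pointed at the exposed servers like this (assuming the externalServers section of the chart; the IP stands in for your consul-expose-servers LoadBalancer IP):

global:
  enabled: false
externalServers:
  enabled: true
  # address of the Consul servers exposed by the primary cluster
  hosts: ["10.8.39.70"]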

I hope this helps.

Fantastic explanation!
It really helped me to understand it better.
Many thanks,
Severin
