K8s agents on EKS joining external EC2 Consul servers via External_IP

We have an existing Consul cluster installed on VMs (a 3-node cluster on EC2 instances) and a Kubernetes cluster on EKS with a node group whose nodes have public IPs.

After deploying the following configuration:

global:
  enabled: false

client:
  enabled: true
  # Set this to true to expose the Consul clients using the Kubernetes node
  # IPs. If false, the pod IPs must be routable from the external servers.
  exposeGossipPorts: true
  # Consul Cluster Outside K8S leader IP
  join:
    - '10.12.25.152'  
  grpc: true
  nodeMeta:
    pod-name: ${HOSTNAME}
    host-ip: ${HOST_IP}

syncCatalog:
  enabled: true
  k8sDenyNamespaces: ["kube-system", "kube-public", "default"]
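
For reference, the values above were applied with the official Helm chart along these lines (the release name and values file name are just what I happen to use):

# add the HashiCorp chart repo, then install the client-only values shown above
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul -f values.yaml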

This creates the DaemonSet and the sync pod; however, the NodePort does not appear to be set up.

The agent nodes are added to Consul but become unhealthy: they advertise the Internal_IP of the nodes in the node group instead of the External_IP. Since the internal IP is not routable from the EC2 Consul cluster, this makes sense.
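
This is roughly how I am comparing the addresses, from my workstation and from one of the EC2 Consul servers respectively:

# EKS side: shows the INTERNAL-IP and EXTERNAL-IP of each node in the node group
kubectl get nodes -o wide

# EC2 server side: shows the address each agent advertised when it joined
consul members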

So my question is how to configure this to use the External_IP, which should be routable from the EC2 Consul cluster. I could create a public-only node group, but is that necessary?
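
Conceptually, what I think each agent on the EKS nodes needs is the equivalent of advertising the instance's public IP, the way a plain EC2 VM could by reading it from instance metadata (IMDSv1 shown; whether IMDSv2 tokens are required depends on the instances). I just don't see how to express that through the Helm chart:

# fetch the instance's public IP from EC2 instance metadata and advertise it
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
consul agent -data-dir=/opt/consul -retry-join 10.12.25.152 -advertise "$PUBLIC_IP"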

Alternatively, is it possible to have just the K8s sync pod send data to the existing cluster, without deploying Consul agents on the Kubernetes cluster?
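
For example, I was hoping something along these lines might work, using the externalServers stanza that newer chart versions have (I am not sure whether my chart version supports catalog sync without client agents, so treat this as a sketch):

global:
  enabled: false

client:
  enabled: false

externalServers:
  enabled: true
  hosts:
    # existing EC2 Consul server
    - '10.12.25.152'

syncCatalog:
  enabled: true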

Hello!

Can you clarify what you mean by 'External IP' (external to the VPC, or publicly accessible via the internet) and 'Internal IP' (internal to the pod network, or internal to the VPC)?

Also, this tutorial on exposing Kubernetes services in EKS might help: