Deploy Consul in EKS

Hi, I have installed Consul with Helm using a specific configuration in values.yaml on my EKS cluster (I have 3 Consul server instances in my AWS console).
I am not sure if I am missing anything or did something wrong, but it doesn't work.
I have:

  1. Installed an NGINX ingress controller with an NLB via Helm.
  2. Created an A record in Route 53 that points to my NLB.
  3. Created consul namespace
  4. Created consul-gossip-encryption-key secret resource under consul namespace
  5. Installed consul with helm
  6. Changed the coredns cm resource
  7. Created an ingress rule for consul
  8. Of course I opened all the ports needed:
    Required Ports | Consul | HashiCorp Developer

I have attached the pod logs:
This is a test environment and it's going to be destroyed, so I am OK with sharing the entire logs.
consulpo1.txt (3.4 KB)
consulpo3.txt (3.8 KB)


This is my values.yaml content:

global:
  enabled: false
  image: "hashicorp/consul:1.15.2"
  datacenter: opsschool
  gossipEncryption:
    secretName: "consul-gossip-encryption-key"
    secretKey: "key"

client:
  enabled: true
  join:
    - "provider=aws tag_key=Consul tag_value=server"
    
dns:
  enabled: true

syncCatalog:
  enabled: true

This is the coredns configmap content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
      errors
      health

      kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
      }

      prometheus :9153
      proxy . /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
    }

    consul {
      errors
      cache 30
      forward . consul-consul-dns.consul.svc.cluster.local
    }
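
One thing worth double-checking in the consul stanza above: CoreDNS's forward plugin expects IP addresses (or a resolv.conf-style file), not Kubernetes DNS names, so the usual approach is to point it at the ClusterIP of the consul-dns Service instead. A sketch, with the IP as a placeholder to look up via `kubectl get svc -n consul consul-consul-dns`:

```
consul {
  errors
  cache 30
  # Placeholder -- replace with the actual ClusterIP of the consul-consul-dns Service
  forward . 10.100.0.50
}
```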

I don't know what to check exactly.
I understand that my EKS cluster can't reach my Consul servers in AWS…

That’s pretty much the worst possible way to report a bug or ask for help. Unless you explain what doesn’t work, and how it fails, chances of anyone understanding your problem are very low.

Are you implying that you have Consul running on VMs, separate from EKS?

Because:

You seem to have opted not to deploy any Consul servers via Helm.

so it is no wonder that, in your logs:

[ERROR] consul-server-connection-manager: connection error: error="failed to discover Consul server addresses: failed to resolve DNS name: consul-consul-server.consul.svc: lookup consul-consul-server.consul.svc on 172.20.0.10:53: no such host"

there aren’t any to be found.
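
If the intent is for the clients and catalog sync in EKS to talk to servers running on EC2, the chart's externalServers stanza is what tells the Kubernetes components where the server API lives; otherwise they fall back to the in-cluster consul-server Service, which is exactly the lookup failing in that log line. A minimal sketch, assuming the same AWS auto-join tags as in the original values.yaml — verify the exact keys against the chart version in use:

```yaml
global:
  enabled: false
  datacenter: opsschool

# Point the Kubernetes components at the EC2-hosted servers
externalServers:
  enabled: true
  # Cloud auto-join strings are accepted here as well as plain addresses
  hosts:
    - "provider=aws tag_key=Consul tag_value=server"

client:
  enabled: true
  join:
    - "provider=aws tag_key=Consul tag_value=server"

syncCatalog:
  enabled: true
```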

Sorry for not knowing how to explain the problem.
I am not an expert in Consul, and there is very little information about Consul on Google.
I have 3 servers in the AWS console, and I installed Consul on my EKS cluster as well using Helm.
I want the EKS cluster to communicate with my servers outside EKS.
I thought it was supposed to look for the servers using an IAM role with "DescribeInstances" and this configuration:
join:
  - "provider=aws tag_key=Consul tag_value=server"
For sure I am doing something wrong; I am just not sure what exactly.
The best thing would be to know what steps should be taken for Consul in EKS to work with the servers in AWS.
I went through a few pieces of documentation and didn't understand what the problem could be.
If I could only find a video that explains how to do it, that would be great, but most of the videos are out of date and they don't cover how to connect Consul in EKS with Consul in AWS.
Maybe I haven't found the right video yet…
This is so far the hardest topic to learn, because there is very little info online…