Hi, I have installed Consul with Helm on my EKS cluster, using a specific configuration in values.yaml (I have 3 Consul server instances visible in my AWS console).
I am not sure if I am missing anything or did something wrong, but it doesn't work.
I have:
- Installed an NGINX ingress controller with Helm, backed by an NLB.
- Created an A record in Route 53 that points to my NLB.
- Created the consul namespace.
- Created the consul-gossip-encryption-key Secret in the consul namespace.
- Installed Consul with Helm.
- Updated the coredns ConfigMap.
- Created an Ingress rule for Consul.
- Of course, I opened all the required ports:
Required Ports | Consul | HashiCorp Developer
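For reference, this is roughly how I created the gossip encryption key Secret mentioned above (assumes the consul CLI is available on my machine; the namespace and Secret name match my setup):

```shell
# Create the namespace, then generate a gossip key and store it as the
# Secret referenced by global.gossipEncryption in values.yaml.
kubectl create namespace consul
kubectl create secret generic consul-gossip-encryption-key \
  --namespace consul \
  --from-literal=key="$(consul keygen)"
```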
I have attached the pod logs. This is a test environment and it's going to be destroyed, so I am OK with sharing the entire logs:
consulpo1.txt (3.4 KB)
consulpo3.txt (3.8 KB)
This is my values.yaml content:
global:
  enabled: false
  image: "hashicorp/consul:1.15.2"
  datacenter: opsschool
  gossipEncryption:
    secretName: "consul-gossip-encryption-key"
    secretKey: "key"
client:
  enabled: true
  join:
    - "provider=aws tag_key=Consul tag_value=server"
dns:
  enabled: true
syncCatalog:
  enabled: true
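For context, this is the install command I used (release name consul, which is where the consul-consul-dns service name in the CoreDNS config below comes from):

```shell
# Add the HashiCorp chart repo and install with my values.yaml.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul \
  --namespace consul \
  --values values.yaml
```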
This is the coredns configmap content:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    consul {
        errors
        cache 30
        forward . consul-consul-dns.consul.svc.cluster.local
    }
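To check whether the consul stub domain actually forwards, I run something like this from a throwaway pod (busybox image just as an example):

```shell
# Resolve a Consul service through the cluster DNS; if CoreDNS forwards
# the consul domain correctly, this should return the registered
# Consul server addresses.
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup consul.service.consul
```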
I don't know what exactly to check. My current understanding is that my EKS pods can't reach my Consul servers in AWS…
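To confirm that suspicion, I'd like to test basic reachability from a pod to the server EC2 instances, e.g. the serf LAN and server RPC ports (the IP below is a placeholder for one of my servers' private IPs):

```shell
# Replace 10.0.0.10 with a Consul server's private IP.
# 8301/tcp is serf LAN (gossip); 8300/tcp is server RPC.
kubectl run net-test --rm -it --image=busybox:1.36 --restart=Never -- \
  sh -c 'nc -zv -w 3 10.0.0.10 8301 && nc -zv -w 3 10.0.0.10 8300'
```

If these time out, it would point at security groups or routing between the EKS node subnets and the Consul server instances.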