How to deploy the Consul client on K8s with custom config.yaml?

Hi,

I am trying to deploy the Consul client on a k8s cluster (the Consul server is on a Docker Swarm cluster). I want to use a config.yaml (mentioned in Consul Servers Outside of Kubernetes - Kubernetes | Consul by HashiCorp) to set up the configuration. I found a Helm Chart Configuration page (Helm Chart Configuration | Consul by HashiCorp) and a Configuration page (Configuration | Consul by HashiCorp). What is the difference between them? It seems that I need to refer to the Helm Chart Configuration page since I am working on k8s. However, on the Helm Chart Configuration page, I cannot find how to set up things like node_name, data_dir, client_addr, bind_addr, and advertise_addr. Besides, I also need to set verify_incoming, encrypt, verify_outgoing, and verify_server_hostname. For ca_file, cert_file, and key_file, I assume that cert_file maps to caCert (in the Helm Chart Configuration) and key_file maps to caKey, and I am not sure what ca_file stands for.

Any help would be appreciated.
Thanks

Hi @cheyuxuanll,

Most of these parameters are specified by the Helm chart when deploying the Consul client or server pods. Is there a particular reason you need to override these settings?

With that said, you can use server.extraConfig and client.extraConfig to provide additional configuration parameters to the server and client agents. Normally you would only specify parameters that are not already specified by the Helm chart.
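
For example, something like this passes an extra agent option through to the client agents (a minimal sketch; log_level is only an illustrative setting, not one you necessarily need):

client:
  # extraConfig takes a raw JSON string that is merged into the agent configuration
  extraConfig: |
    {
      "log_level": "DEBUG"
    }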

Hi @blake,

Thanks for your reply!
Someone on my team deployed the Consul servers in a Docker Swarm with those parameters, and I need to deploy the Consul client in a k8s cluster. So I thought I needed to set similar parameters through the Helm chart.

Here is the configuration JSON they used to create one of the Consul servers (we have three servers: one leader and two workers):

{
  "node_name": "consul-server3",
  "server": true,
  "ui_config": {
    "enabled": true
  },
  "data_dir": "/consul/data",
  "addresses": {
    "http": "0.0.0.0"
  },
  "bind_addr": "0.0.0.0",
  "advertise_addr": "192.168.10.3",
  "retry_join": ["consul-server1", "consul-server2"], 
  "encrypt": "test",
  "verify_incoming": false,
  "verify_outgoing": true,
  "verify_server_hostname": true,
  "ca_file": "/consul/config/certs/ca-cert.pem",
  "cert_file": "/consul/config/certs/server3.dc1.consul.crt",
  "key_file": "/consul/config/certs/server3.dc1.consul.key"
}

Here is the consul.yml that I am using to deploy the client on k8s (with Helm):

global:
  enabled: false
  tls:
    enabled: true
    verify: true
    enableAutoEncrypt: true
    # caKey:   # I thought I could use auto-encrypt, so I commented these two lines out.
    # caCert:
  gossipEncryption:
    secretName: "consul-gossip-encryption-key"  # I created this k8s secret from the "encrypt" value.
externalServers:
  enabled: true
  hosts:
    - '192.168.10.1'
    - '192.168.10.2'
    - '192.168.10.3'
client:
  enabled: true
  exposeGossipPorts: true
  retry-join:
    - '192.168.10.1'  # for consul-server1
    - '192.168.10.2'  # for consul-server2
    - '192.168.10.3'  # for consul-server3
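
(For reference, I created the consul-gossip-encryption-key secret mentioned above with a command along these lines; the literal key name here is my assumption about what the chart expects, and <gossip-encryption-key> stands in for the same value used for encrypt on the servers.)

# the literal key name ("key") is illustrative; it must match the secretKey set in the chart values
kubectl create secret generic consul-gossip-encryption-key \
  --namespace consul \
  --from-literal=key='<gossip-encryption-key>'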

I am not sure whether this client config aligns with the server config.

Actually, I ran into a problem here:
I am using helm install consul hashicorp/consul --create-namespace --namespace consul --values consul.yml to deploy this client, and I get this response:

NAME: consul
LAST DEPLOYED: Tue Mar 8 18:11:53 2022
NAMESPACE: consul
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!
Your release is named consul.
To learn more about the release, run:
$ helm status consul
$ helm get all consul
Consul on Kubernetes Documentation:
https://www.consul.io/docs/platform/k8s
Consul on Kubernetes CLI Reference:
https://www.consul.io/docs/k8s/k8s-cli

And when I run helm status consul, it says "Error: release: not found" (same with helm get all consul). Am I missing something here?

If I input: helm list -A, I can get something like this :

NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
consul  consul          1               2022-03-08 19:45:24.812942169 +0000 UTC deployed        consul-0.41.1   1.11.3     

helm version: version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}

(we have four nodes in the k8s cluster)

Best
Yuxuan

Hi @cheyuxuanll,

You are installing Consul into the consul namespace in Kubernetes when you specify --namespace consul on the helm install command.

In order to retrieve the status of that deployment, you similarly need to specify the namespace flag when running helm status.

$ helm status --namespace consul consul
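
The same namespace flag applies to the other command you tried, for example:

$ helm get all --namespace consul consul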

Hi @blake,

Currently, I am using the following config.yaml in k8s:

global:
  enabled: false
  tls:
    enabled: true
    verify: true
    # enableAutoEncrypt: true
    caKey:
      secretName: "consul-ca-key"
      secretKey: tls.key
    caCert:
      secretName: "consul-ca-cert"
      secretKey: tls.crt
  gossipEncryption:
    secretName: "consul-gossip-encryption-key"
    secretKey: 'consul-gossip'
  imagePullSecrets:
    - name: regcred
client:
  enabled: true
  exposeGossipPorts: true
  join: ['192.168.10.1','192.168.10.2','192.168.10.3']
server:
  enabled: false
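
(For reference, I created the two CA secrets referenced above with commands along these lines; ca-cert.pem matches the server config earlier in this thread, while ca-key.pem is my assumption for the file name of the corresponding CA private key.)

# ca-cert.pem is the CA certificate the servers reference; ca-key.pem is an assumed
# file name for the corresponding CA private key
kubectl create secret generic consul-ca-cert \
  --namespace consul-client \
  --from-file='tls.crt=./ca-cert.pem'
kubectl create secret generic consul-ca-key \
  --namespace consul-client \
  --from-file='tls.key=./ca-key.pem'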

In my k8s cluster, there are four nodes: one master and three workers.
After I run this:

sudo helm install hi-consul hashicorp/consul -n consul-client -f config.yaml

and then run:

kubectl get pods -n consul-client -o wide -w

It shows that the Consul clients are deployed to the three workers but not to the master.
Am I missing something here?

When I inspect one of the pods, the conditions are:

Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True

However, when I check the events:

Warning Unhealthy 19s (x13 over 24m) kubelet Readiness probe failed:

Is this normal? Actually, my Consul server UI does list three nodes, but I am not sure whether the Consul clients are working correctly.
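
(One way to check, assuming the consul-client namespace and pod names from above, would be to run consul members from inside one of the client pods:)

# <client-pod-name> is a placeholder for one of the consul client pod names
kubectl exec --namespace consul-client <client-pod-name> -- consul members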