Consul agent not installed on Kubernetes master node

I’ve installed Consul onto a local Kubernetes instance.
I have one master and 3 worker nodes.

```
root@pocnjv1mcom01:~/hashicorp/consul-1.1.0# kubectl get nodes -o wide
pocnjv1mcom01 Ready control-plane,master 388d v1.22.5
pocnjv1mcom02 Ready 388d v1.22.5
pocnjv1mcom04 Ready 386d v1.22.5
pocnjv1mcom05 Ready 386d v1.22.5
```

In the Consul Helm values.yaml file I have:

```yaml
enabled: true
enabled: true
toConsul: true
toK8S: false
```

pocnjv1mcom02, pocnjv1mcom04, and pocnjv1mcom05 are registered as nodes in Consul, and I can also see that the services running on those nodes are registered. But I don’t see the master node, pocnjv1mcom01, registered in Consul.

  1. Do I have to manually start the agent on the master node? Any idea why the master node was left out?

  2. If I do have to start it manually, can you please provide the command? When I check the Consul processes on the other nodes, I see several processes running as the `systemd+` user. How can I manually start these services on the master node?

```
root@pocnjv1mcom04:~# ps -ef | grep -i consul
root 1797 1639 0 22:04 pts/0 00:00:00 grep --color=auto -i consul
systemd+ 22847 22824 0 Jun16 ? 00:00:00 /usr/bin/dumb-init /bin/sh /usr/local/bin/ consul agent -advertise= -config-dir=/consul/config -config-file=/consul/extra-config/extra-from-values.json
systemd+ 22898 22847 2 Jun16 ? 00:30:33 consul agent -advertise= -config-dir=/consul/config -config-file=/consul/extra-config/extra-from-values.json
systemd+ 23823 23789 0 Jun16 ? 00:00:00 /usr/bin/dumb-init /bin/sh /usr/local/bin/ consul agent -node=pocnjv1mcom04 -advertise= -bind= -client= -node-meta=host-ip: -node-meta=pod-name:consul-consul-client-wv6x6 -hcl=leave_on_terminate = true -hcl=ports { grpc = 8502, grpc_tls = -1 } -config-dir=/consul/config -config-dir=/consul/aclconfig -datacenter=dc1 -data-dir=/consul/data -retry-join=consul-consul-server-0.consul-consul-server.consul.svc:8301 -retry-join=consul-consul-server-1.consul-consul-server.consul.svc:8301 -retry-join=consul-consul-server-2.consul-consul-server.consul.svc:8301 -config-file=/consul/extra-config/extra-from-values.json -domain=consul
systemd+ 23858 23823 0 Jun16 ? 00:09:00 consul agent -node=pocnjv1mcom04 -advertise= -bind= -client= -node-meta=host-ip: -node-meta=pod-name:consul-consul-client-wv6x6 -hcl=leave_on_terminate = true -hcl=ports { grpc = 8502, grpc_tls = -1 } -config-dir=/consul/config -config-dir=/consul/aclconfig -datacenter=dc1 -data-dir=/consul/data -retry-join=consul-consul-server-0.consul-consul-server.consul.svc:8301 -retry-join=consul-consul-server-1.consul-consul-server.consul.svc:8301 -retry-join=consul-consul-server-2.consul-consul-server.consul.svc:8301 -config-file=/consul/extra-config/extra-from-values.json -domain=consul
```

It is quite normal for Kubernetes control plane nodes to be reserved for Kubernetes control plane components only. My guess is that’s what’s happening here.

The master node probably has a taint with NoSchedule effect. You can still schedule a Pod on that node by using tolerations.
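For example, a blanket toleration like the one below lets a Pod land on any tainted node. This is a sketch of a PodSpec fragment, not the chart’s exact syntax; the specific taint key on your master (commonly `node-role.kubernetes.io/master` on v1.22 clusters) can be confirmed with `kubectl describe node <node-name>`:

```yaml
# PodSpec fragment: tolerate every taint, including the
# master's NoSchedule taint (operator: Exists with no key
# matches all keys and effects)
tolerations:
  - operator: Exists
```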

Yes, the master node is tainted with NoSchedule. We don’t want application pods running on the master, but I would like to monitor the node in Consul. How can I do that?

```
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers
pocnjv1mcom01 [map[effect:NoSchedule]]
```

@macmiranda, I have the same setting in my values.yaml, yet the master node doesn’t get a Consul agent.

```yaml
# Toleration Settings for Client pods
# This should be a multi-line string matching the Toleration array
# in a PodSpec.
# The example below will allow Client pods to run on every node
# regardless of taints
#
# ```yaml
# tolerations: |
#   - operator: Exists
# ```
tolerations: ""
```

Well, you actually have to set `tolerations` to a value different from `""`:


```yaml
tolerations: |
  - operator: Exists
```
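In context, that setting sits under the chart’s client configuration, so a minimal values.yaml fragment might look like the sketch below. The `client:` nesting is an assumption based on the commented excerpt above, and the release and namespace names in the upgrade command are placeholders for your own:

```yaml
# values.yaml fragment (assumed layout): let Consul client
# pods tolerate every taint, so the DaemonSet also schedules
# a client pod on the tainted master node
client:
  enabled: true
  tolerations: |
    - operator: Exists
```

After editing, roll the change out with something like `helm upgrade consul hashicorp/consul -n consul -f values.yaml`, then check that a client pod appears on the master with `kubectl get pods -o wide -n consul`.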

@macmiranda thank you for that information. It worked.
