Kubernetes Client mode Consul is not accepting service

Hi, I am trying to integrate a Kubernetes Consul Client (1.7.2) with a Consul Server (1.7.1) deployed on a VM. I am using Minikube as my Kubernetes environment.

I have Connect mesh and ACLs enabled on the server. I can see the Kubernetes client connected in the 'consul members' output.

However, when I try to deploy a Connect-enabled service in Kubernetes, the Pod is stuck in the "Init:CrashLoopBackOff" state.

The 'kubectl logs' output shows the following:
error: a container name must be specified for pod producer, choose one of: [producer consul-connect-envoy-sidecar consul-connect-lifecycle-sidecar] or one of the init containers: [consul-connect-inject-init]

The Pod is deployed successfully if I remove the "consul.hashicorp.com/connect-inject": "true" annotation.

Custom Values for Helm

global:
  name: consul
  datacenter: as
  acls:
    # manageSystemACLs: true
    bootstrapToken:
      secretName: bootstrap-token
      secretKey: token

server:
  enabled: false

client:
  enabled: true
  exposeGossipPorts: true
  join:
    - ''
  grpc: true

connectInject:
  enabled: true

externalServers:
  enabled: true
  hosts:
    - ''
  k8sAuthMethodHost: ''
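
For reference, a values file like this would typically be applied along the following lines (the release name and file name are placeholders, assuming Helm 3 and a local checkout of the consul-helm chart):

  # Placeholder release and file names; assumes Helm 3 and a local
  # checkout of the hashicorp/consul-helm chart
  git clone https://github.com/hashicorp/consul-helm.git
  helm install consul ./consul-helm -f custom-values.yaml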

Service Pod

apiVersion: v1
kind: ServiceAccount
metadata:
  name: producer

---

apiVersion: v1
kind: Pod
metadata:
  name: counting
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
spec:
  containers:
    - name: producer
      image: pfridm01/producer:latest
      ports:
        - containerPort: 5000
          name: http
  serviceAccountName: producer
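
Assuming the two manifests above are saved locally (file names below are arbitrary), they are applied with something like:

  kubectl apply -f producer-serviceaccount.yaml -f producer-pod.yaml
  # watch the pod get stuck in Init:CrashLoopBackOff
  kubectl get pods --watch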

Also, I had to comment out 'manageSystemACLs: true' since I am only deploying a client, and according to the docs this option requires a server; Helm was failing until I disabled it. In the tutorials this option was always set.


Hi @pfridm01,

Can you provide the logs from the consul-connect-inject-init container?

kubectl logs <pod_name> --container=consul-connect-inject-init

That should contain more info on why the injector is failing.

Hi Blake, here is the output:

[root@consul-kubernetes2 centos]# kubectl logs producer --container=consul-connect-inject-init
Error registering service "producer-sidecar-proxy": Unexpected response code: 403 (could not retrieve initial service_defaults config for service "producer-producer-sidecar-proxy": rpc error making call: Permission denied)

Thank you.

Hi @pfridm01,

It seems like ACLs are enabled in Consul. When ACLs are enabled, the name of the pod needs to match the serviceAccountName, which you have specified as producer.

Can you change the name of the pod to producer so that it matches the serviceAccountName and redeploy? The sidecar should correctly be injected after making this change.
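
For example, something like this should print both names side by side so you can confirm they match:

  kubectl get pod producer -o jsonpath='{.metadata.name}{"  "}{.spec.serviceAccountName}{"\n"}'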

Thanks for the quick reply. The Pod where my service is deployed is named 'producer', and the Service Account is also named 'producer'.

Here is the detailed Pod info:

[root@consul-kubernetes2 centos]# kubectl describe pods producer
Name: producer
Namespace: default
Priority: 0
Node: consul-kubernetes2.platform.comcast.net.openstacklocal/
Start Time: Thu, 21 May 2020 14:41:56 +0000
Annotations: consul.hashicorp.com/connect-inject: true
consul.hashicorp.com/connect-inject-status: injected
consul.hashicorp.com/connect-service: producer
consul.hashicorp.com/connect-service-port: http
Status: Pending
Init Containers:
  consul-connect-inject-init:
    Container ID:  docker://e7fe13109056dd5ce088aa60a1a626f5e5d520f3aec420460ae8b5042b6e25f1
    Image:         consul:1.7.2
    Image ID:      docker-pullable://consul@sha256:4592d81f9cecdc9fe1832bdcd22dfceafd36720011539679ae177f62cf169ce6
    Host Port:

  export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
  export CONSUL_GRPC_ADDR="${HOST_IP}:8502"

  # Register the service. The HCL is stored in the volume so that
  # the preStop hook can access it to deregister the service.
  cat <<EOF >/consul/connect-inject/service.hcl
  services {
    id   = "${PROXY_SERVICE_ID}"
    name = "producer-sidecar-proxy"
    kind = "connect-proxy"
    address = "${POD_IP}"
    port = 20000

    proxy {
      destination_service_name = "producer"
      destination_service_id = "${SERVICE_ID}"
      local_service_address = ""
      local_service_port = 5000
    }

    checks {
      name = "Proxy Public Listener"
      tcp = "${POD_IP}:20000"
      interval = "10s"
      deregister_critical_service_after = "10m"
    }

    checks {
      name = "Destination Alias"
      alias_service = "producer"
    }
  }

  services {
    id   = "${SERVICE_ID}"
    name = "producer"
    address = "${POD_IP}"
    port = 5000
  }
  EOF

  /bin/consul services register \
    /consul/connect-inject/service.hcl

  # Generate the envoy bootstrap code
  /bin/consul connect envoy \
    -proxy-id="${PROXY_SERVICE_ID}" \
    -bootstrap > /consul/connect-inject/envoy-bootstrap.yaml

  # Copy the Consul binary
  cp /bin/consul /consul/connect-inject/consul
State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       Error
  Exit Code:    1
  Started:      Thu, 21 May 2020 14:42:17 +0000
  Finished:     Thu, 21 May 2020 14:42:18 +0000
Ready:          False
Restart Count:  2
Environment:
  HOST_IP:            (v1:status.hostIP)
  POD_IP:             (v1:status.podIP)
  POD_NAME:           producer (v1:metadata.name)
  POD_NAMESPACE:      default (v1:metadata.namespace)
  SERVICE_ID:         $(POD_NAME)-producer
  PROXY_SERVICE_ID:   $(POD_NAME)-producer-sidecar-proxy
Mounts:
  /consul/connect-inject from consul-connect-inject-data (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from producer-token-r8t7c (ro)

Containers:
  producer:
    Container ID:
    Image:          pfridm01/producer:latest
    Image ID:
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from producer-token-r8t7c (ro)
  consul-connect-envoy-sidecar:
    Container ID:
    Image:          envoyproxy/envoy-alpine:v1.13.0
    Image ID:
    Host Port:
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      HOST_IP:           (v1:status.hostIP)
      CONSUL_HTTP_ADDR:  (HOST_IP):8500
    Mounts:
      /consul/connect-inject from consul-connect-inject-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from producer-token-r8t7c (ro)
  consul-connect-lifecycle-sidecar:
    Container ID:
    Image:          hashicorp/consul-k8s:0.14.0
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      consul-k8s
      lifecycle-sidecar
      -service-config
      /consul/connect-inject/service.hcl
      -consul-binary
      /consul/connect-inject/consul
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      HOST_IP:           (v1:status.hostIP)
      CONSUL_HTTP_ADDR:  (HOST_IP):8500
    Mounts:
      /consul/connect-inject from consul-connect-inject-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from producer-token-r8t7c (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  producer-token-r8t7c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  producer-token-r8t7c
    Optional:    false
  consul-connect-inject-data:
    Type:  EmptyDir (a temporary directory that shares a pod's lifetime)
QoS Class:     BestEffort
Tolerations:   node.kubernetes.io/not-ready:NoExecute for 300s
               node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                                                             Message
  ----     ------     ----               ----                                                             -------
  Normal   Scheduled  40s                default-scheduler                                                Successfully assigned default/producer to consul-kubernetes2.platform.comcast.net.openstacklocal
  Normal   Pulled     20s (x3 over 38s)  kubelet, consul-kubernetes2.platform.comcast.net.openstacklocal  Container image "consul:1.7.2" already present on machine
  Normal   Created    20s (x3 over 38s)  kubelet, consul-kubernetes2.platform.comcast.net.openstacklocal  Created container consul-connect-inject-init
  Normal   Started    19s (x3 over 37s)  kubelet, consul-kubernetes2.platform.comcast.net.openstacklocal  Started container consul-connect-inject-init
  Warning  BackOff    5s (x5 over 34s)   kubelet, consul-kubernetes2.platform.comcast.net.openstacklocal  Back-off restarting failed container
[root@consul-kubernetes2 centos]#

Also, I had to set manageSystemACLs to 'false'; when it was set to 'true', Helm was hanging.
According to the doc below, 'This requires servers to be running inside Kubernetes', and since in my scenario the servers are outside of Kubernetes, I disabled the option. However, the Consul tutorial https://www.consul.io/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes has this option enabled.

manageSystemACLs (boolean: false) - If true, the Helm chart will automatically manage ACL tokens and policies for all Consul and consul-k8s components. This requires servers to be running inside Kubernetes. Additionally requires Consul >= 1.4 and consul-k8s >= 0.14.0.

@pfridm01 when your servers are running outside of Kubernetes and you have ACLs enabled, the Consul servers need to be able to call into the Kubernetes API. They need this access because the Connect pods authenticate to Consul with their service account credentials, which the Consul servers then have to verify against the Kubernetes API.

Are there any logs on your Consul servers about not being able to authenticate with Kubernetes?
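
For example, if Consul runs under systemd on the VM (the unit name here is an assumption), something along these lines should surface them:

  # Hypothetical unit name; scan recent server logs for auth failures
  journalctl -u consul --since "1 hour ago" | grep -i -e auth -e kubernetes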

We actually expose a setting, k8sAuthMethodHost, under externalServers (https://www.consul.io/docs/k8s/helm#v-externalservers-k8sauthmethodhost) that lets you configure which URL the external servers need to use to talk to the Kubernetes API.
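
If you're unsure what value to use, the API server address recorded in your kubeconfig can be printed with the command below, though with external servers it has to be an address they can actually reach:

  kubectl config view --minify --output 'jsonpath={.clusters[0].cluster.server}'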

In addition, if ACLs are enabled on your servers, you either need to set global.acls.manageSystemACLs to true so that the chart can set up the Kubernetes auth method that allows Connect pods to authenticate, or you need to create that auth method yourself. There unfortunately isn't a lot of documentation on how to do that right now.
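
Very roughly, creating it by hand looks something like the sketch below, run against the Consul servers with a management token. The method name and file paths are placeholders; the CA cert and JWT belong to a Kubernetes service account that the servers can use to call the TokenReview API:

  # Placeholder names/paths: create a 'kubernetes'-type auth method
  consul acl auth-method create \
      -type kubernetes \
      -name kubernetes \
      -kubernetes-host 'https://<k8s-api-host>:<port>' \
      -kubernetes-ca-cert @ca.crt \
      -kubernetes-service-account-jwt "$(cat jwt.token)"

  # Bind successful logins to a Consul service named after
  # the pod's Kubernetes service account
  consul acl binding-rule create \
      -method kubernetes \
      -bind-type service \
      -bind-name '${serviceaccount.name}'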