Consul Envoy sidecar not balancing traffic

Hi All,

I have set up Consul (Helm chart version 0.30.0) with Envoy 1.14.6 on Kubernetes using Helm.
This is my values.yaml:

global:
  name: consul
  imageEnvoy: "envoyproxy/envoy-alpine:v1.14.6"
  imagePullSecrets:
    - name: regcred
client:
  grpc: true
connectInject:
  enabled: true
  centralConfig:
    enabled: "true"
  sidecarProxy:
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "100Mi"
        cpu: "100m"
server:
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "300m"

syncCatalog:
  enabled: true
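
For reference, I installed the chart roughly like this (assuming the standard HashiCorp Helm repo is added):

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul --version 0.30.0 -f values.yaml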

On my service I just added the annotation 'consul.hashicorp.com/connect-inject': 'true'.
My problem is that traffic from my API gateway is not being balanced across my service pods.

On my API gateway manifest, the Consul config I added is just:

'consul.hashicorp.com/connect-inject': 'true'
'consul.hashicorp.com/connect-service-upstreams': myservice:1001,myservice:1002
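
For context, both annotations sit under the pod template's metadata in the gateway Deployment, roughly like this (the Deployment name, labels, and image below are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway
      annotations:
        # Inject the Connect sidecar and declare the gRPC upstreams
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'myservice:1001,myservice:1002'
    spec:
      containers:
        - name: apigateway
          image: apigateway:latest  # placeholder image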

[screenshot attached]

How can I trace the root cause of this issue and solve the problem?

Thanks
andi

Hi, so if you exec into your API gateway pod and curl localhost:1001, what’s the response?
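
For example (pod and container names are placeholders for your own):

kubectl exec -it <apigateway-pod> -c <gateway-container> -- curl -v localhost:1001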

Hi,

Thanks for your response.
Sorry, I forgot to mention that traffic from the API gateway to the backend service uses the gRPC protocol.

/tmp $ ./grpc_health_probe --addr=127.0.0.1:10016
status: SERVING

Thanks
Andi

Hi lkysow,
I changed my values.yaml configuration to something like this:

global:
  name: consul
  imageEnvoy: "envoyproxy/envoy-alpine:v1.16.4"
  imagePullSecrets:
    - name: regcred
client:
  grpc: true
  extraConfig: |
    {"enable_central_service_config": false}

connectInject:
  enabled: true
  sidecarProxy:
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "100Mi"
        cpu: "100m"

controller:
  enabled: true

server:
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "300m"
  nodeSelector: |
  extraConfig: |
    {"enable_central_service_config": false}

syncCatalog:
  enabled: true
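
and upgraded the release with the new values (release name assumed to be consul):

helm upgrade consul hashicorp/consul -f values.yaml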

I followed the instructions from the tutorial https://www.consul.io/docs/k8s/crds/upgrade-to-crds#connect-service-protocol-annotation
and created a servicedefaults.yaml:

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: microapi-stream
spec:
  protocol: 'grpc'
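
I applied it with:

kubectl apply -f servicedefaults.yaml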

and the ServiceDefaults shows as created and synced:

NAME              SYNCED   AGE
microapi-stream  True     12h

But after describing my service pods, nothing about the gRPC protocol from the service-defaults shows up; all I see in the injected init container command is:

EOF

/bin/consul services register \
  /consul/connect-inject/service.hcl

# Generate the envoy bootstrap code
/bin/consul connect envoy \
  -proxy-id="${PROXY_SERVICE_ID}" \
  -bootstrap > /consul/connect-inject/envoy-bootstrap.yaml

# Copy the Consul binary
cp /bin/consul /consul/connect-inject/consul

How do I configure and enable these ServiceDefaults, and ensure that the protocol used is gRPC rather than TCP?

Thanks
Andi

Hi, you won’t see anything about gRPC when you describe the service pod, but under the hood Consul knows that your service is using gRPC.
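
If you want to confirm the config entry was applied, you can read it back from a Consul server (pod name below is the chart's usual default):

kubectl exec consul-server-0 -- consul config read -kind service-defaults -name microapi-stream

You can also inspect the sidecar's Envoy admin interface (localhost:19000 inside the pod) to check whether the upstream cluster was configured for HTTP/2, which gRPC requires.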