[CONSUL-ERROR] "Config entry not found for \"proxy-defaults\" / \"global\""

Hello all,

I have been struggling with Consul for a while to get things working. To summarize, I have one K8S cluster and one Nomad cluster, plus one Consul server deployed externally to both clusters. So far I have connected both of these clusters to the Consul server.

Below are my Consul configurations:

Consul Server:

cat /etc/consul.d/consul.hcl
data_dir = "/opt/consul"

client_addr = "0.0.0.0"

ui_config{
  enabled = true
}

server = true

advertise_addr = "192.168.60.10"

bootstrap_expect=1

retry_join = ["192.168.60.10"]

ports {
 grpc = 8502 
}

connect {
 enabled = true
}

Kubernetes: consul-values.yaml

global:
  enabled: false
  tls:
    enabled: false
externalServers:
  enabled: true
  hosts: ["192.168.60.10"]
  httpsPort: 8500
server:
  enabled: false
syncCatalog:
  enabled: true
  toConsul: false
  toK8S: false

Nomad config: nomad.hcl

data_dir  = "/opt/nomad/data"
bind_addr = "0.0.0.0"

server {
  enabled          = true
  bootstrap_expect = 1  
}

advertise {
  http = "192.168.40.10:4646"
  rpc  = "192.168.40.10:4647"
  serf = "192.168.40.10:4648"
}

client {
  enabled = false  # Disable the client on the server
}

consul {
  address              = "192.168.60.10:8500"
  checks_use_advertise = true
}

Below are my manifest files:

Kubernetes

nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-nginx
  template:
    metadata:
      labels:
        app: k8s-nginx
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      containers:
      - name: k8s-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        command:
        - /bin/sh
        - -c
        - |
          echo "Hello World! Response from Kubernetes!" > /usr/share/nginx/html/index.html && nginx -g 'daemon off;'

nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    'consul.hashicorp.com/service-sync': 'true'  # Sync this service with Consul
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: k8s-nginx

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: multitool-pod
  labels:
    app: multitool
spec:
  containers:
  - name: network-multitool
    image: praqma/network-multitool:latest
    ports:
    - containerPort: 80

NOMAD: nginx.nomad

job "nginx" {
  datacenters = ["dc1"] # Specify your datacenter
  type        = "service"

  group "nginx" {
    count = 1  # Number of instances

    network {
      mode = "bridge" # This uses Docker bridge networking
      port "http" {
        to = 80 
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:alpine"

        # Entry point to write message into index.html and start nginx
        entrypoint = [
          "/bin/sh", "-c",
          "echo 'Hello World! Response from Nomad!' > /usr/share/nginx/html/index.html && nginx -g 'daemon off;'"
        ]
      }

      resources {
        cpu    = 500    # CPU units
        memory = 256    # Memory in MB
      }

      service {
        name = "nginx-service"
        port = "http"  # Reference the network port defined above
        tags = ["nginx", "nomad"]

        check {
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

K8S pods

NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE                   NOMINATED NODE   READINESS GATES
k8s-nginx-68d85bb657-2tgz7   2/2     Running   0          19h    30.0.1.103   k8s-cluster3-worker1   <none>           <none>
multitool-pod                1/1     Running   0          101m   30.0.1.26

Only k8s-nginx-68d85bb657-2tgz7 has a Consul sidecar proxy running alongside it (2/2); the other pods are not part of the service mesh. I am trying to test load balancing and failover between K8S and Nomad through Consul. In the Consul UI I can see that the services are available, as in the screenshot below.

Since I am using the same service name for both K8s and Nomad, the instances from both clusters are registered under the same service (nginx-service).
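From my reading of the Nomad Connect docs, the Nomad instance would only get a sidecar proxy of its own if the service block were moved to the group level with a connect stanza, roughly as below. I have not applied this yet, so it is just a sketch based on the docs:

group "nginx" {
  network {
    mode = "bridge"
  }

  service {
    name = "nginx-service"
    port = "80"   # port nginx listens on inside the bridge network namespace

    connect {
      sidecar_service {}   # injects an Envoy sidecar for this service
    }
  }

  # task "nginx" { ... } stays as in the job file above
}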

When I try to curl k8s-nginx-68d85bb657-2tgz7 from multitool-pod or from the host, I get a curl: (52) Empty reply from server error. The same happens whether I curl the pod's IP directly or curl nginx-service.

**curl http://192.168.60.10:8500/v1/catalog/services | jq**

{
  "consul": [],
  "nginx-service": [
    "nginx",
    "nomad"
  ],
  "nginx-service-sidecar-proxy": [],
  "nomad": [
    "http",
    "serf",
    "rpc"
  ],
  "nomad-client": [
    "http"
  ]
}

curl http://192.168.60.10:8500/v1/catalog/service/nginx-service | jq

[
  {
    "ID": "cb3ad82f-ce59-36c7-b1fc-7b5adb672411",
    "Node": "consul-server",
    "Address": "192.168.60.10",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "192.168.60.10",
      "lan_ipv4": "192.168.60.10",
      "wan": "192.168.60.10",
      "wan_ipv4": "192.168.60.10"
    },
    "NodeMeta": {
      "consul-network-segment": "",
      "consul-version": "1.19.2"
    },
    "ServiceKind": "",
    "ServiceID": "_nomad-task-5cb09d14-47fa-aacb-368f-50f9b0ca4970-nginx-nginx-service-http",
    "ServiceName": "nginx-service",
    "ServiceTags": [
      "nginx",
      "nomad"
    ],
    "ServiceAddress": "192.168.40.11",
    "ServiceTaggedAddresses": {
      "lan_ipv4": {
        "Address": "192.168.40.11",
        "Port": 20276
      },
      "wan_ipv4": {
        "Address": "192.168.40.11",
        "Port": 20276
      }
    },
    "ServiceWeights": {
      "Passing": 1,
      "Warning": 1
    },
    "ServiceMeta": {
      "external-source": "nomad"
    },
    "ServicePort": 20276,
    "ServiceSocketPath": "",
    "ServiceEnableTagOverride": false,
    "ServiceProxy": {
      "Mode": "",
      "MeshGateway": {},
      "Expose": {}
    },
    "ServiceConnect": {},
    "ServiceLocality": null,
    "CreateIndex": 80705,
    "ModifyIndex": 80705
  },
  {
    "ID": "",
    "Node": "k8s-cluster3-worker1-virtual",
    "Address": "192.168.50.11",
    "Datacenter": "dc1",
    "TaggedAddresses": null,
    "NodeMeta": {
      "synthetic-node": "true"
    },
    "ServiceKind": "",
    "ServiceID": "k8s-nginx-68d85bb657-2tgz7-nginx-service",
    "ServiceName": "nginx-service",
    "ServiceTags": [],
    "ServiceAddress": "30.0.1.103",
    "ServiceTaggedAddresses": {
      "virtual": {
        "Address": "10.107.180.39",
        "Port": 80
      }
    },
    "ServiceWeights": {
      "Passing": 1,
      "Warning": 1
    },
    "ServiceMeta": {
      "k8s-namespace": "default",
      "k8s-service-name": "nginx-service",
      "managed-by": "consul-k8s-endpoints-controller",
      "pod-name": "k8s-nginx-68d85bb657-2tgz7",
      "pod-uid": "330f7c09-fe33-41a3-8d11-ce98d358c873",
      "synthetic-node": "true"
    },
    "ServicePort": 80,
    "ServiceSocketPath": "",
    "ServiceEnableTagOverride": false,
    "ServiceProxy": {
      "Mode": "",
      "MeshGateway": {},
      "Expose": {}
    },
    "ServiceConnect": {},
    "ServiceLocality": null,
    "CreateIndex": 80732,
    "ModifyIndex": 80846
  }
]

As per journalctl -u consul, I can see the error below:

consul-server consul[36093]: 2024-09-14T21:52:54.635Z [ERROR] agent.http: Request error: method=GET url=/v1/config/proxy-defaults/global?stale= from=54.243.71.191:7224 error="Config entry not found for \"proxy-defaults\" / \"global\""

I assume this is an issue related to the proxy, but unfortunately I do not understand how to resolve it even though I have tried going through the docs, hence I am seeking your kind support.
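From what I can find in the docs, that proxy-defaults/global config entry can be created with something like the below. I have not applied it yet, so this is only my reading of the documentation:

# proxy-defaults.hcl
Kind = "proxy-defaults"
Name = "global"

Config {
  protocol = "http"   # default protocol for all sidecar proxies
}

and then written to Consul with:

consul config write proxy-defaults.hcl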

I also came across upstreams, but I am only trying to curl the service name to see how the requests are routed. I am not building frontend and backend services where the backend is added as an upstream.
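Just to show what I mean by upstreams: my understanding is that an explicit upstream would be declared on the client pod with an annotation like the one below (which I am not using), so the upstream becomes reachable on a local port inside the pod:

annotations:
  'consul.hashicorp.com/connect-inject': 'true'
  # explicit upstream: expose nginx-service on localhost:9090 inside the pod
  'consul.hashicorp.com/connect-service-upstreams': 'nginx-service:9090'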

I have been stuck on this for a while now and would really appreciate your advice.

Thank you!

Hi @harsh.lif3,

The sidecar proxies always require connections to them to be mTLS. What I would recommend is adding the 'consul.hashicorp.com/connect-inject': 'true' annotation to your multitool pod as well, so that it also gets a sidecar proxy injected.

Once this is done, you will be able to access nginx from the multitool pod.
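For reference, it would be roughly the following, adapting your pod.yaml (I have not tested this exact spec, so treat it as a sketch):

apiVersion: v1
kind: Pod
metadata:
  name: multitool-pod
  labels:
    app: multitool
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'  # injects the sidecar proxy
spec:
  containers:
  - name: network-multitool
    image: praqma/network-multitool:latest
    ports:
    - containerPort: 80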

Please try the same and let me know if it works for you.

Dear @Ranjandas,

Adding the connect-inject annotation now allows me to curl the service name, as shown below.

k exec -it pod/multitool-pod -c network-multitool -- curl nginx-service
Hello World! Response from Kubernetes!

However, it does not allow me to curl the pod's IP directly.

k exec -it pod/multitool-pod -c network-multitool -- curl 30.0.1.86
curl: (52) Empty reply from server
command terminated with exit code 52

I think that may be because, with the sidecar, we can only use the service name. Is there any way to see how the traffic flows? I tried checking the real-time logs of the consul-dataplane container in the multitool-pod, but it didn't show anything after I executed the curl command.

Thank you!