Consul 1.16.0 api-gateway on k8s issue

I’m trying to go through the Consul on Kubernetes tutorial and am having problems getting the api-gateway configured as shown below.

I’m using local k8s installations (minikube, kind, and kubeadm), with some minor changes to the YAML files (taken from several Consul GitHub issues: the class name, CRDs, etc.).

My Consul UI shows no api-gateway in the service topology view.


Hello. Can you share the configuration of the api-gateway? Also the link to the main tutorial you are following?

Here is the api-gateway config:

---
apiVersion: gateway.networking.k8s.io/v1beta1
# The Gateway is the main infrastructure resource that links API gateway components.
kind: Gateway
metadata:
  name: api-gateway
  namespace: consul
spec:
  gatewayClassName: consul
  # Configures the listener that is bound to the gateway's address.
  listeners:
    # Defines the listener protocol (HTTP, HTTPS, or TCP)
  - protocol: HTTP
    port: 8085
    name: http
    allowedRoutes:
      namespaces:
        from: Same
    tls:
      # Defines the certificate to use for the HTTPS listener.
      certificateRefs:
        - name: consul-server-cert
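
For context, the routes from the tutorial attach to this gateway through parentRefs. A minimal HTTPRoute sketch would look something like this (the route name and backend below are just placeholders, not my actual values):

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  # Placeholder name; the tutorial ships its own route definitions.
  name: example-route
  namespace: consul
spec:
  # Binds the route to the Gateway defined above.
  parentRefs:
  - name: api-gateway
  rules:
  - backendRefs:
      # Placeholder backend service and port.
    - name: nginx
      port: 80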

And the link to the tutorial series starts here; the one I’m following is the fourth part, further down.

Also for further clarification, is the gateway working in general? Is it just the topology view that seems off?

I don’t think so. Here is the service record for api-gateway:

api-gateway LoadBalancer 10.101.242.122 10.10.1.101 8085:31466/TCP

When I try to connect to http://10.10.1.101:8085, I get connection refused.

And the following is the consul-ui service:
consul-ui LoadBalancer 10.107.41.8 10.10.1.100 443:30937/TCP

And I can connect to the UI over https://10.10.1.100.

BTW, the direct URL to the HashiCups app through nginx works.

If you are using kind/minikube/any local k8s cluster, you cannot use a LoadBalancer type for the api-gateway unless you set up something like MetalLB (see the sketch below). k8s setup instructions here

api-gateway LoadBalancer 10.101.242.122 10.10.1.101 8085:31466/TCP

^^ this service type is LoadBalancer.
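
(If you do want working LoadBalancer services on a local cluster, a rough MetalLB setup sketch is below; the version and the address range are just examples, adjust them to your own network.)

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool          # example name
  namespace: metallb-system
spec:
  addresses:
  - 10.10.1.100-10.10.1.110   # example range, pick addresses free on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool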

If you check out this part of the tutorial you linked, it mentions that:

In this local environment, API Gateway uses NodePort to let you access your application directly through the API Gateway without having to forward your Kubernetes cluster’s ports. In a cloud environment, API Gateway may use a LoadBalancer to automatically provision a publicly accessible DNS entry.

So you cannot directly access the LoadBalancer api-gateway service, because that type is not supported on local k8s clusters unless you enable it (again, with something like MetalLB; setting up MetalLB is not required for the tutorial to work). If you set up your gateway like what is in the tutorial, you should be able to access the HashiCups application at https://localhost:8443.
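
(If you want the gateway reachable at https://localhost:8443 on kind specifically, one common approach, sketched below with assumed port numbers, is to map the gateway's NodePort to the host in the kind cluster config.)

# kind-config.yaml (example; containerPort must match the NodePort your api-gateway service actually gets)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30443   # example NodePort
    hostPort: 8443         # what you then reach as https://localhost:8443
    protocol: TCP

kind create cluster --config kind-config.yaml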

Can you give me the output of the following command? The type of the gateway service depends on the GatewayClassConfig that is deployed.

kubectl get gatewayClassConfigs -o yaml
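
If I remember correctly, that GatewayClassConfig is generated from the Helm values, so the service type can also be steered from there. A rough sketch of the relevant values, assuming the consul-k8s 1.2.x chart layout:

# values.yaml (sketch; key names assume the consul-k8s chart that ships Consul 1.16)
connectInject:
  apiGateway:
    managedGatewayClass:
      serviceType: NodePort   # or LoadBalancer; drives the Service created per Gateway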

As I said, I’m using kind, minikube, and my own local k8s cluster with MetalLB, and the LoadBalancer record I posted is from the last one. The consul-ui record shows the MetalLB load balancer details as well.
So yes, it turns out I did show the records from that k8s cluster.

Here is the gatewayClassConfigs output:

apiVersion: v1
items:
- apiVersion: consul.hashicorp.com/v1alpha1
  kind: GatewayClassConfig
  metadata:
    creationTimestamp: "2023-08-09T16:55:11Z"
    finalizers:
    - gateway-class-exists-finalizer.consul.hashicorp.com
    generation: 1
    labels:
      app: consul
      chart: consul-helm
      component: api-gateway
      heritage: Helm
      release: consul
    name: consul-api-gateway
    resourceVersion: "37325759"
    uid: 4bfea9ea-43ec-430a-9f4e-bcd300d12426
  spec:
    copyAnnotations: {}
    deployment:
      defaultInstances: 1
      maxInstances: 1
      minInstances: 1
    serviceType: LoadBalancer
kind: List
metadata:
  resourceVersion: ""

I’ll set up a kind cluster and give you details during the day.
Thank you!

Here is the output of the command kubectl get gatewayClassConfigs -o yaml from the local kind cluster:

apiVersion: v1
items:
- apiVersion: consul.hashicorp.com/v1alpha1
  kind: GatewayClassConfig
  metadata:
    creationTimestamp: "2023-08-12T18:33:07Z"
    finalizers:
    - gateway-class-exists-finalizer.consul.hashicorp.com
    generation: 2
    labels:
      app: consul
      chart: consul-helm
      component: api-gateway
      heritage: Helm
      release: consul
    name: consul-api-gateway
    resourceVersion: "6911"
    uid: 81209869-cbd7-4fe2-81be-aebe1a37ac85
  spec:
    copyAnnotations: {}
    deployment:
      defaultInstances: 1
      maxInstances: 1
      minInstances: 1
    serviceType: NodePort
kind: List
metadata:
  resourceVersion: ""

k8s services in the consul namespace: kubectl get services -n consul

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                            AGE
api-gateway               NodePort    10.96.240.180   <none>        8443:30460/TCP                                                                     2d8h
consul-connect-injector   ClusterIP   10.96.48.150    <none>        443/TCP                                                                            2d9h
consul-dns                ClusterIP   10.96.157.136   <none>        53/TCP,53/UDP                                                                      2d9h
consul-server             ClusterIP   None            <none>        8501/TCP,8502/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   2d9h
consul-ui                 NodePort    10.96.75.209    <none>        443:30252/TCP                                                                      2d9h

Port forwarding, and then trying to connect on port 9443, errors out:

kubectl port-forward svc/api-gateway --namespace consul 9443:8443 --address=0.0.0.0
Forwarding from 0.0.0.0:9443 -> 8443
Handling connection for 9443
E0815 09:22:21.980361    5190 portforward.go:406] an error occurred forwarding 9443 -> 8443: error forwarding port 8443 to pod 30d65173fb8af4e3ab98c4f4e0ba2cac49ea15b13e986ab146c9e1a4f3a94121, uid : failed to execute portforward in network namespace "/var/run/netns/cni-65e3348c-a3d4-5d50-119f-326f7a7a5a9c": failed to connect to localhost:8443 inside namespace "30d65173fb8af4e3ab98c4f4e0ba2cac49ea15b13e986ab146c9e1a4f3a94121", IPv4: dial tcp4 127.0.0.1:8443: connect: connection refused IPv6 dial tcp6 [::1]:8443: connect: connection refused
E0815 09:22:21.980544    5190 portforward.go:234] lost connection to pod
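
In case it helps, these are roughly the commands I can run to gather more detail (assuming the gateway pods sit behind a deployment named after the Gateway):

# Check the Gateway status conditions
kubectl describe gateway api-gateway -n consul

# Check whether the gateway pods are actually running
kubectl get pods -n consul -o wide

# Gateway logs, assuming the deployment is named api-gateway
kubectl logs deployment/api-gateway -n consul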

Screenshots from consul-ui:



If you need more info, please let me know. Thanks!

I am experiencing this issue as well.

For me it appears to affect only the Consul UI, as the gateway itself works fine (Azure AKS).

With a previous/earlier Helm deployment it was showing correctly in the UI, but the latest version shows up like the screenshots in this discussion.