Consul ingress gateway protocol http not working

I am trying to switch the Consul ingress gateway from Layer 4 to Layer 7 so I can use host-based matching. I currently have the following deployed, which works fine.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-pod-a-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "off"
spec:
  rules:
    - host: test.cluster.internal
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: consul-ingress-gateway
                port:
                  number: 8443
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: test-pod-a
spec:
  protocol: tcp
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  tls:
    enabled: true
  listeners:
    - port: 8443
      protocol: tcp
      services:
        - name: test-pod-a
          # hosts: ["test.cluster.internal"]
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-a
  labels:
    role: test
    app: test
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    "consul.hashicorp.com/connect-service": "test-pod-a"
    "consul.hashicorp.com/connect-port": "80"
spec:
  containers:
    - name: test-pod-a
      image: nginx:latest
      ports:
        - name: web
          containerPort: 80
          protocol: TCP

And the services deployed:

$ kubectl get svc
NAME                                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                   AGE
consul-connect-injector-svc                     ClusterIP      10.233.32.168   <none>        443/TCP                                                                   26h
consul-controller-webhook                       ClusterIP      10.233.10.61    <none>        443/TCP                                                                   26h
consul-dns                                      ClusterIP      10.233.4.179    <none>        53/TCP,53/UDP                                                             26h
consul-ingress-gateway                          ClusterIP      10.233.46.180   <none>        8080/TCP,8443/TCP                                                         26h
consul-server                                   ClusterIP      None            <none>        8501/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   26h
consul-ui                                       ClusterIP      10.233.40.238   <none>        443/TCP                                                                   26h

I changed the IngressGateway and ServiceDefaults to the values below, then ran kubectl delete -f mytest.yaml followed by kubectl apply -f mytest.yaml. However, I am no longer able to reach my page; I get 404 Not Found responses. The test-pod-a container is not logging any nginx access requests, so it seems that something isn't routing. Is there something else that needs to change to enable the Consul ingress gateway to communicate at Layer 7?

---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: test-pod-a
spec:
  protocol: http
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  tls:
    enabled: true
  listeners:
    - port: 8443
      protocol: http
      services:
        - name: test-pod-a
          # hosts: ["test.cluster.internal"]
---

When using protocol http, the Host header now matters (unlike with tcp). With the config you posted, your request would need to carry the Host header "test-pod-a.ingress.consul", e.g. curl -H "Host: test-pod-a.ingress.consul" https://…. Otherwise the ingress gateway doesn't know where to route the traffic.
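For example, a direct test against the gateway service from inside the cluster might look like this (a sketch: the service name and port come from your kubectl get svc output above, the DNS name assumes you are in the same namespace, and -k skips verification since the gateway presents a cert from Consul's CA):

curl -vk -H "Host: test-pod-a.ingress.consul" https://consul-ingress-gateway:8443/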

You can control which hosts match via the hosts field on the ingress gateway config entry: Configuration Entry Kind: Ingress Gateway | Consul by HashiCorp
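Concretely, with your config that would be the same IngressGateway entry with the hosts line uncommented:

apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  tls:
    enabled: true
  listeners:
    - port: 8443
      protocol: http
      services:
        - name: test-pod-a
          hosts: ["test.cluster.internal"]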

@lkysow Thank you. However, when I uncomment the hosts line it still does not work. I also tried wildcard routing with hosts: ["*"]. I still get the same 404 error with http routing.
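For reference, the wildcard attempt was the same listener with only the hosts value changed:

      services:
        - name: test-pod-a
          hosts: ["*"]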