Consul-helm: issue terminating TLS at AWS Load Balancer

Hi folks,

I’m having an issue with consul-helm and enabling TLS termination at my AWS Elastic Load Balancer for the Consul UI.

When I open the Consul UI's DNS name in a browser (e.g. https://consul.company.com) I get the error:

Client sent an HTTP request to an HTTPS server.

It seems that the ELB's HTTPS->HTTP listener is always routing to the Consul NodePort that serves HTTPS. Instead, the HTTPS connection should be terminated at the load balancer, “stepped down” to HTTP, and sent to Consul's HTTP port.

Here is my values-override.yaml file which I pass to consul-helm. Note the added annotations for the UI Service.

global:
  datacenter: sandbox

  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"

  tls:
    enabled: true
    httpsOnly: false
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]

  acls:
    manageSystemACLs: true

server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi

dns:
  clusterIP: 172.20.53.53

ui:
  enabled: true
  service:
    type: 'LoadBalancer'
    annotations: |
      "external-dns.alpha.kubernetes.io/hostname": "consul.company.com"
      "external-dns.alpha.kubernetes.io/ttl": "30"
      "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "http"
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "<CERTIFICATE_ARN>"
      "service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy": "ELBSecurityPolicy-TLS-1-2-2017-01"
      "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "https"
      "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags": "Foo=Bar,Environment=sandbox,Name=consul-public"


syncCatalog:
  enabled: true

As you can see, I have set the tls.httpsOnly key to false.

When I look at my ELB config, I see a valid HTTPS->HTTP listener, but the Instance Port it routes to is an HTTPS port, not an HTTP one, hence the browser error.

Have I misconfigured something here?

Thanks - Aaron

Looks like we set the service’s port to 8501 when TLS is enabled in the helm chart: https://github.com/hashicorp/consul-helm/blob/master/templates/ui-service.yaml#L23-L32.

For now you’ll need to manually create another service with the right port. I’ll open up an issue to track: https://github.com/hashicorp/consul-helm/issues/489
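As a sketch of that workaround (the service name, selector labels, and certificate ARN placeholder here are assumptions based on the default release name `consul`; adjust them to match your installation):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: consul-ui-http  # hypothetical name, to avoid clashing with the chart's own UI service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <CERTIFICATE_ARN>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: consul
    component: server
    release: consul
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8500  # Consul's HTTP port; the ELB terminates TLS and speaks plain HTTP to the backend
```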


Great, thanks @lkysow - appreciate the response.

Has there been any movement on merging any PRs which provide Kubernetes Ingress capabilities with Consul? This would be hugely beneficial for us.

That’s on our backlog but unfortunately I don’t have an update as to when we’ll get to it.


hey @thecosmicfrog,

We spent the last few days trying to get this working. We investigated switching the LB backend-protocol to https, but that caused the health checks to fail. If we manually updated those health checks to use the TCP endpoint, the services became available, but any request to them would just hang indefinitely.

However, we did get this working by updating the LoadBalancer service to forward both HTTP and HTTPS requests to port 8500 on the Consul server.

apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: aws-acm.consul.zone
    external-dns.alpha.kubernetes.io/ttl: "300"
    meta.helm.sh/release-name: consul
    meta.helm.sh/release-namespace: default
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Foo=Bar,Environment=sandbox,Name=consul-public
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <cert-arn>
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  name: consul-consul-ui
  namespace: default
spec:
  clusterIP: 10.100.236.103
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: <http-port>
    port: 80
    protocol: TCP
    targetPort: 8500
  - name: https
    nodePort: <https-port>
    port: 443
    protocol: TCP
    targetPort: 8500
  selector:
    app: consul
    component: server
    release: consul
  sessionAffinity: None
  type: LoadBalancer

Setting ui.enabled to true and ui.service.enabled to false should allow the above service definition to route traffic to the Consul server.
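In values-override.yaml terms, that would look something like this (a sketch; key names as per the consul-helm chart):

```yaml
ui:
  enabled: true      # keep serving the UI from the Consul servers
  service:
    enabled: false   # don't create the chart's UI Service; the manual Service above replaces it
```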

Having said that, this is only a short-term workaround. We are looking into a longer-term fix, which will require a clearer idea of why the health checks fail on the SSL endpoint when the backend protocol is https, and then why the requests hang indefinitely.

I hope this unblocks you but do let us know if there are follow up questions.

Cheers!!
Ashwin

Hey @ashwin-venkatesh. Thanks for doing some research into this. We have a user story ourselves to investigate this further and attempt mitigation, but this new information will help massively - cheers!

We are looking into a longer-term fix, which will require a clearer idea of why the health checks fail on the SSL endpoint when the backend protocol is https, and then why the requests hang indefinitely.

My halfway-educated guess was that the load balancer acts as an HTTP “client” itself, attempting to open TLS connections to the Consul server; since the Consul server's certificate is not signed by a well-known CA, the load balancer closes the connection. That being said, I didn't investigate this in much depth.

I’ll update here if the above solution rectifies the issue for us. At least until consul-helm supports Kubernetes Ingress (as vault-helm already does) 🙂

Thanks again! - Aaron

@thecosmicfrog Thanks for the quick follow up.

the load balancer acts as an HTTP “client” itself, attempting to open TLS connections to the Consul server; since the Consul server's certificate is not signed by a well-known CA, the load balancer closes the connection.

That does sound about right 🙂

consul-helm supports Kubernetes Ingress

The latest consul-helm does support ingress gateways, which could potentially address this. Here is a link to the ingress-gateway docs. It might be a little more work, though, as it requires further configuration on the Consul server. It would be great if you could give it a try, but the solution mentioned above should work as well.

Please ignore the above; I was wrong about ingress gateways.

Cheers and HAPPY FRIDAY