Consul Connect + Kubernetes, Deny All policy not working?

I’ve tested a deployment of Consul Connect in Kubernetes, but enabling the Deny All policy is not blocking traffic to and from the pods that are running Connect. I am using the docs and the Learn guides as a reference.

I’m testing it with Snipe-IT, a basic web app. I enabled the connect annotation on both deployments, the app and its DB. Both pods restarted and I can see the sidecar containers running alongside them. I then enabled a Deny All to Any Service policy in Consul.
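For context, the annotation I added is the standard connect-inject one on each Deployment’s pod template (the deployment name here is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snipeit
spec:
  template:
    metadata:
      annotations:
        # Tells the Consul Connect injector to add the sidecar proxy
        consul.hashicorp.com/connect-inject: 'true'
```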

Without adding an intention to allow the two services to communicate, they are still able to communicate. I am not sure what I have deployed incorrectly here.
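For reference, once Deny All actually blocks traffic, my plan was to add an allow intention. Since the controller is enabled in the Learn values, that could be expressed as a ServiceIntentions custom resource; a sketch, assuming my services register as snipeit and snipeit-db:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: snipeit-to-db
spec:
  destination:
    name: snipeit-db   # assumed service name for the DB
  sources:
    - name: snipeit    # assumed service name for the web app
      action: allow
```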

Hi @Mercwri, could we see your Helm values config for Consul k8s? Also, are you calling the pod through the Connect upstream, or are you calling it directly by its pod IP (bypassing Envoy)?

I am using the defaults found in Deploy Consul Service Mesh on Kubernetes | Consul - HashiCorp Learn, for testing.

For the application and the DB, I have the webapp calling the Kubernetes Service of the mysql Pod, not the Pod IP directly.

I guess then I am also confused about how Consul Connect and its proxy work. My assumption from reading the documentation and the examples shown was that Consul Connect would inject the proxy into the pod, that all traffic in or out of the pod would go through the proxy, and that enabling Deny All would therefore block any requests that did not come from an authorized service.

Hi, currently you also need to ensure your application only binds to 127.0.0.1 so other applications can’t bypass the proxy.

You can see that in the Learn tutorial as:

            - name: 'LISTEN_ADDR'
              value: '127.0.0.1:9090'

This is an environment variable supported by that particular application; for your application you may need another way to set the bind address.
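For a MySQL database, for example, one way to do this is the server’s bind-address option passed as a container argument; a sketch (the image tag and flag placement are just one way to do it):

```yaml
containers:
  - name: db
    image: mysql:5.7
    args:
      # Bind only to loopback so traffic must go through the sidecar proxy
      - '--bind-address=127.0.0.1'
```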

I set the bind address for the DB to 127.0.0.1 and tried connecting to the service both with and without Deny All enabled; now no connections are possible at all.

I think we’re going to need help to reproduce.

Can you send us the yaml for your db deployment and the application calling it? Can you also confirm your Helm yaml is:

global:
  domain: consul
  datacenter: dc1

server:
  replicas: 1
  bootstrapExpect: 1

client:
  enabled: true
  grpc: true

ui:
  enabled: true

connectInject:
  enabled: true

controller:
  enabled: true

Are you trying to connect to the service using the upstream exposed by the local proxy (see Secure Applications with Service Sidecar Proxies: Understand the upstream concept), or by its DNS name within the Kubernetes cluster?

If you’re using the DNS name, that explains why connections will not work: the service is only listening on localhost. Consul does not automatically route inbound/outbound requests through the sidecar proxy. You must configure your application to explicitly connect to the configured local proxy listener at localhost:<port>.

We plan to introduce support for transparent proxying in a future release of Consul which will allow you to connect to services using the cluster-local DNS names, and have that traffic be routed through the sidecar.