I’ve tested a deployment of Consul Connect in Kubernetes, but enabling the deny-all policy is not blocking traffic to and from the pods that are running Connect. I am using the docs and Learn guides as a reference.
I’m testing it with Snipe-IT, a basic web app. I enabled the Connect annotation on both deployments, the app and its DB. Both pods restarted, and I can see the sidecar containers running alongside them. I then created a policy in Consul to deny all traffic from any service to any service.
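For reference, this is roughly what I mean by "enabled the Connect annotation" — a sketch of the pod template annotation that asks the consul-k8s injector to add the sidecar (the deployment name here is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snipeit   # hypothetical name for illustration
spec:
  template:
    metadata:
      annotations:
        # Tells the consul-k8s injector to add the Connect sidecar proxy
        consul.hashicorp.com/connect-inject: "true"
```

The same annotation went on the DB deployment's pod template.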
Without adding an intention to allow the two services to communicate, they are still able to reach each other, so I am not sure what I have deployed incorrectly here.
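Concretely, the deny-all policy I created is equivalent to the following intentions, sketched with the `consul intention` CLI (the service names are assumptions based on my deployments):

```shell
# Deny traffic between any two Connect services by default;
# more-specific intentions take precedence over the wildcard
consul intention create -deny '*' '*'

# What I expected to need before the web app could reach the DB
# (not created yet, since I wanted to verify deny-all first):
# consul intention create -allow snipeit snipeit-db
```

My expectation was that with only the wildcard deny in place, the web app's connection to MySQL would fail.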
For the application and the DB, I have the web app calling the Kubernetes Service of the MySQL pod, not the pod IP directly.
I guess I am also confused about how Consul Connect and its proxy work. My assumption from reading the documentation and the examples shown was that Consul Connect would inject the proxy into the pod, that all traffic in or out of the pod would go through the proxy, and that enabling deny-all would block any request that did not come from an authorized service.
If you’re using the Kubernetes DNS name, that explains the behavior you’re seeing: that traffic goes straight to the pod and bypasses the sidecar proxy entirely, so intentions are never enforced; the proxy’s upstream listener is only available on localhost inside the pod. Consul does not automatically route inbound/outbound requests through the sidecar proxy. You must configure your application to explicitly connect to the configured local proxy listener at localhost:<port>.
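As a sketch of what that looks like with consul-k8s (the service name `snipeit-db` and port `3306` are assumptions for this example), the web app's pod template would declare its upstream so the injector opens a local listener for it:

```yaml
metadata:
  annotations:
    consul.hashicorp.com/connect-inject: "true"
    # Ask the injector to open a listener on localhost:3306 inside the
    # web app's pod that proxies, over mTLS, to the Connect service
    # registered as "snipeit-db"
    consul.hashicorp.com/connect-service-upstreams: "snipeit-db:3306"
```

The application then connects to `127.0.0.1:3306` instead of the Kubernetes Service DNS name (e.g. set `DB_HOST=127.0.0.1`). With traffic flowing through the sidecars, the deny-all intention will be enforced.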
We plan to introduce support for transparent proxying in a future release of Consul, which will allow you to connect to services using their cluster-local DNS names and have that traffic routed through the sidecar.