Consul Connect & OpenShift SecurityContextConstraints

Hi,

I’ve been integrating the Consul Connect service mesh on an OpenShift cluster. I am able to deploy it successfully and my services can talk to each other over the service mesh. Works great. However, to get this working in OpenShift, the ServiceAccount tied to the pod running my service must also have an OpenShift SecurityContextConstraint applied to it. For the purposes of experimenting, I simply set it to the highest SCC, “Privileged”, but I am now trying to tighten those SCCs down to more reasonable levels.

I am running into an issue with one of the injected init containers for Consul (consul-connect-inject-init). This init container requests very high-level permissions in its SecurityContext, which seems to force my ServiceAccount to be privileged. Additionally, since a ServiceAccount can only be applied at the pod level, not at the container level, I am somewhat stumped on how to better secure this setup.

consul-connect-inject-init’s SecurityContext:

securityContext:
  capabilities:
    add:
      - NET_ADMIN
  privileged: true
  runAsUser: 0
  runAsGroup: 0
  runAsNonRoot: false
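
For context, granting the “Privileged” SCC to the ServiceAccount looks roughly like the following. This is a minimal sketch with placeholder names (a “my-app” ServiceAccount in a “my-namespace” namespace); it is the RBAC-based equivalent of running oc adm policy add-scc-to-user privileged -z my-app -n my-namespace.

# Placeholder names: adjust the namespace and ServiceAccount for your deployment.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-privileged-scc
  namespace: my-namespace
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["privileged"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-use-privileged-scc
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: use-privileged-scc
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: my-namespace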

TLDR: For people who are running Consul Connect in OpenShift, is the expectation that the ServiceAccounts for all of our services need to be at the Privileged SCC level, or is there some other path that implementers have been taking?

Thanks!

Hi @ryan.cobb - Thanks for filing this issue.
The reason we need CAP_NET_ADMIN is so that we can issue iptables commands to support transparent proxying for services. If this is a blocker for you, you can disable transparent proxy for your services and this capability won’t be necessary.
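
For example, transparent proxy can be disabled per service with a pod annotation, or changed as the cluster-wide default through the Helm chart. A rough sketch of both (please double-check the annotation and value names against the consul-k8s/Helm chart version you are running):

# Per-pod: set this annotation on the pod template of the service’s Deployment.
metadata:
  annotations:
    consul.hashicorp.com/transparent-proxy: "false"

# Cluster-wide default, via Helm values:
connectInject:
  transparentProxy:
    defaultEnabled: false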

We also have an alternative solution on our roadmap: building a CNI plugin, which would remove the need to run the iptables commands locally in the mesh application pods.

Thanks for the quick response. That is good to know that the transparent proxy functionality is the source of the capability requests. I don’t think it will give us too much heartache to disable transparent proxy, but I do have a follow-up question in that regard.

My understanding is that with the transparent proxy, direct svc → svc connection attempts would be hijacked and routed over the service mesh transparently. If we disable the transparent proxy, are services now able to connect directly to another service and effectively sidestep the service mesh/intentions we may have set up to restrict access?

Hi @ryan.cobb!

Yes, your understanding is correct: when transparent proxying is disabled, there are no iptables-enforced networking restrictions placed on the pod, and as such it is possible to dial in/out of that pod on non-service ports, bypassing the mesh.
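
To make that concrete: an intention like the one below (a sketch using the consul-k8s ServiceIntentions CRD, with placeholder service names “frontend” and “backend”) is enforced by the sidecar proxies, so traffic that dials the pod directly on other ports would not be subject to it.

# Placeholder service names for illustration only.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: backend
spec:
  destination:
    name: backend
  sources:
    - name: frontend
      action: allow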

If you want to put a :+1: on this issue, it would help us prioritize the CNI work, and you would also be able to track our progress on it: Transparent Proxy CNI Plug-in: escalated privileges required on Consul containers · Issue #635 · hashicorp/consul-k8s · GitHub

Best,
~Kyle

Will do! Thanks so much for your help @kschoche.
