How to prevent the default ingress controller from serving routes of all projects when using router sharding in OpenShift 4.x
Issue
When configuring router sharding according to the documentation [0], it may be desirable to have specific namespaces served only by specific ingress controllers. However, with the default configuration, the default ingress controller will still serve all routes because no namespaceSelector or routeSelector is set on it. Additionally, in environments where the ingress controllers use hostNetwork, such as vSphere and bare metal, it is not possible to use network policies to block traffic from those controllers due to [1].
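As a sketch of the mechanism, the default IngressController can be given a routeSelector that excludes routes carrying a given label, leaving those routes to the shard. The label key and value below (type=sharded) are illustrative assumptions, not values taken from this environment:
$ oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
    -p '{"spec":{"routeSelector":{"matchExpressions":[{"key":"type","operator":"NotIn","values":["sharded"]}]}}}'
The sharded ingress controller (test1 here) would then carry the inverse selector (operator In on the same label), so that each route is served by exactly one router.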
For example, in an environment where fh.apps.cluster43.example.com is hosted by the default ingress controller and fh.test1.cluster43.example.com by the test1 ingress controller:
$ curl fh.apps.cluster43.example.com
Apache default
$ curl fh.test1.cluster43.example.com
Apache test1
$ getent hosts fh.apps.cluster43.example.com | awk '{print $1}'
172.16.0.207
$ getent hosts fh.test1.cluster43.example.com | awk '{print $1}'
172.16.0.224
In the above environment, an attacker could take advantage of this by changing the resolver and pointing the test1 hostname at 172.16.0.207, the IP of the default ingress controller:
$ curl -I fh.test1.cluster43.example.com --resolve fh.test1.cluster43.example.com:80:172.16.0.207 2>/dev/null | grep HTTP
HTTP/1.1 200 OK
$ curl fh.test1.cluster43.example.com --resolve fh.test1.cluster43.example.com:80:172.16.0.207 2>/dev/null
Apache test1
How can one prevent the default ingress controller from serving the routes of all projects when using router sharding in OpenShift 4.x?
Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by NetworkPolicy object rules.
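To illustrate that limitation, below is a minimal sketch of a NetworkPolicy one might try in order to admit traffic only from the test1 router; the policy name, namespace, and labels are assumptions for this example (kubernetes.io/metadata.name is set automatically on namespaces in recent OpenShift releases). When the router pods run with hostNetwork, their traffic is not attributed to those pods, so this policy does not keep the default router out as intended:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-test1-router-only   # hypothetical name
  namespace: test1                     # assumed application namespace
spec:
  podSelector: {}                      # select all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Assumption: router pods carry the operator-applied deployment label
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-ingress
      podSelector:
        matchLabels:
          ingresscontroller.operator.openshift.io/deployment-ingresscontroller: test1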
Environment
- Red Hat OpenShift Container Platform 4.x