OCP 4 Router Sharding


I'm looking for info regarding router sharding for an OCP DMZ environment and found: https://access.redhat.com/solutions/3880621
I am a bit unsure about the specifics; I have a new VMware cluster up and running.

I would like to create a router-external ingresscontroller that serves only routes that are labeled type=public or similar.

If I understand everything correctly, labels are used on the ingresscontroller to place the different router instances on different worker nodes?

If my default/internal router is running on 3 infra worker nodes and an internal LB forwards to these infra workers, I guess I need to place router-external on the other, non-infra workers,
and point the external LB to those workers. Would that be the correct design?
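A minimal sketch of what I have in mind for the router-external ingresscontroller (the domain and label values here are placeholders, not confirmed settings):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: router-external
  namespace: openshift-ingress-operator
spec:
  domain: apps-ext.example.com            # placeholder domain for the external shard
  routeSelector:
    matchLabels:
      type: public                        # serve only routes labeled type=public
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""   # adjust to target only the non-infra/DMZ workers
```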

EDIT: to clarify what I'm after a bit.
I think I may need at least 3 router shards in the end.
Preferably I want to run these on infra nodes.
Is the recommended design that I create 6 infra nodes to run these (2 replicas per router), and add 2 more infra nodes if another shard is needed?
Or is there a way to configure which port the routers are listening on and run multiple routers per infra node?
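Regarding the port question: depending on the OCP version, the HostNetwork endpoint publishing strategy may allow the router's host ports to be changed, which would in principle let two router shards share a node. A sketch, assuming a release where these fields are available (check your version's IngressController API reference before relying on this):

```yaml
spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 8080     # instead of the default 80 on the host network
      httpsPort: 8443    # instead of the default 443
      statsPort: 8936    # router stats port
```

The external load balancer would then need to forward to these non-standard ports.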


Hello Goran,

  • By default the ingresscontroller is configured to run the default router pods on "worker" nodes:
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: sharded
    namespace: openshift-ingress-operator
  spec:
    domain: <apps-sharded.basedomain.example.net>
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/worker: ""
    routeSelector:
      matchLabels:
        type: sharded
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
  • And you can use infra nodes instead by labeling a couple of worker nodes with the infra role and modifying the following line to point to the infra nodes:
          node-role.kubernetes.io/worker: ""  <---- change to node-role.kubernetes.io/infra: ""
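For example, the nodePlacement stanza would then look like this (assuming your infra nodes carry the standard node-role.kubernetes.io/infra label):

```yaml
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""   # schedule the router pods on infra nodes
```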

But you have to update the load balancer with those infra nodes' IPs/ports accordingly.

  • Now for your question: yes, you will need to run the new router pods on different nodes (I am not sure about using different ports on the same node), and update the load balancer with the IPs of the nodes used.
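Once the sharded ingresscontroller is in place, a route opts into it via the label that the routeSelector matches on. A sketch, using a hypothetical route name and the type=public label from your plan:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-public-app            # hypothetical route name
  labels:
    type: public                 # must match the routeSelector on router-external
spec:
  to:
    kind: Service
    name: my-public-app          # hypothetical backing service
```

Note that unless the default ingresscontroller is also given a routeSelector (or a namespaceSelector) that excludes these routes, the default router will serve them as well.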

That solution discusses the process on an AWS IPI cluster.