Creating fully customizable non-default IngressController for ROSA and OSD
What's new?
In version 4.14 of Red Hat OpenShift Dedicated (OSD) and Red Hat OpenShift Service on AWS (ROSA), the Ingress is changing to give customers more control over their workloads and configuration.
When an OSD/ROSA cluster is created with Openshift version 4.14, or upgraded from 4.13 to 4.14, the following new features are enabled:
- Customers will be able to create and manage HAProxy-based additional Ingress Controller resources alongside the default provided by the installer. Additional Ingresses will continue to be fully supported by Red Hat, and will be completely configurable.
- Customers will be able to configure the following additional options on the default Ingress before and after installation:
- Customers can switch between AWS Network Load Balancers (NLB) and AWS Classic Load Balancers (CLB) using the ROSA CLI.
  - Customers can exclude namespaces from being resolved. Namespaces that begin with `openshift-` or `kube-` cannot be excluded. This is equivalent to this field on an OCP `IngressController` and includes all namespaces by default.
  - Customers can exclude routes matching label selectors from being resolved by the default Ingress. Route selectors whose labels match routes in the `openshift-` or `kube-` namespaces cannot be excluded. This is equivalent to this field on an OCP `IngressController` and includes all routes by default.
  - Customers can allow routes to match paths in multiple namespaces. This is equivalent to this field on an OCP `IngressController` and is switched off by default.
  - Customers can create routes with wildcard domains. This is equivalent to this field on an OCP `IngressController` and is switched off by default.
For more information on these configuration options, please refer to the `rosa edit ingress` and `rosa create cluster` commands in the latest version of the CLI.
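As an illustrative sketch only, the options above map onto `rosa edit ingress` flags roughly as follows. The cluster name and values here are placeholders, and the flag names reflect recent ROSA CLI releases — confirm them with `rosa edit ingress --help` on your installed version.

```shell
# Edit the default Ingress on a hypothetical cluster named "my-cluster".
rosa edit ingress default -c my-cluster \
  --lb-type nlb \
  --excluded-namespaces "my-excluded-ns" \
  --namespace-ownership-policy InterNamespaceAllowed \
  --wildcard-policy WildcardsAllowed
```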
Important: Additional `IngressController` resources are considered customer workloads and must not run on infra nodes, as stated in the ROSA service definition page. Please ensure the `nodePlacement` in the `IngressController` definition is configured to run on worker nodes only.
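As a minimal sketch of such a `nodePlacement` stanza (the controller name and domain below are placeholders, not values from this article), an additional `IngressController` pinned to worker nodes might look like:

```yaml
# Hypothetical additional IngressController; name and domain are placeholders.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sample-ingress
  namespace: openshift-ingress-operator
spec:
  domain: apps2.example.com
  nodePlacement:
    nodeSelector:
      matchLabels:
        # Restrict the router pods to worker nodes only.
        node-role.kubernetes.io/worker: ""
```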
What's changing?
On version 4.14, the following will change:
- New clusters with ROSA/OSD v4.14 and above will default to provisioning with Network Load Balancers (as opposed to Classic Load Balancers).
- Existing clusters, after upgrading to OCP 4.14, will be able to change the default Ingress Controller's load balancer between a Classic Load Balancer and a Network Load Balancer.
- Custom Domains will become native OpenShift `IngressController` resources. These can be fully configured with any feature available on the OpenShift `IngressController` API.
- Customers will not be able to create new `CustomDomain` resources. For equivalent functionality, customers can instead create additional `IngressController` resources.
- Creating additional Ingresses managed by Red Hat SRE via `rosa edit ingress` and the Hybrid Cloud Console will be deprecated on all versions of OSD/ROSA.
- Customers upgrading from version 4.13 of OSD/ROSA to 4.14 must acknowledge these changes through the Hybrid Cloud Console before commencing an upgrade.
- Customers on 4.13 may start deploying additional `IngressController` resources alongside their `CustomDomain` resources.
What's not changing?
- The default Ingress is still fully managed by Red Hat SRE, and is scheduled onto Red Hat `infra` nodes. All other `IngressController` objects will continue to be supported by the Red Hat support team.
- Existing clusters will continue to have a Classic Load Balancer for the default Ingress when upgrading from 4.13 to 4.14.
- Editing the default Ingress with `rosa edit ingress` and the Hybrid Cloud Console is still fully managed by Red Hat SRE.
- Editing existing additional Ingresses via `rosa edit ingress` and the Hybrid Cloud Console is still supported for the lifetime of those objects.
- Customers can create, edit, and delete Custom Domains as normal before version 4.14 of OSD/ROSA.
What does a customer need to do to benefit from these features?
For newly provisioned clusters on version 4.14 and above:
- Customers can use all of the features mentioned above through the OCM/ROSA CLI without any additional actions.
- Customers cannot create `CustomDomain` resources to create second Ingresses; instead, they can create `IngressController` resources directly with the `oc` CLI, as explained here.
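As a sketch of that workflow (the manifest file name is a placeholder, and cluster-admin access to the cluster is assumed), a second Ingress can be created from an `IngressController` manifest and then checked for admission:

```shell
# Apply a hypothetical IngressController manifest for the second Ingress.
oc apply -f second-ingresscontroller.yaml

# Verify the new IngressController was created and admitted.
oc -n openshift-ingress-operator get ingresscontroller
```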
For clusters without existing CustomDomain resources or second Ingresses managed by the OCM/ROSA CLI:
- It is possible to use the new `rosa edit ingress` features mentioned above out of the box, without any additional action.
For clusters with existing CustomDomain installations, or second Ingresses managed by the OCM/ROSA CLI:
- Red Hat will convert the `CustomDomain` resources into native `IngressController` resources, able to be managed from the `oc` CLI.
- Changing the load balancer type for `CustomDomain` resources migrated to `IngressController` resources is optional. They are not automatically changed as part of the migration, and can be changed after the upgrade.
- Additional Ingress router pods will be rescheduled from Red Hat-managed infra nodes to worker nodes as part of the upgrade. These router pods are backed by a Deployment, so the migration should happen without downtime.
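After the upgrade, the rescheduling can be confirmed from the `oc` CLI. This is a sketch of one way to check it, assuming the additional router pods run in the usual `openshift-ingress` namespace:

```shell
# Show which nodes the router pods landed on after the migration.
oc -n openshift-ingress get pods -o wide

# List the worker nodes; the additional router pods above should be on these.
oc get nodes -l node-role.kubernetes.io/worker
```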
Early Access/Opting Out
- To request early access to this additional functionality on 4.13, prior to version 4.14, please contact Red Hat support and open a case to request access. A support case is not required for simply deploying additional `IngressController` resources on 4.13.
- To explicitly opt out of this feature and keep using Custom Domains on version 4.14, contact Red Hat support before upgrading.
- To keep using OSD/ROSA second Ingresses on version 4.14, contact Red Hat support before upgrading.
Known issues
There is a known issue (tracked in a private bug: OCPBUGS-16728) when switching between a Classic Load Balancer and a Network Load Balancer on AWS on versions 4.13 and above. When switching from a CLB to an NLB, or the reverse, the underlying AWS load balancer that is being switched away from must be manually removed from AWS. This resource is cleaned up on cluster deletion, but has to be deleted manually after switching load balancer types to avoid extra charges.
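As a sketch of the manual cleanup (the load balancer name is a placeholder you must identify yourself — verify via its tags that it belongs to your cluster and is no longer in use before deleting):

```shell
# List Classic Load Balancers in the account to find the orphaned one.
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[].LoadBalancerName"

# Inspect its tags to confirm it belongs to this cluster's old default Ingress.
aws elb describe-tags --load-balancer-names <name>

# Delete the orphaned Classic Load Balancer to stop incurring charges.
aws elb delete-load-balancer --load-balancer-name <name>
```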