About using AWS Local Zones with ROSA


Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.

Why Use Local Zones?

AWS Local Zones are a type of AWS infrastructure deployment that places compute and other select services close to large metropolitan areas, where customers can place latency-sensitive application workloads that require single-digit millisecond latency or local data processing. By running ROSA clusters on AWS Local Zones, customers can deploy and operate a fully managed Kubernetes-based application platform that runs applications closer to end users.

Please refer to the AWS documentation for Local Zones to learn more and to identify the ingress/egress network requirements for connecting Local Zones to the parent AWS Region or to private data centers.

Requirements to use Local Zones for MachinePools

  • The ROSA cluster version must be 4.12 or later.

  • The AWS account must have Local Zones enabled.

  • In order to use a specific Local Zone, the ROSA cluster must initially be built in an AWS region that has a corresponding Local Zone. Refer to Locations to determine the Local Zones available for each AWS region.

  • The ROSA cluster must be initially built in an existing AWS VPC (a.k.a. BYO-VPC).

  • A subnet in the desired Local Zone must be created within the VPC where the cluster is created. The subnet can be created either before or after the cluster itself. (A minimal AWS CLI sketch covering these subnet steps appears at the end of this list.)

  • The subnet must be associated with a route table that has a route to a NAT gateway.

  • The following tag must be added to the subnet: kubernetes.io/cluster/<infra_id>: shared. [Use rosa describe cluster -c <cluster-name> | grep -i "Infra ID:" to identify the cluster's infra_id.]

  • A cluster-wide MTU (Maximum Transmission Unit) of 1200 is required, because the maximum MTU supported between an EC2 instance in the parent region and an EC2 instance in the Local Zone is 1300, and the OVN-Kubernetes overlay consumes 100 bytes of that. Lowering the MTU cluster-wide can affect performance and should be done carefully. Changing the MTU is disruptive: nodes in the cluster might be temporarily unavailable as the MTU update rolls out. To change the MTU on the cluster, the administrator can run the following commands:

# Start the MTU migration: move the cluster network MTU from its current value to 1200.
# The machine network MTU stays at 9001 (the EC2 jumbo-frame default).
$ oc patch network.operator.openshift.io/cluster --type=merge --patch "{\"spec\":{\"migration\":{\"mtu\":{\"network\":{\"from\":$(oc get network.config.openshift.io/cluster --output=jsonpath={.status.clusterNetworkMTU}),\"to\":1200},\"machine\":{\"to\":9001}}}}}"

$ oc get mcp
# Wait for the configuration rollout, i.e., until every machine config pool reports
# UPDATED=True, UPDATING=False, and DEGRADED=False. It can take several minutes for
# the configuration to be applied to all nodes.

# Finalize the migration and set the OVN-Kubernetes MTU to 1200
$ oc patch network.operator.openshift.io/cluster --type=merge --patch '{"spec":{"migration":null,"defaultNetwork":{"ovnKubernetesConfig":{"mtu":1200}}}}'

$ oc get mcp
# Wait again until all machine config pools have finished updating.
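
For reference, the subnet-related requirements above can be satisfied with the AWS CLI. The following is a minimal sketch, not a definitive procedure: the Local Zone names (us-east-1-bos-1, us-east-1-bos-1a) are examples only, and <vpc-id>, <subnet-cidr>, <route-table-id>, and <subnet-id> are placeholders to replace with your own values.

# List the Local Zones available in the current region
$ aws ec2 describe-availability-zones --all-availability-zones \
    --filters Name=zone-type,Values=local-zone

# Opt the account in to the Local Zone group (example group name; adjust to your region)
$ aws ec2 modify-availability-zone-group --group-name us-east-1-bos-1 \
    --opt-in-status opted-in

# Create a subnet in the Local Zone inside the cluster's VPC
$ aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block <subnet-cidr> \
    --availability-zone us-east-1-bos-1a

# Associate the subnet with a route table that has a route to a NAT gateway
$ aws ec2 associate-route-table --route-table-id <route-table-id> --subnet-id <subnet-id>

# Tag the subnet so the cluster can use it
$ aws ec2 create-tags --resources <subnet-id> \
    --tags Key=kubernetes.io/cluster/<infra_id>,Value=shared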

Creating a MachinePool in the Local Zone

  • These steps assume that all of the requirements above are met.

An overview of the steps is as follows:

  1. Use the ROSA CLI to create a machine pool in the cluster.
  2. Provide the subnet and instance type for the machine pool to the ROSA CLI.
  3. The cluster provisions the nodes after several minutes.

This example shows interactive mode for all available options.

$ rosa create machinepool -c <cluster-name> -i
I: Enabling interactive mode
? Machine pool name: my-lz-mp
? Create multi-AZ machine pool: No
? Select subnet for a single AZ machine pool (optional): Yes
? Subnet ID:     subnet-<a> (region-info)
? Enable autoscaling (optional): No
? Replicas: 2
I: Fetching instance types
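
The same machine pool can also be created non-interactively. A minimal sketch, assuming the rosa CLI flags for single-AZ machine pools (verify the exact flag names against rosa create machinepool --help for your CLI version):

$ rosa create machinepool -c <cluster-name> --name my-lz-mp \
    --replicas 2 --subnet subnet-<a> --instance-type <instance-type>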

Then list your machine pools:

$ rosa list machinepools -c <cluster-name>
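
Once the new nodes join the cluster, one way to confirm they landed in the Local Zone is to inspect the standard upstream Kubernetes topology label on the nodes:

$ oc get nodes -L topology.kubernetes.io/zone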
