OpenShift Quotas and Rolling Updates


Introduction to Quotas in OpenShift

In OpenShift, you can apply various types of quotas to control and manage resource consumption by projects (namespaces) and individual users. Here are some common types of quotas in OpenShift:

  • Resource Quotas: Resource quotas allow you to limit the amount of compute resources (CPU, memory) and persistent storage that can be consumed by pods, containers, and persistent volume claims within a project (namespace). Resource quotas help prevent resource contention and ensure fair resource allocation among projects.

  • Limit Ranges: Limit ranges allow you to specify default resource limits and request values for containers within pods. Limit ranges help enforce consistent resource allocation practices and prevent excessive resource requests that could lead to resource exhaustion or contention.

  • Pod Quotas: Pod quotas allow you to limit the total number of pods that can be created within a project (namespace). Pod quotas help prevent overprovisioning of pods and ensure efficient utilization of cluster resources.

  • Persistent Volume (PV) Quotas: Persistent volume quotas allow you to limit the total amount of persistent volume storage that can be consumed by persistent volume claims (PVCs) within a project (namespace). PV quotas help prevent overconsumption of storage resources and ensure fair allocation of persistent storage among projects.

  • Resource Request and Limit Quotas: Resource request and limit quotas allow you to enforce specific resource requests and limit values for pods and containers within a project (namespace). These quotas help ensure that pods and containers are provisioned with appropriate resource allocations and prevent resource wastage or contention.

  • Object Quotas: Object quotas allow you to limit the total number of objects (such as pods, services, secrets, config maps, etc.) that can be created within a project (namespace). Object quotas help prevent excessive object creation and ensure efficient management of project resources.

These are some of the common types of quotas that can be applied in OpenShift to control and manage resource consumption at the project and user level. Quotas are a powerful tool for enforcing resource allocation policies, preventing resource abuse, and optimizing resource utilization in OpenShift clusters.
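
As a concrete illustration of several of the quota types above, here is a minimal sketch of a ResourceQuota and a LimitRange in a single YAML file. All names and values are hypothetical; adjust them to your project's needs:

# A ResourceQuota capping total compute, storage, and object counts in one project
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota          # hypothetical name
  namespace: my-project        # hypothetical project/namespace
spec:
  hard:
    requests.cpu: "4"              # total CPU requested by all pods
    requests.memory: 8Gi           # total memory requested by all pods
    limits.cpu: "8"                # total CPU limit across all pods
    limits.memory: 16Gi            # total memory limit across all pods
    pods: "20"                     # maximum number of pods
    persistentvolumeclaims: "5"    # maximum number of PVCs
    requests.storage: 100Gi        # total storage requested by all PVCs
    configmaps: "30"               # example object-count limits
    secrets: "30"
---
# A LimitRange supplying default requests/limits for containers that omit them
apiVersion: v1
kind: LimitRange
metadata:
  name: project-limits         # hypothetical name
  namespace: my-project
spec:
  limits:
  - type: Container
    default:                   # applied as the limit when a container declares none
      cpu: 500m
      memory: 512Mi
    defaultRequest:            # applied as the request when a container declares none
      cpu: 250m
      memory: 256Mi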

Quota Enforcement vs Deployment of New Versions of an Application

Can quota exhaustion block deployment of new versions of an application?

Yes. In OpenShift, resource quotas are enforced at the project (namespace) level, and they apply to all pods, containers, and persistent volume claims (PVCs) within the project.

If the resource quota for a project is exceeded due to the resource consumption of deployed applications, it can potentially block the deployment of new versions of the same app or any other application within that project.

Here's how it typically works:

  1. Resource Quota: When you define a resource quota for a project, you specify limits on the amount of CPU, memory, and storage that can be consumed by pods, containers, and PVCs within that project.

  2. Deployment Attempt: When you attempt to deploy a new version of an application (or any new application) within the project, OpenShift checks the available resources against the defined resource quotas for the project.

  3. Quota Exceeded: If the deployment of the new application would exceed the resource limits defined by the quota (e.g., due to the existing resource consumption of other applications within the project), the deployment may be blocked or delayed until sufficient resources become available.

  4. Error or Rejection: Depending on the specific configuration and policies of the OpenShift cluster, the deployment attempt may result in an error message indicating that the resource quota has been exceeded, or it may be rejected outright.

In summary, if the resource consumption of deployed applications within a project exceeds the defined resource quotas, it can potentially block or delay the deployment of new versions of the same app or any other application within that project. Therefore, it's important to carefully manage resource quotas and monitor resource consumption to ensure efficient utilization of cluster resources and avoid deployment issues.
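
In practice, when a quota restricts CPU or memory, every new pod must declare requests and limits for those resources (or inherit defaults from a LimitRange), and those declared values are what the quota admission check counts. A minimal sketch, with hypothetical names and values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # hypothetical application name
  namespace: my-project
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: image-registry.example.com/my-project/myapp:1.2   # hypothetical image
        resources:
          requests:            # counted against requests.cpu / requests.memory in the quota
            cpu: 250m
            memory: 256Mi
          limits:              # counted against limits.cpu / limits.memory in the quota
            cpu: 500m
            memory: 512Mi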

Using Rollout Strategies Can Help

The rollout strategy can help manage the deployment of a new version of an application in OpenShift, especially in scenarios where resources are limited or where you want to control the deployment process more granularly.

OpenShift provides various rollout strategies that can be configured to control how new versions of applications are deployed and updated.

These strategies include:

  1. Recreate Strategy: This strategy terminates all existing pods before deploying new ones. All pods are replaced at once, which usually causes a brief period of downtime but avoids running the old and new versions side by side (and the extra resource usage that comes with it).

  2. Rolling Strategy: This strategy gradually replaces existing pods with new ones, one at a time. It allows for a smooth and controlled transition between versions, with minimal impact on application availability.

  3. Blue-Green Strategy: This strategy involves deploying the new version of the application alongside the existing version (blue) and then routing traffic to the new version (green) once it's ready. It allows for zero-downtime deployments and easy rollback to the previous version if necessary.

  4. Canary Strategy: This strategy involves deploying the new version of the application to a subset of users or traffic (canary) before rolling it out to the entire user base. It allows for testing and validation of the new version in a controlled environment before full deployment.

  5. Custom Strategy: OpenShift also allows for custom rollout strategies to be defined based on specific requirements or workflows.

By choosing an appropriate rollout strategy and configuring it according to your needs, you can effectively manage the deployment of a new version of the same app, even in scenarios where resources are limited or quotas are fully utilized.

Each rollout strategy offers different benefits and trade-offs, so it's important to consider your specific requirements and constraints when selecting the appropriate strategy for your deployment.
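
Blue-green and canary rollouts are usually built on top of the basic strategies, for example by running the old and new versions as separate Deployments behind separate Services and splitting traffic at the Route. A rough sketch of the traffic-splitting piece only, with hypothetical service names and weights:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp                  # hypothetical route name
spec:
  to:
    kind: Service
    name: myapp-blue           # current (stable) version
    weight: 90                 # roughly 90% of traffic
  alternateBackends:
  - kind: Service
    name: myapp-green          # new (canary) version
    weight: 10                 # roughly 10% of traffic

Shifting the weights gradually toward the green service, and finally removing the blue backend, completes the cutover; setting the blue weight back to 100 is the rollback path.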

Deployment vs DeploymentConfig

Take into consideration that DeploymentConfig is deprecated in Red Hat OpenShift Container Platform 4.14 and later.

While the rolling strategy can be used with both Deployment and DeploymentConfig resources in Kubernetes and OpenShift, there are some differences in how they are implemented and managed:

  1. Resource Definition: Deployment is a native Kubernetes resource, while DeploymentConfig is specific to OpenShift. Deployment provides a higher-level abstraction for managing deployments and scaling of ReplicaSets, while DeploymentConfig is tailored for OpenShift's deployment management features.

  2. API Version and Kind: Deployment uses the apps/v1 API version and Deployment kind, while DeploymentConfig uses the apps.openshift.io/v1 API version and DeploymentConfig kind.

  3. RollingUpdate Strategy Definition: In Deployment, the rolling update strategy is defined within the strategy field as RollingUpdate. In DeploymentConfig, it's defined within the strategy field as Rolling and rollingParams.

  4. Parameters: The parameters for the rolling strategy may differ slightly between Deployment and DeploymentConfig, although the basic functionality remains the same. For example, the fields for controlling the number of pods to create or delete (maxSurge and maxUnavailable) may have slightly different default values or behavior.

  5. Tooling and Integration: OpenShift provides additional tooling and integration for managing DeploymentConfig resources, such as the web console, command-line tools (oc), and APIs. Deployment resources are more generic and can be managed using standard Kubernetes tooling.

DeploymentConfig objects prefer consistency, whereas Deployment objects favor availability over consistency.

The two resources also describe their strategies slightly differently.

DeploymentConfig strategies:

  • Recreate Strategy: Recreates all instances of the application at once. It terminates all existing pods before creating new ones with the updated configuration. This approach may result in downtime during the update process but ensures a clean and complete transition to the new version.

  • Rolling Strategy: Gradually replaces existing instances of the application with new ones, one at a time. It allows for a controlled and gradual rollout of changes, minimizing disruption to the application's availability. The rolling strategy can be further customized with parameters such as the maximum surge and maximum unavailable pods.

  • Custom Strategy: OpenShift also allows custom deployment strategies to be defined based on specific requirements or workflows. Custom strategies provide flexibility in managing deployments and updates according to the unique needs of the application or environment.

Deployment strategies:

  • Recreate: Deletes all existing pods before creating new ones with the updated configuration. It is a simpler approach than RollingUpdate but may result in downtime during the update process, since all pods are terminated before new ones are created.

  • RollingUpdate: Gradually replaces old ReplicaSets with new ones, ensuring that a certain number of pods remain available at all times during the update process. It allows for a controlled and gradual rollout of changes without causing downtime.
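
To make the difference in strategy definition concrete, here are minimal strategy stanzas for each resource type (values shown are illustrative; the surrounding fields are omitted):

# Deployment (apps/v1): parameters live under spec.strategy.rollingUpdate
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%            # extra pods allowed above the desired replica count
      maxUnavailable: 25%      # pods that may be unavailable during the update
---
# DeploymentConfig (apps.openshift.io/v1): parameters live under spec.strategy.rollingParams
spec:
  strategy:
    type: Rolling
    rollingParams:
      maxSurge: 25%
      maxUnavailable: 25%
      intervalSeconds: 1       # DeploymentConfig-specific pacing/timeout parameters
      updatePeriodSeconds: 1
      timeoutSeconds: 600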

Feature comparison between DeploymentConfig and Deployment:

  • Automatic rollbacks: DeploymentConfig yes, Deployment no.
  • Rollover: DeploymentConfig no, Deployment yes.
  • Triggers: A DeploymentConfig deployment is "triggered" when its configuration is changed or when a tag in an ImageStream is changed. A Deployment rolls out implicitly on any pod template change, and the rollout can be paused with: oc rollout pause deployments/<name>
  • Custom strategies: DeploymentConfig yes, Deployment no.
  • Lifecycle hooks: DeploymentConfig yes, Deployment no.
  • Pausing mid-rollout: DeploymentConfig no, Deployment yes.
  • Proportional scaling: DeploymentConfig no, Deployment yes.
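
For reference, the trigger behavior described above is declared on a DeploymentConfig roughly as follows (the ImageStream and container names are hypothetical):

spec:
  triggers:
  - type: ConfigChange             # roll out when the pod template in the DeploymentConfig changes
  - type: ImageChange              # roll out when the referenced ImageStreamTag is updated
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:latest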

Troubleshooting and Workarounds

If your deployment gets stuck during a rollout due to resource exhaustion, but it works when deployed manually, the issue could be related to how OpenShift handles rolling updates. During a rolling update, the new pods are created before the old ones are fully terminated, which can temporarily double the resource usage.

  • Increase Quota Temporarily: Allow more resources temporarily to accommodate the additional pods during the rollout.
  • Check Deployment Strategy: Consider using a Recreate strategy instead of Rolling, which stops all old pods before starting new ones.
  • Scale Down Existing Pods: Manually scale down the deployment to free up resources before applying the update:

oc scale deployment <deployment-name> --replicas=<desired-count>

These steps should help ensure that the rollout doesn't exceed the available resource quota.
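
If you prefer to keep the rolling strategy, another option is to cap the surge so the rollout never creates pods beyond the desired replica count, at the cost of briefly reduced capacity during the update. A sketch for a Deployment (the same idea applies to a DeploymentConfig via rollingParams):

# In the Deployment spec of the affected application (name hypothetical):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0              # never create pods beyond the desired replica count
      maxUnavailable: 1        # take down one old pod at a time to make room for its replacement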
