Post-installation configuration

Table of contents

1. Post-installation cluster tasks
    1.1. Adjust worker nodes
        1.1.1. Understanding the difference between machine sets and the machine config pool
        1.1.2. Scaling a machine set manually
        1.1.3. The machine set deletion policy
    1.2. Creating infrastructure machine sets
        1.2.1. OpenShift Container Platform infrastructure components
        1.2.2. Creating default cluster-wide node selectors
    1.3. Moving resources to infrastructure machine sets
        1.3.1. Moving the router
        1.3.2. Moving the default registry
        1.3.3. Creating infrastructure machine sets for production environments
            1.3.3.1. Creating a machine set
        1.3.4. Creating machine sets for different clouds
            1.3.4.1. Sample YAML for a machine set custom resource on AWS
            1.3.4.2. Sample YAML for a machine set custom resource on Azure
            1.3.4.3. Sample YAML for a machine set custom resource on GCP
            1.3.4.4. Sample YAML for a machine set custom resource on vSphere
    1.4. Creating an infrastructure node
    1.5. Creating infrastructure machines
    1.6. About the cluster autoscaler
        1.6.1. ClusterAutoscaler resource definition
        1.6.2. Deploying the cluster autoscaler
    1.7. About the machine autoscaler
        1.7.1. MachineAutoscaler resource definition
        1.7.2. Deploying the machine autoscaler
    1.8. Enabling Technology Preview features using FeatureGates
    1.9. etcd tasks
        1.9.1. About etcd encryption
        1.9.2. Enabling etcd encryption
        1.9.3. Disabling etcd encryption
        1.9.4. Backing up etcd data
        1.9.5. Defragmenting etcd data
        1.9.6. Restoring to a previous cluster state
    1.10. Pod disruption budgets
        1.10.1. Understanding how to use pod disruption budgets to specify the number of pods that must be up
        1.10.2. Specifying the number of pods that must be up with pod disruption budgets
    1.11. Removing cloud provider credentials
    1.12. Configuring image streams for a disconnected cluster
        1.12.1. Using Cluster Samples Operator image streams with alternate or mirrored registries
        1.12.2. Preparing your cluster to gather support data
2. Post-installation node tasks
    2.1. Adding RHEL compute machines to an OpenShift Container Platform cluster
        2.1.1. About adding RHEL compute nodes to a cluster
        2.1.2. System requirements for RHEL compute nodes
            2.1.2.1. Certificate signing requests management
        2.1.3. Preparing the machine to run the playbook
        2.1.4. Preparing a RHEL compute node
        2.1.5. Adding a RHEL compute machine to your cluster
        2.1.6. Required parameters for the Ansible hosts file
        2.1.7. Optional: Removing RHCOS compute machines from a cluster
    2.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster
        2.2.1. Prerequisites
        2.2.2. Creating more RHCOS machines using an ISO image
        2.2.3. Creating more RHCOS machines by PXE or iPXE booting
        2.2.4. Approving the certificate signing requests for your machines
    2.3. Deploying machine health checks
        2.3.1. About machine health checks
            2.3.1.1. MachineHealthChecks on Bare Metal
            2.3.1.2. Limitations when deploying machine health checks
        2.3.2. Sample MachineHealthCheck resource
            2.3.2.1. Short-circuiting machine health check remediation
                2.3.2.1.1. Setting maxUnhealthy by using an absolute value
                2.3.2.1.2. Setting maxUnhealthy by using percentages
        2.3.3. Creating a MachineHealthCheck resource
        2.3.4. Scaling a machine set manually
        2.3.5. Understanding the difference between machine sets and the machine config pool
    2.4. Recommended node host practices
        2.4.1. Creating a KubeletConfig CRD to edit kubelet parameters
        2.4.2. Control plane node sizing
        2.4.3. Setting up CPU Manager
    2.5. Huge pages
        2.5.1. What huge pages do
        2.5.2. How huge pages are consumed by apps
        2.5.3. Configuring huge pages
            2.5.3.1. At boot time
    2.6. Understanding device plug-ins
        2.6.1. Methods for deploying a device plug-in
        2.6.2. Understanding the Device Manager
        2.6.3. Enabling Device Manager
    2.7. Taints and tolerations
        2.7.1. Understanding taints and tolerations
            2.7.1.1. Understanding how to use toleration seconds to delay pod evictions
            2.7.1.2. Understanding how to use multiple taints
            2.7.1.3. Understanding pod scheduling and node conditions (taint node by condition)
            2.7.1.4. Understanding evicting pods by condition (taint-based evictions)
            2.7.1.5. Tolerating all taints
        2.7.2. Adding taints and tolerations
        2.7.3. Adding taints and tolerations using a machine set
        2.7.4. Binding a user to a node using taints and tolerations
        2.7.5. Controlling nodes with special hardware using taints and tolerations
        2.7.6. Removing taints and tolerations
    2.8. Topology Manager
        2.8.1. Topology Manager policies
        2.8.2. Setting up Topology Manager
        2.8.3. Pod interactions with Topology Manager policies
    2.9. Resource requests and overcommitment
    2.10. Cluster-level overcommit using the Cluster Resource Override Operator
        2.10.1. Installing the Cluster Resource Override Operator using the web console
        2.10.2. Installing the Cluster Resource Override Operator using the CLI
        2.10.3. Configuring cluster-level overcommit
    2.11. Node-level overcommit
        2.11.1. Understanding compute resources and containers
            2.11.1.1. Understanding container CPU requests
            2.11.1.2. Understanding container memory requests
        2.11.2. Understanding overcommitment and quality of service classes
            2.11.2.1. Understanding how to reserve memory across quality of service tiers
        2.11.3. Understanding swap memory and QOS
        2.11.4. Understanding nodes overcommitment
        2.11.5. Disabling or enforcing CPU limits using CPU CFS quotas
        2.11.6. Reserving resources for system processes
        2.11.7. Disabling overcommitment for a node
    2.12. Project-level limits
        2.12.1. Disabling overcommitment for a project
    2.13. Freeing node resources using garbage collection
        2.13.1. Understanding how terminated containers are removed through garbage collection
        2.13.2. Understanding how images are removed through garbage collection
        2.13.3. Configuring garbage collection for containers and images
    2.14. Using the Node Tuning Operator
        2.14.1. Accessing an example Node Tuning Operator specification
        2.14.2. Custom tuning specification
        2.14.3. Default profiles set on a cluster
        2.14.4. Supported Tuned daemon plug-ins
    2.15. Configuring the maximum number of pods per node
3. Post-installation network configuration
    3.1. Configuring network policy with OpenShift SDN
        3.1.1. About network policy
        3.1.2. Example NetworkPolicy object
        3.1.3. Creating a network policy
        3.1.4. Deleting a network policy
        3.1.5. Viewing network policies
        3.1.6. Configuring multitenant isolation by using network policy
        3.1.7. Creating default network policies for a new project
        3.1.8. Modifying the template for new projects
            3.1.8.1. Adding network policies to the new project template
    3.2. Setting DNS to private
    3.3. Enabling the cluster-wide proxy
    3.4. Cluster Network Operator configuration
    3.5. Configuring ingress cluster traffic
    3.6. Red Hat OpenShift Service Mesh supported configurations
        3.6.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh
        3.6.2. Supported Mixer adapters
        3.6.3. Red Hat OpenShift Service Mesh installation activities
    3.7. Optimizing routing
        3.7.1. Baseline Ingress Controller (router) performance
        3.7.2. Ingress Controller (router) performance optimizations
4. Post-installation storage configuration
    4.1. Dynamic provisioning
        4.1.1. About dynamic provisioning
        4.1.2. Available dynamic provisioning plug-ins
    4.2. Defining a storage class
        4.2.1. Basic StorageClass object definition
        4.2.2. Storage class annotations
        4.2.3. RHOSP Cinder object definition
        4.2.4. AWS Elastic Block Store (EBS) object definition
        4.2.5. Azure Disk object definition
        4.2.6. Azure File object definition
            4.2.6.1. Considerations when using Azure File
        4.2.7. GCE PersistentDisk (gcePD) object definition
        4.2.8. VMware vSphere object definition
    4.3. Changing the default storage class
    4.4. Optimizing storage
    4.5. Available persistent storage options
    4.6. Recommended configurable storage technology
        4.6.1. Specific application storage recommendations
            4.6.1.1. Registry
            4.6.1.2. Scaled registry
            4.6.1.3. Metrics
            4.6.1.4. Logging
            4.6.1.5. Applications
        4.6.2. Other specific application storage recommendations
    4.7. Deploy Red Hat OpenShift Container Storage
5. Preparing for users
    5.1. Understanding identity provider configuration
        5.1.1. About identity providers in OpenShift Container Platform
        5.1.2. Supported identity providers
        5.1.3. Identity provider parameters
        5.1.4. Sample identity provider CR
    5.2. Using RBAC to define and apply permissions
        5.2.1. RBAC overview
            5.2.1.1. Default cluster roles
            5.2.1.2. Evaluating authorization
                5.2.1.2.1. Cluster role aggregation
        5.2.2. Projects and namespaces
        5.2.3. Default projects
        5.2.4. Viewing cluster roles and bindings
        5.2.5. Viewing local roles and bindings
        5.2.6. Adding roles to users
        5.2.7. Creating a local role
        5.2.8. Creating a cluster role
        5.2.9. Local role binding commands
        5.2.10. Cluster role binding commands
        5.2.11. Creating a cluster admin
    5.3. The kubeadmin user
        5.3.1. Removing the kubeadmin user
    5.4. Image configuration resources
        5.4.1. Image controller configuration parameters
        5.4.2. Configuring image settings
            5.4.2.1. Configuring additional trust stores for image registry access
            5.4.2.2. Allowing insecure registries
            5.4.2.3. Configuring image registry repository mirroring
    5.5. Operator installation with OperatorHub
        5.5.1. Installing from OperatorHub using the web console
        5.5.2. Installing from OperatorHub using the CLI
Legal notice

4.4. Optimizing storage

Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources work efficiently.
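Before tuning anything, it helps to see what is currently consuming storage in the cluster. The following is a minimal audit sketch, not an official OpenShift procedure: it assumes an authenticated `oc` session with cluster-scoped read access, and the `audit_storage` helper name is purely illustrative.

```shell
#!/bin/sh
# Hedged sketch: survey existing storage consumers before optimizing.
# Assumes `oc` is installed and logged in with cluster-scoped read access;
# `audit_storage` is an illustrative helper name, not an OpenShift command.
audit_storage() {
    if ! command -v oc >/dev/null 2>&1; then
        # Graceful fallback when the CLI is unavailable on this host
        echo "oc CLI not found; run this from a host with cluster access"
        return 0
    fi
    # Persistent volume claims across all namespaces, with requested capacity
    oc get pvc --all-namespaces
    # Storage classes, to see which provisioner backs each claim
    oc get storageclass
}

audit_storage
```

Reviewing this output against the storage class definitions in section 4.2 shows which claims could move to a cheaper or better-suited provisioner.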