6.2. Creating infrastructure machine sets for production environments
In a production deployment, deploy at least three machine sets to hold infrastructure components. Both the logging aggregation solution and the service mesh deploy Elasticsearch, and Elasticsearch requires three instances that are installed on different nodes. For high availability, deploy these nodes to different availability zones. Because each availability zone requires a different machine set, create at least three machine sets.
6.2.1. Creating machine sets for different clouds
Use the sample machine set for your cloud.
6.2.1.1. Sample YAML for a machine set custom resource on AWS
This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructureID> 1
  name: <infrastructureID>-<role>-<zone> 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructureID> 3
      machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<zone> 4
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructureID> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<zone> 8
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 10
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
            - ebs:
                iops: 0
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructureID>-worker-profile 11
          instanceType: m4.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructureID>-worker-sg 12
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructureID>-private-us-east-1a 13
          tags:
            - name: kubernetes.io/cluster/<infrastructureID> 14
              value: owned
          userDataSecret:
            name: worker-user-data
1 3 5 11 12 13 14
- Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 4 8
- Specify the infrastructure ID, node label, and zone.
6 7 9
- Specify the node label to add.
10
- Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes.
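Before applying the sample, the placeholder values must be substituted. The following is a minimal, hypothetical helper for doing that with `sed`; the temporary file paths are assumptions, and the example infrastructure ID is taken from the command outputs later in this chapter. On a live cluster you would set `INFRA_ID` from the `oc get ... infrastructure cluster` command shown above.

```shell
#!/bin/sh
# Hypothetical helper: substitute <infrastructureID>, <role>, and <zone>
# in a machine set template. On a live cluster, obtain the infrastructure
# ID with:
#   INFRA_ID=$(oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster)
INFRA_ID="${INFRA_ID:-agl030519-vplxk}"   # example value from this chapter's outputs
ROLE="${ROLE:-infra}"
ZONE="${ZONE:-us-east-1a}"

# A one-line stand-in for the full sample YAML; the same sed invocation
# works on the real template file.
printf 'name: <infrastructureID>-<role>-<zone>\n' > /tmp/machineset-template.yaml

sed -e "s/<infrastructureID>/${INFRA_ID}/g" \
    -e "s/<role>/${ROLE}/g" \
    -e "s/<zone>/${ZONE}/g" \
    /tmp/machineset-template.yaml > /tmp/machineset.yaml

cat /tmp/machineset.yaml
```

The resulting file can then be passed to `oc create -f` as described in the procedure later in this section.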
6.2.1.2. Sample YAML for a machine set custom resource on Azure
This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in the centralus region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructureID> 1
    machine.openshift.io/cluster-api-machine-role: <role> 2
    machine.openshift.io/cluster-api-machine-type: <role> 3
  name: <infrastructureID>-<role>-<region> 4
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructureID> 5
      machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<region> 6
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructureID> 7
        machine.openshift.io/cluster-api-machine-role: <role> 8
        machine.openshift.io/cluster-api-machine-type: <role> 9
        machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<region> 10
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/<role>: "" 11
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructureID>-rg/providers/Microsoft.Compute/images/<infrastructureID>
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: centralus
          managedIdentity: <infrastructureID>-identity 12
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructureID>-rg 13
          sshPrivateKey: ""
          sshPublicKey: ""
          subnet: <infrastructureID>-<role>-subnet 14 15
          userDataSecret:
            name: <role>-user-data 16
          vmSize: Standard_D2s_v3
          vnet: <infrastructureID>-vnet 17
          zone: "1" 18
6.2.1.3. Sample YAML for a machine set custom resource on GCP
This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructureID> 1
  name: <infrastructureID>-w-a 2
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructureID> 3
      machine.openshift.io/cluster-api-machineset: <infrastructureID>-w-a 4
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructureID> 5
        machine.openshift.io/cluster-api-machine-role: <role> 6
        machine.openshift.io/cluster-api-machine-type: <role> 7
        machine.openshift.io/cluster-api-machineset: <infrastructureID>-w-a 8
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" 9
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
            - autoDelete: true
              boot: true
              image: <infrastructureID>-rhcos-image 10
              labels: null
              sizeGb: 128
              type: pd-ssd
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
            - network: <infrastructureID>-network 11
              subnetwork: <infrastructureID>-<role>-subnet 12
          projectID: <project_name> 13
          region: us-central1
          serviceAccounts:
            - email: <infrastructureID>-w@<project_name>.iam.gserviceaccount.com 14 15
              scopes:
                - https://www.googleapis.com/auth/cloud-platform
          tags:
            - <infrastructureID>-<role> 16
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
6.2.2. Creating a machine set
In addition to the machine sets that the installation program creates, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the machine set custom resource (CR) sample and name it <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values. If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster:
$ oc get machinesets -n openshift-machine-api

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
Check values of a specific machine set:
$ oc get machineset <machineset_name> -n \
     openshift-machine-api -o yaml

....

  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1
        machine.openshift.io/cluster-api-machine-role: worker 2
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
Create the new MachineSet CR:

$ oc create -f <file_name>.yaml
View the list of machine sets:
$ oc get machineset -n openshift-machine-api

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.

After the new machine set is available, check the status of the machine and the node that it references:
$ oc describe machine <name> -n openshift-machine-api
For example:
$ oc describe machine agl030519-vplxk-infra-us-east-1a -n openshift-machine-api

status:
  addresses:
  - address: 10.0.133.18
    type: InternalIP
  - address: ""
    type: ExternalDNS
  - address: ip-10-0-133-18.ec2.internal
    type: InternalDNS
  lastUpdated: "2019-05-03T10:38:17Z"
  nodeRef:
    kind: Node
    name: ip-10-0-133-18.ec2.internal
    uid: 71fb8d75-6d8f-11e9-9ff3-0e3f103c7cd8
  providerStatus:
    apiVersion: awsproviderconfig.openshift.io/v1beta1
    conditions:
    - lastProbeTime: "2019-05-03T10:34:31Z"
      lastTransitionTime: "2019-05-03T10:34:31Z"
      message: machine successfully created
      reason: MachineCreationSucceeded
      status: "True"
      type: MachineCreation
    instanceId: i-09ca0701454124294
    instanceState: running
    kind: AWSMachineProviderStatus

View the new node and confirm that the new node has the label that you specified:
$ oc get node <node_name> --show-labels
Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.
Any change to a machine set is not applied to existing machines that the machine set owns. For example, labels that you edit or add to an existing machine set are not propagated to the existing machines and nodes that are associated with that machine set.
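Because of this, one way to roll a label change out to machines is to scale the machine set down and back up so that replacement machines are created from the updated template. This is an illustrative workflow, not a mandated procedure, and the machine set name is a hypothetical example; the sketch only prints the commands, since running them requires a live cluster:

```shell
#!/bin/sh
# Hypothetical example name; substitute your own machine set.
MS="agl030519-vplxk-infra-us-east-1a"
NS="openshift-machine-api"

scale_cmd() {
  # Build (and here, just print) the oc scale command for a replica count.
  echo "oc scale machineset $MS -n $NS --replicas=$1"
}

scale_cmd 0   # deleting the old machines...
scale_cmd 1   # ...then recreating them; the new machines pick up the
              # machine set's current labels
```

Note that scaling to zero deletes the machines outright, so plan for the workloads on those nodes to be rescheduled.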
Next steps
If you need machine sets in other availability zones, repeat this process to create more machine sets.
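That repetition can be scripted. The sketch below is an assumption about your workflow, not part of the product: it stamps out one manifest per availability zone (using a one-line stand-in for the full sample YAML) and only prints the `oc create` commands, because they require a live cluster. The example infrastructure ID comes from the command outputs in this section.

```shell
#!/bin/sh
# Hypothetical loop: generate one machine set manifest per availability
# zone, then (on a real cluster) create each with `oc create -f`.
INFRA_ID="${INFRA_ID:-agl030519-vplxk}"   # example value from this section's outputs
for ZONE in us-east-1a us-east-1b us-east-1c; do
  out="/tmp/infra-machineset-${ZONE}.yaml"
  # Stand-in for the full sample YAML with its placeholders substituted.
  printf 'name: %s-infra-%s\n' "$INFRA_ID" "$ZONE" > "$out"
  echo "oc create -f $out"   # printed, not executed: needs a cluster
done
```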