Deploy OpenShift sandboxed containers on AWS Bare Metal nodes (Tech Preview)

This article describes how to deploy OpenShift sandboxed containers on AWS Bare Metal nodes. Please note that this is a Technology Preview feature in the OpenShift sandboxed containers 1.2 release.

Prerequisites

1. Obtain the AWS IPI installer from here
2. Download the aws-cli tool from here
3. Configure the aws-cli tool by running aws configure and providing your account credentials (AWS Access Key ID, AWS Secret Access Key)
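
A typical aws configure session looks roughly like the following; the key values and region shown here are placeholders only:

# aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: eu-north-1
Default output format [None]: json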

Preparation

  • Create the cluster manifests with the OpenShift installer: ./openshift-install create manifests --dir aws-manifests. Follow the interactive installer prompts and respond accordingly. You will also need to provide your pull secret, which can be obtained from here.

  • (optional): Back up the manifests directory, as the manifests are deleted by the installation program once the installation completes.

  • The following manifests will be created:
aws-manifests/
├── manifests
│   ├── 04-openshift-machine-config-operator.yaml
│   ├── cluster-config.yaml
│   ├── cluster-dns-02-config.yml
│   ├── cluster-infrastructure-02-config.yml
│   ├── cluster-ingress-02-config.yml
│   ├── cluster-network-01-crd.yml
│   ├── cluster-network-02-config.yml
│   ├── cluster-proxy-01-config.yaml
│   ├── cluster-scheduler-02-config.yml
│   ├── cvo-overrides.yaml
│   ├── etcd-ca-bundle-configmap.yaml
│   ├── etcd-client-secret.yaml
│   ├── etcd-metric-client-secret.yaml
│   ├── etcd-metric-serving-ca-configmap.yaml
│   ├── etcd-metric-signer-secret.yaml
│   ├── etcd-namespace.yaml
│   ├── etcd-service.yaml
│   ├── etcd-serving-ca-configmap.yaml
│   ├── etcd-signer-secret.yaml
│   ├── kube-cloud-config.yaml
│   ├── kube-system-configmap-root-ca.yaml
│   ├── machine-config-server-tls-secret.yaml
│   ├── openshift-config-secret-pull-secret.yaml
│   └── openshift-kubevirt-infra-namespace.yaml
└── openshift
    ├── 99_cloud-creds-secret.yaml
    ├── 99_kubeadmin-password-secret.yaml
    ├── 99_openshift-cluster-api_master-machines-0.yaml
    ├── 99_openshift-cluster-api_master-machines-1.yaml
    ├── 99_openshift-cluster-api_master-machines-2.yaml
    ├── 99_openshift-cluster-api_master-user-data-secret.yaml
    ├── 99_openshift-cluster-api_worker-machineset-0.yaml
    ├── 99_openshift-cluster-api_worker-machineset-1.yaml
    ├── 99_openshift-cluster-api_worker-machineset-2.yaml
    ├── 99_openshift-cluster-api_worker-machineset-3.yaml
    ├── 99_openshift-cluster-api_worker-machineset-4.yaml
    ├── 99_openshift-cluster-api_worker-machineset-5.yaml
    ├── 99_openshift-cluster-api_worker-user-data-secret.yaml
    ├── 99_openshift-machineconfig_99-master-ssh.yaml
    ├── 99_openshift-machineconfig_99-worker-ssh.yaml
    ├── 99_role-cloud-creds-secret-reader.yaml
    └── openshift-install-manifests.yaml
  • Edit the manifests aws-manifests/openshift/99_openshift-cluster-api_worker-machineset-[0,1].yaml and change spec.template.spec.providerSpec.value.instanceType from m4.large to a bare metal instance type, for example c5n.metal (see the excerpt after this list). The full list of available AWS instance types can be consulted here.

  • Create the cluster on AWS by running:

# ./openshift-install create cluster --dir aws-manifests/
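
For reference, the instance type edit described above amounts to changing a single field in each worker machineset manifest. The excerpt below is only a sketch; the real files contain many more fields, which should be left as generated:

# aws-manifests/openshift/99_openshift-cluster-api_worker-machineset-0.yaml (excerpt)
spec:
  template:
    spec:
      providerSpec:
        value:
          ...
          instanceType: c5n.metal   # changed from m4.large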

Deploying with mixed machinesets

  • You could also leave the default machine type as m4.large and then create a new machineset with bare metal instances. More information on how to create machinesets can be found here.

  • In a mixed setup, you should give the machineset's nodes a unique label, for example kata-node="true". This will allow you to select the nodes belonging to this machineset to run Kata Containers.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: my-kata--worker-metal-<region>
  namespace: openshift-machine-api
spec:
  # Replicas are 0 in case you don't immediately need to use the instances 
  # as bare metal instances cost more than normal instances
  replicas: 0 
  selector:
    matchLabels:
      ...
      machine.openshift.io/cluster-api-machineset: my-kata--worker-metal-<region>
  template:
    metadata:
      labels:
        ...
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: my-kata--worker-metal-<region>
    spec:
      metadata: 
        labels: 
          kata-node: "true" 
      providerSpec:
        value:
          ...
          instanceType: m5.metal
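
Assuming the machineset manifest above is saved to a file such as kata-machineset.yaml (the filename is arbitrary), it can be created in the cluster with:

# oc apply -f kata-machineset.yaml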

Deploying OpenShift sandboxed containers

To deploy sandboxed containers, please consult the documentation.
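
As a rough orientation only, the deployment is driven by a KataConfig custom resource consumed by the OpenShift sandboxed containers Operator. The minimal sketch below assumes the Operator has already been installed from OperatorHub; the authoritative field reference is the documentation above:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig

Creating this resource triggers the installation of the Kata runtime on the worker nodes; for a mixed cluster, the documentation also describes how to restrict this to specific labeled nodes.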

Mixed Machinesets (BM Nodes for Kata Containers)

If you don't want to deploy an all bare metal worker cluster, you have the option to create dedicated bare metal machinesets for running sandboxed containers workloads. To do so, follow these steps:

  • If you had initially configured your bare metal machineset with 0 replicas, it's now time to scale it up. For example, oc scale -n openshift-machine-api machineset my-kata--worker-metal-<region> --replicas=1

  • Wait until the status is ready:

# watch -n 30 "oc get machines -n openshift-machine-api; echo;oc get nodes"

my-kata--worker-metal-<region>   Provisioned   m5.metal    eu-north-1   eu-north-1c   9m14s
  • Check that the node got the kata-node=true label we assigned in the machineset.
  • You can now proceed with the rest of the install guidelines for selecting nodes and running sandboxed containers, as described in the documentation (see the example after this list).
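
As a quick illustration only (the documentation remains the reference), once the Operator has set up the kata RuntimeClass, a workload can be pinned to the labeled bare metal nodes roughly as follows; the pod name and image are arbitrary examples:

apiVersion: v1
kind: Pod
metadata:
  name: example-sandboxed-pod
spec:
  runtimeClassName: kata            # run this pod as a sandboxed (Kata) container
  nodeSelector:
    kata-node: "true"               # schedule onto the labeled bare metal nodes
  containers:
  - name: example
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]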

Destroying the AWS cluster

  • To destroy the AWS cluster and clean up all AWS resources created for this OCP cluster (instances, network interfaces, subnets, elastic IPs, load balancers, Route53 DNS entries, etc.), run:
# ./openshift-install destroy cluster --dir aws-manifests
  • The command can be run in debug mode by passing the --log-level debug argument (see the example after this list).
  • Note: it is important to keep the manifests directory and not delete it, as its contents are used by the installer to remove all associated AWS resources.
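
For example, to destroy the cluster with debug logging enabled:

# ./openshift-install destroy cluster --dir aws-manifests --log-level debug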
