AWS EFS CSI Driver Operator installation guide for OCP


Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.

This is a guide to the installation steps for the AWS EFS CSI Driver Operator in Red Hat OpenShift Container Platform 4.10+. The installation is also part of the migration from aws-efs-operator to aws-efs-csi-driver-operator (in case the unsupported aws-efs-operator was previously installed on the cluster).

Installing the AWS EFS CSI Driver Operator on the OpenShift cluster

To install the AWS EFS CSI Driver Operator from the web console:

  1. Log in to the cluster web console

  2. Install the AWS EFS CSI Driver Operator

    a. Click Operators -> OperatorHub

    b. Locate the AWS EFS CSI Driver Operator by typing "AWS EFS CSI" in the filter box

    c. Select the AWS EFS CSI Driver Operator from the result

    d. On the AWS EFS CSI Driver Operator page, click Install

    e. On the Install Operator page, ensure that:

    • All namespaces on the cluster (default) is selected
    • Installed Namespace is set to openshift-cluster-csi-drivers

    f. Click Install
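
    Alternatively, the operator can be installed from the CLI by creating a Subscription in the openshift-cluster-csi-drivers namespace. The following is a minimal sketch only: the channel and package name are assumptions that should be verified against the entry shown in OperatorHub, and it assumes the namespace already contains a suitable OperatorGroup.

    # Sketch only - verify the channel and package name in OperatorHub before applying
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
        name: aws-efs-csi-driver-operator
        namespace: openshift-cluster-csi-drivers
    spec:
        channel: stable
        installPlanApproval: Automatic
        name: aws-efs-csi-driver-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace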

  3. (For STS clusters only) (CLI only) Configure the cloud credentials and IAM role for the Security Token Service (STS)

    a. Log in to the cluster via the CLI as a user with cluster-admin privileges

    b. Find the pod for the cloud-credential-operator (CCO)

    $ oc get po -n openshift-cloud-credential-operator -l app=cloud-credential-operator
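
    Optionally, the pod name can be captured into a shell variable for use in the next step (CCO_POD is an arbitrary variable name used here for convenience):

    # CCO_POD is a convenience variable, not required by the procedure
    $ CCO_POD=$(oc get po -n openshift-cloud-credential-operator \
        -l app=cloud-credential-operator -o jsonpath='{.items[0].metadata.name}')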
    

    c. Copy the ccoctl binary from the pod to local

    $ oc cp -c cloud-credential-operator openshift-cloud-credential-operator/<cco_pod_name>:/usr/bin/ccoctl ./ccoctl
    

    d. Set execute permission for the binary

    $ chmod 775 ./ccoctl
    

    e. Prepare the CredentialsRequest YAML for EFS access

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
        name: openshift-aws-efs-csi-driver
        namespace: openshift-cloud-credential-operator
    spec:
        providerSpec:
            apiVersion: cloudcredential.openshift.io/v1
            kind: AWSProviderSpec
            statementEntries:
            - action:
              - elasticfilesystem:*
              effect: Allow
              resource: '*'
        secretRef:
            name: aws-efs-cloud-credentials
            namespace: openshift-cluster-csi-drivers
        serviceAccountNames:
        - aws-efs-csi-driver-operator
        - aws-efs-csi-driver-controller-sa
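
    For example, the CredentialsRequest can be saved into an empty directory dedicated to it, which keeps the directory scan in the next step clean (the directory name ./credrequests is only an illustration):

    $ mkdir -p ./credrequests
    $ vi ./credrequests/aws-efs-credentialsrequest.yaml   # paste the CredentialsRequest above; keep this the only file in the directory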
    

    f. Use ccoctl to create the IAM role in AWS

    Note: This step needs to communicate with AWS, so you will also need to set the AWS environment variables, such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

    $ ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_the_credentials_requests>/ --identity-provider-arn=<OIDC_provider_arn>
    
    • name: is the name used to tag any cloud resources that are created for tracking.

    • aws_region: is the AWS region where cloud resources are created.

    • path_to_directory_with_the_credentials_requests: is the directory containing the EFS CredentialsRequest file from the previous step. Make sure no other files are placed in the directory, since ccoctl scans the whole directory and raises an error for any file that is not in YAML or JSON format.

    • OIDC_provider_arn: the ARN of the OIDC provider associated with your cluster (see the lookup sketch below)
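
    As a sketch, the OIDC provider ARN can usually be located by matching the cluster's service account issuer against the IAM OIDC providers in the account, and the ccoctl invocation then looks like the following (all values are illustrative and must be replaced with your own):

    $ oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}'
    $ aws iam list-open-id-connect-providers
    # Illustrative values only - replace the name, region, directory and ARN with your own
    $ ccoctl aws create-iam-roles \
        --name=mycluster-efs \
        --region=us-east-1 \
        --credentials-requests-dir=./credrequests/ \
        --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<issuer_host_and_path>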

    g. Create the Secret generated by the previous step

    $ oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
    
    • path_to_ccoctl_output_dir: the working directory from the previous step where you ran ccoctl
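
    h. (Optional) Verify that the credentials Secret exists; the name aws-efs-cloud-credentials matches the secretRef in the CredentialsRequest above

    $ oc get secret aws-efs-cloud-credentials -n openshift-cluster-csi-drivers
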
  4. Install the AWS EFS CSI Driver

    a. Click Administration -> CustomResourceDefinitions -> ClusterCSIDriver

    b. On the Instances tab, click Create ClusterCSIDriver

    c. Use the following YAML file:

    apiVersion: operator.openshift.io/v1
    kind: ClusterCSIDriver
    metadata:
        name: efs.csi.aws.com
    spec:
        managementState: Managed
    

    d. Click Create

    e. Wait for the following conditions to report a "True" status:

    • AWSEFSDriverCredentialsRequestControllerAvailable

    • AWSEFSDriverNodeServiceControllerAvailable

    • AWSEFSDriverControllerServiceControllerAvailable
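
    f. (Optional, CLI) The same conditions are reported in the ClusterCSIDriver status and can be checked from the command line, for example (jq is only used for readability):

    $ oc get clustercsidriver efs.csi.aws.com -o jsonpath='{.status.conditions}' | jq .
    $ oc get pods -n openshift-cluster-csi-drivers | grep efs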

Configure access to EFS volumes in AWS

Security Group permissions must be applied in AWS before EFS volumes can be used in OpenShift Container Platform.

This procedure assumes that you have created an EFS filesystem in your cluster's VPC that is ready to be used by the cluster; a creation sketch is included below for reference. Take note of the EFS filesystem ID, as it will be used in the following steps.
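
If a filesystem does not exist yet, one can be created with the AWS CLI along the following lines. This is a sketch only: the tag value is arbitrary, the subnet and security group IDs are placeholders, and one mount target is needed for each Availability Zone used by the cluster nodes.

    # Sketch only - replace the placeholders; repeat create-mount-target for each Availability Zone
    aws efs create-file-system --encrypted \
      --tags Key=Name,Value=mycluster-efs --query 'FileSystemId'
    aws efs create-mount-target --file-system-id <efs_filesystem_id> \
      --subnet-id <subnet_id> --security-groups <security_group_id>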

This can be performed either via the AWS CLI or web user interface. Instructions for both have been included below.

Via the AWS CLI

  1. Establish an oc CLI login session to your cluster.

  2. Establish an aws CLI login session to your AWS account.

  3. Run this set of commands to identify the VPC CIDR and the Security Group associated with your EFS filesystem:

    EFSID=<please replace with the EFS filesystem ID>
    NODE=$(oc get nodes --selector=node-role.kubernetes.io/worker \
      -o jsonpath='{.items[0].metadata.name}')
    VPC=$(aws ec2 describe-instances \
      --filters "Name=private-dns-name,Values=$NODE" \
      --query 'Reservations[*].Instances[*].{VpcId:VpcId}' \
      | jq -r '.[0][0].VpcId')
    CIDR=$(aws ec2 describe-vpcs \
      --filters "Name=vpc-id,Values=$VPC" \
      --query 'Vpcs[*].CidrBlock' \
      | jq -r '.[0]')
    MOUNTTARGET=$(aws efs describe-mount-targets --file-system-id $EFSID \
      | jq -r '.MountTargets[0].MountTargetId')
    SG=$(aws efs describe-mount-target-security-groups --mount-target-id $MOUNTTARGET \
      | jq -r '.SecurityGroups[0]')
    
  4. Verify that the CIDR and Security Group represent the VPC CIDR and EFS security group respectively.

    echo "CIDR - $CIDR,  SG - $SG"
    
  5. Assuming the CIDR and SG values are correct, update the security group to allow port 2049 (NFS) ingress:

    aws ec2 authorize-security-group-ingress \
     --group-id $SG \
     --protocol tcp \
     --port 2049 \
     --cidr $CIDR | jq .
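
  6. (Optional) Confirm the rule was added by inspecting the security group's inbound rules:

    aws ec2 describe-security-groups --group-ids $SG \
      --query 'SecurityGroups[0].IpPermissions' | jq .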
    

Via the AWS UI

1. Go to https://console.aws.amazon.com/efs#/file-systems and ensure you are in the correct AWS region for your cluster.

2. Click your volume, and on the Network tab ensure that all mount targets are available.

3. On the Network tab, copy the Security Group ID (you will need this in the next step).

4. Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume.

5. On the Inbound rules tab, click Edit inbound rules, and then add a new rule with the following settings to allow OpenShift Container Platform nodes to access EFS volumes:

  • Type: NFS
  • Protocol: TCP
  • Port range: 2049
  • Source: CIDR range for your cluster's VPC (e.g. "10.0.0.0/16")

6. Save the rule.
