18.2. Setting up S3 Compatible Object Store for Container-Native Storage

Execute the following steps from the /usr/share/heketi/templates/ directory to set up an S3-compatible object store for Container-Native Storage:
  1. (Optional) If you want to create a secret for heketi, execute the following command:
    # oc create secret generic heketi-${NAMESPACE}-admin-secret \
      --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs
    For example:
    # oc create secret generic heketi-storage-project-admin-secret \
      --from-literal=key= --type=kubernetes.io/glusterfs
    1. Execute the following command to label the secret:
      # oc label --overwrite secret heketi-${NAMESPACE}-admin-secret \
        glusterfs=s3-heketi-${NAMESPACE}-admin-secret \
        gluster-s3=heketi-${NAMESPACE}-admin-secret
      For example:
      # oc label --overwrite secret heketi-storage-project-admin-secret \
        glusterfs=s3-heketi-storage-project-admin-secret \
        gluster-s3=heketi-storage-project-admin-secret
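      To confirm that both labels were applied, you can inspect the secret (shown here with the example secret name):
      # oc get secret heketi-storage-project-admin-secret --show-labels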
  2. Create a GlusterFS StorageClass from the template. Substitute the HEKETI_URL and NAMESPACE from the current setup and set a STORAGE_CLASS name.
    # sed -e 's/${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/g' \
          -e 's/${STORAGE_CLASS}/gluster-s3-store/g' \
          -e 's/${NAMESPACE}/storage-project/g' \
          /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
    storageclass "gluster-s3-store" created
  3. Create the Persistent Volume Claims using the storage class.
    # sed -e 's/${VOLUME_CAPACITY}/2Gi/g' \
          -e 's/${STORAGE_CLASS}/gluster-s3-store/g' \
          /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
    persistentvolumeclaim "gluster-s3-claim" created
    persistentvolumeclaim "gluster-s3-meta-claim" created
    Use the STORAGE_CLASS created in the previous step and adjust the VOLUME_CAPACITY to the environment's requirements. Wait until the PVCs are bound, then verify using the following command:
    # oc get pvc 
    NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    gluster-s3-claim        Bound     pvc-0b7f75ef-9920-11e7-9309-00151e000016   2Gi        RWX           2m
    gluster-s3-meta-claim   Bound     pvc-0b87a698-9920-11e7-9309-00151e000016   1Gi        RWX           2m
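    Each claim rendered by gluster-s3-pvcs.yaml is an ordinary PVC tied to the storage class, roughly as in this sketch. The 2Gi capacity matches the example above; depending on the release, the template may reference the class through the storageClassName field or the older annotation shown here:
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: gluster-s3-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: gluster-s3-store
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi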
  4. Start the gluster-s3 object storage service using the template:

    Note

    Set the S3_ACCOUNT name, S3_USER name, and S3_PASSWORD. The PVC and META_PVC values are obtained from the previous step.
    # oc new-app  /usr/share/heketi/templates/gluster-s3-template.yaml \
    --param=S3_ACCOUNT=testvolume  --param=S3_USER=adminuser \
    --param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim \
    --param=META_PVC=gluster-s3-meta-claim
    --> Deploying template "storage-project/gluster-s3" for "/usr/share/heketi/templates/gluster-s3-template.yaml" to project storage-project
    
         gluster-s3
         ---------
         Gluster s3 service template
    
    
         * With parameters:
            * S3 Account Name=testvolume
            * S3 User=adminuser
            * S3 User Password=itsmine
            * Primary GlusterFS-backed PVC=gluster-s3-claim
            * Metadata GlusterFS-backed PVC=gluster-s3-meta-claim
    
    --> Creating resources ...
        service "gluster-s3-service" created
        route "gluster-s3-route" created
        deploymentconfig "gluster-s3-dc" created
    --> Success
        Run 'oc status' to view your app.
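    To confirm that the deployment has finished rolling out and the gluster-s3 pod is running, the standard checks can be used (the deploymentconfig name comes from the output above):
    # oc rollout status dc/gluster-s3-dc
    # oc get pods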
  5. Execute the following command to verify that the gluster-s3 route has been created:
    # oc get route
    NAME               HOST/PORT                                                              PATH      SERVICES             PORT      TERMINATION   WILDCARD
    gluster-s3-route   gluster-s3-route-storage-project.cloudapps.mystorage.com ... 1 more              gluster-s3-service   <all>                   None
    heketi             heketi-storage-project.cloudapps.mystorage.com ... 1 more                        heketi               <all>
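    As a quick reachability check (not a full S3 API call), an HTTP request to the route host from the output above should return a response from the gluster-s3 service:
    # curl -I http://gluster-s3-route-storage-project.cloudapps.mystorage.com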