Appendix A. Optional Deployment Method (with cns-deploy)

The following sections provide an optional method to deploy Red Hat Openshift Container Storage using cns-deploy.

Note

cns-deploy is deprecated and will not be supported for new deployments in future Openshift Container Storage versions.

A.1. Setting up Converged mode

The converged mode environment addresses the use-case where applications require both shared storage and the flexibility of a converged infrastructure with compute and storage instances being scheduled and run from the same set of hardware.

A.1.1. Configuring Port Access

  • On each of the OpenShift nodes that will host the Red Hat Gluster Storage container, add the following rules to /etc/sysconfig/iptables in order to open the required ports:

    -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
    -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
    -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49664 -j ACCEPT
    -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24010 -j ACCEPT
    -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 3260 -j ACCEPT
    -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
    Note
    • Ports 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
    • The port range starting at 49152 defines the range of ports that GlusterFS can use for communication with its volume bricks. In the above example the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node; a sketch for widening the range appears at the end of this section.

    For more information about Red Hat Gluster Storage Server ports, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-getting_started.

    • Execute the following command to reload the iptables:

      # systemctl reload iptables
    • Execute the following command on each node to verify that the iptables rules are updated:

      # iptables -L
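
As noted above, the upper end of the multiport range is determined by the maximum number of bricks per node. For example, if a node might host up to 1,000 bricks, the rule could be widened as follows (a sketch only; the brick count and resulting end port are illustrative values, not requirements):

    -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:50151 -j ACCEPT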

A.1.2. Enabling Kernel Modules

Before running the cns-deploy tool, you must ensure that the dm_thin_pool, dm_multipath, and target_core_user modules are loaded on the OpenShift Container Platform nodes. Execute the following commands only on the Gluster nodes to verify whether the modules are loaded:

# lsmod | grep dm_thin_pool
# lsmod | grep dm_multipath
# lsmod | grep target_core_user

If the modules are not loaded, execute the following commands to load them:

# modprobe dm_thin_pool
# modprobe dm_multipath
# modprobe target_core_user
Note

To ensure these operations are persisted across reboots, create the following files and add the content shown to each:

# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
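
The same result can be achieved with a short loop on each node (a minimal sketch; it assumes a root shell and simply loads the modules and reproduces the three files shown above):

# for mod in dm_thin_pool dm_multipath target_core_user; do modprobe "$mod"; echo "$mod" > /etc/modules-load.d/"$mod".conf; done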

A.1.3. Starting and Enabling Services

Execute the following commands to enable and run rpcbind on all the nodes hosting the gluster pod:

# systemctl add-wants multi-user.target rpcbind.service
# systemctl enable rpcbind.service
# systemctl start rpcbind.service

Execute the following command to check the status of rpcbind:

# systemctl status rpcbind

rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-08-30 21:24:21 IST; 1 day 13h ago
 Main PID: 9945 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─9945 /sbin/rpcbind -w

Next Step: Proceed to Section A.3, “Setting up the Environment” to prepare the environment for Red Hat Gluster Storage Container Converged in OpenShift.

Note

To remove an installation of Red Hat Openshift Container Storage done using cns-deploy, run the cns-deploy --abort command. Use the -g option if Gluster is containerized.

When the pods are deleted, not all Gluster states are removed from the node. Therefore, you must also run the rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs command on every node that was running a Gluster pod, and run wipefs -a <device> for every storage device that was consumed by Heketi. This erases all remaining Gluster state from each node. You must be an administrator to run the device wiping command.
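
For example, on a node that ran a Gluster pod and whose devices /dev/sdb and /dev/sdc were consumed by Heketi (hypothetical device names used only for illustration), the cleanup could look like this:

# rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs
# wipefs -a /dev/sdb
# wipefs -a /dev/sdc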

A.2. Setting up Independent Mode

In an independent mode set-up, a dedicated Red Hat Gluster Storage cluster is available external to the OpenShift Container Platform. The storage is provisioned from the Red Hat Gluster Storage cluster.

A.2.1. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)

Layered install involves installing Red Hat Gluster Storage over Red Hat Enterprise Linux.

Important

It is recommended to create a separate /var partition that is large enough (50GB - 100GB) for log files, geo-replication related miscellaneous files, and other files.

  1. Perform a base install of Red Hat Enterprise Linux 7 Server

    Independent mode is supported only on Red Hat Enterprise Linux 7.

  2. Register the System with Subscription Manager

    Run the following command and enter your Red Hat Network username and password to register the system with the Red Hat Network:

    # subscription-manager register
  3. Identify Available Entitlement Pools

    Run the following commands to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:

    # subscription-manager list --available
  4. Attach Entitlement Pools to the System

    Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:

    # subscription-manager attach --pool=[POOLID]

    For example:

    # subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
  5. Enable the Required Channels

    For Red Hat Gluster Storage 3.5 on Red Hat Enterprise Linux 7.7

    Run the following commands to enable the repositories required to install Red Hat Gluster Storage:

    # subscription-manager repos --enable=rhel-7-server-rpms
    # subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
  6. Verify if the Channels are Enabled

    Run the following command to verify if the channels are enabled:

    # yum repolist
  7. Update all packages

    Ensure that all packages are up to date by running the following command.

    # yum update
  8. Kernel Version Requirement

    Independent mode requires the kernel-3.10.0-862.14.4.el7.x86_64 version or higher to be used on the system. Verify the installed and running kernel versions by running the following command:

    # rpm -q kernel
    kernel-3.10.0-862.14.4.el7.x86_64
    # uname -r
    3.10.0-862.14.4.el7.x86_64
    Important

    If any kernel packages are updated, reboot the system with the following command.

    # shutdown -r now
  9. Install Red Hat Gluster Storage

    Run the following command to install Red Hat Gluster Storage:

    # yum install redhat-storage-server
    1. To enable gluster-block, execute the following command:

       # yum install gluster-block
  10. Reboot

    Reboot the system.
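
After the reboot, a quick verification sketch (it only re-checks the packages and kernel version covered in the steps above; the reported kernel must be 3.10.0-862.14.4.el7.x86_64 or later):

    # rpm -q redhat-storage-server gluster-block
    # uname -r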

A.2.2. Configuring Port Access

This section provides information about the ports that must be open for the independent mode.

Red Hat Gluster Storage Server uses the listed ports. You must ensure that the firewall settings do not prevent access to these ports.

Execute the following commands to open the required ports for both runtime and permanent configurations on all Red Hat Gluster Storage nodes:

# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=49152-49664/tcp
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=49152-49664/tcp --permanent
Note
  • Ports 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
  • The port range starting at 49152 defines the range of ports that GlusterFS can use for communication with its volume bricks. In the above example, the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
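
To confirm that the ports were added to both the runtime and permanent configurations (a quick check; zone_name is the same placeholder used in the commands above):

# firewall-cmd --zone=zone_name --list-ports
# firewall-cmd --zone=zone_name --list-ports --permanent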

A.2.3. Enabling Kernel Modules

Execute the following commands to enable kernel modules:

  1. You must ensure that the dm_thin_pool and target_core_user modules are loaded on the Red Hat Gluster Storage nodes.

    # modprobe target_core_user
    # modprobe dm_thin_pool

    Execute the following command to verify if the modules are loaded:

    # lsmod | grep dm_thin_pool
    # lsmod | grep target_core_user
    Note

    To ensure these operations are persisted across reboots, create the following files and add the content shown to each:

    # cat /etc/modules-load.d/dm_thin_pool.conf
    dm_thin_pool
    # cat /etc/modules-load.d/target_core_user.conf
    target_core_user
  2. You must ensure that the dm_multipath module is loaded on all OpenShift Container Platform nodes.

    # modprobe dm_multipath

    Execute the following command to verify if the modules are loaded:

    # lsmod | grep dm_multipath
    Note

    To ensure these operations are persisted across reboots, create the following file and add the content shown:

    # cat /etc/modules-load.d/dm_multipath.conf
    dm_multipath

A.2.4. Starting and Enabling Services

Execute the following commands to start and enable sshd, glusterd, and gluster-blockd:

# systemctl start sshd
# systemctl enable sshd
# systemctl start glusterd
# systemctl enable glusterd
# systemctl start gluster-blockd
# systemctl enable gluster-blockd
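
To verify that the services are active and enabled (a quick check using standard systemctl queries):

# systemctl is-active sshd glusterd gluster-blockd
# systemctl is-enabled sshd glusterd gluster-blockd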

Next Step: Proceed to Section A.3, “Setting up the Environment” to prepare the environment for Red Hat Gluster Storage Container Converged in OpenShift.

A.3. Setting up the Environment

This section outlines the details for setting up the environment for Red Hat Openshift Container Platform.

A.3.1. Preparing the Red Hat OpenShift Container Platform Cluster

Execute the following steps to prepare the Red Hat OpenShift Container Platform cluster:

  1. On the master or client, execute the following command to log in as the cluster admin user:

    # oc login

    For example:

    # oc login
    Authentication required for https://dhcp46-24.lab.eng.blr.redhat.com:8443 (openshift)
    Username: test
    Password:
    Login successful.
    
    You have access to the following projects and can switch between them with 'oc project <project_name>':
    
      * default
        kube-system
        logging
        management-infra
        openshift
        openshift-infra
    
    
    Using project "default".
  2. On the master or client, execute the following command to create a project, which will contain all the containerized Red Hat Gluster Storage services:

    # oc new-project <project_name>

    For example:

    # oc new-project storage-project
    
    Now using project "storage-project" on server "https://master.example.com:8443"
  3. After the project is created, execute the following command on the master node to enable the deployment of privileged containers, as the Red Hat Gluster Storage container can run only in privileged mode.

    # oc adm policy add-scc-to-user privileged -z default
  4. Execute the following steps on the master to set up the router:

    Note

    If a router already exists, proceed to Step 5. To verify if the router is already deployed, execute the following command:

    # oc get dc --all-namespaces

    To list all routers in all namespaces, execute the following command:

    # oc get dc --all-namespaces --selector=router=router
    NAME                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
    glusterblock-storage-provisioner-dc   1          1         0         config
    heketi-storage                        4          1         1         config
    1. Execute the following command to enable the deployment of the router:

      # oc adm policy add-scc-to-user privileged -z router
    2. Execute the following command to deploy the router:

      # oc adm router storage-project-router --replicas=1
    3. Edit the subdomain name in the master-config.yaml file located at /etc/origin/master/master-config.yaml.

      For example:

      subdomain: "cloudapps.mystorage.com"

      For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#customizing-the-default-routing-subdomain.

    4. For OpenShift Container Platform 3.7 and 3.9, execute the following command to restart the services:

      # systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers

    For more information regarding router setup, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/configuring_clusters/setting-up-a-router

  5. Execute the following command to verify if the router is running:

    # oc get dc <router_name>

    For example:

    # oc get dc storage-project-router
    NAME                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
    glusterblock-storage-provisioner-dc   1          1         0         config
    heketi-storage                        4          1         1         config
Note

Ensure you do not edit the /etc/dnsmasq.conf file until the router has started.

  6. After the router is running, the client has to be set up to access the services in the OpenShift cluster. Execute the following steps on the client to set up the DNS.

    1. Execute the following command to find the IP address of the router:

      # oc get pods -o wide --all-namespaces | grep router
      storage-project storage-project-router-1-cm874        1/1       Running   119d       10.70.43.132   dhcp43-132.lab.eng.blr.redhat.com
    2. Edit the /etc/dnsmasq.conf file and add the following line to the file:

      address=/.cloudapps.mystorage.com/<Router_IP_Address>

      where Router_IP_Address is the IP address of the node where the router is running.

    3. Restart the dnsmasq service by executing the following command:

      # systemctl restart dnsmasq
    4. Edit /etc/resolv.conf and add the following line:

      nameserver 127.0.0.1

For more information regarding setting up the DNS, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/installing_clusters/install-config-install-prerequisites#prereq-dns.
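
To confirm that the wildcard entry resolves through the local dnsmasq (a quick check; the host name shown is only an example under the cloudapps.mystorage.com subdomain configured above):

# dig +short test.cloudapps.mystorage.com @127.0.0.1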

A.3.2. Deploying Containerized Red Hat Gluster Storage Solutions

The following sections cover the deployment of converged mode pods and independent mode pods using the cns-deploy tool.

Note
  1. You must first provide a topology file for heketi which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.

    {
        "clusters": [
            {
                "nodes": [
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "node1.example.com"
                                ],
                                "storage": [
                                    "192.168.68.3"
                                ]
                            },
                            "zone": 1
                        },
                        "devices": [
                            "/dev/sdb",
                            "/dev/sdc",
                            "/dev/sdd",
                            "/dev/sde",
                            "/dev/sdf",
                            "/dev/sdg",
                            "/dev/sdh",
                            "/dev/sdi"
                        ]
                    },
                    {
                        "node": {
                            "hostnames": {
                                "manage": [
                                    "node2.example.com"
                                ],
                                "storage": [
                                    "192.168.68.2"
                                ]
                            },
                            "zone": 2
                        },
                        "devices": [
                            "/dev/sdb",
                            "/dev/sdc",
                            "/dev/sdd",
                            "/dev/sde",
                            "/dev/sdf",
                            "/dev/sdg",
                            "/dev/sdh",
                            "/dev/sdi"
                        ]
                    },
    .......
    .......

    where:

    • clusters: Array of clusters.

      Each element of the array is a map which describes the cluster as follows:

    • nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container

      Each element of the array is a map which describes the node as follows:

    • node: It is a map of the following elements:

      • zone: The value represents the zone number that the node belongs to; the zone number is used by heketi for choosing the optimum position of bricks by placing replicas of bricks in different zones. Hence, the zone number is similar to a failure domain.
      • hostnames: It is a map which lists the manage and storage addresses

        • manage: It is the hostname/IP Address that is used by Heketi to communicate with the node
        • storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
    • devices: Name of each disk to be added
Note

Copy the topology file from the default location to your location and then edit it:

# cp /usr/share/heketi/topology-sample.json /<Path>/topology.json

Edit the topology file: under the node.hostnames.manage section set the Red Hat Gluster Storage pod hostname, and under the node.hostnames.storage section set the IP address. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
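
Before loading the edited file, you can confirm that it is still valid JSON (a minimal check using the Python interpreter that cns-deploy already requires on the client; the path is the location you copied the file to):

# python -m json.tool /<Path>/topology.json > /dev/null && echo "topology.json is valid JSON"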

Important

Heketi stores its database on a Red Hat Gluster Storage volume. In cases where the volume is down, the Heketi service does not respond due to the unavailability of the volume served by a disabled trusted storage pool. To resolve this issue, restart the trusted storage pool which contains the Heketi volume.

A.3.3. Deploying Converged Mode

Execute the following commands to deploy converged mode:

  1. Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:

    # cns-deploy -v -n <namespace> -g --admin-key <admin-key> --user-key <user-key> topology.json
    Note
    • From Container-Native Storage 3.6, support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To deploy S3 compatible object store in Red Hat Openshift Container Storage, see substep i below.
    • In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. The default is to use no secret.
    • The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes. This default configuration will dynamically create block-hosting volumes of 500GB in size when more space is required. If you want to change this value, use --block-host in cns-deploy. For example:

      # cns-deploy -v -n storage-project -g --admin-key secret --user-key mysecret --block-host 1000 topology.json

    For example:

    # cns-deploy -v -n storage-project -g --admin-key secret --user-key mysecret topology.json
    
    Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
    
    Before getting started, this script has some requirements of the execution
    environment and of the container platform that you should verify.
    
    The client machine that will run this script must have:
     * Administrative access to an existing Kubernetes or OpenShift cluster
     * Access to a python interpreter 'python'
    
    Each of the nodes that will host GlusterFS must also have appropriate firewall
    rules for the required GlusterFS ports:
     * 111   - rpcbind (for glusterblock)
     * 2222  - sshd (if running GlusterFS in a pod)
     * 3260  - iSCSI targets (for glusterblock)
     * 24010 - glusterblockd
     * 24007 - GlusterFS Management
     * 24008 - GlusterFS RDMA
     * 49152 to 49251 - Each brick for every volume on the host requires its own
       port. For every new brick, one new port will be used starting at 49152. We
       recommend a default range of 49152-49251 on each host, though you can adjust
       this to fit your needs.
    
    The following kernel modules must be loaded:
     * dm_snapshot
     * dm_mirror
     * dm_thin_pool
     * dm_multipath
     * target_core_user
    
    For systems with SELinux, the following settings need to be considered:
     * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
       remote GlusterFS volumes
    
    In addition, for an OpenShift deployment you must:
     * Have 'cluster_admin' role on the administrative account doing the deployment
     * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
     * Have a router deployed that is configured to allow apps to access services
       running in the cluster
    
    Do you wish to proceed with deployment?
    
    [Y]es, [N]o? [Default: Y]: Y
    Using OpenShift CLI.
    Using namespace "storage-project".
    Checking for pre-existing resources...
      GlusterFS pods ... not found.
      deploy-heketi pod ... not found.
      heketi pod ... not found.
      glusterblock-provisioner pod ... not found.
      gluster-s3 pod ... not found.
    Creating initial resources ... template "deploy-heketi" created
    serviceaccount "heketi-service-account" created
    template "heketi" created
    template "glusterfs" created
    role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
    OK
    node "ip-172-18-5-29.ec2.internal" labeled
    node "ip-172-18-8-205.ec2.internal" labeled
    node "ip-172-18-6-100.ec2.internal" labeled
    daemonset "glusterfs" created
    Waiting for GlusterFS pods to start ... OK
    secret "heketi-config-secret" created
    secret "heketi-config-secret" labeled
    service "deploy-heketi" created
    route "deploy-heketi" created
    deploymentconfig "deploy-heketi" created
    Waiting for deploy-heketi pod to start ... OK
    Creating cluster ... ID: 30cd12e60f860fce21e7e7457d07db36
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node ip-172-18-5-29.ec2.internal ... ID: 4077242c76e5f477a27c5c47247cb348
    Adding device /dev/xvdc ... OK
    Creating node ip-172-18-8-205.ec2.internal ... ID: dda0e7d568d7b2f76a7e7491cfc26dd3
    Adding device /dev/xvdc ... OK
    Creating node ip-172-18-6-100.ec2.internal ... ID: 30a1795ca515c85dca32b09be7a68733
    Adding device /dev/xvdc ... OK
    heketi topology loaded.
    Saving /tmp/heketi-storage.json
    secret "heketi-storage-secret" created
    endpoints "heketi-storage-endpoints" created
    service "heketi-storage-endpoints" created
    job "heketi-storage-copy-job" created
    service "heketi-storage-endpoints" labeled
    deploymentconfig "deploy-heketi" deleted
    route "deploy-heketi" deleted
    service "deploy-heketi" deleted
    job "heketi-storage-copy-job" deleted
    pod "deploy-heketi-1-frjpt" deleted
    secret "heketi-storage-secret" deleted
    template "deploy-heketi" deleted
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
    Waiting for heketi pod to start ... OK
    
    heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run
    administrative commands you can install 'heketi-cli' and use it as follows:
    
      # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list
    
    You can find it at https://github.com/heketi/heketi/releases . Alternatively,
    use it from within the heketi pod:
    
      # /bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
    
    For dynamic provisioning, create a StorageClass similar to this:
    
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterfs-storage
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
    
    Ready to create and provide GlusterFS volumes.
    clusterrole "glusterblock-provisioner-runner" created
    serviceaccount "glusterblock-provisioner" created
    clusterrolebinding "glusterblock-provisioner" created
    deploymentconfig "glusterblock-provisioner-dc" created
    Waiting for glusterblock-provisioner pod to start ... OK
    Ready to create and provide Gluster block volumes.
    
    Deployment complete!
    Note
    For more information on the cns-deploy commands, refer to the man page of cns-deploy.

    # cns-deploy --help
    1. To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:

      #  cns-deploy /opt/topology.json --deploy-gluster  --namespace <namespace> --yes --admin-key <admin-key>  --user-key <user-key> --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name>  --object-password <object user password> --verbose

      object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped.

      object-sc and object-capacity are optional parameters: object-sc specifies a pre-existing StorageClass used to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.

      For example:

      #  cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --yes --admin-key secret --user-key mysecret --log-file=/var/log/cns-deploy/444-cns-deploy.log --object-account testvolume --object-user adminuser --object-password itsmine --verbose
      Using OpenShift CLI.
      
      Checking status of namespace matching 'storage-project':
      storage-project   Active    56m
      Using namespace "storage-project".
      Checking for pre-existing resources...
        GlusterFS pods ...
      Checking status of pods matching '--selector=glusterfs=pod':
      No resources found.
      Timed out waiting for pods matching '--selector=glusterfs=pod'.
      not found.
        deploy-heketi pod ...
      Checking status of pods matching '--selector=deploy-heketi=pod':
      No resources found.
      Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
      not found.
        heketi pod ...
      Checking status of pods matching '--selector=heketi=pod':
      No resources found.
      Timed out waiting for pods matching '--selector=heketi=pod'.
      not found.
        glusterblock-provisioner pod ...
      Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
      No resources found.
      Timed out waiting for pods matching '--selector=glusterfs=block-provisioner-pod'.
      not found.
        gluster-s3 pod ...
      Checking status of pods matching '--selector=glusterfs=s3-pod':
      No resources found.
      Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
      not found.
      Creating initial resources ... /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/deploy-heketi-template.yaml 2>&1
      template "deploy-heketi" created
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-service-account.yaml 2>&1
      serviceaccount "heketi-service-account" created
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-template.yaml 2>&1
      template "heketi" created
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/glusterfs-template.yaml 2>&1
      template "glusterfs" created
      /usr/bin/oc -n storage-project policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account 2>&1
      role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
      /usr/bin/oc -n storage-project adm policy add-scc-to-user privileged -z heketi-service-account
      OK
      Marking 'dhcp46-122.lab.eng.blr.redhat.com' as a GlusterFS node.
      /usr/bin/oc -n storage-project label nodes dhcp46-122.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
      node "dhcp46-122.lab.eng.blr.redhat.com" labeled
      Marking 'dhcp46-9.lab.eng.blr.redhat.com' as a GlusterFS node.
      /usr/bin/oc -n storage-project label nodes dhcp46-9.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
      node "dhcp46-9.lab.eng.blr.redhat.com" labeled
      Marking 'dhcp46-134.lab.eng.blr.redhat.com' as a GlusterFS node.
      /usr/bin/oc -n storage-project label nodes dhcp46-134.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
      node "dhcp46-134.lab.eng.blr.redhat.com" labeled
      Deploying GlusterFS pods.
      /usr/bin/oc -n storage-project process -p NODE_LABEL=glusterfs glusterfs | /usr/bin/oc -n storage-project create -f - 2>&1
      daemonset "glusterfs" created
      Waiting for GlusterFS pods to start ...
      Checking status of pods matching '--selector=glusterfs=pod':
      glusterfs-6fj2v   1/1       Running   0         52s
      glusterfs-ck40f   1/1       Running   0         52s
      glusterfs-kbtz4   1/1       Running   0         52s
      OK
      /usr/bin/oc -n storage-project create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=/opt/topology.json
      secret "heketi-config-secret" created
      /usr/bin/oc -n storage-project label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
      secret "heketi-config-secret" labeled
      /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= deploy-heketi | /usr/bin/oc -n storage-project create -f - 2>&1
      service "deploy-heketi" created
      route "deploy-heketi" created
      deploymentconfig "deploy-heketi" created
      Waiting for deploy-heketi pod to start ...
      Checking status of pods matching '--selector=deploy-heketi=pod':
      deploy-heketi-1-hf9rn   1/1       Running   0         2m
      OK
      Determining heketi service URL ... OK
      /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1
      Creating cluster ... ID: 252509038eb8568162ec5920c12bc243
      Allowing file volumes on cluster.
      Allowing block volumes on cluster.
      Creating node dhcp46-122.lab.eng.blr.redhat.com ... ID: 73ad287ae1ef231f8a0db46422367c9a
      Adding device /dev/sdd ... OK
      Adding device /dev/sde ... OK
      Adding device /dev/sdf ... OK
      Creating node dhcp46-9.lab.eng.blr.redhat.com ... ID: 0da1b20daaad2d5c57dbfc4f6ab78001
      Adding device /dev/sdd ... OK
      Adding device /dev/sde ... OK
      Adding device /dev/sdf ... OK
      Creating node dhcp46-134.lab.eng.blr.redhat.com ... ID: 4b3b62fc0efd298dedbcdacf0b498e65
      Adding device /dev/sdd ... OK
      Adding device /dev/sde ... OK
      Adding device /dev/sdf ... OK
      heketi topology loaded.
      /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --image rhgs3/rhgs-volmanager-rhel7:3.3.0-17 2>&1
      Saving /tmp/heketi-storage.json
      /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- cat /tmp/heketi-storage.json | /usr/bin/oc -n storage-project create -f - 2>&1
      secret "heketi-storage-secret" created
      endpoints "heketi-storage-endpoints" created
      service "heketi-storage-endpoints" created
      job "heketi-storage-copy-job" created
      
      Checking status of pods matching '--selector=job-name=heketi-storage-copy-job':
      heketi-storage-copy-job-87v6n   0/1       Completed   0         7s
      /usr/bin/oc -n storage-project label --overwrite svc heketi-storage-endpoints glusterfs=heketi-storage-endpoints heketi=storage-endpoints
      service "heketi-storage-endpoints" labeled
      /usr/bin/oc -n storage-project delete all,service,jobs,deployment,secret --selector="deploy-heketi" 2>&1
      deploymentconfig "deploy-heketi" deleted
      route "deploy-heketi" deleted
      service "deploy-heketi" deleted
      job "heketi-storage-copy-job" deleted
      pod "deploy-heketi-1-hf9rn" deleted
      secret "heketi-storage-secret" deleted
      /usr/bin/oc -n storage-project delete dc,route,template --selector="deploy-heketi" 2>&1
      template "deploy-heketi" deleted
      /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= heketi | /usr/bin/oc -n storage-project create -f - 2>&1
      service "heketi" created
      route "heketi" created
      deploymentconfig "heketi" created
      Waiting for heketi pod to start ...
      Checking status of pods matching '--selector=heketi=pod':
      heketi-1-zzblp   1/1       Running   0         31s
      OK
      Determining heketi service URL ... OK
      
      heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run
      administrative commands you can install 'heketi-cli' and use it as follows:
      
        # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list
      
      You can find it at https://github.com/heketi/heketi/releases . Alternatively,
      use it from within the heketi pod:
      
        # /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
      
      For dynamic provisioning, create a StorageClass similar to this:
      
      ---
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: glusterfs-storage
      provisioner: kubernetes.io/glusterfs
      parameters:
        resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
      
      Ready to create and provide GlusterFS volumes.
      sed -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
      clusterrole "glusterblock-provisioner-runner" created
      serviceaccount "glusterblock-provisioner" created
      clusterrolebinding "glusterblock-provisioner" created
      deploymentconfig "glusterblock-provisioner-dc" created
      Waiting for glusterblock-provisioner pod to start ...
      Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
      glusterblock-provisioner-dc-1-xm6bv   1/1       Running   0         6s
      OK
      Ready to create and provide Gluster block volumes.
      /usr/bin/oc -n storage-project create secret generic heketi-storage-project-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs
      secret "heketi-storage-project-admin-secret" created
      /usr/bin/oc -n storage-project label --overwrite secret heketi-storage-project-admin-secret glusterfs=s3-heketi-storage-project-admin-secret gluster-s3=heketi-storage-project-admin-secret
      secret "heketi-storage-project-admin-secret" labeled
      sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/' -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
      storageclass "glusterfs-for-s3" created
      sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${VOLUME_CAPACITY}/2Gi/' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
      persistentvolumeclaim "gluster-s3-claim" created
      persistentvolumeclaim "gluster-s3-meta-claim" created
      
      Checking status of persistentvolumeclaims matching '--selector=glusterfs in (s3-pvc, s3-meta-pvc)':
      gluster-s3-claim        Bound     pvc-35b6c1f0-9c65-11e7-9c8c-005056b3ded1   2Gi       RWX       glusterfs-for-s3   18s
      gluster-s3-meta-claim   Bound     pvc-35b86e7a-9c65-11e7-9c8c-005056b3ded1   1Gi       RWX       glusterfs-for-s3   18s
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/gluster-s3-template.yaml 2>&1
      template "gluster-s3" created
      /usr/bin/oc -n storage-project process -p S3_ACCOUNT=testvolume -p S3_USER=adminuser -p S3_PASSWORD=itsmine gluster-s3 | /usr/bin/oc -n storage-project create -f - 2>&1
      service "gluster-s3-service" created
      route "gluster-s3-route" created
      deploymentconfig "gluster-s3-dc" created
      Waiting for gluster-s3 pod to start ...
      Checking status of pods matching '--selector=glusterfs=s3-pod':
      gluster-s3-dc-1-x3x4q   1/1       Running   0         6s
      OK
      Ready to create and provide Gluster object volumes.
      
      Deployment complete!
  2. Execute the following command to let the client communicate with the container:

    # export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>

    For example:

    # export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com

    To verify if Heketi is loaded with the topology, execute the following command:

    # heketi-cli topology info
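
If the deployment was run with an admin key (as in the example above), pass the same credentials to heketi-cli, since the query may otherwise be rejected. A sketch using the example value secret from the cns-deploy command above:

    # heketi-cli --user admin --secret secret topology info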
Note
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Managing_Clusters.

Next step: If you are installing independent mode 3.11, proceed to https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Updating_Registry.

A.3.3.1. Deploying Independent Mode

Execute the following commands to deploy Red Hat Openshift Container Storage in Independent mode:

  1. To set up passwordless SSH to all Red Hat Gluster Storage nodes, execute the following command on the client for each of the Red Hat Gluster Storage nodes:

    # ssh-copy-id -i /root/.ssh/id_rsa root@<hostname>
  2. Execute the following command on the client to deploy the heketi pod and to create a cluster of Red Hat Gluster Storage nodes:

    # cns-deploy -v -n <namespace> -g --admin-key <admin-key> --user-key <user-key> topology.json
    Note
    • Support for S3 compatible Object Store is under technology preview. To deploy S3 compatible object store, see substep i below.
    • In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. The default is to use no secret.
    • The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes. This default configuration will dynamically create block-hosting volumes of 500GB in size when more space is required. If you want to change this value, use --block-host in cns-deploy. For example:

      # cns-deploy -v -n storage-project -g --admin-key secret --user-key mysecret --block-host 1000 topology.json

    For example:

    # cns-deploy -v -n storage-project -g --admin-key secret -s /root/.ssh/id_rsa --user-key mysecret topology.json
    Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
    
    Before getting started, this script has some requirements of the execution
    environment and of the container platform that you should verify.
    
    The client machine that will run this script must have:
     * Administrative access to an existing Kubernetes or OpenShift cluster
     * Access to a python interpreter 'python'
    
    Each of the nodes that will host GlusterFS must also have appropriate firewall
    rules for the required GlusterFS ports:
     * 2222  - sshd (if running GlusterFS in a pod)
     * 24007 - GlusterFS Management
     * 24008 - GlusterFS RDMA
     * 49152 to 49251 - Each brick for every volume on the host requires its own
       port. For every new brick, one new port will be used starting at 49152. We
       recommend a default range of 49152-49251 on each host, though you can adjust
       this to fit your needs.
    
    The following kernel modules must be loaded:
     * dm_snapshot
     * dm_mirror
     * dm_thin_pool
    
    For systems with SELinux, the following settings need to be considered:
     * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
       remote GlusterFS volumes
    
    In addition, for an OpenShift deployment you must:
     * Have 'cluster_admin' role on the administrative account doing the deployment
     * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
     * Have a router deployed that is configured to allow apps to access services
       running in the cluster
    
    Do you wish to proceed with deployment?
    
    [Y]es, [N]o? [Default: Y]: y
    Using OpenShift CLI.
    Using namespace "storage-project".
    Checking for pre-existing resources...
      GlusterFS pods ... not found.
      deploy-heketi pod ... not found.
      heketi pod ... not found.
    Creating initial resources ... template "deploy-heketi" created
    serviceaccount "heketi-service-account" created
    template "heketi" created
    role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
    OK
    secret "heketi-config-secret" created
    secret "heketi-config-secret" labeled
    service "deploy-heketi" created
    route "deploy-heketi" created
    deploymentconfig "deploy-heketi" created
    Waiting for deploy-heketi pod to start ... OK
    Creating cluster ... ID: 60bf06636eb4eb81d4e9be4b04cfce92
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node dhcp47-104.lab.eng.blr.redhat.com ... ID: eadc66f9d03563bcfc3db3fe636c34be
    Adding device /dev/sdd ... OK
    Adding device /dev/sde ... OK
    Adding device /dev/sdf ... OK
    Creating node dhcp47-83.lab.eng.blr.redhat.com ... ID: 178684b0a0425f51b8f1a032982ffe4d
    Adding device /dev/sdd ... OK
    Adding device /dev/sde ... OK
    Adding device /dev/sdf ... OK
    Creating node dhcp46-152.lab.eng.blr.redhat.com ... ID: 08cd7034ef7ac66499dc040d93cf4a93
    Adding device /dev/sdd ... OK
    Adding device /dev/sde ... OK
    Adding device /dev/sdf ... OK
    heketi topology loaded.
    Saving /tmp/heketi-storage.json
    secret "heketi-storage-secret" created
    endpoints "heketi-storage-endpoints" created
    service "heketi-storage-endpoints" created
    job "heketi-storage-copy-job" created
    service "heketi-storage-endpoints" labeled
    deploymentconfig "deploy-heketi" deleted
    route "deploy-heketi" deleted
    service "deploy-heketi" deleted
    job "heketi-storage-copy-job" deleted
    pod "deploy-heketi-1-30c06" deleted
    secret "heketi-storage-secret" deleted
    template "deploy-heketi" deleted
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
    Waiting for heketi pod to start ... OK
    
    heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run
    administrative commands you can install 'heketi-cli' and use it as follows:
    
      # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list
    
    You can find it at https://github.com/heketi/heketi/releases . Alternatively,
    use it from within the heketi pod:
    
      # /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
    
    For dynamic provisioning, create a StorageClass similar to this:
    
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterfs-storage
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
    
    
    Deployment complete!
    Note
    For more information on the cns-deploy commands, refer to the man page of cns-deploy.

    # cns-deploy --help
    1. To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:

      #  cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --admin-key <admin-key> --user-key <user-key> --yes --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name>  --object-password <object user password> --verbose

      object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped.

      object-sc and object-capacity are optional parameters: object-sc specifies a pre-existing StorageClass used to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.

      For example:

      #  cns-deploy /opt/topology.json --deploy-gluster  --namespace storage-project --admin-key secret --user-key mysecret --yes --log-file=/var/log/cns-deploy/444-cns-deploy.log --object-account testvolume --object-user adminuser --object-password itsmine --verbose
      Using OpenShift CLI.
      
      Checking status of namespace matching 'storage-project':
      storage-project   Active    56m
      Using namespace "storage-project".
      Checking for pre-existing resources...
        GlusterFS pods ...
      Checking status of pods matching '--selector=glusterfs=pod':
      No resources found.
      Timed out waiting for pods matching '--selector=glusterfs=pod'.
      not found.
        deploy-heketi pod ...
      Checking status of pods matching '--selector=deploy-heketi=pod':
      No resources found.
      Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
      not found.
        heketi pod ...
      Checking status of pods matching '--selector=heketi=pod':
      No resources found.
      Timed out waiting for pods matching '--selector=heketi=pod'.
      not found.
        glusterblock-provisioner pod ...
      Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
      No resources found.
      Timed out waiting for pods matching '--selector=glusterfs=block-provisioner-pod'.
      not found.
        gluster-s3 pod ...
      Checking status of pods matching '--selector=glusterfs=s3-pod':
      No resources found.
      Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
      not found.
      Creating initial resources ... /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/deploy-heketi-template.yaml 2>&1
      template "deploy-heketi" created
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-service-account.yaml 2>&1
      serviceaccount "heketi-service-account" created
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-template.yaml 2>&1
      template "heketi" created
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/glusterfs-template.yaml 2>&1
      template "glusterfs" created
      /usr/bin/oc -n storage-project policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account 2>&1
      role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
      /usr/bin/oc -n storage-project adm policy add-scc-to-user privileged -z heketi-service-account
      OK
      Marking 'dhcp46-122.lab.eng.blr.redhat.com' as a GlusterFS node.
      /usr/bin/oc -n storage-project label nodes dhcp46-122.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
      node "dhcp46-122.lab.eng.blr.redhat.com" labeled
      Marking 'dhcp46-9.lab.eng.blr.redhat.com' as a GlusterFS node.
      /usr/bin/oc -n storage-project label nodes dhcp46-9.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
      node "dhcp46-9.lab.eng.blr.redhat.com" labeled
      Marking 'dhcp46-134.lab.eng.blr.redhat.com' as a GlusterFS node.
      /usr/bin/oc -n storage-project label nodes dhcp46-134.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
      node "dhcp46-134.lab.eng.blr.redhat.com" labeled
      Deploying GlusterFS pods.
      /usr/bin/oc -n storage-project process -p NODE_LABEL=glusterfs glusterfs | /usr/bin/oc -n storage-project create -f - 2>&1
      daemonset "glusterfs" created
      Waiting for GlusterFS pods to start ...
      Checking status of pods matching '--selector=glusterfs=pod':
      glusterfs-6fj2v   1/1       Running   0         52s
      glusterfs-ck40f   1/1       Running   0         52s
      glusterfs-kbtz4   1/1       Running   0         52s
      OK
      /usr/bin/oc -n storage-project create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=/opt/topology.json
      secret "heketi-config-secret" created
      /usr/bin/oc -n storage-project label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
      secret "heketi-config-secret" labeled
      /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= deploy-heketi | /usr/bin/oc -n storage-project create -f - 2>&1
      service "deploy-heketi" created
      route "deploy-heketi" created
      deploymentconfig "deploy-heketi" created
      Waiting for deploy-heketi pod to start ...
      Checking status of pods matching '--selector=deploy-heketi=pod':
      deploy-heketi-1-hf9rn   1/1       Running   0         2m
      OK
      Determining heketi service URL ... OK
      /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1
      Creating cluster ... ID: 252509038eb8568162ec5920c12bc243
      Allowing file volumes on cluster.
      Allowing block volumes on cluster.
      Creating node dhcp46-122.lab.eng.blr.redhat.com ... ID: 73ad287ae1ef231f8a0db46422367c9a
      Adding device /dev/sdd ... OK
      Adding device /dev/sde ... OK
      Adding device /dev/sdf ... OK
      Creating node dhcp46-9.lab.eng.blr.redhat.com ... ID: 0da1b20daaad2d5c57dbfc4f6ab78001
      Adding device /dev/sdd ... OK
      Adding device /dev/sde ... OK
      Adding device /dev/sdf ... OK
      Creating node dhcp46-134.lab.eng.blr.redhat.com ... ID: 4b3b62fc0efd298dedbcdacf0b498e65
      Adding device /dev/sdd ... OK
      Adding device /dev/sde ... OK
      Adding device /dev/sdf ... OK
      heketi topology loaded.
      /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --image rhgs3/rhgs-volmanager-rhel7:3.3.0-17 2>&1
      Saving /tmp/heketi-storage.json
      /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- cat /tmp/heketi-storage.json | /usr/bin/oc -n storage-project create -f - 2>&1
      secret "heketi-storage-secret" created
      endpoints "heketi-storage-endpoints" created
      service "heketi-storage-endpoints" created
      job "heketi-storage-copy-job" created
      
      Checking status of pods matching '--selector=job-name=heketi-storage-copy-job':
      heketi-storage-copy-job-87v6n   0/1       Completed   0         7s
      /usr/bin/oc -n storage-project label --overwrite svc heketi-storage-endpoints glusterfs=heketi-storage-endpoints heketi=storage-endpoints
      service "heketi-storage-endpoints" labeled
      /usr/bin/oc -n storage-project delete all,service,jobs,deployment,secret --selector="deploy-heketi" 2>&1
      deploymentconfig "deploy-heketi" deleted
      route "deploy-heketi" deleted
      service "deploy-heketi" deleted
      job "heketi-storage-copy-job" deleted
      pod "deploy-heketi-1-hf9rn" deleted
      secret "heketi-storage-secret" deleted
      /usr/bin/oc -n storage-project delete dc,route,template --selector="deploy-heketi" 2>&1
      template "deploy-heketi" deleted
      /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= heketi | /usr/bin/oc -n storage-project create -f - 2>&1
      service "heketi" created
      route "heketi" created
      deploymentconfig "heketi" created
      Waiting for heketi pod to start ...
      Checking status of pods matching '--selector=heketi=pod':
      heketi-1-zzblp   1/1       Running   0         31s
      OK
      Determining heketi service URL ... OK
      
      heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run
      administrative commands you can install 'heketi-cli' and use it as follows:
      
        # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list
      
      You can find it at https://github.com/heketi/heketi/releases . Alternatively,
      use it from within the heketi pod:
      
        # /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
      
      For dynamic provisioning, create a StorageClass similar to this:
      
      ---
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: glusterfs-storage
      provisioner: kubernetes.io/glusterfs
      parameters:
        resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
      
      Ready to create and provide GlusterFS volumes.
      sed -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
      clusterrole "glusterblock-provisioner-runner" created
      serviceaccount "glusterblock-provisioner" created
      clusterrolebinding "glusterblock-provisioner" created
      deploymentconfig "glusterblock-provisioner-dc" created
      Waiting for glusterblock-provisioner pod to start ...
      Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
      glusterblock-provisioner-dc-1-xm6bv   1/1       Running   0         6s
      OK
      Ready to create and provide Gluster block volumes.
      /usr/bin/oc -n storage-project create secret generic heketi-storage-project-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs
      secret "heketi-storage-project-admin-secret" created
      /usr/bin/oc -n storage-project label --overwrite secret heketi-storage-project-admin-secret glusterfs=s3-heketi-storage-project-admin-secret gluster-s3=heketi-storage-project-admin-secret
      secret "heketi-storage-project-admin-secret" labeled
      sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/' -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
      storageclass "glusterfs-for-s3" created
      sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${VOLUME_CAPACITY}/2Gi/' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | /usr/bin/oc -n storage-project create -f - 2>&1
      persistentvolumeclaim "gluster-s3-claim" created
      persistentvolumeclaim "gluster-s3-meta-claim" created
      
      Checking status of persistentvolumeclaims matching '--selector=glusterfs in (s3-pvc, s3-meta-pvc)':
      gluster-s3-claim        Bound     pvc-35b6c1f0-9c65-11e7-9c8c-005056b3ded1   2Gi       RWX       glusterfs-for-s3   18s
      gluster-s3-meta-claim   Bound     pvc-35b86e7a-9c65-11e7-9c8c-005056b3ded1   1Gi       RWX       glusterfs-for-s3   18s
      /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/gluster-s3-template.yaml 2>&1
      template "gluster-s3" created
      /usr/bin/oc -n storage-project process -p S3_ACCOUNT=testvolume -p S3_USER=adminuser -p S3_PASSWORD=itsmine gluster-s3 | /usr/bin/oc -n storage-project create -f - 2>&1
      service "gluster-s3-service" created
      route "gluster-s3-route" created
      deploymentconfig "gluster-s3-dc" created
      Waiting for gluster-s3 pod to start ...
      Checking status of pods matching '--selector=glusterfs=s3-pod':
      gluster-s3-dc-1-x3x4q   1/1       Running   0         6s
      OK
      Ready to create and provide Gluster object volumes.
      
      Deployment complete!
  3. Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows more bricks to run than before with the same memory consumption. Execute the following commands on one of the Red Hat Gluster Storage nodes on each cluster to enable brick multiplexing (a verification sketch follows at the end of this procedure):

    1. Execute the following command to enable brick multiplexing:

      # gluster vol set all cluster.brick-multiplex on

      For example:

      # gluster vol set all cluster.brick-multiplex on
      Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
      volume set: success
    2. Restart the heketidb volumes:

      # gluster vol stop heketidbstorage
      Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
      volume stop: heketidbstorage: success
      # gluster vol start heketidbstorage
      volume start: heketidbstorage: success
  4. Execute the following command to let the client communicate with the container:

    # export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>

    For example:

    # export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com

    To verify if Heketi is loaded with the topology, execute the following command:

    # heketi-cli topology info
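
As referenced in the brick multiplexing step above, you can confirm on a Red Hat Gluster Storage node that the option took effect and that heketidbstorage came back online (a sketch; the volume get all form is assumed to be available in this Red Hat Gluster Storage version):

    # gluster volume get all cluster.brick-multiplex
    # gluster volume status heketidbstorage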
Note
The cns-deploy tool does not support scaling up of the cluster. To manually scale up the cluster, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Managing_Clusters.

Next step: If you are installing converged mode, proceed to https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Updating_Registry.