Chapter 6. Red Hat Gluster Storage Volumes

A Red Hat Gluster Storage volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool. Most of the Red Hat Gluster Storage Server management operations are performed on the volume. For detailed information about configuring Red Hat Gluster Storage to enhance performance, see Chapter 13, Configuring Red Hat Gluster Storage for Enhancing Performance.

Warning

Red Hat does not support writing data directly into the bricks. Read and write data only through the Native Client, or through NFS or SMB mounts.

Note

Red Hat Gluster Storage supports IP over InfiniBand (IPoIB). Install the InfiniBand packages on all Red Hat Gluster Storage servers and clients to support this feature. Run the yum groupinstall "Infiniband Support" command to install the InfiniBand packages.
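For example:
# yum groupinstall "Infiniband Support"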

Volume Types

Distributed
Distributes files across bricks in the volume.
Use this volume type where scaling and redundancy requirements are not important, or where redundancy is provided by other hardware or software layers.
See Section 6.5, “Creating Distributed Volumes” for additional information about this volume type.
Replicated
Replicates files across bricks in the volume.
Use this volume type in environments where high-availability and high-reliability are critical.
See Section 6.6, “Creating Replicated Volumes” for additional information about this volume type.
Distributed Replicated
Distributes files across replicated bricks in the volume.
Use this volume type in environments where high-reliability and scalability are critical. This volume type offers improved read performance in most environments.
See Section 6.7, “Creating Distributed Replicated Volumes” for additional information about this volume type.
Dispersed
Disperses the file's data across the bricks in the volume.
Use this volume type where you need a configurable level of reliability with a minimum of space waste.
See Section 6.8, “Creating Dispersed Volumes” for additional information about this volume type.
Distributed Dispersed
Distributes files across dispersed subvolumes in the volume.
Use this volume type where you need a configurable level of reliability with a minimum of space waste.
See Section 6.9, “Creating Distributed Dispersed Volumes” for additional information about this volume type.

6.1. Setting up Gluster Storage Volumes using gdeploy

The gdeploy tool automates the process of creating, formatting, and mounting bricks. With gdeploy, the manual steps listed between Section 6.3 Formatting and Mounting Bricks and Section 6.8 Creating Distributed Dispersed Volumes are automated.
When setting up a new trusted storage pool, gdeploy is the preferred method of setup, as manually executing numerous commands can be error prone.
The advantages of using gdeploy to automate brick creation are as follows:
  • Setting up the backend on several machines can be done from one laptop or desktop. This saves time and scales well as the number of nodes in the trusted storage pool increases.
  • Flexibility in choosing the drives to configure (sd, vd, and so on).
  • Flexibility in naming the logical volumes (LV) and volume groups (VG).

6.1.1. Getting Started

Prerequisites
  1. Generate the passphrase-less SSH keys for the nodes which are going to be part of the trusted storage pool by running the following command:
    # ssh-keygen -f id_rsa -t rsa -N ''
  2. Set up password-less SSH access between the gdeploy controller and servers by running the following command:
    # ssh-copy-id -i root@server

    Note

    If you are using a Red Hat Gluster Storage node as the deployment node rather than an external node, then password-less SSH must be set up for the Red Hat Gluster Storage node from which the installation is performed, using the following command:
    # ssh-copy-id -i root@localhost
  3. Install ansible by executing the following command:
    • For Red Hat Gluster Storage 3.1.3 on Red Hat Enterprise Linux 7.2, execute the following command:
      # yum install ansible
    • For Red Hat Gluster Storage 3.1.3 on Red Hat Enterprise Linux 6.8, execute the following command:
      # yum install ansible1.9
  4. You must also ensure the following:
    • Devices should be raw and unused
    • For multiple devices, use multiple volume groups, thinpool and thinvol in the gdeploy configuration file
gdeploy can be used to deploy Red Hat Gluster Storage in two ways:
  • Using a node in a trusted storage pool
  • Using a machine outside the trusted storage pool
Using a node in a cluster
The gdeploy package is bundled as part of the initial installation of Red Hat Gluster Storage.
Using a machine outside the trusted storage pool
You must ensure that the machine is subscribed to the required Red Hat Gluster Storage channels. For more information, see Subscribing to the Red Hat Gluster Storage Server Channels in the Red Hat Gluster Storage 3.1 Installation Guide.
Execute the following command to install gdeploy:
# yum install gdeploy
For more information on installing gdeploy, see the Installing Ansible to Support Gdeploy section in the Red Hat Gluster Storage 3.1 Installation Guide.

6.1.2. Setting up a Trusted Storage Pool

Creating a trusted storage pool is a tedious task that becomes more tedious as the number of nodes in the trusted storage pool grows. With gdeploy, a single configuration file can be used to set up a trusted storage pool. When gdeploy is installed, a sample configuration file will be created at:
/usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample

Note

The trusted storage pool can be created either by performing each task independently, such as setting up a backend, creating a volume, and mounting volumes, or by combining all of them in a single configuration.
For example, for a basic trusted storage pool of a 2 x 2 replicated volume, the configuration details in the configuration file will be as follows:
2x2-volume-create.conf:
#
# Usage:
#       gdeploy -c 2x2-volume-create.conf
#
# This does the backend setup first and then creates the volume using the
# bricks that were set up.
#
#

[hosts]
10.70.46.13
10.70.46.17


# Common backend setup for 2 of the hosts.
[backend-setup]
devices=sdb,sdc
vgs=vg1,vg2
pools=pool1,pool2
lvs=lv1,lv2
mountpoints=/mnt/data1,/mnt/data2
brick_dirs=/mnt/data1/1,/mnt/data2/2

# If backend-setup is different for each host
# [backend-setup:10.70.46.13]
# devices=sdb
# brick_dirs=/rhgs/brick1
#
# [backend-setup:10.70.46.17]
# devices=sda,sdb,sdc
# brick_dirs=/gluster/brick/brick{1,2,3}
#

[volume]
action=create
volname=sample_volname
replica=yes
replica_count=2
force=yes


[clients]
action=mount
volname=sample_volname
hosts=10.70.46.15
fstype=glusterfs
client_mount_points=/mnt/gluster
With this configuration, a 2 x 2 replicated trusted storage pool is created on the given IP addresses, with /dev/sdb and /dev/sdc as the backend devices and sample_volname as the volume name.
For more information on possible values, see Section 6.1.7, “Configuration File”
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt

Note

You can create a new configuration file by referencing the template file available at /usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample. To invoke the new configuration file, run the gdeploy -c /path_to_file/config.txt command.
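For example, assuming the template is copied to a new file named /root/newconf.conf (the file name here is only illustrative):
# cp /usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample /root/newconf.conf
# gdeploy -c /root/newconf.conf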
To only set up the backend, see Section 6.1.3, “Setting up the Backend”
To only create a volume, see Section 6.1.4, “Creating Volumes”
To only mount clients, see Section 6.1.5, “Mounting Clients”

6.1.3. Setting up the Backend

In order to set up a Gluster Storage volume, LVM thin provisioning (thin-p) must be set up on the storage disks. If the trusted storage pool contains a large number of machines, these tasks take a long time, as a large number of commands are involved and the process is error prone if not performed carefully. With gdeploy, a single configuration file can be used to set up the backend. The backend is set up when a fresh trusted storage pool is created, as bricks must be set up before a volume can be created. When gdeploy is installed, a sample configuration file will be created at:
/usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample
A backend can be set up in two ways:
  • Using the [backend-setup] module
  • Creating Physical Volume (PV), Volume Group (VG), and Logical Volume (LV) individually

6.1.3.1. Using the [backend-setup] Module

Backend setup can be done on specific machines or on all the machines. The backend-setup module internally creates PV, VG, and LV and mounts the device. Thin-p logical volumes are created as per the performance recommendations by Red Hat.
The backend can be set up based on the requirement, such as:
  • Generic
  • Specific
Generic
If the disk names are uniform across the machines, then the backend setup can be written as below. The backend is set up for all the hosts in the `hosts’ section.
For more information on possible values, see Section 6.1.7, “Configuration File”
Example configuration file: Backend-setup-generic.conf
#
# Usage:
#       gdeploy -c backend-setup-generic.conf
#
# This configuration creates backend for GlusterFS clusters
#

[hosts]
10.70.46.130
10.70.46.32
10.70.46.110
10.70.46.77

# Backend setup for all the nodes in the `hosts' section. This will create
# PV, VG, and LV with gdeploy generated names.
[backend-setup]
devices=vdb
Specific
If the disk names vary across the machines in the cluster, then the backend setup can be written for specific machines with specific disk names. gdeploy is flexible enough to allow host-specific setup in a single configuration file.
For more information on possible values, see Section 6.1.7, “Configuration File”
Example configuration file: backend-setup-hostwise.conf
#
# Usage:
#       gdeploy -c backend-setup-hostwise.conf
#
# This configuration creates backend for GlusterFS clusters
#

[hosts]
10.70.46.130
10.70.46.32
10.70.46.110
10.70.46.77

# Backend setup for 10.70.46.77 with default gdeploy generated names for
# Volume Groups and Logical Volumes. Volume names will be GLUSTER_vg1,
# GLUSTER_vg2...
[backend-setup:10.70.46.77]
devices=vda,vdb

# Backend setup for remaining 3 hosts in the `hosts' section with custom names
# for Volumes Groups and Logical Volumes.
[backend-setup:10.70.46.{130,32,110}]
devices=vdb,vdc,vdd
vgs=vg1,vg2,vg3
pools=pool1,pool2,pool3
lvs=lv1,lv2,lv3
mountpoints=/mnt/data1,/mnt/data2,/mnt/data3
brick_dirs=/mnt/data1/1,/mnt/data2/2,/mnt/data3/3

6.1.3.2. Creating Backend by Setting up PV, VG, and LV

If the user needs more control over setting up the backend, then the pv, vg, and lv can be created individually. The LV module provides the flexibility to create more than one LV on a VG. For example, the `backend-setup’ module sets up a thin pool by default and applies the default performance recommendations. However, if the user has a different use case that demands more than one LV, or a combination of thin and thick pools, then `backend-setup’ is of no help. The user can use the PV, VG, and LV modules to achieve this.
For more information on possible values, see Section 6.1.7, “Configuration File”
The example below shows how to create four logical volumes on a single volume group. The example shows a mix of thin and thick LV creation.
[hosts]
10.70.46.130
10.70.46.32

[pv]
action=create
devices=vdb

[vg1]
action=create
vgname=RHS_vg1
pvname=vdb

[lv1]
action=create
vgname=RHS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1

[lv2]
action=create
vgname=RHS_vg1
poolname=lvthinpool
lvtype=thinpool
poolmetadatasize=10MB
chunksize=1024k
size=30GB

[lv3]
action=create
lvname=lv_vmaddldisks
poolname=lvthinpool
vgname=RHS_vg1
lvtype=thinlv
mount=/rhs/brick2
virtualsize=9GB

[lv4]
action=create
lvname=lv_vmrootdisks
poolname=lvthinpool
vgname=RHS_vg1
size=19GB
lvtype=thinlv
mount=/rhs/brick3
virtualsize=19GB
Example to extend an existing VG:
#
# Extends a given VG. pvname and vgname are mandatory; in this example the
# vg `RHS_vg1' is extended by adding the pv vdd. If the pv is not already present, it
# is created by gdeploy.
#
[hosts]
10.70.46.130
10.70.46.32

[vg2]
action=extend
vgname=RHS_vg1
pvname=vdd

6.1.4. Creating Volumes

Setting up a volume involves writing long commands, choosing the hostname/IP and brick order carefully, and this can be error prone. gdeploy helps simplify this task. When gdeploy is installed, a sample configuration file will be created at:
/usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample
For example, for a basic trusted storage pool of a 2 x 2 replicated volume, the configuration details in the configuration file will be as follows:
[hosts]
10.0.0.1
10.0.0.2
10.0.0.3
10.0.0.4

[volume]
action=create
volname=glustervol
transport=tcp,rdma
replica=yes
replica_count=2
force=yes
For more information on possible values, see Section 6.1.7, “Configuration File”
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt
Creating Multiple Volumes

Note

Creating multiple volumes is supported only from gdeploy 2.0 onwards. Check your gdeploy version before trying this configuration.
While creating multiple volumes in a single configuration, the [volume] modules should be numbered. For example, if there are two volumes, they will be numbered [volume1] and [volume2].
vol-create.conf
[hosts]
10.70.46.130
10.70.46.32

[backend-setup]
devices=vdb,vdc
mountpoints=/mnt/data1,/mnt/data2

[volume1]
action=create
volname=vol-one
transport=tcp
replica=yes
replica_count=2
brick_dirs=/mnt/data1/1

[volume2]
action=create
volname=vol-two
transport=tcp
replica=yes
replica_count=2
brick_dirs=/mnt/data2/2
With gdeploy 2.0, a volume can be created with multiple volume options set. The number of keys should match the number of values.
[hosts]
10.70.46.130
10.70.46.32

[backend-setup]
devices=vdb,vdc
mountpoints=/mnt/data1,/mnt/data2

[volume1]
action=create
volname=vol-one
transport=tcp
replica=yes
replica_count=2
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/mnt/data1/1

[volume2]
action=create
volname=vol-two
transport=tcp
replica=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
replica_count=2
brick_dirs=/mnt/data2/2
The above configuration will create two volumes with multiple volume options set.

6.1.5. Mounting Clients

When mounting clients, instead of logging into every client which has to be mounted, gdeploy can be used to mount clients remotely. When gdeploy is installed, a sample configuration file will be created at:
/usr/share/doc/ansible/gdeploy/examples/gluster.conf.sample
Following is an example of the modifications to the configuration file in order to mount clients:
[clients]
action=mount
hosts=10.70.46.159
fstype=glusterfs
client_mount_points=/mnt/gluster
volname=10.0.0.1:glustervol

Note

If the fstype is nfs, specify the NFS version using the nfs-version option. By default, the version is 3.
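For example, a [clients] section for an NFS mount could look as follows (a sketch based on the [clients] example in Section 6.1.7, “Configuration File”; the host and mount point are illustrative):
[clients]
action=mount
hosts=10.70.46.159
fstype=nfs
nfs-version=3
volname=10.0.0.1:glustervol
client_mount_points=/mnt/gluster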
For more information on possible values, see Section 6.1.7, “Configuration File”
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt

6.1.6. Configuring a Volume

The volumes can be configured remotely using the configuration file, without having to log in to the trusted storage pool. For more information regarding the sections and options in the configuration file, see Section 6.1.7, “Configuration File”

6.1.6.1. Adding and Removing a Brick

The configuration file can be modified to add or remove a brick:
Adding a Brick
Modify the [volume] section in the configuration file to add a brick. For example:
[volume]
action=add-brick
volname=10.0.0.1:glustervol
bricks=10.0.0.1:/mnt/new_brick
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt
Removing a Brick
Modify the [volume] section in the configuration file to remove a brick. For example:
[volume]
action=remove-brick
volname=10.0.0.1:glustervol
bricks=10.0.0.2:/mnt/brick
state=commit
Other options for state are stop, start, and force.
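For example, to only start the remove-brick operation instead of committing it, the same section could be written as follows (a sketch reusing the brick from the example above):
[volume]
action=remove-brick
volname=10.0.0.1:glustervol
bricks=10.0.0.2:/mnt/brick
state=start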
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt
For more information on possible values, see Section 6.1.7, “Configuration File”

6.1.6.2. Rebalancing a Volume

Modify the [volume] section in the configuration file to rebalance a volume. For example:
[volume]
action=rebalance
volname=10.70.46.13:glustervol
state=start
Other options for state are stop, and fix-layout.
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt
For more information on possible values, see Section 6.1.7, “Configuration File”

6.1.6.3. Starting, Stopping, or Deleting a Volume

The configuration file can be modified to start, stop, or delete a volume:
Starting a Volume
Modify the [volume] section in the configuration file to start a volume. For example:
[volume]
action=start
volname=10.0.0.1:glustervol
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt
Stopping a Volume
Modify the [volume] section in the configuration file to stop a volume. For example:
[volume]
action=stop
volname=10.0.0.1:glustervol
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt
Deleting a Volume
Modify the [volume] section in the configuration file to delete a volume. For example:
[volume]
action=delete
volname=10.70.46.13:glustervol
After modifying the configuration file, invoke the configuration using the command:
# gdeploy -c conf.txt
For more information on possible values, see Section 6.1.7, “Configuration File”

6.1.7. Configuration File

The configuration file includes the various options that can be used to change the settings for gdeploy. With the new release of gdeploy, the configuration file includes many more sections and enhanced variables in the existing sections. The following options are currently supported:
  • [hosts]
  • [devices]
  • [disktype]
  • [diskcount]
  • [stripesize]
  • [vgs]
  • [pools]
  • [lvs]
  • [mountpoints]
  • {host-specific-data-for-above}
  • [clients]
  • [volume]
  • [backend-setup]
  • [pv]
  • [vg]
  • [lv]
  • [RH-subscription]
  • [yum]
  • [shell]
  • [update-file]
  • [service]
  • [script]
  • [firewalld]
The options are briefly explained in the following list:
  • hosts
    This is a mandatory section which contains the IP address or hostname of the machines in the trusted storage pool. Each hostname or IP address should be listed on a separate line.
    For example:
    [hosts]
    10.0.0.1
    10.0.0.2
  • devices
    This is a generic section and is applicable to all the hosts listed in the [hosts] section. However, if host-specific sections such as [hostname] or [IP-address] are present, then the data in generic sections like [devices] is ignored. Host-specific data takes precedence. This is an optional section.
    For example:
    [devices]
    /dev/sda
    /dev/sdb

    Note

    When configuring the backend setup, the devices should be either listed in this section or in the host specific section.
  • disktype
    This section specifies the disk configuration that is used while setting up the backend. gdeploy supports RAID 10, RAID 6, and JBOD configurations. This is an optional section and if the field is left empty, JBOD is taken as the default configuration.
    For example:
    [disktype]
    raid6
  • diskcount
    This section specifies the number of data disks in the setup. This is a mandatory field if the [disktype] specified is either RAID 10 or RAID 6. If the [disktype] is JBOD, the [diskcount] value is ignored. This is host-specific data.
    For example:
    [diskcount]
    10
  • stripesize
    This section specifies the stripe_unit size in KB.
    Case 1: This field is not necessary if the [disktype] is JBOD, and any given value will be ignored.
    Case 2: This is a mandatory field if [disktype] is specified as RAID 6.
    For [disktype] RAID 10, the default value is taken as 256KB. If you specify any other value the following warning is displayed:
    "Warning: We recommend a stripe unit size of 256KB for RAID 10"

    Note

    Do not add any suffixes like K, KB, M, etc. This is host specific data and can be added in the hosts section.
    For example:
    [stripesize]
    128
  • vgs
    This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the volume group names for the devices listed in [devices]. The number of volume groups in the [vgs] section should match the one in [devices]. If the volume group names are missing, the volume groups will be named as GLUSTER_vg{1, 2, 3, ...} as default.
    For example:
    [vgs]
    CUSTOM_vg1
    CUSTOM_vg2
  • pools
    This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the pool names for the volume groups specified in the [vgs] section. The number of pools listed in the [pools] section should match the number of volume groups in the [vgs] section. If the pool names are missing, the pools will be named as GLUSTER_pool{1, 2, 3, ...}.
    For example:
    [pools]
    CUSTOM_pool1
    CUSTOM_pool2
  • lvs
    This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section provides the logical volume names for the volume groups specified in [vgs]. The number of logical volumes listed in the [lvs] section should match the number of volume groups listed in [vgs]. If the logical volume names are missing, it is named as GLUSTER_lv{1, 2, 3, ...}.
    For example:
    [lvs]
    CUSTOM_lv1
    CUSTOM_lv2
  • mountpoints
    This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the brick mount points for the logical volumes. The number of mount points should match the number of logical volumes specified in [lvs]. If the mount points are missing, the mount points will be named as /gluster/brick{1, 2, 3…}.
    For example:
    [mountpoints]
    /rhs/mnt1
    /rhs/mnt2
  • brick_dirs
    This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This is the directory which will be used as a brick while creating the volume. A mount point cannot be used as a brick directory, hence brick_dir should be a directory inside the mount point.
    This field can be left empty, in which case a directory will be created inside the mount point with a default name. If the backend is not set up, then this field will be ignored. In case mount points have to be used as brick directories, use the force option in the volume section.

    Important

    If you only want to create a volume and not set up the backend, then provide the absolute path of the brick directories for each host specified in the [hosts] section under this section, along with the volume section.
    For example:
    [brick_dirs]
    /mnt/rhgs/brick1
    /mnt/rhgs/brick2
  • host-specific-data
    This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. For the hosts (IP/hostname) listed under [hosts] section, each host can have its own specific data. The following are the variables that are supported for hosts.
    * devices - List of devices to use
    * vgs - Custom volume group names
    * pools - Custom pool names
    * lvs - Custom logical volume names
    * mountpoints - Mount points for the logical names
    * brick_dirs - This is the directory which will be used as a brick while creating the volume
    For example:
    [10.0.0.1]
    devices=/dev/vdb,/dev/vda
    vgs=CUSTOM_vg1,CUSTOM_vg2
    pools=CUSTOM_pool1,CUSTOM_pool2
    lvs=CUSTOM_lv1,CUSTOM_lv2
    mountpoints=/rhs/mount1,/rhs/mount2
    brick_dirs=brick1,brick2
  • peer
    This section specifies the configuration for trusted storage pool (TSP) management. This section makes all the hosts specified in the [hosts] section either probe each other to create the trusted storage pool, or detach all of them from the trusted storage pool. The only option in this section is 'manage', which can take the value probe or detach.
    For example:
    [peer]
    manage=probe
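    Similarly, to detach all the hosts from the trusted storage pool, the section can be written as:
    [peer]
    manage=detach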
  • clients
    This section specifies the client hosts and client_mount_points to mount the gluster storage volume created. The 'action' option must be specified for the framework to determine the action to be performed. The options are 'mount' and 'unmount'. The client hosts field is mandatory. If the mount points are not specified, the default /mnt/gluster will be used for all the hosts.
    The option fstype specifies how the gluster volume is to be mounted. The default is glusterfs (FUSE mount). The volume can also be mounted as NFS. Each client can have a different type of volume mount, which has to be specified as a comma-separated list. The following fields are included:
    * action
    * hosts
    * fstype
    * client_mount_points
    For example:
    [clients]
    action=mount
    hosts=10.0.0.10
    fstype=nfs
    nfs-version=3
    client_mount_points=/mnt/rhs
  • volume
    The section specifies the configuration options for the volume. The following fields are included in this section:
    * action
    * volname
    * transport
    * replica
    * replica_count
    * disperse
    * disperse_count
    * redundancy_count
    * force
    • action
      This option specifies what action must be performed on the volume. The choices can be [create, delete, add-brick, remove-brick].
      create: This choice is used to create a volume.
      delete: If the delete choice is used, all the options other than 'volname' will be ignored.
      add-brick or remove-brick: If add-brick or remove-brick is chosen, the additional option bricks, with a comma-separated list of brick names (in the format <hostname>:<brick path>), should be provided. In case of remove-brick, the state option should also be provided, specifying the state of the volume after brick removal.
    • volname
      This option specifies the volume name. The default name is glustervol.

      Note

      • In case of a volume operation, the 'hosts' section can be omitted, provided volname is in the format <hostname>:<volname>, where hostname is the hostname / IP of one of the nodes in the cluster
      • Only single volume creation/deletion/configuration is supported.
    • transport
      This option specifies the transport type. Default is tcp. Options are tcp or rdma or tcp,rdma.
    • replica
      This option specifies whether the volume should be of type replica. Options are yes and no. Default is no. If 'replica' is provided as yes, the 'replica_count' should be provided.
    • disperse
      This option specifies if the volume should be of type disperse. Options are yes and no. Default is no.
    • disperse_count
      This field is optional even if 'disperse' is yes. If not specified, the number of bricks specified in the command line is taken as the disperse_count value.
    • redundancy_count
      If this value is not specified, and if 'disperse' is yes, its default value is computed so that it generates an optimal configuration.
    • force
      This is an optional field and can be used during volume creation to forcefully create the volume.
    For example:
    [volume]
    action=create
    volname=glustervol
    transport=tcp,rdma
    replica=yes
    replica_count=3
    force=yes
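    Similarly, a dispersed volume could be described using the disperse options listed above; a minimal sketch (the counts are illustrative and must match the number of bricks used):
    [volume]
    action=create
    volname=glustervol
    transport=tcp
    disperse=yes
    disperse_count=6
    redundancy_count=2
    force=yes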
  • backend-setup
    Available in gdeploy 2.0. This section sets up the backend for use with a GlusterFS volume. If more than one backend setup is required, it can be done by numbering the sections, like [backend-setup1], [backend-setup2], ...
    backend-setup section supports the following variables:
    • devices: This replaces the [pvs] section in gdeploy 1.x. devices variable lists the raw disks which should be used for backend setup. For example:
      [backend-setup]
      devices=sda,sdb,sdc
      This is a mandatory field.
    • vgs: This is an optional variable. This variable replaces the [vgs] section in gdeploy 1.x. The vgs variable lists the names to be used while creating volume groups. The number of VG names should match the number of devices, or it can be left blank, in which case gdeploy will generate names for the VGs. For example:
      [backend-setup]
      devices=sda,sdb,sdc
      vgs=custom_vg1,custom_vg2,custom_vg3
      A pattern can be provided for the VGs, such as custom_vg{1..3}, which will create three VGs:
      [backend-setup]
      devices=sda,sdb,sdc
      vgs=custom_vg{1..3}
    • pools: This is an optional variable. The variable replaces the [pools] section in gdeploy 1.x. pools lists the thin pool names for the volume.
      [backend-setup]
      devices=sda,sdb,sdc
      vgs=custom_vg1,custom_vg2,custom_vg3
      pools=custom_pool1,custom_pool2,custom_pool3
      Similar to VGs, a pattern can be provided for the thin pool names, for example custom_pool{1..3}.
    • lvs: This is an optional variable. This variable replaces the [lvs] section in gdeploy 1.x. lvs lists the logical volume name for the volume.
      [backend-setup]
      devices=sda,sdb,sdc
      vgs=custom_vg1,custom_vg2,custom_vg3
      pools=custom_pool1,custom_pool2,custom_pool3
      lvs=custom_lv1,custom_lv2,custom_lv3
      Patterns for LV can be provided similar to vg. For example custom_lv{1..3}.
    • mountpoints: This variable deprecates the [mountpoints] section in gdeploy 1.x. Mountpoints lists the mount points where the logical volumes should be mounted. Number of mount points should be equal to the number of logical volumes. For example:
      [backend-setup]
      devices=sda,sdb,sdc
      vgs=custom_vg1,custom_vg2,custom_vg3
      pools=custom_pool1,custom_pool2,custom_pool3
      lvs=custom_lv1,custom_lv2,custom_lv3
      mountpoints=/gluster/data1,/gluster/data2,/gluster/data3
    • ssd - This variable is set if caching has to be added. For example, the backend setup with an ssd for caching should be:
      [backend-setup]
      ssd=sdc
      vgs=RHS_vg1
      datalv=lv_data
      chachedatalv=lv_cachedata:1G
      chachemetalv=lv_cachemeta:230G

      Note

      Specifying the name of the data LV is necessary while adding an SSD. Make sure the datalv is already created. Otherwise, ensure that it is created in one of the earlier `backend-setup’ sections.
  • PV
    Available in gdeploy 2.0. If the user needs to have more control over setting up the backend, and does not want to use the backend-setup section, then the pv, vg, and lv modules are to be used. The pv module supports the following variables.
    • action: Supports two values `create’ and `resize’
    • devices: The list of devices to use for pv creation.
    The `action’ and `devices’ variables are mandatory. When the `resize’ value is used for action, two more variables, `expand’ and `shrink’, can be set. See the examples below.
    Example 1: Creating a few physical volumes
    [pv]
    action=create
    devices=vdb,vdc,vdd
    Example 2: Creating a few physical volumes on a host
    [pv:10.0.5.2]
    action=create
    devices=vdb,vdc,vdd
    Example 3: Expanding an already created pv
    [pv]
    action=resize
    devices=vdb
    expand=yes
    Example 4: Shrinking an already created pv
    [pv]
    action=resize
    devices=vdb
    shrink=100G
  • VG
    Available in gdeploy 2.0. This module is used to create and extend volume groups. The vg module supports the following variables.
    • action - Action can be one of create or extend.
    • pvname - PVs to use to create the volume group. For more than one PV, use comma-separated values.
    • vgname - The name of the vg. If no name is provided GLUSTER_vg will be used as default name.
    • one-to-one - If set to yes, one-to-one mapping will be done between pv and vg.
    If action is set to extend, the vg will be extended to include pv provided.
    Example 1: Create a vg named images_vg with two PVs
    [vg]
    action=create
    vgname=images_vg
    pvname=sdb,sdc
    Example 2: Create two vgs named rhgs_vg1 and rhgs_vg2 with two PVs
    [vg]
    action=create
    vgname=rhgs_vg
    pvname=sdb,sdc
    one-to-one=yes
    Example 3: Extend an existing vg with the given disk.
    [vg]
    action=extend
    vgname=rhgs_images
    pvname=sdc
  • LV
    Available in gdeploy 2.0. This module is used to create, setup-cache, and convert logical volumes. The lv module supports the following variables:
    action - The action variable allows four values: `create’, `setup-cache’, `convert’, and `change’. If the action is 'create', the following options are supported:
    • lvname - The name of the logical volume. This is an optional field; the default is GLUSTER_lv.
    • poolname - Name of the thin pool volume. This is an optional field; the default is GLUSTER_pool.
    • lvtype - Type of the logical volume to be created. Allowed values are `thin’ and `thick’. This is an optional field; the default is thick.
    • size - Size of the logical volume. The default is to take all available space on the vg.
    • extent - Extent size, default is 100%FREE
    • force - Force lv create, do not ask any questions. Allowed values `yes’, `no’. This is an optional field, default is yes.
    • vgname - Name of the volume group to use.
    • pvname - Name of the physical volume to use.
    • chunksize - Size of chunk for snapshot.
    • poolmetadatasize - Sets the size of the pool's metadata logical volume.
    • virtualsize - Creates a thinly provisioned device or a sparse device of the given size.
    • mkfs - Creates a filesystem of the given type. Default is to use xfs.
    • mkfs-opts - mkfs options.
    • mount - Mount the logical volume.
    If the action is setup-cache, the below options are supported:
    • ssd - Name of the ssd device, for example sda/vda/…, to set up the cache.
    • vgname - Name of the volume group.
    • poolname - Name of the pool.
    • cache_meta_lv - Due to requirements from dm-cache (the kernel driver), LVM further splits the cache pool LV into two devices - the cache data LV and cache metadata LV. Provide the cache_meta_lv name here.
    • cache_meta_lvsize - Size of the cache meta lv.
    • cache_lv - Name of the cache data lv.
    • cache_lvsize - Size of the cache data.
    • force - Force
    If the action is convert, the below options are supported:
    • lvtype - Type of the LV. Available options are thin and thick.
    • force - Force the lvconvert, default is yes.
    • vgname - Name of the volume group.
    • poolmetadata - Specifies cache or thin pool metadata logical volume.
    • cachemode - Allowed values writeback, writethrough. Default is writethrough.
    • cachepool - This argument is necessary when converting a logical volume to a cache LV. Name of the cachepool.
    • lvname - Name of the logical volume.
    • chunksize - Gives the size of chunk for snapshot, cache pool and thin pool logical volumes. Default unit is in kilobytes.
    • poolmetadataspare - Controls creation and maintenance of the pool metadata spare logical volume that will be used for automated pool recovery.
    • thinpool - Specifies or converts a logical volume into a thin pool's data volume. The volume's name or path has to be given.
    If the action is change, the below options are supported:
    • lvname - Name of the logical volume.
    • vgname - Name of the volume group.
    • zero - Set zeroing mode for thin pool.
    Example 1: Create a thin pool
    [lv]
    action=create
    vgname=RHGS_vg1
    poolname=lvthinpool
    lvtype=thinpool
    poolmetadatasize=10MB
    chunksize=1024k
    size=30GB
    Example 2: Create a thick LV
    [lv]
    action=create
    vgname=RHGS_vg1
    lvname=engine_lv
    lvtype=thick
    size=10GB
    mount=/rhgs/brick1
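    Example 3: Set up cache on an existing thin pool. This is a minimal sketch that only uses the setup-cache variables listed above; the SSD device, pool names, and sizes are illustrative and must be adapted to the actual setup.
    [lv]
    action=setup-cache
    ssd=sdc
    vgname=RHGS_vg1
    poolname=lvthinpool
    cache_lv=lv_cache
    cache_lvsize=220GB
    cache_meta_lv=lv_cachemeta
    cache_meta_lvsize=1GB
    force=yes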
    If there is more than one LV, then the LVs can be created by numbering the LV sections, such as [lv1], [lv2], …
  • RH-subscription
    Available in gdeploy 2.0. This module is used to subscribe, unsubscribe, attach, enable repos etc. The RH-subscription module allows the following variables:
    If the action is register, the following options are supported:
    • username/activationkey: Username or activationkey.
    • password/activationkey: Password or activation key
    • auto-attach: true/false
    • pool: Name of the pool.
    • repos: Repos to subscribe to.
    • disable-repos: Repo names to disable. Leaving this option blank will disable all the repos.
    • If the action is attach-pool the following options are supported:
      pool - Pool name to be attached.
    • If the action is enable-repos the following options are supported:
      repos - List of comma separated repos that are to be subscribed to.
    • If the action is disable-repos the following options are supported:
      repos - List of comma-separated repos that are to be disabled.
    • If the action is unregister the systems will be unregistered.
    Example 1: Subscribe to Red Hat Subscription network:
    [RH-subscription1]
    action=register
    username=qa@redhat.com
    password=<passwd>
    pool=<pool>
    Example 2: Disable all the repos:
    [RH-subscription2]
    action=disable-repos
    repos=
    Example 3: Enable a few repos
    [RH-subscription3]
    action=enable-repos
    repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rhel-7-server-rhev-mgmt-agent-rpms
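    Example 4: Attach a pool. A minimal sketch using the attach-pool action described above; the pool ID is a placeholder:
    [RH-subscription4]
    action=attach-pool
    pool=<pool>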
  • yum
    Available in gdeploy 2.0. This module is used to install or remove rpm packages. With the yum module, repos can also be added at install time.
    The action variable allows two values `install’ and `remove’.
    If the action is install the following options are supported:
    • packages - Comma separated list of packages that are to be installed.
    • repos - The repositories to be added.
    • gpgcheck - yes/no values have to be provided.
    • update - Whether yum update has to be initiated.
    If the action is remove then only one option has to be provided:
    • remove - The comma separated list of packages to be removed.
    For example:
    [yum1]
    action=install
    gpgcheck=no
    # Repos should be a URL; e.g.: http://repo-pointing-glusterfs-builds
    repos=<glusterfs.repo>,<vdsm.repo>
    packages=vdsm,vdsm-gluster,ovirt-hosted-engine-setup,screen,gluster-nagios-addons,xauth
    update=yes
    Install a package on a particular host.
    [yum2:host1]
    action=install
    gpgcheck=no
    packages=rhevm-appliance
  • shell
    Available in gdeploy 2.0. This module allows the user to run shell commands on the remote nodes.
    Currently, shell provides a single action variable with the value execute, and a command variable that takes any valid shell command as its value.
    The command below will execute vdsm-tool on all the nodes:
    [shell]
    action=execute
    command=vdsm-tool configure --force
  • update-file
    Available in gdeploy 2.0. The update-file module allows users to copy a file, edit a line in a file, or add new lines to a file. The action variable can be any of copy, edit, or add.
    When the action variable is set to copy, the following variables are supported.
    • src - The source path of the file to be copied from.
    • dest - The destination path on the remote machine to where the file is to be copied to.
    When the action variable is set to edit, the following variables are supported.
    • dest - The destination file name which has to be edited.
    • replace - A regular expression that matches the line to be replaced.
    • line - The text that replaces the matched line.
    When the action variable is set to add, the following variables are supported.
    • dest - File on the remote machine to which a line has to be added.
    • line - Line which has to be added to the file. Line will be added towards the end of the file.
    Example 1: Copy a file to a remote machine.
    [update-file]
    action=copy
    src=/tmp/foo.cfg
    dest=/etc/nagios/nrpe.cfg
    Example 2: Edit a line on the remote machine. In the example below, lines that contain allowed_hosts will be replaced with allowed_hosts=host.redhat.com
    [update-file]
    action=edit
    dest=/etc/nagios/nrpe.cfg
    replace=allowed_hosts
    line=allowed_hosts=host.redhat.com
    Example 3: Add a line to the end of a file
    [update-file]
    action=add
    dest=/etc/ntp.conf
    line=server clock.redhat.com iburst
  • service
    Available in gdeploy 2.0. The service module allows the user to start, stop, restart, reload, enable, or disable a service. The action variable specifies these values.
    When the action variable is set to any of start, stop, restart, reload, enable, or disable, the service variable specifies which service to start, stop, and so on.
    • service - Name of the service to start, stop etc.
    Example: Enable and restart the ntp daemon.
    [service1]
    action=enable
    service=ntpd
    [service2]
    action=restart
    service=ntpd
  • script
    Available in gdeploy 2.0. The script module enables the user to execute a script or binary on the remote machine. The action variable is set to execute. It allows the user to specify two variables, file and args.
    • file - An executable on the local machine.
    • args - Arguments to the above program.
    Example: Execute the script disable-multipath.sh on all the remote nodes listed in the `hosts’ section.
    [script]
    action=execute
    file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh
  • firewalld
    Available in gdeploy 2.0. The firewalld module allows the user to manipulate firewall rules. The action variable supports two values, `add’ and `delete’. Both add and delete support the following variables:
    • ports/services - The ports or services to add to or remove from the firewall.
    • permanent - Whether to make the entry permanent. Allowed values are true/false.
    • zone - The zone to operate on. The default zone is public.
    For example:
    [firewalld]
    action=add
    ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
    services=glusterfs
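    Similarly, rules can be removed with the delete action. A minimal sketch, assuming the port was added earlier:
    [firewalld]
    action=delete
    ports=5666/tcp
    permanent=true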