Chapter 5. Configure performance improvements

Some deployments benefit from additional configuration to achieve optimal performance. This chapter describes the recommended configuration for those deployments.

5.1. Improving volume performance by changing shard size

The default value of the shard-block-size parameter changed from 4MB to 64MB between Red Hat Hyperconverged Infrastructure for Virtualization versions 1.0 and 1.1. This means that all new volumes are created with a shard-block-size value of 64MB. However, existing volumes retain the original shard-block-size value of 4MB.

There is no safe way to modify the shard-block-size value on volumes that contain data. Because the shard block size applies only to writes that occur after the value is set, changing the value on a volume that contains data produces a mix of shard block sizes within the volume, which degrades performance.
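
You can check the shard block size currently in effect on an existing volume by querying the volume option directly with the gluster CLI; for example, on any host in the cluster (substitute your volume name for <volname>):

    # gluster volume get <volname> features.shard-block-size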

This section shows you how to safely modify the shard block size on an existing volume after upgrading to Red Hat Hyperconverged Infrastructure for Virtualization 1.1 or higher, in order to take advantage of the performance benefits of a larger shard size.

5.1.1. Prerequisites

  • A thin pool with sufficient free space to create new logical volumes that are large enough to contain all existing virtual machines.
  • A complete backup of your data. For details on how to achieve this, see Configuring backup and recovery options.

5.1.2. Safely changing the shard block size parameter value

A. Create a new storage domain

  1. Create new thin provisioned logical volumes

    1. For an arbitrated replicated volume:

      1. Create an lv_create_arbitrated.conf file with the following contents:

        [lv10:{<Gluster_Server_IP1>,<Gluster_Server_IP2>}]
        action=create
        lvname=<lv_name>
        ignore_lv_errors=no
        vgname=<volgroup_name>
        mount=<brick_mountpoint>
        lvtype=thinlv
        poolname=<thinpool_name>
        virtualsize=<size>
        
        [lv11:<Gluster_Server_IP3>]
        action=create
        lvname=<lv_name>
        ignore_lv_errors=no
        vgname=<volgroup_name>
        mount=<brick_mountpoint>
        lvtype=thinlv
        poolname=<thinpool_name>
        virtualsize=<size>
      2. Run the following command:

        # gdeploy -c lv_create_arbitrated.conf
    2. For a normal replicated volume:

      1. Create an lv_create_replicated.conf file with the following contents:

        [lv3]
        action=create
        lvname=<lv_name>
        ignore_lv_errors=no
        vgname=<volgroup_name>
        mount=<brick_mountpoint>
        lvtype=thinlv
        poolname=<thinpool_name>
        virtualsize=<size>
      2. Run the following command:

        # gdeploy -c lv_create_replicated.conf
  2. Create new gluster volumes on the new logical volumes

    1. For an arbitrated replicated volume

      1. Create a gluster_arb_volume.conf file with the following contents:

        [volume4]
        action=create
        volname=data_one
        transport=tcp
        replica=yes
        replica_count=3
        key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size,server.ssl,client.ssl,auth.ssl-allow
        value=virt,36,36,30,on,off,enable,64MB,on,on,"<Gluster_Server_IP1>;<Gluster_Server_IP2>;<Gluster_Server_IP3>"
        brick_dirs=<Gluster_Server_IP1>:<brick1_mountpoint>,<Gluster_Server_IP2>:<brick2_mountpoint>,<Gluster_Server_IP3>:<brick3_mountpoint>
        ignore_volume_errors=no
        arbiter_count=1
      2. Run the following command:

        # gdeploy -c gluster_arb_volume.conf
    2. For a normal replicated volume:

      1. Create a gluster_rep_volume.conf file with the following contents:

        [volume2]
        action=create
        volname=data
        transport=tcp
        replica=yes
        replica_count=3
        key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
        value=virt,36,36,30,on,off,enable,64MB
        brick_dirs=<Gluster_Server_IP1>:<brick1_mountpoint>,<Gluster_Server_IP2>:<brick2_mountpoint>,<Gluster_Server_IP3>:<brick3_mountpoint>
        ignore_volume_errors=no
      2. Run the following command:

        # gdeploy -c gluster_rep_volume.conf
  3. Create a new storage domain using the new gluster volumes

    1. Log in to the Administration Portal.
    2. Click Storage → Domains and then click New Domain.
    3. Set the Storage Type to GlusterFS and provide a Name for the domain.
    4. Check the Use managed gluster volume option and select the volume to use.
    5. Click OK to save.
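
Before migrating any data, it is worth confirming that the new logical volumes and the new gluster volume were created as expected and that the new volume uses the 64MB shard block size. A minimal check, substituting your own volume names for the placeholders:

    # lvs -o lv_name,vg_name,pool_lv,lv_size
    # gluster volume info <new_volname>
    # gluster volume get <new_volname> features.shard-block-size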

B. Migrate any virtual machine templates

If your virtual machines are created from templates, copy each template to the new storage domain.

In Red Hat Virtualization Manager, click Compute → Templates. For each template to migrate:

  1. Click the name of the template you want to migrate.
  2. Click the Disks subtab.
  3. Select the template's disk and click Copy. The Copy Disk(s) dialog appears.
  4. Select the new storage domain as the target domain and click OK.

C. Migrate virtual machine disks to the new storage domain

For each virtual machine:

  1. Click Storage → Disks.
  2. Select the disk to move and click Move. The Move Disk(s) dialog opens.
  3. Select the new storage domain as the target domain and click OK.

You can monitor progress in the Tasks tab.

D. Verify that disk images migrated correctly

  1. Click Storage → Disks.
  2. For each disk:

    1. Select the disk to check.
    2. Click the Storage subtab.
    3. Verify that the domain listed is the new storage domain.
Important

Do not skip this step. There is no way to retrieve a disk image after a domain is detached and removed, so be sure that all disk images have correctly migrated before you move on.

E. Remove and reclaim the old storage domain

  1. Move the old storage domain into maintenance mode.
  2. Detach the old storage domain from the data center.
  3. Remove the old storage domain from the data center.
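
After the old storage domain has been removed, the gluster volume that backed it and the logical volumes underneath it are no longer needed. The following commands are a sketch of how the space could be reclaimed, assuming the old volume is named <old_volname> and its bricks were created on <old_lv_name> in <volgroup_name>; run the umount and lvremove commands on each host that held a brick of the old volume:

    # gluster volume stop <old_volname>
    # gluster volume delete <old_volname>
    # umount <brick_mountpoint>
    # lvremove <volgroup_name>/<old_lv_name>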

5.2. Configuring a logical volume cache (lvmcache) for improved performance

If your main storage devices are not Solid State Disks (SSDs), Red Hat recommends configuring a logical volume cache (lvmcache) to achieve the required performance for Red Hat Hyperconverged Infrastructure for Virtualization deployments.
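
If you are not sure whether a device is rotational or solid-state, you can check the ROTA column reported by lsblk, where 1 indicates a rotational (spinning) disk and 0 indicates an SSD. For example:

    # lsblk -d -o NAME,ROTA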

  1. Create the gdeploy configuration file

    Create a gdeploy configuration file named lvmcache.conf that contains at least the following information. Note that the ssd value should be the device name, not the device path (for example, use sdb not /dev/sdb).

    Example lvmcache.conf file

    [hosts]
    <Gluster_Network_NodeA>
    <Gluster_Network_NodeB>
    <Gluster_Network_NodeC>
    
    [lv1]
    action=setup-cache
    ssd=sdb
    vgname=gluster_vg_sdb
    poolname=gluster_thinpool_sdb
    cache_lv=lvcache
    cache_lvsize=220GB
    #cachemode=writethrough

    Important

    Ensure that disks specified as part of this deployment process do not have any partitions or labels.

    Important

    The default cache mode is writethrough, but writeback mode is also supported. To avoid the potential for data loss when implementing lvmcache in writeback mode, two separate SSD/NVMe devices are highly recommended. Configuring the two devices in a RAID-1 configuration (via software or hardware) significantly reduces the potential for data loss from lost writes.

  2. Run gdeploy

    Run the following command to apply the configuration specified in lvmcache.conf.

    # gdeploy -c lvmcache.conf
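
After gdeploy completes, you can confirm on each host that a cache was attached to the thin pool. A minimal check, using the volume group name from the example lvmcache.conf above:

    # lvs -a -o lv_name,vg_name,pool_lv,lv_attr gluster_vg_sdb

Cache-related volumes are indicated by a "C" in the first character of the attribute flags, and the hidden cache sub-volumes appear with their names in square brackets when the -a option is used.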

For further information about lvmcache configuration, see Red Hat Enterprise Linux 7 LVM Administration: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/LV.html#lvm_cache_volume_creation.
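
As a rough illustration only, and not necessarily the exact commands that gdeploy runs, an equivalent cache configuration in plain LVM terms (following the guide linked above, using the volume group and pool names from the example lvmcache.conf, and a hypothetical fast device /dev/sdc) would look something like this:

    # pvcreate /dev/sdc
    # vgextend gluster_vg_sdb /dev/sdc
    # lvcreate --type cache-pool -L 220G -n lvcache gluster_vg_sdb /dev/sdc
    # lvconvert --type cache --cachepool gluster_vg_sdb/lvcache --cachemode writethrough gluster_vg_sdb/gluster_thinpool_sdb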