13.5. iSCSI-based Storage Pools

This section covers using iSCSI-based devices to store guest virtual machines. iSCSI offers flexible storage options, such as exporting storage as a block device over the network. The iSCSI devices described here use a Linux-IO (LIO) target, which is a multi-protocol SCSI target for Linux. In addition to iSCSI, LIO also supports Fibre Channel and Fibre Channel over Ethernet (FCoE).
iSCSI (Internet Small Computer System Interface) is a network protocol for sharing storage devices. iSCSI connects initiators (storage clients) to targets (storage servers) by carrying SCSI commands over the IP layer.

13.5.1. Configuring a Software iSCSI Target

iSCSI targets are created with the targetcli package, introduced in Red Hat Enterprise Linux 7, which provides a command set for creating software-backed iSCSI targets.

Procedure 13.4. Creating an iSCSI target

  1. Install the required package

    Install the targetcli package and all dependencies:
    # yum install targetcli
  2. Launch targetcli

    Launch the targetcli command set:
    # targetcli
  3. Create storage objects

    Create three storage objects as follows, using the device created in Section 13.4, “LVM-based Storage Pools”:
    1. Create a block storage object, by changing into the /backstores/block directory and running the following command:
      # create [block-name] [file-path]
      For example:
       # create block1 dev=/dev/vdb1
    2. Create a fileio object, by changing into the fileio directory and running the following command:
      # create [fileioname] [imagename] [image-size]
      For example:
      # create fileio1 /foo.img 50M
    3. Create a ramdisk object by changing into the ramdisk directory, and running the following command:
      # create [ramdiskname] [size]
      For example:
      # create ramdisk1 1M
    4. Remember the names of the disks you created in this step, as you will need them later.
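    Taken together, the sequence for this step might look like the following sketch inside the targetcli shell, using the example object names above:
    /> cd /backstores/block
    /backstores/block> create block1 dev=/dev/vdb1
    /backstores/block> cd /backstores/fileio
    /backstores/fileio> create fileio1 /foo.img 50M
    /backstores/fileio> cd /backstores/ramdisk
    /backstores/ramdisk> create ramdisk1 1M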
  4. Navigate to the /iscsi directory

    Change into the iscsi directory:
    # cd /iscsi
  5. Create iSCSI target

    Create an iSCSI target in one of two ways:
    1. Running create with no additional parameters automatically generates an IQN.
    2. Running create iqn.2010-05.com.example.server1:iscsirhel7guest creates a specific IQN on a specific server.
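    For example, either of the following can be entered at the targetcli prompt from the /iscsi directory; the second form uses the example IQN from this chapter:
    /iscsi> create
    /iscsi> create iqn.2010-05.com.example.server1:iscsirhel7guest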
  6. Define the target portal group (TPG)

    Each iSCSI target needs to have a target portal group (TPG) defined. This example uses the default tpg1, which is the most common configuration, but additional TPGs can be added as well. Make sure you are still in the /iscsi directory, then change into the tpg1 directory under the target created in the previous step:
    /iscsi> cd iqn.2010-05.com.example.server1:iscsirhel7guest/tpg1
  7. Define the portal IP address

    In order to export the block storage over iSCSI, the portals, LUNs, and ACLs must all be configured first.
    The portal includes the IP address and TCP port that the target listens on and that the initiators connect to. iSCSI uses port 3260 by default. To create a portal on this port, enter the following command from the tpg1 directory:
    # portals/ create
    This command configures all available IP addresses to listen on port 3260. To have only one specific IP address listen on the port, run portals/ create [ipaddress], and only the specified IP address will be configured to listen on port 3260.
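    For example, either of the following forms can be entered from the tpg1 directory; the second restricts listening to 192.168.122.1, the example address used later in this chapter:
    /iscsi/iqn.20...csirhel7guest/tpg1> portals/ create
    /iscsi/iqn.20...csirhel7guest/tpg1> portals/ create 192.168.122.1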
  8. Configure the LUNs and assign the storage objects to the fabric

    This step uses the storage objects created in step 3 of this procedure. Make sure you change into the luns directory for the TPG created in step 6, for example /iscsi/iqn.2010-05.com.example.server1:iscsirhel7guest/tpg1/luns.
    1. Assign the first LUN to the ramdisk as follows:
      # create /backstores/ramdisk/ramdisk1
    2. Assign the second LUN to the block disk as follows:
      # create /backstores/block/block1
    3. Assign the third LUN to the fileio disk as follows:
      # create /backstores/fileio/fileio1
    4. Listing the resulting LUNs should resemble this screen output:
      /iscsi/iqn.20...csirhel7guest/tpg1 ls
      
      o- tpg1 ............................................................................[enabled, auth]
         o- acls..................................................................................[0 ACL]
         o- luns.................................................................................[3 LUNs]
         | o- lun0.....................................................................[ramdisk/ramdisk1]
         | o- lun1.............................................................[block/block1 (/dev/vdb1)]
         | o- lun2............................................................[fileio/fileio1 (/foo.img)]
         o- portals............................................................................[1 Portal]
           o- IP-ADDRESS:3260........................................................................[OK]
      
  9. Creating ACLs for each initiator

    This step allows for the creation of authentication when the initiator connects, and it also allows for restriction of specified LUNs to specified initiators. Both targets and initiators have unique names. iSCSI initiators use an IQN.
    1. To find the IQN of the iSCSI initiator, enter the following command on the initiator machine:
      # cat /etc/iscsi/initiatorname.iscsi
      Use this IQN to create the ACLs.
    2. Change to the acls directory.
    3. Run the command create [iqn], or to create specific ACLs, refer to the following example:
      # create iqn.2010-05.com.example.foo:888
      Alternatively, to configure the kernel target to use a single user ID and password for all initiators, and enable all initiators to log in with that user ID and password, use the following commands (replacing userid and password):
      # set auth userid=redhat
      # set auth password=password123
      # set attribute authentication=1
      # set attribute generate_node_acls=1
  10. Make the configuration persistent with the saveconfig command. This overwrites the previous boot settings. Alternatively, exiting the targetcli shell with the exit command saves the target configuration by default.
  11. Enable the service with systemctl enable target.service to apply the saved settings on next boot.
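    For example, the final two steps might look like the following sketch, assuming you are still in the targetcli shell:
    /> saveconfig
    /> exit
    # systemctl enable target.service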

Procedure 13.5. Optional steps

  1. Create LVM volumes

    LVM volumes are useful for iSCSI backing images. LVM snapshots and re-sizing can be beneficial for guest virtual machines. This example creates an LVM image named virtimage1 on a new volume group named virtstore on a RAID5 array for hosting guest virtual machines with iSCSI.
    1. Create the RAID array

      Creating software RAID5 arrays is covered by the Red Hat Enterprise Linux 7 Storage Administration Guide.
    2. Create the LVM volume group

      Create a logical volume group named virtstore with the vgcreate command.
      # vgcreate virtstore /dev/md1
    3. Create a LVM logical volume

      Create a logical volume named virtimage1 on the virtstore volume group with a size of 20 GB using the lvcreate command.
      # lvcreate --size 20G -n virtimage1 virtstore
      The new logical volume, virtimage1, is ready to use for iSCSI.
      The new logical volume, virtimage1, is ready to use for iSCSI.

      Important

      Using LVM volumes for kernel target backstores can cause issues if the initiator also partitions the exported volume with LVM. This can be solved by adding global_filter = ["r|^/dev/vg0|"] to /etc/lvm/lvm.conf.
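      For illustration, a minimal sketch of the corresponding /etc/lvm/lvm.conf stanza; the global_filter setting belongs in the devices section, and vg0 is a placeholder for the volume group being exported (virtstore in this example):
      devices {
          # Reject the exported volume group so the host does not scan
          # LVM metadata written by the iSCSI initiator.
          global_filter = [ "r|^/dev/vg0|" ]
      }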
  2. Optional: Test discovery

    Test whether the new iSCSI device is discoverable.
    # iscsiadm --mode discovery --type sendtargets --portal server1.example.com
    127.0.0.1:3260,1 iqn.2010-05.com.example.server1:iscsirhel7guest
  3. Optional: Test attaching the device

    Attach the new device (iqn.2010-05.com.example.server1:iscsirhel7guest) to determine whether the device can be attached.
    # iscsiadm -d2 -m node --login
    iscsiadm: Max file limits 1024 1024
    
    Logging in to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 10.0.0.1,3260]
    Login to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 10.0.0.1,3260] successful.
    
  4. Detach the device.
    # iscsiadm -d2 -m node --logout
    iscsiadm: Max file limits 1024 1024
    
    Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 10.0.0.1,3260]
    Logout of [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel7guest, portal: 10.0.0.1,3260] successful.
An iSCSI device is now ready to use for virtualization.

13.5.2. Creating an iSCSI Storage Pool in virt-manager

This procedure covers creating a storage pool with an iSCSI target in virt-manager.

Procedure 13.6. Adding an iSCSI device to virt-manager

  1. Open the host machine's storage details

    1. In virt-manager, click the Edit menu and select Connection Details from the drop-down menu.
    2. Click the Storage tab.
      Storage menu

      Figure 13.15. Storage menu

  2. Add a new pool (Step 1 of 2)

    Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
    Add an iSCSI storage pool name and type

    Figure 13.16. Add an iSCSI storage pool name and type

    Choose a name for the storage pool, change the Type to iSCSI, and press Forward to continue.
  3. Add a new pool (Step 2 of 2)

    You will need the information you used in Section 13.5, “iSCSI-based Storage Pools” to complete the fields in this menu.
    1. Enter the iSCSI source and target. The Format option is not available as formatting is handled by the guest virtual machines. It is not advised to edit the Target Path. The default target path value, /dev/disk/by-path/, adds the drive path to that directory. The target path should be the same on all host physical machines for migration.
    2. Enter the host name or IP address of the iSCSI target. This example uses host1.example.com.
    3. In the Source IQN field, enter the iSCSI target IQN. This is the target IQN you created in Procedure 13.4, “Creating an iSCSI target”. This example uses iqn.2010-05.com.test_example.server1:iscsirhel7guest.
    4. (Optional) Check the Initiator IQN check box to enter the IQN for the initiator. This example uses iqn.2010-05.com.example.host1:iscsirhel7.
    5. Click Finish to create the new storage pool.
    Create an iSCSI storage pool

    Figure 13.17. Create an iSCSI storage pool

13.5.3. Deleting a Storage Pool Using virt-manager

This procedure demonstrates how to delete a storage pool.
  1. To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the stop button at the bottom of the Storage window.
    Deleting a storage pool

    Figure 13.18. Deleting a storage pool

  2. Delete the storage pool by clicking the delete button. This button is only enabled if you stop the storage pool first.

13.5.4. Creating an iSCSI-based Storage Pool with virsh

  1. Optional: Secure the storage pool

    If desired, set up authentication with the steps in Section 13.5.5, “Securing an iSCSI Storage Pool”.
  2. Define the storage pool

    Storage pool definitions can be created with the virsh command-line tool. Creating storage pools with virsh is useful for system administrators using scripts to create multiple storage pools.
    The virsh pool-define-as command has several parameters which are accepted in the following format:
    virsh pool-define-as name type source-host source-path source-dev source-name target
    The parameters are explained as follows:
    type
    defines this pool as a particular type, iSCSI for example
    name
    sets the name for the storage pool; must be unique
    source-host and source-dev
    the host name of the iSCSI server and the iSCSI IQN, respectively
    source-path and source-name
    these parameters are not required for iSCSI-based pools; use a - character to leave the field blank.
    target
    defines the location for mounting the iSCSI device on the host machine
    The following example creates an iSCSI-based storage pool named iscsirhel7pool using the virsh pool-define-as command:
    # virsh pool-define-as --name iscsirhel7pool --type iscsi \
         --source-host server1.example.com \
         --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest \
         --target /dev/disk/by-path
    Pool iscsirhel7pool defined
  3. Verify the storage pool is listed

    Verify the storage pool object is created correctly and the state is inactive.
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    iscsirhel7pool       inactive   no
  4. Optional: Establish a direct connection to the iSCSI storage pool

    This step is optional: it allows the guest virtual machine to connect directly to the iSCSI storage pool rather than through the host machine. To establish the direct connection, edit the domain XML for the virtual machine so that the disk definition resembles this example:
       ...
       <disk type='volume' device='disk'>
          <driver name='qemu'/>
          <source pool='iscsi' volume='unit:0:0:1' mode='direct'/>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
       </disk>
          ... 
    
    

    Figure 13.19. Disk type element XML example

    Note

    The same iSCSI storage pool can be used for a LUN or a disk, by specifying the disk device as either a disk or lun. For XML configuration examples of adding SCSI LUN-based storage to a guest, see Section 14.5.3, “Adding SCSI LUN-based Storage to a Guest”.
    Additionally, the source mode can be specified as mode='host' for a connection to the host machine.
    If you have configured authentication on the iSCSI server as detailed in Procedure 13.4, “Creating an iSCSI target”, then the following XML used as a <disk> sub-element will provide the authentication credentials for the disk. Section 13.5.5, “Securing an iSCSI Storage Pool” describes how to configure the libvirt secret.
    <auth type='chap' username='redhat'>
        <secret usage='iscsirhel7secret'/>
    </auth>
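     For reference, a sketch combining the two examples above, showing where this <auth> sub-element sits within the <disk> definition from Figure 13.19:
     <disk type='volume' device='disk'>
        <driver name='qemu'/>
        <source pool='iscsi' volume='unit:0:0:1' mode='direct'/>
        <auth type='chap' username='redhat'>
            <secret usage='iscsirhel7secret'/>
        </auth>
        <target dev='vda' bus='virtio'/>
     </disk>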
    
  5. Start the storage pool

    Use the virsh pool-start command to start the storage pool. This allows the storage pool to be used for volumes and guest virtual machines.
    # virsh pool-start iscsirhel7pool
    Pool iscsirhel7pool started
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    iscsirhel7pool       active     no
    
  6. Turn on autostart

    Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
    # virsh pool-autostart iscsirhel7pool
    Pool iscsirhel7pool marked as autostarted
    Verify that the iscsirhel7pool pool has autostart enabled:
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    iscsirhel7pool       active     yes
    
  7. Verify the storage pool configuration

    Verify the storage pool was created correctly, the sizes report correctly, and the state reports as running.
    # virsh pool-info iscsirhel7pool
    Name:           iscsirhel7pool
    UUID:           afcc5367-6770-e151-bcb3-847bc36c5e28
    State:          running
    Persistent:     unknown
    Autostart:      yes
    Capacity:       100.31 GB
    Allocation:     0.00
    Available:      100.31 GB
    
An iSCSI-based storage pool called iscsirhel7pool is now available.
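As an optional check, the LUNs that libvirt exposes as storage volumes in the new pool (with names of the form unit:0:0:1, as used in Figure 13.19) can be listed with the virsh vol-list command:
# virsh vol-list iscsirhel7pool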

13.5.5. Securing an iSCSI Storage Pool

User name and password parameters can be configured with virsh to secure an iSCSI storage pool. This can be configured before or after the pool is defined, but the pool must be started for the authentication settings to take effect.

Procedure 13.7. Configuring authentication for a storage pool with virsh

  1. Create a libvirt secret file

    Create a libvirt secret XML file called secret.xml, using the following example:
    # cat secret.xml
    <secret ephemeral='no' private='yes'>
        <description>Passphrase for the iSCSI example.com server</description>
        <auth type='chap' username='redhat'/>
        <usage type='iscsi'>
            <target>iscsirhel7secret</target>
        </usage>
    </secret>
    
  2. Define the secret file

    Define the secret.xml file with virsh:
    # virsh secret-define secret.xml
  3. Verify the secret file's UUID

    Verify the UUID in secret.xml:
    # virsh secret-list
    
     UUID                                  Usage
    --------------------------------------------------------------------------------
     2d7891af-20be-4e5e-af83-190e8a922360  iscsi iscsirhel7secret
  4. Assign a secret to the UUID

    Assign a secret to that UUID, using the following command syntax as an example:
    # MYSECRET=`printf %s "password123" | base64`
    # virsh secret-set-value 2d7891af-20be-4e5e-af83-190e8a922360 $MYSECRET
    This ensures the CHAP username and password are set in a libvirt-controlled secret list.
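    As an optional check, the stored value can be read back; this sketch uses the example UUID from step 3 and relies on virsh secret-get-value printing the base64-encoded value:
    # virsh secret-get-value 2d7891af-20be-4e5e-af83-190e8a922360 | base64 --decode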
  5. Add an authentication entry to the storage pool

    Modify the <source> entry in the storage pool's XML definition using the virsh pool-edit command and add an <auth> element, specifying the authentication type, username, and secret usage.
    The following shows an example of a storage pool XML definition with authentication configured:
    # cat iscsirhel7pool.xml
      <pool type='iscsi'>
        <name>iscsirhel7pool</name>
          <source>
             <host name='192.168.122.1'/>
             <device path='iqn.2010-05.com.example.server1:iscsirhel7guest'/>
             <auth type='chap' username='redhat'>
                <secret usage='iscsirhel7secret'/>
             </auth>
          </source>
        <target>
          <path>/dev/disk/by-path</path>
        </target>
      </pool>
    

    Note

    The <auth> sub-element exists in different locations within the guest XML's <pool> and <disk> elements. For a <pool>, <auth> is specified within the <source> element, as this describes where to find the pool sources, since authentication is a property of some pool sources (iSCSI and RBD). For a <disk>, which is a sub-element of a domain, the authentication to the iSCSI or RBD disk is a property of the disk. For an example of <disk> configured in the guest XML, see Section 13.5.4, “Creating an iSCSI-based Storage Pool with virsh”.
  6. Activate the changes in the storage pool

    The storage pool must be started to activate these changes.
    If the storage pool has not yet been started, follow the steps in Section 13.5.4, “Creating an iSCSI-based Storage Pool with virsh” to define and start the storage pool.
    If the pool has already been started, enter the following commands to stop and restart the storage pool:
    # virsh pool-destroy iscsirhel7pool
    # virsh pool-start iscsirhel7pool

13.5.6. Deleting a Storage Pool Using virsh

The following demonstrates how to delete a storage pool using virsh:
  1. To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it.
    # virsh pool-destroy iscsirhel7pool
  2. Remove the storage pool's definition
    # virsh pool-undefine iscsirhel7pool