14.5. Adding Storage Devices to Guests
This section covers adding storage devices to a guest. Additional storage can be added as needed. The following types of storage are discussed in this section:
- File based storage. Refer to Section 14.5.1, “Adding File-based Storage to a Guest”.
- Block devices - including CD-ROM, DVD and floppy devices. Refer to Section 14.5.2, “Adding Hard Drives and Other Block Devices to a Guest”.
- SCSI controllers and devices. If your host physical machine can accommodate it, up to 100 SCSI controllers can be added to any guest virtual machine. Refer to Section 14.5.4, “Managing Storage Controllers in a Guest Virtual Machine”.
14.5.1. Adding File-based Storage to a Guest
File-based storage is a collection of files stored on the host physical machine's file system that act as virtualized hard drives for guests. To add file-based storage, perform the following steps:
Procedure 14.1. Adding file-based storage
- Create a storage file or use an existing file (such as an IMG file). Note that both of the following commands create a 4GB file which can be used as additional storage for a guest:
- Pre-allocated files are recommended for file-based storage images. Create a pre-allocated file using the following dd command as shown:
# dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1G count=4
- Alternatively, create a sparse file instead of a pre-allocated file. Sparse files are created much faster and can be used for testing, but are not recommended for production environments due to data integrity and performance issues:
# dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M seek=4096 count=0
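If the qemu-img utility is available on the host, it offers another way to create the image file. This is a hedged alternative rather than part of the original procedure; support for the preallocation option on raw images depends on the QEMU version:
# qemu-img create -f raw /var/lib/libvirt/images/FileName.img 4G
# qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/FileName.img 4G
The first command creates a sparse 4GB image; the second fully pre-allocates it.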
- Create the additional storage by writing a <disk> element in a new file. In this example, this file will be known as NewStorage.xml. A <disk> element describes the source of the disk, and a device name for the virtual block device. The device name should be unique across all devices in the guest, and identifies the bus on which the guest will find the virtual block device. The following example defines a virtio block device whose source is a file-based storage container named FileName.img:
<disk type='file' device='disk'>
   <driver name='qemu' type='raw' cache='none'/>
   <source file='/var/lib/libvirt/images/FileName.img'/>
   <target dev='vdb'/>
</disk>
Device names can also start with "hd" or "sd", identifying an IDE and a SCSI disk respectively. The configuration file can also contain an <address> sub-element that specifies the position on the bus for the new device. In the case of virtio block devices, this should be a PCI address. Omitting the <address> sub-element lets libvirt locate and assign the next available PCI slot, as illustrated in the sketch below.
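For illustration, a <disk> element with an explicit <address> sub-element might look like the following. The slot value shown here is a hypothetical example; any slot chosen must not collide with another device already present in the guest:
<disk type='file' device='disk'>
   <driver name='qemu' type='raw' cache='none'/>
   <source file='/var/lib/libvirt/images/FileName.img'/>
   <target dev='vdb' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>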
- Attach the CD-ROM as follows:
<disk type='file' device='cdrom'>
   <driver name='qemu' type='raw' cache='none'/>
   <source file='/var/lib/libvirt/images/FileName.img'/>
   <readonly/>
   <target dev='hdc'/>
</disk>
- Add the device defined in NewStorage.xml to your guest (Guest1):
# virsh attach-device --config Guest1 ~/NewStorage.xml
Note
This change will only apply after the guest has been destroyed and restarted. In addition, persistent devices can only be added to a persistent domain, that is, a domain whose configuration has been saved with the virsh define command. If the guest is running and you want the new device to be added temporarily until the guest is destroyed, omit the --config option:
# virsh attach-device Guest1 ~/NewStorage.xml
Note
The virsh command allows for an attach-disk command that can set a limited number of parameters with a simpler syntax and without the need to create an XML file. The attach-disk command is used in a similar manner to the attach-device command mentioned previously, as shown:
# virsh attach-disk Guest1 /var/lib/libvirt/images/FileName.img vdb --cache none
Note that the virsh attach-disk command also accepts the --config option. After attaching, the guest's block devices can be listed as shown below.
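To confirm that the new device is present in the guest's configuration, the block devices attached to the guest can be listed with virsh; the vdb target from the example above should appear in the output:
# virsh domblklist Guest1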
- Start the guest machine (if it is currently not running):
# virsh start Guest1
Note
The following steps are Linux guest specific. Other operating systems handle new storage devices in different ways. For other systems, refer to that operating system's documentation.
Partitioning the disk drive
The guest now has a hard disk device called /dev/vdb. If required, partition this disk drive and format the partitions. If you do not see the device that you added, this indicates that there is an issue with disk hotplug in your guest's operating system.
- Start fdisk for the new device:
# fdisk /dev/vdb
Command (m for help):
- Type n for a new partition. The following appears:
Command action
   e   extended
   p   primary partition (1-4)
Type p for a primary partition.
- Choose an available partition number. In this example, the first partition is chosen by entering 1.
Partition number (1-4): 1
- Enter the default first cylinder by pressing Enter.
First cylinder (1-400, default 1):
- Select the size of the partition. In this example the entire disk is allocated by pressing Enter.
Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
- Enter t to configure the partition type.
Command (m for help): t
- Select the partition you created in the previous steps. In this example, the partition number is 1, as there was only one partition created and fdisk automatically selected partition 1.
Partition number (1-4): 1
- Enter 83 for a Linux partition.
Hex code (type L to list codes): 83
- Enter w to write changes and quit.
Command (m for help): w
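As a non-interactive alternative to the fdisk session above, a similar partition can be created with a single parted command. This is a sketch rather than a step from this guide; parted availability and the exact partition boundaries depend on the guest:
# parted --script /dev/vdb mklabel msdos mkpart primary ext3 1MiB 100%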
- Format the new partition with the ext3 file system:
# mke2fs -j /dev/vdb1
- Create a mount directory, and mount the disk on the guest. In this example, the directory is /myfiles:
# mkdir /myfiles
# mount /dev/vdb1 /myfiles
The guest now has an additional virtualized file-based storage device. Note, however, that this storage will not mount persistently across reboots unless defined in the guest's /etc/fstab file:
/dev/vdb1    /myfiles    ext3    defaults    0 0
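Because virtual device names can change as devices are added or removed, mounting by file system UUID is more robust than mounting by device name. As a sketch, the UUID reported by blkid can be substituted into the fstab entry; the UUID below is a placeholder to be replaced with the actual value:
# blkid /dev/vdb1
UUID=<uuid-from-blkid>    /myfiles    ext3    defaults    0 0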
14.5.2. Adding Hard Drives and Other Block Devices to a Guest
System administrators have the option to use additional hard drives to provide increased storage space for a guest, or to separate system data from user data.
Procedure 14.2. Adding physical block devices to guests
This procedure describes how to add a hard drive on the host physical machine to a guest. It applies to all physical block devices, including CD-ROM, DVD and floppy devices.
- Physically attach the hard disk device to the host physical machine. Configure the host physical machine if the drive is not accessible by default.
- Do one of the following:
- Create the additional storage by writing a disk element in a new file. In this example, this file will be known as NewStorage.xml. The following example is a configuration file section which contains an additional device-based storage container for the host physical machine partition /dev/sr0:
<disk type='block' device='disk'>
   <driver name='qemu' type='raw' cache='none'/>
   <source dev='/dev/sr0'/>
   <target dev='vdc' bus='virtio'/>
</disk>
- Follow the instructions in the previous section to attach the device to the guest virtual machine. Alternatively, you can use the virsh attach-disk command, as shown:
# virsh attach-disk Guest1 /dev/sr0 vdc
Note that the following options are available:
- The virsh attach-disk command also accepts the --config, --type, and --mode options, as shown:
# virsh attach-disk Guest1 /dev/sr0 vdc --config --type cdrom --mode readonly
- Additionally, --type also accepts --type disk in cases where the device is a hard drive.
- The guest virtual machine now has a new hard disk device called /dev/vdc on Linux (or something similar, depending on what the guest virtual machine OS chooses). You can now initialize the disk from the guest virtual machine, following the standard procedures for the guest virtual machine's operating system. Refer to Procedure 14.1, “Adding file-based storage” for an example. If the device is no longer needed, it can be detached as shown below.
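As a sketch using the target name from the example above, the attached disk can be removed with the corresponding virsh command; add the --config option to also remove it from the persistent configuration:
# virsh detach-disk Guest1 vdc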
Warning
When adding block devices to a guest, make sure to follow the related security considerations found in the Red Hat Enterprise Linux 7 Virtualization Security Guide.
14.5.3. Adding SCSI LUN-based Storage to a Guest
A host SCSI LUN device can be exposed entirely to the guest using three mechanisms, depending on your host configuration. Exposing the SCSI LUN device in this way allows SCSI commands to be issued directly to the LUN from the guest. This is useful as a means to share a LUN between guests, as well as to share Fibre Channel storage between hosts.
Important
The optional sgio attribute controls whether unprivileged SCSI Generic I/O (SG_IO) commands are filtered for a device='lun' disk. The sgio attribute can be specified as 'filtered' or 'unfiltered', but must be set to 'unfiltered' to allow SG_IO ioctl commands to be passed through on the guest in a persistent reservation.
In addition to setting sgio='unfiltered', the <shareable> element must be set to share a LUN between guests. The sgio attribute defaults to 'filtered' if not specified.
The <disk> XML attribute device='lun' is valid for the following guest disk configurations:
- type='block' for <source dev='/dev/disk/by-{path|id|uuid|label}'/>:
<disk type='block' device='lun' sgio='unfiltered'>
   <driver name='qemu' type='raw'/>
   <source dev='/dev/disk/by-path/pci-0000\:04\:00.1-fc-0x203400a0b85ad1d7-lun-0'/>
   <target dev='sda' bus='scsi'/>
   <shareable />
</disk>
Note
The backslashes prior to the colons in the <source> device name are required.
- type='network' for <source protocol='iscsi'... />:
<disk type='network' device='disk' sgio='unfiltered'>
   <driver name='qemu' type='raw'/>
   <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-net-pool/1'>
      <host name='example.com' port='3260'/>
   </source>
   <auth username='myuser'>
      <secret type='iscsi' usage='libvirtiscsi'/>
   </auth>
   <target dev='sda' bus='scsi'/>
   <shareable />
</disk>
- type='volume' when using an iSCSI or an NPIV/vHBA source pool as the SCSI source pool.
The following example XML shows a guest using an iSCSI source pool (named iscsi-net-pool) as the SCSI source pool:
<disk type='volume' device='lun' sgio='unfiltered'>
   <driver name='qemu' type='raw'/>
   <source pool='iscsi-net-pool' volume='unit:0:0:1' mode='host'/>
   <target dev='sda' bus='scsi'/>
   <shareable />
</disk>
Note
The mode= option within the <source> tag is optional, but if used, it must be set to 'host' and not 'direct'. When set to 'host', libvirt will find the path to the device on the local host. When set to 'direct', libvirt will generate the path to the device using the source pool's source host data.
The iSCSI pool (iscsi-net-pool) in the example above will have a similar configuration to the following:
# virsh pool-dumpxml iscsi-net-pool
<pool type='iscsi'>
  <name>iscsi-net-pool</name>
  <capacity unit='bytes'>11274289152</capacity>
  <allocation unit='bytes'>11274289152</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='192.168.122.1' port='3260'/>
    <device path='iqn.2013-12.com.example:iscsi-chap-netpool'/>
    <auth type='chap' username='redhat'>
      <secret usage='libvirtiscsi'/>
    </auth>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0755</mode>
    </permissions>
  </target>
</pool>
To verify the details of the available LUNs in the iSCSI source pool, enter the following command:
# virsh vol-list iscsi-net-pool
Name                 Path
------------------------------------------------------------------------------
unit:0:0:1           /dev/disk/by-path/ip-192.168.122.1:3260-iscsi-iqn.2013-12.com.example:iscsi-chap-netpool-lun-1
unit:0:0:2           /dev/disk/by-path/ip-192.168.122.1:3260-iscsi-iqn.2013-12.com.example:iscsi-chap-netpool-lun-2
- type='volume' when using an NPIV/vHBA source pool as the SCSI source pool.
The following example XML shows a guest using an NPIV/vHBA source pool (named vhbapool_host3) as the SCSI source pool:
<disk type='volume' device='lun' sgio='unfiltered'>
   <driver name='qemu' type='raw'/>
   <source pool='vhbapool_host3' volume='unit:0:1:0'/>
   <target dev='sda' bus='scsi'/>
   <shareable />
</disk>
The NPIV/vHBA pool (vhbapool_host3) in the example above will have a similar configuration to:
# virsh pool-dumpxml vhbapool_host3
<pool type='scsi'>
  <name>vhbapool_host3</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <adapter type='fc_host' parent='scsi_host3' managed='yes' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee045d'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
To verify the details of the available LUNs on the vHBA, enter the following command:
# virsh vol-list vhbapool_host3
Name                 Path
------------------------------------------------------------------------------
unit:0:0:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0
unit:0:1:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016844602198-lun-0
For more information on using an NPIV vHBA with SCSI devices, see Section 13.7.3, “Configuring the Virtual Machine to Use a vHBA LUN”.
The following procedure shows an example of adding a SCSI LUN-based storage device to a guest. Any of the above <disk device='lun'> guest disk configurations can be attached with this method. Substitute configurations according to your environment.
Procedure 14.3. Attaching SCSI LUN-based storage to a guest
- Create the device file by writing a <disk> element in a new file, and save this file with an XML extension (in this example, sda.xml):
# cat sda.xml
<disk type='volume' device='lun' sgio='unfiltered'>
   <driver name='qemu' type='raw'/>
   <source pool='vhbapool_host3' volume='unit:0:1:0'/>
   <target dev='sda' bus='scsi'/>
   <shareable />
</disk>
- Associate the device created in sda.xml with your guest virtual machine (Guest1, for example):
# virsh attach-device --config Guest1 ~/sda.xml
Note
Running the virsh attach-device command with the --config option requires a guest reboot to add the device permanently to the guest. Alternatively, the --persistent option can be used instead of --config, which can also be used to hotplug the device to a guest, as shown below.
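For example, to hotplug the device to a running guest while also keeping it in the persistent configuration, the same file can be attached with --persistent; this sketch reuses the sda.xml file from the previous step:
# virsh attach-device --persistent Guest1 ~/sda.xml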
Alternatively, the SCSI LUN-based storage can be attached or configured on the guest using virt-manager. To configure this using virt-manager, click the Add Hardware button and add a virtual disk with the desired parameters, or change the settings of an existing SCSI LUN device from this window. In Red Hat Enterprise Linux 7.2 and above, the SGIO value can also be configured in virt-manager.
Reconnecting to an exposed LUN after a hardware failure
If the connection to an exposed Fibre Channel (FC) LUN is lost due to a hardware failure (such as in the host bus adapter), the exposed LUNs on the guest may continue to appear as failed even after the hardware failure is fixed. To prevent this, edit the dev_loss_tmo and fast_io_fail_tmo kernel options:
- dev_loss_tmo controls how long the SCSI layer waits after a SCSI device fails before marking it as failed. To prevent a timeout, it is recommended to set the option to the maximum value, which is 2147483647.
- fast_io_fail_tmo controls how long the SCSI layer waits after a SCSI device fails before failing the I/O. To ensure that dev_loss_tmo is not ignored by the kernel, set this option's value to any number lower than the value of dev_loss_tmo.
To modify the value of dev_loss_tmo and fast_io_fail_tmo, do one of the following:
- Edit the /etc/multipath.conf file, and set the values in the defaults section:
defaults {
...
fast_io_fail_tmo 20
dev_loss_tmo infinity
}
- Set dev_loss_tmo and fast_io_fail_tmo on the level of the FC host or remote port, for example as follows:
# echo 20 > /sys/devices/pci0000:00/0000:00:06.0/0000:13:00.0/host1/rport-1:0-0/fc_remote_ports/rport-1:0-0/fast_io_fail_tmo
# echo 2147483647 > /sys/devices/pci0000:00/0000:00:06.0/0000:13:00.0/host1/rport-1:0-0/fc_remote_ports/rport-1:0-0/dev_loss_tmo
To verify that the new values of dev_loss_tmo and fast_io_fail_tmo are active, use the following command:
# find /sys -name dev_loss_tmo -print -exec cat {} \;
If the parameters have been set correctly, the output will look similar to this, with the appropriate device or devices instead of pci0000:00/0000:00:06.0/0000:13:00.0/host1/rport-1:0-0/fc_remote_ports/rport-1:0-0:
# find /sys -name dev_loss_tmo -print -exec cat {} \;
...
/sys/devices/pci0000:00/0000:00:06.0/0000:13:00.0/host1/rport-1:0-0/fc_remote_ports/rport-1:0-0/dev_loss_tmo
2147483647
...
14.5.4. Managing Storage Controllers in a Guest Virtual Machine
Unlike virtio disks, SCSI devices require the presence of a controller in the guest virtual machine. This section details the necessary steps to create a virtual SCSI controller (also known as "Host Bus Adapter", or HBA), and to add SCSI storage to the guest virtual machine.
Procedure 14.4. Creating a virtual SCSI controller
- Display the configuration of the guest virtual machine (Guest1) and look for a pre-existing SCSI controller:
# virsh dumpxml Guest1 | grep "controller.*scsi"
If a device controller is present, the command will output one or more lines similar to the following:
<controller type='scsi' model='virtio-scsi' index='0'/>
- If the previous step did not show a device controller, create the description for one in a new file and add it to the virtual machine, using the following steps:
- Create the device controller by writing a <controller> element in a new file and save this file with an XML extension, virtio-scsi-controller.xml, for example:
<controller type='scsi' model='virtio-scsi'/>
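If the guest already has a SCSI controller and a second one is needed, an explicit index attribute can be included to distinguish the controllers. This is an illustrative sketch; the index value is an assumption and must not clash with an existing controller:
<controller type='scsi' index='1' model='virtio-scsi'/>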
- Associate the device controller you just created in virtio-scsi-controller.xml with your guest virtual machine (Guest1, for example):
# virsh attach-device --config Guest1 ~/virtio-scsi-controller.xml
In this example the --config option behaves the same as it does for disks. Refer to Procedure 14.2, “Adding physical block devices to guests” for more information.
- Add a new SCSI disk or CD-ROM. The new disk can be added using the methods in Section 14.5.1, “Adding File-based Storage to a Guest” and Section 14.5.2, “Adding Hard Drives and Other Block Devices to a Guest”. In order to create a SCSI disk, specify a target device name that starts with sd. The supported limit for each controller is 1024 virtio-scsi disks, but it is possible that other available resources in the host (such as file descriptors) are exhausted with fewer disks. For more information, refer to the following Red Hat Enterprise Linux 6 whitepaper: The next-generation storage interface for the Red Hat Enterprise Linux Kernel Virtual Machine: virtio-scsi.
# virsh attach-disk Guest1 /var/lib/libvirt/images/FileName.img sdb --cache none
Depending on the version of the driver in the guest virtual machine, the new disk may not be detected immediately by a running guest virtual machine. Follow the steps in the Red Hat Enterprise Linux Storage Administration Guide, or see the rescan sketch below.
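On a Linux guest, forcing a rescan of the SCSI bus will often make the new disk visible without a reboot. This is a general Linux technique rather than a step from this guide; host0 is an assumed SCSI host number, which should be checked under /sys/class/scsi_host/ on the guest:
# echo "- - -" > /sys/class/scsi_host/host0/scan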
