Chapter 6. Getting started with iSCSI

Red Hat Enterprise Linux 8 uses the targetcli shell as a command-line interface to perform the following operations:

  • Add, remove, view, and monitor iSCSI storage interconnects to utilize iSCSI hardware.
  • Export local storage resources that are backed by either files, volumes, local SCSI devices, or by RAM disks to remote systems.

The targetcli tool has a tree-based layout and provides built-in tab completion, auto-complete support, and inline documentation.
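
For example, you can explore the tree and view the inline documentation for any command directly from the shell; a brief illustration, with the command output omitted:

    # targetcli
    /> ls backstores
    /> cd backstores/fileio
    /backstores/fileio> help create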

6.1. Adding an iSCSI target

As a system administrator, you can add iSCSI targets using the targetcli tool.

6.1.1. Installing targetcli

Install the targetcli tool to add, monitor, and remove iSCSI storage interconnects.

Procedure

  1. Install targetcli:

    # yum install targetcli
  2. Start the target service:

    # systemctl start target
  3. Configure target to start at boot time:

    # systemctl enable target
  4. Open port 3260 in the firewall and reload the firewall configuration:

    # firewall-cmd --permanent --add-port=3260/tcp
    Success
    
    # firewall-cmd --reload
    Success
  5. View the targetcli layout:

    # targetcli
    /> ls
    o- /........................................[...]
      o- backstores.............................[...]
      | o- block.................[Storage Objects: 0]
      | o- fileio................[Storage Objects: 0]
      | o- pscsi.................[Storage Objects: 0]
      | o- ramdisk...............[Storage Objects: 0]
      o- iscsi...........................[Targets: 0]
      o- loopback........................[Targets: 0]
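
    Optionally, you can also confirm that the target service is running and that port 3260 is open in the firewall; for example:

    # systemctl status target

    # firewall-cmd --list-ports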

Additional resources

  • The targetcli man page.

6.1.2. Creating an iSCSI target

Creating an iSCSI target enables the client's iSCSI initiator to access the storage devices on the server. Both targets and initiators have unique identifying names.
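
Both names commonly follow the iSCSI Qualified Name (IQN) format, iqn.yyyy-mm.naming-authority:unique-name. For example, the target name used throughout this chapter can be read as follows:

    iqn.2006-04.com.example:444

    iqn          type identifier for an iSCSI Qualified Name
    2006-04      year and month in which the naming authority registered its domain
    com.example  reversed domain name of the naming authority (example.com)
    444          string chosen by the naming authority to identify this particular target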

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.

Procedure

  1. Navigate to the iSCSI directory:

    /> iscsi/
    Note

    The cd command changes directories; when run without an argument, it also lists the available paths that you can move into.

  2. Use one of the following options to create an iSCSI target:

    1. Creating an iSCSI target using a default target name:

      /iscsi> create
      
      Created target
      iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.78b473f296ff
      Created TPG1
    2. Creating an iSCSI target using a specific name:

      /iscsi> create iqn.2006-04.com.example:444
      
      Created target iqn.2006-04.com.example:444
      Created TPG1

      Here, iqn.2006-04.com.example:444 is the target_iqn_name. Replace it with your specific target name.

  3. Verify the newly created target:

    /iscsi> ls
    
    o- iscsi.......................................[1 Target]
        o- iqn.2006-04.com.example:444................[1 TPG]
            o- tpg1...........................[enabled, auth]
                o- acls...............................[0 ACL]
                o- luns...............................[0 LUN]
                o- portals.........................[0 Portal]

Additional resources

  • The targetcli man page.

6.1.3. iSCSI Backstore

An iSCSI backstore enables support for different methods of storing an exported LUN’s data on the local machine. Creating a storage object defines the resources that the backstore uses. An administrator can choose any of the following backstore devices that Linux-IO (LIO) supports:

  • fileio storage objects. For more information, see Section 6.1.4, “Creating a fileio storage object”.
  • block storage objects. For more information, see Section 6.1.5, “Creating a block storage object”.
  • pscsi storage objects. For more information, see Section 6.1.6, “Creating a pscsi storage object”.
  • Memory Copy RAM disk storage objects. For more information, see Section 6.1.7, “Creating a Memory Copy RAM disk storage object”.

Additional resources

  • The targetcli man page.

6.1.4. Creating a fileio storage object

fileio storage objects can support either the write_back or write_thru operations. The write_back operation enables the local file system cache. This improves performance but increases the risk of data loss. It is recommended to use write_back=false to disable the write_back operation in favor of the write_thru operation.
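
For example, if you explicitly want the faster but less safe cached behavior on a test system, the same create command accepts write_back=true; a sketch, with the object name and backing file chosen only for illustration:

    /> backstores/fileio create file2 /tmp/disk2.img 200M write_back=true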

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.

Procedure

  1. Navigate to the backstores directory:

    /> backstores/
  2. Create a fileio storage object:

    /> backstores/fileio create file1 /tmp/disk1.img 200M write_back=false
    
    Created fileio file1 with size 209715200
  3. Verify the created fileio storage object:

    /backstores> ls

Additional resources

  • The targetcli man page.

6.1.5. Creating a block storage object

The block driver allows any block device that appears in the /sys/block/ directory to be used with Linux-IO (LIO). This includes physical devices, such as HDDs, SSDs, CDs, and DVDs, and logical devices, such as software or hardware RAID volumes or LVM logical volumes.
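
For example, you can list the block devices that are available on the system before deciding which one to export:

    # ls /sys/block/

    # lsblk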

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.

Procedure

  1. Navigate to the backstores directory:

    /> backstores/
  2. Create a block backstore:

    /> backstores/block create name=block_backend dev=/dev/vdb
    
    Generating a wwn serial.
    Created block storage object block_backend using /dev/vdb.
  3. Verify the created block storage object:

    /backstores> ls
    Note

    You can also create a block backstore on a logical volume.
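
    For example, to use a logical volume as mentioned in the note, create the volume first and then point the block backstore at its device path; a sketch that assumes an existing volume group, here called vg_iscsi (the volume group and logical volume names are illustrative):

    # lvcreate -L 1G -n lv_storage vg_iscsi

    /> backstores/block create name=lv_backend dev=/dev/vg_iscsi/lv_storage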

Additional resources

  • The targetcli man page.

6.1.6. Creating a pscsi storage object

You can configure a pscsi backstore for any storage object that supports direct pass-through of SCSI commands without SCSI emulation and that has an underlying SCSI device which appears in the lsscsi output or in /proc/scsi/scsi, such as a SAS hard drive. SCSI-3 and higher is supported with this subsystem.
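
For example, you can identify the SCSI devices that are eligible for pass-through before creating the backstore; either command works, assuming the lsscsi package is installed for the first one:

    # lsscsi

    # cat /proc/scsi/scsi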

Warning

pscsi should only be used by advanced users. Advanced SCSI commands, such as those for Asymmetric Logical Unit Assignment (ALUA) or Persistent Reservations (for example, those used by VMware ESX and vSphere), are usually not implemented in the device firmware and can cause malfunctions or crashes. When in doubt, use the block backstore for production setups instead.

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.

Procedure

  1. Navigate to the backstores directory:

    /> backstores/
  2. Create a pscsi backstore for a physical SCSI device, a TYPE_ROM device using /dev/sr0 in this example:

    /> backstores/pscsi/ create name=pscsi_backend dev=/dev/sr0
    
    Generating a wwn serial.
    Created pscsi storage object pscsi_backend using /dev/sr0
  3. Verify the created pscsi storage object:

    /backstores> ls

Additional resources

  • The targetcli man page.

6.1.7. Creating a Memory Copy RAM disk storage object

Memory Copy RAM disks (ramdisk) provide RAM disks with full SCSI emulation and separate memory mappings using memory copy for initiators. This provides the capability for multiple sessions and is particularly useful for fast, volatile mass storage for production purposes.

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.

Procedure

  1. Navigate to the backstores directory:

    /> backstores/
  2. Create a 1GB RAM disk backstore:

    /> backstores/ramdisk/ create name=rd_backend size=1GB
    
    Generating a wwn serial.
    Created rd_mcp ramdisk rd_backend with size 1GB.
  3. Verify the created ramdisk storage object:

    /backstores> ls

Additional resources

  • The targetcli man page.

6.1.8. Creating an iSCSI portal

Creating an iSCSI portal adds an IP address and a port to the target, which enables the target to listen for connections from initiators.

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.
  • An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 6.1.2, “Creating an iSCSI target”.

Procedure

  1. Navigate to the TPG directory:

    /iscsi> iqn.2006-04.com.example:444/tpg1/
  2. Use one of the following options to create an iSCSI portal:

    1. Creating a default portal uses the default iSCSI port 3260 and allows the target to listen on all IP addresses on that port:

      /iscsi/iqn.20...mple:444/tpg1> portals/ create
      
      Using default IP port 3260
      Binding to INADDR_Any (0.0.0.0)
      Created network portal 0.0.0.0:3260
      Note

      When an iSCSI target is created, a default portal is also created. This portal is set to listen on all IP addresses with the default port, that is, 0.0.0.0:3260.

      To remove the default portal:

      /iscsi/iqn-name/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260

    2. Creating a portal using a specific IP address:

      /iscsi/iqn.20...mple:444/tpg1> portals/ create 192.168.122.137
      
      Using default IP port 3260
      Created network portal 192.168.122.137:3260
  3. Verify the newly created portal:

    /iscsi/iqn.20...mple:444/tpg1> ls
    
    o- tpg1.................................. [enabled, auth]
        o- acls ......................................[0 ACL]
        o- luns ......................................[0 LUN]
        o- portals ................................[1 Portal]
           o- 192.168.122.137:3260......................[OK]

Additional resources

  • The targetcli man page.

6.1.9. Creating an iSCSI LUN

A logical unit number (LUN) identifies a logical unit, which is a device backed by an iSCSI backstore. Each LUN has a unique number.

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.
  • An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 6.1.2, “Creating an iSCSI target”.
  • Created storage objects. For more information, see Section 6.1.3, “iSCSI Backstore”.

Procedure

  1. Create LUNs for the already created storage objects:

    /iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/ramdisk/rd_backend
    Created LUN 0.
    
    /iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/block/block_backend
    Created LUN 1.
    
    /iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/fileio/file1
    Created LUN 2.
  2. Verify the created LUNs:

    /iscsi/iqn.20...mple:444/tpg1> ls
    
    o- tpg1.................................. [enabled, auth]
        o- acls ......................................[0 ACL]
        o- luns .....................................[3 LUNs]
        |  o- lun0.......................[ramdisk/rd_backend]
        |  o- lun1...........[block/block_backend (/dev/vdb)]
        |  o- lun2............[fileio/file1 (/tmp/disk1.img)]
        o- portals ................................[1 Portal]
            o- 192.168.122.137:3260......................[OK]

    By default, LUN numbering starts at 0.

    Important

    By default, LUNs are created with read-write permissions. If a new LUN is added after ACLs are created, the LUN is automatically mapped to all available ACLs, which can cause a security risk. To create a LUN with read-only permissions, see Section 6.1.10, “Creating a read-only iSCSI LUN”.

  3. Configure ACLs. For more information, see Section 6.1.11, “Creating an iSCSI ACL”.

Additional resources

  • The targetcli man page.

6.1.10. Creating a read-only iSCSI LUN

By default, LUNs are created with read-write permissions. This procedure describes how to create a read-only LUN.

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.
  • An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 6.1.2, “Creating an iSCSI target”.
  • Created iSCSI ACL. For more information, see Section 6.1.11, “Creating an iSCSI ACL”.

Procedure

  1. Disable the automatic mapping of LUNs:

    /> set global auto_add_mapped_luns=false
    
    Parameter auto_add_mapped_luns is now 'false'.

    This prevents the automatic mapping of LUNs to existing ACLs, allowing you to map LUNs manually.

  2. Create the LUN:

    /> iscsi/target_iqn_name/tpg1/acls/initiator_iqn_name/ create mapped_lun=next_sequential_LUN_number tpg_lun_or_backstore=backstore write_protect=1

    Example:

    /> iscsi/iqn.2006-04.com.example:444/tpg1/acls/2006-04.com.example.foo:888/ create mapped_lun=1 tpg_lun_or_backstore=/backstores/block/disk2 write_protect=1
    
    Created LUN 1.
    Created Mapped LUN 1.
  3. Verify the created LUN:

    /> ls
    
    o- / ...................................................... [...]
      o- backstores ........................................... [...]
      <snip>
      o- iscsi ......................................... [Targets: 1]
      | o- iqn.2006-04.com.example:444 .............. [TPGs: 1]
      |   o- tpg1 ............................ [no-gen-acls, no-auth]
      |     o- acls ....................................... [ACLs: 2]
      |     | o- 2006-04.com.example.foo:888 .. [Mapped LUNs: 2]
      |     | | o- mapped_lun0 .............. [lun0 block/disk1 (rw)]
      |     | | o- mapped_lun1 .............. [lun1 block/disk2 (ro)]
      |     o- luns ....................................... [LUNs: 2]
      |     | o- lun0 ...................... [block/disk1 (/dev/vdb)]
      |     | o- lun1 ...................... [block/disk2 (/dev/vdc)]
      <snip>

    The mapped_lun1 line now has (ro) at the end (unlike mapped_lun0’s (rw)), indicating that it is read-only.

  4. Configure ACLs. For more information, see Section 6.1.11, “Creating an iSCSI ACL”.

Additional resources

  • The targetcli man page.

6.1.11. Creating an iSCSI ACL

In targetcli, Access Control Lists (ACLs) are used to define access rules so that each initiator has exclusive access to its LUNs. Both targets and initiators have unique identifying names. You must know the unique name of the initiator to configure ACLs. On the initiator, this name can be found in the /etc/iscsi/initiatorname.iscsi file.
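
For example, on the client you can display the initiator name that the ACL has to match; the value of the InitiatorName= line is the name to use when creating the ACL:

    # cat /etc/iscsi/initiatorname.iscsi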

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.
  • An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 6.1.2, “Creating an iSCSI target”.

Procedure

  1. Navigate to the acls directory:

    /iscsi/iqn.20...mple:444/tpg1> acls/
  2. Use one of the following options to create an ACL:

    1. Using the initiator name from the /etc/iscsi/initiatorname.iscsi file on the initiator.
    2. Using a name that is easier to remember. In this case, see Section 6.1.12, “Creating an iSCSI initiator” to ensure that the ACL matches the initiator.

      /iscsi/iqn.20...444/tpg1/acls> create iqn.2006-04.com.example.foo:888
      
      Created Node ACL for iqn.2006-04.com.example.foo:888
      Created mapped LUN 2.
      Created mapped LUN 1.
      Created mapped LUN 0.
      Note

      The global setting auto_add_mapped_luns, used in the preceding example, automatically maps LUNs to any created ACL.

      You can also allow any initiator to log in without a user-created ACL by enabling automatically generated node ACLs within the TPG node on the target server:

      /iscsi/iqn.20...mple:444/tpg1> set attribute generate_node_acls=1
  3. Verify the created ACL:

    /iscsi/iqn.20...444/tpg1/acls> ls
    
    o- acls .................................................[1 ACL]
        o- iqn.2006-04.com.example.foo:888 ....[3 Mapped LUNs, auth]
            o- mapped_lun0 .............[lun0 ramdisk/ramdisk1 (rw)]
            o- mapped_lun1 .................[lun1 block/block1 (rw)]
            o- mapped_lun2 .................[lun2 fileio/file1 (rw)]

Additional resources

  • The targetcli man page.

6.1.12. Creating an iSCSI initiator

An iSCSI initiator forms a session to connect to the iSCSI target. For more information on iSCSI targets, see Section 6.1.2, “Creating an iSCSI target”. By default, the iSCSI service is started lazily; it starts after you run the iscsiadm command. If the root file system is not on an iSCSI device and there are no nodes marked with node.startup = automatic, the iSCSI service does not start until an iscsiadm command is executed that requires iscsid or the iscsi kernel modules to be started.

To force the iscsid daemon to run and iSCSI kernel modules to load:

# systemctl start iscsid.service

Prerequisites

  • An iSCSI target created and configured on the server machine. For more information, see Section 6.1.2, “Creating an iSCSI target”.

Procedure

  1. Install iscsi-initiator-utils on the client machine:

    # yum install iscsi-initiator-utils
  2. Check the initiator name:

    # cat /etc/iscsi/initiatorname.iscsi
    
    InitiatorName=2006-04.com.example.foo:888
  3. If the ACL was given a custom name in Section 6.1.11, “Creating an iSCSI ACL”, modify the /etc/iscsi/initiatorname.iscsi file accordingly.

    # vi /etc/iscsi/initiatorname.iscsi
  4. Discover the target and log in to the target with the displayed target IQN:

    # iscsiadm -m discovery -t st -p 10.64.24.179
        10.64.24.179:3260,1 iqn.2006-04.com.example:444
    
    # iscsiadm -m node -T iqn.2006-04.com.example:444 -l
        Logging in to [iface: default, target: iqn.2006-04.com.example:444, portal: 10.64.24.179,3260] (multiple)
        Login to [iface: default, target: iqn.2006-04.com.example:444, portal: 10.64.24.179,3260] successful.

    Replace 10.64.24.179 with the target IP address.

    You can use this procedure for any number of initiators connected to the same target if their respective initiator names are added to the ACL as described in Section 6.1.11, “Creating an iSCSI ACL”.

  5. Find the iSCSI disk name and create a file system on this iSCSI disk:

    # grep "Attached SCSI" /var/log/messages
    
    # mkfs.ext4 /dev/disk_name

    Replace disk_name with the iSCSI disk name displayed in the /var/log/messages file.

  6. Mount the file system:

    # mkdir /mount/point
    
    # mount /dev/disk_name /mount/point

    Replace /mount/point with the mount point of the partition.

  7. Edit the /etc/fstab file to mount the file system automatically when the system boots:

    # vi /etc/fstab
    
    /dev/disk_name /mount/point ext4 _netdev 0 0

    Replace disk_name with the iSCSI disk name and /mount/point with the mount point of the partition.

Additional resources

  • The targetcli man page.
  • The iscsiadm man page.

6.1.13. Setting up the Challenge-Handshake Authentication Protocol for the target

The Challenge-Handshake Authentication Protocol (CHAP) allows the user to protect the target with a password. The initiator must be aware of this password to be able to connect to the target.

Prerequisites

  • Installed and running targetcli. For more information, see Section 6.1.1, “Installing targetcli”.
  • Created iSCSI ACL. For more information, see Section 6.1.11, “Creating an iSCSI ACL”.

Procedure

  1. Set attribute authentication:

    /iscsi/iqn.20...mple:444/tpg1> set attribute authentication=1
    
    Parameter authentication is now '1'.
  2. Set userid and password:

    /iscsi/iqn.20...mple:444/tpg1> set auth userid=redhat
    Parameter userid is now 'redhat'.
    
    /iscsi/iqn.20...mple:444/tpg1> set auth password=redhat_passwd
    Parameter password is now 'redhat_passwd'.
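
    If your environment requires mutual (bidirectional) CHAP, the same auth group also accepts credentials that the target presents back to the initiator; a minimal sketch, assuming the mutual_userid and mutual_password parameters (the values are illustrative):

    /iscsi/iqn.20...mple:444/tpg1> set auth mutual_userid=redhat_target
    /iscsi/iqn.20...mple:444/tpg1> set auth mutual_password=redhat_target_passwd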

Additional resources

  • The targetcli man page.

6.1.14. Setting up the Challenge-Handshake Authentication Protocol for the initiator

The Challenge-Handshake Authentication Protocol (CHAP) allows the user to protect the target with a password. The initiator must be configured with this password to be able to connect to the target.

Prerequisites

  • Created iSCSI initiator. For more information, see Section 6.1.12, “Creating an iSCSI initiator”.
  • Set the CHAP for the target. For more information, see Section 6.1.13, “Setting up the Challenge-Handshake Authentication Protocol for the target”.

Procedure

  1. Enable CHAP authentication in the iscsid.conf file:

    # vi /etc/iscsi/iscsid.conf
    
    node.session.auth.authmethod = CHAP

    By default, the node.session.auth.authmethod option is set to None.

  2. Add the target username and password in the iscsid.conf file:

    node.session.auth.username = redhat
    node.session.auth.password = redhat_passwd
  3. Start the iscsid daemon:

    # systemctl start iscsid.service
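
    After the daemon is running with the new settings, log in to the target again so that the CHAP credentials take effect; for example, repeating the login from Section 6.1.12, “Creating an iSCSI initiator” (replace the IQN and IP address with your own):

    # iscsiadm -m node -T iqn.2006-04.com.example:444 -p 10.64.24.179 -l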

Additional resources

  • The iscsiadm man page.

6.2. Monitoring an iSCSI session

As a system administrator, you can monitor the iSCSI session using the iscsiadm utility.

6.2.1. Monitoring an iSCSI session using the iscsiadm utility

This procedure describes how to monitor an iSCSI session using the iscsiadm utility.

By default, the iSCSI service is started lazily; the service starts after you run the iscsiadm command. If the root file system is not on an iSCSI device and there are no nodes marked with node.startup = automatic, the iSCSI service does not start until an iscsiadm command is executed that requires iscsid or the iscsi kernel modules to be started.

To force the iscsid daemon to run and iSCSI kernel modules to load:

# systemctl start iscsid.service

Prerequisites

  • Installed iscsi-initiator-utils on the client machine:

    # yum install iscsi-initiator-utils

Procedure

  1. Find information about the running sessions:

    # iscsiadm -m session -P 3

    This command displays the session or device state, session ID (sid), some negotiated parameters, and the SCSI devices accessible through the session.

    • For shorter output, for example, to display only the sid-to-node mapping, run:

      # iscsiadm -m session -P 0
              or
      # iscsiadm -m session
      
      tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
      tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

      These commands print the list of running sessions in the following format: driver [sid] target_ip:port,target_portal_group_tag proper_target_name.

Additional resources

  • /usr/share/doc/iscsi-initiator-utils-version/README file.
  • The iscsiadm man page.

6.3. Removing an iSCSI target

As a system administrator, you can remove the iSCSI target.

6.3.1. Removing an iSCSI object using the targetcli tool

This procedure describes how to remove iSCSI objects using the targetcli tool.

Procedure

  1. Log off from the target:

    # iscsiadm -m node -T iqn.2006-04.com.example:444 -u

    For more information on how to log in to the target, see Section 6.1.12, “Creating an iSCSI initiator”.

  2. On the target server, remove the entire target, including all ACLs, LUNs, and portals:

    /> iscsi/ delete iqn.2006-04.com.example:444

    Replace iqn.2006-04.com.example:444 with the target_iqn_name.

    • To remove an iSCSI backstore:

      /> backstores/backstore-type/ delete block_backend
      • Replace backstore-type with either fileio, block, pscsi, or ramdisk.
      • Replace block_backend with the name of the backstore that you want to delete.
    • To remove parts of an iSCSI target, such as an ACL:

      /> /iscsi/iqn-name/tpg1/acls/ delete iqn.2006-04.com.example.foo:888
  3. View the changes:

    /> iscsi/ ls

Additional resources

  • The targetcli man page.

6.4. DM Multipath overrides of the device timeout

The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override recovery_tmo values:

  • The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices.
  • For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value.

    The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices.

The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout. Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath because DM Multipath always resets recovery_tmo when the multipathd service reloads.
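
For example, a sketch of where these values are typically set; the timeout values shown are illustrative, not recommendations:

    # /etc/multipath.conf: fast_io_fail_tmo overrides recovery_tmo for multipathed iSCSI devices
    defaults {
        fast_io_fail_tmo 5
    }

    # /etc/iscsi/iscsid.conf: replacement_timeout overrides recovery_tmo for iSCSI devices
    node.session.timeo.replacement_timeout = 120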