Chapter 7. Getting started with iSCSI
Red Hat Enterprise Linux 8 uses the targetcli shell as a command-line interface to perform the following operations:
- Add, remove, view, and monitor iSCSI storage interconnects to utilize iSCSI hardware.
- Export local storage resources that are backed by either files, volumes, local SCSI devices, or by RAM disks to remote systems.
The targetcli tool has a tree-based layout and includes built-in tab completion, auto-complete support, and inline documentation.
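As a quick orientation, a short interactive session might look like the following sketch (it assumes targetcli is already installed, as described in Section 7.1.1, “Installing targetcli”):
# targetcli
/> ls
/> cd /backstores
/backstores> help
/backstores> exit
Here, ls prints the object tree below the current node, cd moves within the tree (tab completion works on paths), help lists the commands available at the current node, and exit leaves the shell.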
7.1. Adding an iSCSI target
As a system administrator, you can add iSCSI targets using the targetcli tool.
7.1.1. Installing targetcli
Install the targetcli tool to add, monitor, and remove iSCSI storage interconnects.
Procedure
Install targetcli:
# yum install targetcli
Start the target service:
# systemctl start target
Configure target to start at boot time:
# systemctl enable target
Open port 3260 in the firewall and reload the firewall configuration:
# firewall-cmd --permanent --add-port=3260/tcp
Success
# firewall-cmd --reload
Success
View the targetcli layout:
# targetcli
/> ls
o- /........................................[...]
  o- backstores.............................[...]
  | o- block.................[Storage Objects: 0]
  | o- fileio................[Storage Objects: 0]
  | o- pscsi.................[Storage Objects: 0]
  | o- ramdisk...............[Storage Objects: 0]
  o- iscsi...........................[Targets: 0]
  o- loopback........................[Targets: 0]
Additional resources
- The targetcli man page.
7.1.2. Creating an iSCSI target
Creating an iSCSI target enables the iSCSI initiator of the client to access the storage devices on the server. Both targets and initiators have unique identifying names.
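An iSCSI qualified name (IQN) has the form iqn.YYYY-MM.reversed-domain:optional-identifier, where YYYY-MM is the year and month in which the naming authority took ownership of the domain. For example, the target name used later in this chapter can be read as follows; the :444 suffix is an arbitrary identifier chosen for these examples:
iqn.2006-04.com.example:444
 - iqn            literal prefix
 - 2006-04        year and month the naming authority took ownership of the domain
 - com.example    reversed domain name (example.com)
 - :444           optional, arbitrary identifier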
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
Procedure
Navigate to the iSCSI directory:
/> iscsi/
Note: The cd command is used to change directories, as well as to list the paths you can move into.
Use one of the following options to create an iSCSI target:
Creating an iSCSI target using a default target name:
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.78b473f296ff
Created TPG1
Creating an iSCSI target using a specific name:
/iscsi> create iqn.2006-04.com.example:444
Created target iqn.2006-04.com.example:444
Created TPG1
Here, iqn.2006-04.com.example:444 is the target_iqn_name. Replace iqn.2006-04.com.example:444 with your specific target name.
Verify the newly created target:
/iscsi> ls
o- iscsi.......................................[1 Target]
  o- iqn.2006-04.com.example:444................[1 TPG]
    o- tpg1...........................[enabled, auth]
      o- acls...............................[0 ACL]
      o- luns...............................[0 LUN]
      o- portals.........................[0 Portal]
Additional resources
- The targetcli man page.
7.1.3. iSCSI Backstore
An iSCSI backstore enables support for different methods of storing an exported LUN’s data on the local machine. Creating a storage object defines the resources that the backstore uses. An administrator can choose any of the following backstore devices that Linux-IO (LIO) supports:
- fileio backstore: Create a fileio storage object if you are using regular files on the local file system as disk images. For creating a fileio backstore, see Section 7.1.4, “Creating a fileio storage object”.
- block backstore: Create a block storage object if you are using any local block device or logical device. For creating a block backstore, see Section 7.1.5, “Creating a block storage object”.
- pscsi backstore: Create a pscsi storage object if your storage object supports direct pass-through of SCSI commands. For creating a pscsi backstore, see Section 7.1.6, “Creating a pscsi storage object”.
- ramdisk backstore: Create a ramdisk storage object if you want to create a temporary RAM-backed device. For creating a ramdisk backstore, see Section 7.1.7, “Creating a Memory Copy RAM disk storage object”.
Additional resources
- The targetcli man page.
7.1.4. Creating a fileio storage object
A fileio storage object can support either the write_back or the write_thru operation. The write_back operation enables the local file system cache, which improves performance but increases the risk of data loss. It is recommended to use write_back=false to disable the write_back operation in favor of the write_thru operation.
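If you prefer to prepare the backing file yourself before running the create command in the following procedure, you can pre-allocate a sparse image; the path and size here only mirror the later example:
# truncate -s 200M /tmp/disk1.img
# ls -lsh /tmp/disk1.img
The first column of the ls -lsh output shows the blocks actually allocated, which remain small until data is written to the image.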
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
Procedure
Navigate to the backstores directory:
/> backstores/
Create a fileio storage object:
/> backstores/fileio create file1 /tmp/disk1.img 200M write_back=false
Created fileio file1 with size 209715200
Verify the created fileio storage object:
/backstores> ls
Additional resources
- The targetcli man page.
7.1.5. Creating a block storage object
The block driver allows any block device that appears in the /sys/block/ directory to be used with Linux-IO (LIO). This includes physical devices (for example, HDDs, SSDs, CDs, and DVDs) and logical devices (for example, software or hardware RAID volumes, or LVM volumes).
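To check which block devices the system currently exposes before choosing one for the backstore, you can list them, for example:
# lsblk
# ls /sys/block/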
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
Procedure
Navigate to the backstores directory:
/> backstores/
Create a block backstore:
/> backstores/block create name=block_backend dev=/dev/sdb
Generating a wwn serial.
Created block storage object block_backend using /dev/sdb.
Verify the created block storage object:
/backstores> ls
Note: You can also create a block backstore on a logical volume, as shown in the sketch below.
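A minimal sketch of that variant, assuming a volume group named vg0 already exists; the logical volume and backstore names are placeholders:
# lvcreate -L 10G -n lv_iscsi vg0
/> backstores/block create name=lvm_backend dev=/dev/vg0/lv_iscsi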
Additional resources
- The targetcli man page.
7.1.6. Creating a pscsi storage object
You can configure any storage object that supports direct pass-through of SCSI commands without SCSI emulation as a pscsi backstore, as long as the underlying SCSI device appears with lsscsi in /proc/scsi/scsi (such as a SAS hard drive). SCSI-3 and higher is supported by this subsystem.
pscsi should only be used by advanced users. Advanced SCSI commands, such as those for Asymmetric Logical Unit Assignment (ALUA) or Persistent Reservations (for example, those used by VMware ESX and vSphere), are usually not implemented in the device firmware and can cause malfunctions or crashes. When in doubt, use the block backstore for production setups instead.
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
Procedure
Navigate to the backstores directory:
/> backstores/
Create a pscsi backstore for a physical SCSI device, a TYPE_ROM device using /dev/sr0 in this example:
/> backstores/pscsi/ create name=pscsi_backend dev=/dev/sr0
Generating a wwn serial.
Created pscsi storage object pscsi_backend using /dev/sr0
Verify the created pscsi storage object:
/backstores> ls
Additional resources
- The targetcli man page.
7.1.7. Creating a Memory Copy RAM disk storage object
Memory Copy RAM disks (ramdisk) provide RAM disks with full SCSI emulation and separate memory mappings using memory copy for initiators. This provides multi-session capability and is particularly useful for fast, volatile mass storage for production purposes.
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
Procedure
Navigate to the backstores directory:
/> backstores/
Create a 1GB RAM disk backstore:
/> backstores/ramdisk/ create name=rd_backend size=1GB
Generating a wwn serial.
Created rd_mcp ramdisk rd_backend with size 1GB.
Verify the created ramdisk storage object:
/backstores> ls
Additional resources
- The targetcli man page.
7.1.8. Creating an iSCSI portal
Creating an iSCSI portal adds an IP address and a port to the target, which keeps the target enabled.
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
- An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 7.1.2, “Creating an iSCSI target”.
Procedure
Navigate to the TPG directory:
/iscsi> iqn.2006-04.example:444/tpg1/
Use one of the following options to create an iSCSI portal:
Creating a default portal uses the default iSCSI port 3260 and allows the target to listen to all IP addresses on that port:
/iscsi/iqn.20...mple:444/tpg1> portals/ create
Using default IP port 3260
Binding to INADDR_Any (0.0.0.0)
Created network portal 0.0.0.0:3260
Note: When an iSCSI target is created, a default portal is also created. This portal is set to listen to all IP addresses with the default port number, that is, 0.0.0.0:3260.
To remove the default portal:
/iscsi/iqn-name/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260
Creating a portal using a specific IP address:
/iscsi/iqn.20...mple:444/tpg1> portals/ create 192.168.122.137
Using default IP port 3260
Created network portal 192.168.122.137:3260
Verify the newly created portal:
/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
  o- acls ......................................[0 ACL]
  o- luns ......................................[0 LUN]
  o- portals ................................[1 Portal]
    o- 192.168.122.137:3260......................[OK]
Additional resources
- The targetcli man page.
7.1.9. Creating an iSCSI LUN
A logical unit number (LUN) identifies a logical storage device that is backed by an iSCSI backstore. Each LUN has a unique number.
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
- An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 7.1.2, “Creating an iSCSI target”.
- Created storage objects. For more information, see Section 7.1.3, “iSCSI Backstore”.
Procedure
Create LUNs from the already created storage objects:
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/ramdisk/rd_backend
Created LUN 0.
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/block/block_backend
Created LUN 1.
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/fileio/file1
Created LUN 2.
Verify the created LUNs:
/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
  o- acls ......................................[0 ACL]
  o- luns .....................................[3 LUNs]
  | o- lun0.........................[ramdisk/ramdisk1]
  | o- lun1.................[block/block1 (/dev/vdb1)]
  | o- lun2...................[fileio/file1 (/foo.img)]
  o- portals ................................[1 Portal]
    o- 192.168.122.137:3260......................[OK]
The default LUN name starts at 0.
Important: By default, LUNs are created with read-write permissions. If a new LUN is added after ACLs are created, the LUN is automatically mapped to all available ACLs, which can cause a security risk. To create a LUN with read-only permissions, see Section 7.1.10, “Creating a read-only iSCSI LUN”.
- Configure ACLs. For more information, see Section 7.1.11, “Creating an iSCSI ACL”.
Additional resources
- The targetcli man page.
7.1.10. Creating a read-only iSCSI LUN
By default, LUNs are created with read-write permissions. This procedure describes how to create a read-only LUN.
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
- An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 7.1.2, “Creating an iSCSI target”.
- Created storage objects. For more information, see Section 7.1.3, “iSCSI Backstore”.
Procedure
Set read-only permissions:
/> set global auto_add_mapped_luns=false
Parameter auto_add_mapped_luns is now 'false'.
This prevents the automatic mapping of LUNs to existing ACLs, allowing LUNs to be mapped manually.
Create the LUN:
/> iscsi/target_iqn_name/tpg1/acls/initiator_iqn_name/ create mapped_lun=next_sequential_LUN_number tpg_lun_or_backstore=backstore write_protect=1
Example:
/> iscsi/iqn.2006-04.example:444/tpg1/acls/2006-04.com.example.foo:888/ create mapped_lun=1 tpg_lun_or_backstore=/backstores/block/block2 write_protect=1
Created LUN 1.
Created Mapped LUN 1.
Verify the created LUN:
/> ls
o- / ...................................................... [...]
  o- backstores ........................................... [...]
  <snip>
  o- iscsi ......................................... [Targets: 1]
  | o- iqn.2006-04.example:444 .................. [TPGs: 1]
  |   o- tpg1 ............................ [no-gen-acls, no-auth]
  |     o- acls ....................................... [ACLs: 2]
  |     | o- 2006-04.com.example.foo:888 .. [Mapped LUNs: 2]
  |     | | o- mapped_lun0 .............. [lun0 block/disk1 (rw)]
  |     | | o- mapped_lun1 .............. [lun1 block/disk2 (ro)]
  |     o- luns ....................................... [LUNs: 2]
  |     | o- lun0 ...................... [block/disk1 (/dev/vdb)]
  |     | o- lun1 ...................... [block/disk2 (/dev/vdc)]
  <snip>
The mapped_lun1 line now has (ro) at the end (unlike mapped_lun0’s (rw)), indicating that it is read-only.
- Configure ACLs. For more information, see Section 7.1.11, “Creating an iSCSI ACL”.
Additional resources
- The targetcli man page.
7.1.11. Creating an iSCSI ACL
In targetcli, Access Control Lists (ACLs) are used to define access rules, and each initiator has exclusive access to a LUN. Both targets and initiators have unique identifying names. You must know the unique name of the initiator to configure ACLs. On the initiator, the iSCSI initiator name can be found in the /etc/iscsi/initiatorname.iscsi file.
Prerequisites
- Installed and running targetcli. For more information, see Section 7.1.1, “Installing targetcli”.
- An iSCSI target associated with a Target Portal Group (TPG). For more information, see Section 7.1.2, “Creating an iSCSI target”.
Procedure
Navigate to the acls directory:
/iscsi/iqn.20...mple:444/tpg1> acls/
Use one of the following options to create an ACL:
- Using the initiator name from the /etc/iscsi/initiatorname.iscsi file on the initiator.
- Using a name that is easier to remember; in that case, see Section 7.1.12, “Creating an iSCSI initiator” to ensure that the ACL matches the initiator.
/iscsi/iqn.20...444/tpg1/acls> create iqn.2006-04.com.example.foo:888
Created Node ACL for iqn.2006-04.com.example.foo:888
Created mapped LUN 2.
Created mapped LUN 1.
Created mapped LUN 0.
Note: The global setting auto_add_mapped_luns, used in the preceding example, automatically maps LUNs to any created ACL.
You can set user-created ACLs within the TPG node on the target server:
/iscsi/iqn.20...scsi:444/tpg1> set attribute generate_node_acls=1
Verify the created ACL:
/iscsi/iqn.20...444/tpg1/acls> ls
o- acls .................................................[1 ACL]
  o- iqn.2006-04.com.example.foo:888 ....[3 Mapped LUNs, auth]
    o- mapped_lun0 .............[lun0 ramdisk/ramdisk1 (rw)]
    o- mapped_lun1 .................[lun1 block/block1 (rw)]
    o- mapped_lun2 .................[lun2 fileio/file1 (rw)]
Additional resources
- The targetcli man page.
7.1.12. Creating an iSCSI initiator
An iSCSI initiator forms a session to connect to the iSCSI target. For more information on iSCSI targets, see Section 7.1.2, “Creating an iSCSI target”. By default, the iSCSI service is lazily started: the service starts after you run the iscsiadm command. If root is not on an iSCSI device, or there are no nodes marked with node.startup = automatic, the iSCSI service does not start until an iscsiadm command is executed that requires iscsid or the iscsi kernel modules to be started.
To force the iscsid daemon to run and the iSCSI kernel modules to load:
# systemctl start iscsid.service
Prerequisites
- Installed and running targetcli on the server machine. For more information, see Section 7.1.1, “Installing targetcli”.
- An iSCSI target associated with a Target Portal Group (TPG) on the server machine. For more information, see Section 7.1.2, “Creating an iSCSI target”.
- Created iSCSI ACL. For more information, see Section 7.1.11, “Creating an iSCSI ACL”.
Procedure
Install iscsi-initiator-utils on the client machine:
# yum install iscsi-initiator-utils
Check the initiator name:
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=2006-04.com.example.foo:888
If the ACL was given a custom name in Section 7.1.11, “Creating an iSCSI ACL”, modify the /etc/iscsi/initiatorname.iscsi file accordingly.
# vi /etc/iscsi/initiatorname.iscsi
Discover the target and log in to the target with the displayed target IQN:
# iscsiadm -m discovery -t st -p 10.64.24.179
10.64.24.179:3260,1 iqn.2006-04.example:444
# iscsiadm -m node -T iqn.2006-04.example:444 -l
Logging in to [iface: default, target: iqn.2006-04.example:444, portal: 10.64.24.179,3260] (multiple)
Login to [iface: default, target: iqn.2006-04.example:444, portal: 10.64.24.179,3260] successful.
Replace 10.64.24.179 with the target IP address.
You can use this procedure for any number of initiators connected to the same target, as long as their respective initiator names are added to the ACL as described in Section 7.1.11, “Creating an iSCSI ACL”.
Find the iSCSI disk name and create a file system on this iSCSI disk:
# grep "Attached SCSI" /var/log/messages # mkfs.ext4 /dev/disk_name
Replace disk_name with the iSCSI disk name displayed in the
/var/log/messages
file.Mount the file system:
# mkdir /mount/point # mount /dev/disk_name /mount/point
Replace /mount/point with the mount point of the partition.
Edit the /etc/fstab file to mount the file system automatically when the system boots:
# vi /etc/fstab
/dev/disk_name /mount/point ext4 _netdev 0 0
Replace disk_name with the iSCSI disk name and /mount/point with the mount point of the partition.
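Because kernel device names such as /dev/sdb can change between boots, you may prefer to reference the file system by UUID in /etc/fstab. A sketch, with uuid_from_blkid standing in for the value that blkid reports:
# blkid /dev/disk_name
# vi /etc/fstab
UUID=uuid_from_blkid /mount/point ext4 _netdev 0 0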
Additional resources
- The targetcli man page.
- The iscsiadm man page.
7.1.13. Setting up the Challenge-Handshake Authentication Protocol for the target
The Challenge-Handshake Authentication Protocol (CHAP) allows the user to protect the target with a password. The initiator must be aware of this password to be able to connect to the target.
Prerequisites
- Created iSCSI ACL. For more information, see Section 7.1.11, “Creating an iSCSI ACL”.
Procedure
Set attribute authentication:
/iscsi/iqn.20...mple:444/tpg1> set attribute authentication=1
Parameter authentication is now '1'.
Set the userid and password:
/tpg1> set auth userid=redhat
Parameter userid is now 'redhat'.
/iscsi/iqn.20...689dcbb3/tpg1> set auth password=redhat_passwd
Parameter password is now 'redhat_passwd'.
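targetcli offers to save the configuration when you exit; to save it explicitly so that the settings persist across reboots, you can run the following. The file shown is the default location used by the target service, and the exact confirmation message can vary between versions:
/> saveconfig
Configuration saved to /etc/target/saveconfig.json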
Additional resources
- The targetcli man page.
7.1.14. Setting up the Challenge-Handshake Authentication Protocol for the initiator
The Challenge-Handshake Authentication Protocol (CHAP) allows the user to protect the target with a password. The initiator must be aware of this password to be able to connect to the target.
Prerequisites
- Created iSCSI initiator. For more information, see Section 7.1.12, “Creating an iSCSI initiator”.
- Set up CHAP for the target. For more information, see Section 7.1.13, “Setting up the Challenge-Handshake Authentication Protocol for the target”.
Procedure
Enable CHAP authentication in the iscsid.conf file:
# vi /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
By default, the node.session.auth.authmethod option is set to None.
Add the target username and password in the iscsid.conf file:
node.session.auth.username = redhat
node.session.auth.password = redhat_passwd
Start the iscsid daemon:
# systemctl start iscsid.service
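The new CHAP credentials are used at login time, so if a session to the target is already open, log out and log back in. The target name and portal address here reuse the values from the earlier examples:
# iscsiadm -m node -T iqn.2006-04.example:444 -p 10.64.24.179 -u
# iscsiadm -m node -T iqn.2006-04.example:444 -p 10.64.24.179 -l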
Additional resources
- The iscsiadm man page.
7.2. Monitoring an iSCSI session
As a system administrator, you can monitor the iSCSI session using the iscsiadm utility.
7.2.1. Monitoring an iSCSI session using the iscsiadm utility
This procedure describes how to monitor an iSCSI session using the iscsiadm utility.
By default, the iSCSI service is lazily started: the service starts after you run the iscsiadm command. If root is not on an iSCSI device, or there are no nodes marked with node.startup = automatic, the iSCSI service does not start until an iscsiadm command is executed that requires iscsid or the iscsi kernel modules to be started.
To force the iscsid daemon to run and the iSCSI kernel modules to load:
# systemctl start iscsid.service
Prerequisites
- Installed iscsi-initiator-utils on the client machine:
# yum install iscsi-initiator-utils
Procedure
Find information about the running sessions:
# iscsiadm -m session -P 3
This command displays the session or device state, session ID (sid), some negotiated parameters, and the SCSI devices accessible through the session.
For shorter output, for example, to display only the sid-to-node mapping, run:
# iscsiadm -m session -P 0
or
# iscsiadm -m session
tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311
These commands print the list of running sessions in the following format:
driver [sid] target_ip:port,target_portal_group_tag proper_target_name
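For example, to pick out only the local disk names that the sessions provide, you can filter the verbose output, which typically contains one "Attached scsi disk" line per exported LUN:
# iscsiadm -m session -P 3 | grep "Attached scsi disk"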
Additional resources
- The /usr/share/doc/iscsi-initiator-utils-version/README file.
- The iscsiadm man page.
7.3. Removing an iSCSI target
As a system administrator, you can remove the iSCSI target.
7.3.1. Removing an iSCSI object using targetcli tool
This procedure describes how to remove iSCSI objects using the targetcli tool.
Procedure
Log off from the target:
# iscsiadm -m node -T iqn.2006-04.example:444 -u
For more information on how to log in to the target, see Section 7.1.12, “Creating an iSCSI initiator”.
Remove the entire target, including all ACLs, LUNs, and portals:
/> iscsi/ delete iqn.2006-04.com.example:444
Replace iqn.2006-04.com.example:444 with the target_iqn_name.
To remove an iSCSI backstore:
/> backstores/backstore-type/ delete block_backend
- Replace backstore-type with either fileio, block, pscsi, or ramdisk.
- Replace block_backend with the backstore-name you want to delete.
To remove parts of an iSCSI target, such as an ACL:
/> /iscsi/iqn-name/tpg/acls/ delete iqn.2006-04.com.example:444
View the changes:
/> iscsi/ ls
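On the initiator side, you can also remove the stored node record for the deleted target so that the client no longer attempts to log in to it. The target name and portal address here are the ones used earlier in this chapter:
# iscsiadm -m node -T iqn.2006-04.example:444 -p 10.64.24.179 -o delete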
Additional resources
- The targetcli man page.
7.4. DM Multipath overrides of the device timeout
The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override recovery_tmo values:
- The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices.
- For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value.
The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices.
The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout. Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath, because DM Multipath always resets recovery_tmo when the multipathd service reloads.
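For reference, fast_io_fail_tmo can be set in /etc/multipath.conf; this is only a sketch, and the 15-second value is illustrative rather than a recommendation:
defaults {
    fast_io_fail_tmo 15
}
After editing the file, reload the multipathd service (for example, with systemctl reload multipathd) so that the new value takes effect.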