Chapter 8. Using an iSCSI Gateway

The iSCSI gateway integrates Red Hat Ceph Storage with the iSCSI standard to provide a highly available (HA) iSCSI target that exports RADOS Block Device (RBD) images as SCSI disks. The iSCSI protocol allows clients (initiators) to send SCSI commands to SCSI storage devices (targets) over a TCP/IP network. This allows heterogeneous clients, such as Microsoft Windows, to access the Red Hat Ceph Storage cluster.

Each iSCSI gateway runs the Linux IO target kernel subsystem (LIO) to provide iSCSI protocol support. LIO utilizes a user-space passthrough (TCMU) to interact with Ceph’s librbd library to expose RBD images to iSCSI clients. With Ceph’s iSCSI gateway you can effectively run a fully integrated block-storage infrastructure with all features and benefits of a conventional Storage Area Network (SAN).

Figure 8.1. Ceph iSCSI Gateway HA Design


8.1. Requirements

Implementing the Ceph iSCSI gateway has a few requirements. For a highly available Ceph iSCSI gateway solution, Red Hat recommends using a minimum of two and a maximum of four Ceph iSCSI gateway nodes.

For the hardware requirements, see the Red Hat Ceph Storage Hardware Selection Guide.

Warning

On the iSCSI gateway nodes, the memory footprint of the RBD images can grow large. Plan memory requirements accordingly, based on the number of RBD images mapped. Each RBD image uses roughly 90 MB of RAM, so, for example, 100 mapped RBD images consume approximately 9 GB of RAM on each gateway node.

There are no specific iSCSI gateway options for the Ceph Monitors or OSDs, but it is important to lower the default timers for detecting down OSDs to reduce the possibility of initiator timeouts. Red Hat recommends setting the following timers on each OSD in the storage cluster:

osd client watch timeout = 15
osd heartbeat grace = 20
osd heartbeat interval = 5
  • Online Updating Using the Ceph Monitor

    Syntax

    ceph tell <daemon_type>.<id> injectargs '--<parameter_name> <new_value>'

    Example

    # ceph tell osd.0 injectargs '--osd_client_watch_timeout 15'
    # ceph tell osd.0 injectargs '--osd_heartbeat_grace 20'
    # ceph tell osd.0 injectargs '--osd_heartbeat_interval 5'

  • Online Updating on the OSD Node

    Syntax

    ceph daemon <daemon_type>.<id> config set <parameter_name> <new_value>

    Example

    # ceph daemon osd.0 config set osd_client_watch_timeout 15
    # ceph daemon osd.0 config set osd_heartbeat_grace 20
    # ceph daemon osd.0 config set osd_heartbeat_interval 5

Note

Update the Ceph configuration file and copy it to all nodes in the Ceph storage cluster. The default configuration file is /etc/ceph/ceph.conf. Add the following lines to it:

[osd]
osd_client_watch_timeout = 15
osd_heartbeat_grace = 20
osd_heartbeat_interval = 5
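
For example, one way to distribute the updated file is to copy it from the node where it was edited to each of the other nodes with scp, where <node_name> is a placeholder for each remaining node in the storage cluster:

# scp /etc/ceph/ceph.conf root@<node_name>:/etc/ceph/ceph.conf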

For more details on setting Ceph configuration options, see the Configuration Guide for Red Hat Ceph Storage 3.

8.2. Configuring the iSCSI Target

Traditionally, block-level access to a Ceph storage cluster has been limited to QEMU and librbd, which is a key enabler for adoption within OpenStack environments.

Prerequisites:

  • Red Hat Enterprise Linux 7.5 or later
  • A running Red Hat Ceph Storage 3.0 cluster
  • iSCSI gateway nodes, which can either be colocated with OSD nodes or on dedicated nodes
  • Valid Red Hat Enterprise Linux 7 and Red Hat Ceph Storage 3.0 entitlements/subscriptions on the iSCSI gateway nodes
  • Separate network subnets for iSCSI front-end traffic and Ceph back-end traffic

Deploying the Ceph iSCSI gateway can be done using Ansible or the command-line interface.

8.2.1. Configuring the iSCSI Target using Ansible

Requirements:

  • Red Hat Enterprise Linux 7.5 or later.
  • A running Red Hat Ceph Storage 3 cluster or later.

Installing:

  1. On the iSCSI gateway nodes, enable the Red Hat Ceph Storage 3 Tools repository. For details, see the Enabling the Red Hat Ceph Storage Repositories section in the Installation Guide for Red Hat Enterprise Linux.

    1. Install the ceph-iscsi-config package:

      # yum install ceph-iscsi-config
  2. On the Ansible administration node, do the following steps, as the root user:

    1. Enable the Red Hat Ceph Storage 3 Tools repository. For details, see the Enabling the Red Hat Ceph Storage Repositories section in the Installation Guide for Red Hat Enterprise Linux.
    2. Install the ceph-ansible package:

      # yum install ceph-ansible
    3. Add an entry in /etc/ansible/hosts file for the gateway group:

      [iscsigws]
      ceph-igw-1
      ceph-igw-2
      Note

      If colocating the iSCSI gateway with an OSD node, add the OSD node to the [iscsigws] section.

Configuring:

The ceph-ansible package places a file in the /usr/share/ceph-ansible/group_vars/ directory called iscsigws.yml.sample. Create a copy of the iscsigws.yml.sample file and name it iscsigws.yml.

Important

The new file name (iscsigws.yml) and the new section heading ([iscsigws]) are only applicable to Red Hat Ceph Storage 3.1 or higher. Upgrading from previous versions of Red Hat Ceph Storage to 3.1 will still use the old file name (iscsi-gws.yml) and the old section heading ([iscsi-gws]).

Uncomment the gateway_ip_list variable and update the values accordingly.

For example, to add two gateways with IP addresses of 10.172.19.21 and 10.172.19.22, configure gateway_ip_list like this:

gateway_ip_list: 10.172.19.21,10.172.19.22

Uncomment the rbd_devices variable and update the values accordingly, for example:

rbd_devices:
  - { pool: 'rbd', image: 'ansible1', size: '30G', host: 'ceph-1', state: 'present' }
  - { pool: 'rbd', image: 'ansible2', size: '15G', host: 'ceph-1', state: 'present' }
  - { pool: 'rbd', image: 'ansible3', size: '30G', host: 'ceph-1', state: 'present' }
  - { pool: 'rbd', image: 'ansible4', size: '50G', host: 'ceph-1', state: 'present' }

Uncomment the client_connections variable and update the values accordingly, for example:

Example with enabling CHAP authentication

client_connections:
  - { client: 'iqn.1994-05.com.redhat:rh7-iscsi-client', image_list: 'rbd.ansible1,rbd.ansible2', chap: 'rh7-iscsi-client/redhat', status: 'present' }
  - { client: 'iqn.1991-05.com.microsoft:w2k12r2', image_list: 'rbd.ansible4', chap: 'w2k12r2/microsoft_w2k12', status: 'absent' }

Example with disabling CHAP authentication

client_connections:
  - { client: 'iqn.1991-05.com.microsoft:w2k12r2', image_list: 'rbd.ansible4', chap: '', status: 'present' }
  - { client: 'iqn.1991-05.com.microsoft:w2k16r2', image_list: 'rbd.ansible2', chap: '', status: 'present' }

Important

Disabling CHAP is only supported on Red Hat Ceph Storage 3.1 or higher. Red Hat does not support mixing clients, some with CHAP enabled and some with CHAP disabled. All clients marked as present must either have CHAP enabled or have CHAP disabled.

Review the following Ansible variables and descriptions, and update accordingly, if needed.

Table 8.1. iSCSI Gateway General Variables

Variable | Meaning/Purpose

seed_monitor

Each gateway needs access to the ceph cluster for rados and rbd calls. This means the iSCSI gateway must have an appropriate /etc/ceph/ directory defined. The seed_monitor host is used to populate the iSCSI gateway’s /etc/ceph/ directory.

cluster_name

Define a custom storage cluster name.

gateway_keyring

Define a custom keyring name.

deploy_settings

If set to true, then the settings are deployed when the playbook is run.

perform_system_checks

This is a boolean value that checks for multipath and lvm configuration settings on each gateway. It must be set to true for at least the first run to ensure multipathd and lvm are configured properly.

gateway_iqn

This is the iSCSI IQN that all the gateways will expose to clients. This means each client will see the gateway group as a single subsystem.

gateway_ip_list

The comma-separated IP list defines the IP addresses that will be used on the front-end network for iSCSI traffic. This IP will be bound to the active target portal group on each node, and is the access point for iSCSI traffic. Each IP should correspond to an IP available on the hosts defined in the [iscsigws] host group in /etc/ansible/hosts.

rbd_devices

This section defines the RBD images that will be controlled and managed within the iSCSI gateway configuration. Parameters like pool and image are self-explanatory. Here are the other parameters:
size = This defines the size of the RBD image. You can increase the size later by changing this value, but shrinking an RBD image is not supported and such a request is ignored.
host = This is the iSCSI gateway host name that will be responsible for the RBD allocation and resize. Every defined rbd_device entry must have a host assigned.
state = This is typical Ansible syntax for whether the resource should be defined or removed. A request with a state of absent is first checked to ensure the RBD image is not mapped to any client. If the RBD image is unallocated, it is removed from the iSCSI gateway and deleted from the configuration.

client_connections

This section defines the iSCSI client connection details together with the LUN (RBD image) masking. Currently, only CHAP is supported as an authentication mechanism. Each connection defines an image_list, which is a comma-separated list of the form pool.rbd_image[,pool.rbd_image,…​]. RBD images can be added to and removed from this list to change the client masking. Note that no checks are done to limit RBD sharing across client connections.

Table 8.2. iSCSI Gateway RBD-TARGET-API Variables

Variable | Meaning/Purpose

api_user

The user name for the API. The default is admin.

api_password

The password for using the API. The default is admin.

api_port

The TCP port number for using the API. The default is 5001.

api_secure

Value can be true or false. The default is false.

loop_delay

Controls the sleeping interval in seconds for polling the iSCSI management object. The default value is 1.

trusted_ip_list

A list of IP addresses that have access to the API. By default, only the iSCSI gateway nodes have access.
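
For orientation, the following is a minimal sketch of how some of these general and API variables might look in group_vars/iscsigws.yml. The values are illustrative placeholders taken from the examples in this chapter, not shipped defaults; use the iscsigws.yml.sample file and Appendix A as the authoritative reference.

# Illustrative placeholder values only; see iscsigws.yml.sample for the shipped defaults.
seed_monitor: mon-host-1          # hypothetical Monitor host name
cluster_name: ceph
gateway_keyring: ceph.client.admin.keyring
gateway_iqn: iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
gateway_ip_list: 10.172.19.21,10.172.19.22
api_user: admin
api_password: admin
api_port: 5001
api_secure: false
trusted_ip_list: 192.168.0.10,192.168.0.11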

Important

For rbd_devices, there cannot be any periods (.) in the pool name or in the image name.

Warning

Gateway configuration changes are only supported from one gateway at a time. Attempting to run changes concurrently through multiple gateways may lead to configuration instability and inconsistency.

Warning

When the ansible-playbook command is run, Ansible installs the ceph-iscsi-cli package, creates the /etc/ceph/iscsi-gateway.cfg file, and then updates it based on the settings in the group_vars/iscsigws.yml file. If you have previously installed the ceph-iscsi-cli package using the command-line installation procedure, then copy the existing settings from the iscsi-gateway.cfg file to the group_vars/iscsigws.yml file.

See Appendix A, Sample iscsigws.yml File, to view the full iscsigws.yml.sample file.

Deploying:

On the Ansible administration node, do the following steps, as the root user.

  1. Execute the Ansible playbook:

    # cd /usr/share/ceph-ansible
    # ansible-playbook site.yml
    Note

    The Ansible playbook will handle RPM dependencies, RBD creation and Linux iSCSI target configuration.

    Warning

    On stand-alone iSCSI gateway nodes, verify that the correct Red Hat Ceph Storage 3.0 software repositories are enabled. If they are unavailable, then the wrong packages will be installed.

  2. Verify the configuration by running the following command:

    # gwcli ls
    Important

    Do not use the targetcli utility to change the configuration. Doing so results in ALUA misconfiguration and path failover problems. It can also corrupt data, produce mismatched configuration across iSCSI gateways, and produce mismatched WWN information, which leads to client pathing problems.

Service Management:

The ceph-iscsi-config package installs the configuration management logic and a Systemd service called rbd-target-gw. When the Systemd service is enabled, the rbd-target-gw will start at boot time and will restore the Linux iSCSI target state. Deploying the iSCSI gateways with the Ansible playbook disables the target service.

# systemctl start rbd-target-gw

Below are the outcomes of interacting with the rbd-target-gw Systemd service.

# systemctl <start|stop|restart|reload> rbd-target-gw
reload
A reload request will force rbd-target-gw to reread the configuration and apply it to the current running environment. This is normally not required, since changes are deployed in parallel from Ansible to all iSCSI gateway nodes.
stop
A stop request closes the gateway’s portal interfaces, drops connections to clients, and wipes the current Linux iSCSI target configuration from the kernel. This returns the iSCSI gateway to a clean state. When clients are disconnected, active I/O is rescheduled to the other iSCSI gateways by the client-side multipathing layer.

Administration:

Within the /usr/share/ceph-ansible/group_vars/iscsigws.yml file there are a number of operational workflows that the Ansible playbook supports.

Warning

Before removing RBD images from the iSCSI gateway configuration, follow the standard procedures for removing a storage device from the operating system.

For clients and systems using Red Hat Enterprise Linux 7, see the Red Hat Enterprise Linux 7 Storage Administration Guide for more details on removing devices.

Table 8.3. Operational Workflows

I want to… | Update the iscsigws.yml file by…

Add more RBD images

Adding another entry to the rbd_devices section with the new image.

Resize an existing RBD image

Updating the size parameter within the rbd_devices section. Client side actions are required to pick up the new size of the disk.

Add a client

Adding an entry to the client_connections section.

Add another RBD to a client

Adding the relevant RBD pool.image name to the image_list variable for the client.

Remove an RBD from a client

Removing the RBD pool.image name from the client's image_list variable.

Remove an RBD from the system

Changing the RBD entry state variable to absent. The RBD image must be unallocated from the operating system first for this to succeed.

Change the client's CHAP credentials

Updating the relevant CHAP details in client_connections. This will need to be coordinated with the clients. For example, the client issues an iSCSI logout, the credentials are changed by the Ansible playbook, the credentials are changed at the client, then the client performs an iSCSI login.

Remove a client

Updating the relevant client_connections item with a state of absent. Once the Ansible playbook is run, the client is purged from the system, but the disks remain defined to the Linux iSCSI target for potential reuse.

Once a change has been made, rerun the Ansible playbook to apply the change across the iSCSI gateway nodes.

# ansible-playbook site.yml

Removing the Configuration:

All iSCSI initiators need to be disconnected before purging the iSCSI gateway configuration. Follow the procedures below for the appropriate operating system:

Red Hat Enterprise Linux initiators:

iscsiadm -m node -T $TARGET_NAME --logout

Replace $TARGET_NAME with the configured iSCSI target name.

Example output

# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw --logout
Logging out of session [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260]
Logging out of session [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260]
Logout of [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260] successful.
Logout of [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260] successful.

Windows initiators:

See the Microsoft documentation for more details.

VMware ESXi initiators:

See the VMware documentation for more details.

The ceph-ansible package provides an Ansible playbook to remove the iSCSI gateway configuration and related RBD images. The Ansible playbook is /usr/share/ceph-ansible/purge-iscsi-gateways.yml.

ansible-playbook purge-iscsi-gateways.yml

When this Ansible playbook is run, a prompt for the type of purge to perform is displayed:

lio
In this mode the Linux iSCSI target configuration is purged on all iSCSI gateways that are defined. Disks that were created are left untouched within the Ceph storage cluster.
all
When all is chosen, the Linux iSCSI target configuration is removed together with all RBD images that were defined within the iSCSI gateway environment; other, unrelated RBD images are not removed. Ensure the correct mode is chosen, because this operation deletes data.
Warning

A purge operation is a destructive action against the iSCSI gateway environment.

Warning

A purge operation will fail if RBD images that are exported through the Ceph iSCSI gateway have snapshots or clones.

Example Output

[root@rh7-iscsi-client ceph-ansible]# ansible-playbook purge-iscsi-gateways.yml
Which configuration elements should be purged? (all, lio or abort) [abort]: all


PLAY [Confirm removal of the iSCSI gateway configuration] *********************


GATHERING FACTS ***************************************************************
ok: [localhost]


TASK: [Exit playbook if user aborted the purge] *******************************
skipping: [localhost]


TASK: [set_fact ] *************************************************************
ok: [localhost]


PLAY [Removing the gateway configuration] *************************************


GATHERING FACTS ***************************************************************
ok: [ceph-igw-1]
ok: [ceph-igw-2]


TASK: [igw_purge | purging the gateway configuration] *************************
changed: [ceph-igw-1]
changed: [ceph-igw-2]


TASK: [igw_purge | deleting configured rbd devices] ***************************
changed: [ceph-igw-1]
changed: [ceph-igw-2]


PLAY RECAP ********************************************************************
ceph-igw-1                 : ok=3    changed=2    unreachable=0    failed=0
ceph-igw-2                 : ok=3    changed=2    unreachable=0    failed=0
localhost                  : ok=2    changed=0    unreachable=0    failed=0

8.2.2. Configuring the iSCSI Target using the Command Line Interface

The Ceph iSCSI gateway is the iSCSI target node and also a Ceph client node. The Ceph iSCSI gateway can be a standalone node or be colocated on a Ceph Object Storage Device (OSD) node. Completing the following steps will install and configure the Ceph iSCSI gateway for basic operation.

Requirements:

  • Red Hat Enterprise Linux 7.5 or later
  • A running Red Hat Ceph Storage 3.0 cluster or later
  • The following packages must be installed:

    • targetcli-2.1.fb47-0.1.20170815.git5bf3517.el7cp or newer package
    • python-rtslib-2.1.fb64-0.1.20170815.gitec364f3.el7cp or newer package
    • tcmu-runner-1.4.0-0.2.el7cp or newer package
    • openssl-1.0.2k-8.el7 or newer package

      Important

      If previous versions of these packages exist, then they must be removed first before installing the newer versions. These newer versions must be installed from a Red Hat Ceph Storage repository.

Do the following steps on all Ceph Monitor nodes in the storage cluster, before using the gwcli utility:

  1. Restart the ceph-mon service, as the root user:

    # systemctl restart ceph-mon@$MONITOR_HOST_NAME

    For example:

    # systemctl restart ceph-mon@monitor1

Do the following steps on the Ceph iSCSI gateway node, as the root user, before proceeding to the Installing section:

  1. If the Ceph iSCSI gateway is not colocated on an OSD node, then copy the Ceph configuration files, located in /etc/ceph/, from a running Ceph node in the storage cluster to the iSCSI Gateway node. The Ceph configuration files must exist on the iSCSI gateway node under /etc/ceph/.
  2. Install and configure the Ceph command-line interface. For details, see the Installing the Ceph Command Line Interface chapter in the Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux.
  3. Enable the Ceph tools repository:

    # subscription-manager repos --enable=rhel-7-server-ceph-3-tools-rpms
  4. If needed, open TCP ports 3260 and 5000 on the firewall, for example with firewalld as shown after this list.
  5. Create a new RADOS Block Device (RBD) or use an existing one.

    1. See Section 2.1, “Prerequisites” for more details.
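
If firewalld manages the firewall on the gateway node, the ports from step 4 can be opened with commands like the following. This is a sketch only; adjust the zone and ports for your environment:

# firewall-cmd --add-port=3260/tcp --permanent
# firewall-cmd --add-port=5000/tcp --permanent
# firewall-cmd --reload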
Warning

If you already installed the Ceph iSCSI gateway using Ansible, then do not use this procedure.

When the ansible-playbook command is run, Ansible installs the ceph-iscsi-cli package, creates the /etc/ceph/iscsi-gateway.cfg file, and then updates it based on the settings in the group_vars/iscsigws.yml file. See the Requirements section for more information.

Installing:

Do the following steps on all iSCSI gateway nodes, as the root user, unless otherwise noted.

  1. Install the ceph-iscsi-cli package:

    # yum install ceph-iscsi-cli
  2. Install the tcmu-runner package:

    # yum install tcmu-runner
  3. If needed, install the openssl package:

    # yum install openssl
    1. On the primary iSCSI gateway node, create a directory to hold the SSL keys:

      # mkdir ~/ssl-keys
      # cd ~/ssl-keys
    2. On the primary iSCSI gateway node, create the certificate and key files:

      # openssl req -newkey rsa:2048 -nodes -keyout iscsi-gateway.key -x509 -days 365 -out iscsi-gateway.crt
      Note

      You will be prompted to enter information for the certificate, such as the country, organization, and Common Name.

    3. On the primary iSCSI gateway node, create a PEM file:

      # cat iscsi-gateway.crt iscsi-gateway.key > iscsi-gateway.pem
    4. On the primary iSCSI gateway node, create a public key:

      # openssl x509 -inform pem -in iscsi-gateway.pem -pubkey -noout > iscsi-gateway-pub.key
    5. From the primary iSCSI gateway node, copy the iscsi-gateway.crt, iscsi-gateway.pem, iscsi-gateway-pub.key, and iscsi-gateway.key files to the /etc/ceph/ directory on the other iSCSI gateway nodes.
  4. Create a file named iscsi-gateway.cfg in the /etc/ceph/ directory:

    # touch /etc/ceph/iscsi-gateway.cfg
    1. Edit the iscsi-gateway.cfg file and add the following lines:

      Syntax

      [config]
      cluster_name = <ceph_cluster_name>
      gateway_keyring = <ceph_client_keyring>
      api_secure = true
      trusted_ip_list = <ip_addr>,<ip_addr>

      Example

      [config]
      cluster_name = ceph
      gateway_keyring = ceph.client.admin.keyring
      api_secure = true
      trusted_ip_list = 192.168.0.10,192.168.0.11

      See Table 8.1 and Table 8.2 in the Requirements section for more details on these options.

      Important

      The iscsi-gateway.cfg file must be identical on all iSCSI gateway nodes.

    2. Copy the iscsi-gateway.cfg file to all iSCSI gateway nodes.
  5. Enable and start the API service:

    # systemctl enable rbd-target-api
    # systemctl start rbd-target-api

Configuring:

  1. Start the iSCSI gateway command-line interface:

    # gwcli
  2. Creating the iSCSI gateways:

    Syntax

    >/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:<target_name>
    > goto gateways
    > create <iscsi_gw_name> <IP_addr_of_gw>
    > create <iscsi_gw_name> <IP_addr_of_gw>

    Example

    >/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
    > goto gateways
    > create ceph-gw-1 10.172.19.21
    > create ceph-gw-2 10.172.19.22

  3. Adding a RADOS Block Device (RBD):

    Syntax

    > cd /disks
    >/disks/ create <pool_name> image=<image_name> size=<image_size>m|g|t max_data_area_mb=<buffer_size>

    Example

    > cd /disks
    >/disks/ create rbd image=disk_1 size=50g max_data_area_mb=32

    Important

    There cannot be any periods (.) in the pool name or in the image name.

    Note

    The max_data_area_mb option controls the amount of memory in megabytes that each image can use to pass SCSI command data between the iSCSI target and the Ceph cluster. If this value is too small, then it can result in excessive queue full retries which will affect performance. If the value is too large, then it can result in one disk using too much of the system’s memory, which can cause allocation failures for other subsystems. The default value is 8.

    This value can be changed using the reconfigure command. The image must not be in use by an iSCSI initiator for this command to take effect.

    Syntax

    >/disks/ reconfigure max_data_area_mb <new_buffer_size>

    Example

    >/disks/ reconfigure max_data_area_mb 64

  4. Creating a client:

    Syntax

    > goto hosts
    > create iqn.1994-05.com.redhat:<client_name>
    > auth chap=<user_name>/<password>

    Example

    > goto hosts
    > create iqn.1994-05.com.redhat:rh7-client
    > auth chap=iscsiuser1/temp12345678

    Important

    Disabling CHAP is only supported on Red Hat Ceph Storage 3.1 or higher. Red Hat does not support mixing clients, some with CHAP enabled and some with CHAP disabled. All clients must either have CHAP enabled or have CHAP disabled. The default behavior is to authenticate an initiator only by its initiator name.

    If initiators are failing to log in to the target, then CHAP authentication might be misconfigured for some initiators.

    Example

    o- hosts ................................ [Hosts: 2: Auth: MISCONFIG]

    Run the following command at the hosts level to reset all CHAP authentication:

    /> goto hosts
    /iscsi-target...csi-igw/hosts> auth nochap
    ok
    ok
    /iscsi-target...csi-igw/hosts> ls
    o- hosts ................................ [Hosts: 2: Auth: None]
      o- iqn.2005-03.com.ceph:esx ........... [Auth: None, Disks: 4(310G)]
      o- iqn.1994-05.com.redhat:rh7-client .. [Auth: None, Disks: 0(0.00Y)]
  5. Adding disks to a client:

    Syntax

    >/iscsi-target..eph-igw/hosts> cd iqn.1994-05.com.redhat:<client_name>
    > disk add <pool_name>.<image_name>

    Example

    >/iscsi-target..eph-igw/hosts> cd iqn.1994-05.com.redhat:rh7-client
    > disk add rbd.disk_1

  6. To confirm that the API is using SSL correctly, look in the /var/log/rbd-target-api.log file for https, for example:

    Aug 01 17:27:42 test-node.example.com python[1879]:  * Running on https://0.0.0.0:5001/
  7. The next step is to configure an iSCSI initiator. See Section 8.3, “Configuring the iSCSI Initiator” for more information on configuring an iSCSI initiator.

Verifying:

  1. To verify if the iSCSI gateways are working:

    Example

    /> goto gateways
    /iscsi-target...-igw/gateways> ls
    o- gateways ............................ [Up: 2/2, Portals: 2]
      o- ceph-gw-1  ........................ [ 10.172.19.21 (UP)]
      o- ceph-gw-2  ........................ [ 10.172.19.22 (UP)]

    Note

    If the status is UNKNOWN, then check for network issues and any misconfigurations. If using a firewall, then check if the appropriate TCP port is open. Check if the iSCSI gateway is listed in the trusted_ip_list option. Verify that the rbd-target-api service is running on the iSCSI gateway node.
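
    For example, on each iSCSI gateway node you can check the service and confirm that the iSCSI port is listening with commands such as the following (adjust the port check if you changed api_port or use different firewall tooling):

    # systemctl status rbd-target-api
    # ss -tln | grep 3260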

  2. To verify that the initiator is connected to the iSCSI target, check that the initiator is shown as LOGGED-IN:

    Example

    /> goto hosts
    /iscsi-target...csi-igw/hosts> ls
    o- hosts .............................. [Hosts: 1: Auth: None]
      o- iqn.1994-05.com.redhat:rh7-client  [LOGGED-IN, Auth: None, Disks: 0(0.00Y)]

  3. To verify if LUNs are balanced across iSCSI gateways:

    /> goto hosts
    /iscsi-target...csi-igw/hosts> ls
    o- hosts ................................. [Hosts: 2: Auth: None]
      o- iqn.2005-03.com.ceph:esx ............ [Auth: None, Disks: 4(310G)]
      | o- lun 0 ............................. [rbd.disk_1(100G), Owner: ceph-gw-1]
      | o- lun 1 ............................. [rbd.disk_2(10G), Owner: ceph-gw-2]

    When creating a disk, the disk is assigned an iSCSI gateway as its Owner. The initiator's multipath layer reports the path to the owning iSCSI gateway as being in the ALUA Active-Optimized (AO) state, and the other paths as being in the ALUA Active-non-Optimized (ANO) state.

    If the AO path fails, one of the other iSCSI gateways will be used. The ordering of the failover gateways depends on the initiator's multipath layer; normally, the order is based on which path was discovered first.

    Currently, the balancing of LUNs is not dynamic. The owning iSCSI gateway is selected at disk creation time and is not changeable.

8.3. Configuring the iSCSI Initiator

Red Hat Ceph Storage supports iSCSI initiators on three operating systems for connecting to the Ceph iSCSI gateway. The following sections cover the iSCSI initiators for Red Hat Enterprise Linux, Red Hat Virtualization, Microsoft Windows, and VMware ESX.

8.3.1. The iSCSI Initiator for Red Hat Enterprise Linux

Prerequisite:

  • Package iscsi-initiator-utils-6.2.0.873-35 or newer must be installed
  • Package device-mapper-multipath-0.4.9-99 or newer must be installed

Installing the Software:

  1. Install the iSCSI initiator and multipath tools:

    # yum install iscsi-initiator-utils
    # yum install device-mapper-multipath

Setting the Initiator Name:

  1. Edit the /etc/iscsi/initiatorname.iscsi file.

    Note

    The initiator name must match the initiator name used in the Ansible client_connections option or what was used during the initial setup using gwcli.
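
    For example, the file contains a single InitiatorName line. The IQN shown here is the sample client name used elsewhere in this chapter; substitute your own initiator IQN:

    InitiatorName=iqn.1994-05.com.redhat:rh7-client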

Configuring Multipath IO:

  1. Create the default /etc/multipath.conf file and enable the multipathd service:

    # mpathconf --enable --with_multipathd y
  2. Add the following to /etc/multipath.conf file:

    devices {
            device {
                    vendor                 "LIO-ORG"
                    hardware_handler       "1 alua"
                    path_grouping_policy   "failover"
                    path_selector          "queue-length 0"
                    failback               60
                    path_checker           tur
                    prio                   alua
                    prio_args              exclusive_pref_bit
                    fast_io_fail_tmo       25
                    no_path_retry          queue
            }
    }
  3. Reload the multipathd service:

    # systemctl reload multipathd

CHAP Setup and iSCSI Discovery/Login:

  1. Provide a CHAP username and password by updating the /etc/iscsi/iscsid.conf file accordingly.

    Example

    node.session.auth.authmethod = CHAP
    node.session.auth.username = user
    node.session.auth.password = password

    Note

    If you update these options, then you must rerun the iscsiadm discovery command.

  2. Discover the target portals:

    # iscsiadm -m discovery -t st -p 192.168.56.101
    192.168.56.101:3260,1 iqn.2003-01.org.linux-iscsi.rheln1
    192.168.56.102:3260,2 iqn.2003-01.org.linux-iscsi.rheln1
  3. Login to target:

    # iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.rheln1 -l

Viewing the Multipath IO Configuration:

The multipath daemon (multipathd) sets up devices automatically based on the multipath.conf settings. Running the multipath command shows devices set up in a failover configuration, with a priority group for each path, for example:

# multipath -ll
mpathbt (360014059ca317516a69465c883a29603) dm-1 LIO-ORG ,IBLOCK
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 28:0:0:1 sde  8:64  active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 29:0:0:1 sdc  8:32  active ready running

The prio value in the multipath -ll output indicates the ALUA state, where prio=50 indicates the path to the owning iSCSI gateway in the ALUA Active-Optimized state and prio=10 indicates an Active-non-Optimized path. The status field indicates which path is being used, where active indicates the currently used path, and enabled indicates the failover path that is used if the active path fails. To match the device name, for example sde in the multipath -ll output, to the iSCSI gateway, run the following command:

# iscsiadm -m session -P 3

The Persistent Portal value is the IP address assigned to the iSCSI gateway listed in gwcli or the IP address of one of the iSCSI gateways listed in the gateway_ip_list, if Ansible was used.

8.3.2. The iSCSI Initiator for Red Hat Virtualization

Prerequisite:

  • Red Hat Virtualization 4.1
  • Configured MPIO devices on all Red Hat Virtualization nodes
  • Package iscsi-initiator-utils-6.2.0.873-35 or newer must be installed
  • Package device-mapper-multipath-0.4.9-99 or newer must be installed

Adding iSCSI Storage:

  1. Click the Storage resource tab to list the existing storage domains.
  2. Click the New Domain button to open the New Domain window.
  3. Enter the Name of the new storage domain.
  4. Use the Data Center drop-down menu to select a data center.
  5. Use the drop-down menus to select the Domain Function and the Storage Type. The storage domain types that are not compatible with the chosen domain function are not available.
  6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.
  7. The New Domain window automatically displays known targets with unused LUNs when iSCSI is selected as the storage type. If the target that you are adding storage from is not listed, then you can use target discovery to find it; otherwise, proceed to the next step.

    1. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.

      Note

      LUNs external to the environment are also displayed.

      You can use the Discover Targets options to add LUNs on many targets, or multiple paths to the same LUNs.

    2. Enter the fully qualified domain name or IP address of the iSCSI host in the Address field.
    3. Enter the port to connect to the host on when browsing for targets in the Port field. The default is 3260.
    4. If the Challenge Handshake Authentication Protocol (CHAP) is being used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
    5. Click the Discover button.
    6. Select the target to use from the discovery results and click the Login button. Alternatively, click Login All to log in to all of the discovered targets.

      Important

      If access through more than one path is required, ensure that you discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.

  8. Click the + button next to the desired target. This will expand the entry and display all unused LUNs attached to the target.
  9. Select the check box for each LUN that you are using to create the storage domain.
  10. Optionally, you can configure the advanced parameters.

    1. Click Advanced Parameters.
    2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
    3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
    4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
    5. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
  11. Click OK to create the storage domain and close the window.

8.3.3. The iSCSI Initiator for Microsoft Windows

Prerequisite:

  • Microsoft Windows Server 2016

iSCSI Initiator, Discovery and Setup:

  1. Install the iSCSI initiator driver and MPIO tools.
  2. Launch the MPIO program, click on the Discover Multi-Paths tab, check the Add support for iSCSI devices box, and click Add. This change will require a reboot.
  3. On the iSCSI Initiator Properties window, on the Discovery tab, add a target portal. Enter the IP address or DNS name and the port of the Ceph iSCSI gateway.

  4. On the Targets tab, select the target and click Connect.

  5. On the Connect To Target window, select the Enable multi-path option, and click the Advanced button.

  6. Under the Connect using section, select a Target portal IP. Select Enable CHAP log on, enter the Name and Target secret values from the Ceph iSCSI Ansible client credentials section, and click OK.

    Important

    Windows Server 2016 does not accept a CHAP secret less than 12 bytes.

  7. Repeat steps 5 and 6 for each target portal defined when setting up the iSCSI gateway.
  8. If the initiator name is different from the initiator name used during the initial setup, then rename the initiator name. From the iSCSI Initiator Properties window, on the Configuration tab, click the Change button to rename the initiator name.


Multipath IO Setup:

Configuring the MPIO load balancing policy and setting the timeout and retry options is done using PowerShell and the mpclaim command. The remaining options are configured using the iSCSI Initiator tool.

Note

Red Hat recommends increasing the PDORemovePeriod option to 120 seconds from PowerShell. This value might need to be adjusted based on the application. When all paths are down, and 120 seconds expires, the operating system will start failing IO requests.

Set-MPIOSetting -NewPDORemovePeriod 120
  1. Set the failover policy:

    mpclaim.exe -l -m 1
  2. Verify the failover policy:

    mpclaim -s -m
    MSDSM-wide Load Balance Policy: Fail Over Only
  3. Using the iSCSI Initiator tool, from the Targets tab, click the Devices… button.

  4. From the Devices window, select a disk and click the MPIO… button.

  5. On the Device Details window, the paths to each target portal are displayed. If using the ceph-ansible setup method, the iSCSI gateway will use ALUA to tell the iSCSI initiator which path and iSCSI gateway should be used as the primary path. The Load Balancing Policy Fail Over Only must be selected.

  6. From PowerShell, view the multipath configuration:

    mpclaim -s -d $MPIO_DISK_ID

    Replace $MPIO_DISK_ID with the appropriate disk identifier.

Note

There will be one Active/Optimized path which is the path to the iSCSI gateway node that owns the LUN, and there will be an Active/Unoptimized path for each other iSCSI gateway node.


Tuning:

Consider using the following registry settings; an example command for applying them follows the list:

  • Windows Disk Timeout

    Key

    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk

    Value

    TimeOutValue = 65

  • Microsoft iSCSI Initiator Driver

    Key

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance_Number>\Parameters

    Values

    LinkDownTime = 25
    SRBTimeoutDelta = 15
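
These registry values can be applied from an elevated command prompt or PowerShell session with reg.exe. For example, for the disk timeout (a sketch only; the iSCSI initiator driver values live under the instance-specific Parameters key shown above, so locate <Instance_Number> on your system before applying them):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 65 /f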

8.3.4. The iSCSI Initiator for VMware ESX vSphere Web Client

Prerequisite:

  • VMware ESX 6.5 or later using Virtual Machine compatibility 6.5 with VMFS 6
  • Access to the vSphere Web Client
  • Root access to VMware ESX host to execute the esxcli command

iSCSI Discovery and Multipath Device Setup:

  1. Disable HardwareAcceleratedMove (XCOPY):

    # esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
  2. Enable the iSCSI software. From the Navigator pane, click Storage. Select the Adapters tab, and click Configure iSCSI.

  3. Verify the initiator name in the Name & alias section.

    Note

    If the initiator name is different from the initiator name used when creating the client during the initial setup with gwcli, or from the name set in the Ansible client_connections: client variable, then follow this procedure to change the initiator name. From the VMware ESX host, run these esxcli commands.

    1. Get the adapter name for the iSCSI software:

      > esxcli iscsi adapter list
      > Adapter  Driver     State   UID            Description
      > -------  ---------  ------  -------------  ----------------------
      > vmhba64  iscsi_vmk  online  iscsi.vmhba64  iSCSI Software Adapter
    2. Set the initiator name:

      Syntax

      > esxcli iscsi adapter set -A <adaptor_name> -n <initiator_name>

      Example

      > esxcli iscsi adapter set -A vmhba64 -n iqn.1994-05.com.redhat:rh7-client

  4. Configure CHAP. Expand the CHAP authentication section. Select “Do not use CHAP unless required by target”. Enter the CHAP Name and Secret credentials that were used in the initial setup, whether using the gwcli auth command or the Ansible client_connections: credentials variable. Verify that the Mutual CHAP authentication section has “Do not use CHAP” selected.

    Warning

    There is a bug in the vSphere Web Client where the CHAP settings are not used initially. On the Ceph iSCSI gateway node, in kernel logs, you will see the following errors as an indication of this bug:

    > kernel: CHAP user or password not set for Initiator ACL
    > kernel: Security negotiation failed.
    > kernel: iSCSI Login negotiation failed.

    To work around this bug, configure the CHAP settings using the esxcli command. The authname argument is the Name in the vSphere Web Client:

    > esxcli iscsi adapter auth chap set --direction=uni --authname=myiscsiusername --secret=myiscsipassword --level=discouraged -A vmhba64
  5. Configure the iSCSI settings. Expand Advanced settings. Set the RecoveryTimeout value to 25.

  6. Set the discovery address. In the Dynamic targets section, click Add dynamic target. Under Address, add an IP address for one of the Ceph iSCSI gateways. Only one IP address needs to be added. Finally, click the Save configuration button. From the main interface, on the Devices tab, you will see the RBD image.

    Note

    Configuring the LUN will be done automatically, using the ALUA SATP and MRU PSP. Other SATPs and PSPs must not be used. This can be verified with the esxcli command:

    esxcli storage nmp path list -d eui.$DEVICE_ID

    Replace $DEVICE_ID with the appropriate device identifier.

  7. Verify that multipathing has been set up correctly.

    1. List the devices:

      Example

      # esxcli storage nmp device list | grep iSCSI
         Device Display Name: LIO-ORG iSCSI Disk (naa.6001405f8d087846e7b4f0e9e3acd44b)
         Device Display Name: LIO-ORG iSCSI Disk (naa.6001405057360ba9b4c434daa3c6770c)

    2. Get the multipath information for the Ceph iSCSI disk from the previous step:

      Example

      # esxcli storage nmp path list -d naa.6001405f8d087846e7b4f0e9e3acd44b
      
      iqn.2005-03.com.ceph:esx1-00023d000001,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,t,1-naa.6001405f8d087846e7b4f0e9e3acd44b
         Runtime Name: vmhba64:C0:T0:L0
         Device: naa.6001405f8d087846e7b4f0e9e3acd44b
         Device Display Name: LIO-ORG iSCSI Disk (naa.6001405f8d087846e7b4f0e9e3acd44b)
         Group State: active
         Array Priority: 0
         Storage Array Type Path Config: {TPG_id=1,TPG_state=AO,RTP_id=1,RTP_health=UP}
         Path Selection Policy Path Config: {current path; rank: 0}
      
      iqn.2005-03.com.ceph:esx1-00023d000002,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,t,2-naa.6001405f8d087846e7b4f0e9e3acd44b
         Runtime Name: vmhba64:C1:T0:L0
         Device: naa.6001405f8d087846e7b4f0e9e3acd44b
         Device Display Name: LIO-ORG iSCSI Disk (naa.6001405f8d087846e7b4f0e9e3acd44b)
         Group State: active unoptimized
         Array Priority: 0
         Storage Array Type Path Config: {TPG_id=2,TPG_state=ANO,RTP_id=2,RTP_health=UP}
         Path Selection Policy Path Config: {non-current path; rank: 0}

      From the example output, each path has an iSCSI/SCSI name with the following parts:

      Initiator name = iqn.2005-03.com.ceph:esx1
      ISID = 00023d000002
      Target name = iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
      Target port group = 2
      Device id = naa.6001405f8d087846e7b4f0e9e3acd44b

      The Group State value of active indicates this is the Active-Optimized path to the iSCSI gateway. The gwcli command lists this iSCSI gateway as the owner. The rest of the paths have the Group State value of unoptimized and are the failover paths, used if the active path goes into a dead state.

  8. To match all paths to their respective iSCSI gateways, run the following command:

    # esxcli iscsi session connection list

    Example output

    vmhba64,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,00023d000001,0
       Adapter: vmhba64
       Target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
       ISID: 00023d000001
       CID: 0
       DataDigest: NONE
       HeaderDigest: NONE
       IFMarker: false
       IFMarkerInterval: 0
       MaxRecvDataSegmentLength: 131072
       MaxTransmitDataSegmentLength: 262144
       OFMarker: false
       OFMarkerInterval: 0
       ConnectionAddress: 10.172.19.21
       RemoteAddress: 10.172.19.21
       LocalAddress: 10.172.19.11
       SessionCreateTime: 08/16/18 04:20:06
       ConnectionCreateTime: 08/16/18 04:20:06
       ConnectionStartTime: 08/16/18 04:30:45
       State: logged_in
    
    vmhba64,iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw,00023d000002,0
       Adapter: vmhba64
       Target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
       ISID: 00023d000002
       CID: 0
       DataDigest: NONE
       HeaderDigest: NONE
       IFMarker: false
       IFMarkerInterval: 0
       MaxRecvDataSegmentLength: 131072
       MaxTransmitDataSegmentLength: 262144
       OFMarker: false
       OFMarkerInterval: 0
       ConnectionAddress: 10.172.19.22
       RemoteAddress: 10.172.19.22
       LocalAddress: 10.172.19.12
       SessionCreateTime: 08/16/18 04:20:06
       ConnectionCreateTime: 08/16/18 04:20:06
       ConnectionStartTime: 08/16/18 04:30:41
       State: logged_in

    Match the path name with the ISID value; the RemoteAddress value is the IP address of the owning iSCSI gateway.

8.4. Monitoring the iSCSI gateways

Red Hat provides an additional tool for Ceph iSCSI gateway environments to monitor performance of exported RADOS Block Device (RBD) images.

The gwtop tool is a top-like tool that displays aggregated performance metrics of RBD images that are exported to clients over iSCSI. The metrics are sourced from a Performance Metrics Domain Agent (PMDA). Information from the Linux-IO target (LIO) PMDA is used to list each exported RBD image with the connected client and its associated I/O metrics.

Requirements:

  • A running Ceph iSCSI gateway

Installing:

Do the following steps on the iSCSI gateway nodes, as the root user.

  1. Enable the Ceph tools repository:

    # subscription-manager repos --enable=rhel-7-server-ceph-3-tools-rpms
  2. Install the ceph-iscsi-tools package:

    # yum install ceph-iscsi-tools
  3. Install the performance co-pilot package:

    # yum install pcp
    Note

    For more details on performance co-pilot, see the Red Hat Enterprise Linux Performance Tuning Guide.

  4. Install the LIO PMDA package:

    # yum install pcp-pmda-lio
  5. Enable and start the performance co-pilot service:

    # systemctl enable pmcd
    # systemctl start pmcd
  6. Register the pcp-pmda-lio agent:

    # cd /var/lib/pcp/pmdas/lio
    # ./Install

By default, gwtop assumes the iSCSI gateway configuration object is stored in a RADOS object called gateway.conf in the rbd pool. This configuration defines the iSCSI gateways to contact for gathering the performance statistics. This can be overridden by using either the -g or -c flags. See gwtop --help for more details.

The LIO configuration determines which type of performance statistics to extract from performance co-pilot. When gwtop starts, it looks at the LIO configuration, and if it finds user-space disks, then gwtop selects the LIO collector automatically.

Example gwtop Outputs:

For user backed storage (TCMU) devices:

gwtop  2/2 Gateways   CPU% MIN:  4 MAX:  5    Network Total In:    2M  Out:    3M   10:20:00
Capacity:   8G    Disks:   8   IOPS:  503   Clients:  1   Ceph: HEALTH_OK          OSDs:   3
Pool.Image       Src    Size     iops     rMB/s     wMB/s   Client
iscsi.t1703             500M        0      0.00      0.00
iscsi.testme1           500M        0      0.00      0.00
iscsi.testme2           500M        0      0.00      0.00
iscsi.testme3           500M        0      0.00      0.00
iscsi.testme5           500M        0      0.00      0.00
rbd.myhost_1      T       4G      504      1.95      0.00   rh460p(CON)
rbd.test_2                1G        0      0.00      0.00
rbd.testme              500M        0      0.00      0.00

In the Client column, (CON) means the iSCSI initiator (client) is currently logged into the iSCSI gateway. If -multi- is displayed, then multiple clients are mapped to the single RBD image.

Warning

Red Hat does not support mapping a single RBD image to multiple iSCSI initiators (clients).