Fibre Channel over Ethernet (FCoE) Configuration Overview on Red Hat Enterprise Linux 6 and 7

For details about configuring Fibre Channel over Ethernet (FCoE) on Red Hat Enterprise Linux 8, see the corresponding section in the Managing storage devices documentation.

Overview

This article briefly explains the FCoE configuration process on Red Hat Enterprise Linux, along with the terms, commands, and tips that might help you while troubleshooting it.

Although I'll be mentioning networking concepts, this is not an in-depth guide to FCoE; I'm assuming that you already have an up-and-running Storage Area Network (SAN) and Local Area Network (LAN) infrastructure.

Disclosure: every network architecture is unique in its own way; for that reason, be advised that this document is not a recommendation, a best-practices guide, or anything in between.

Used Terms and Definitions

The theory behind FCoE is extensive, so before we start the configuration process, I'll give a brief overview of the concepts and definitions that might help you understand the commands used to set up FCoE on Red Hat Enterprise Linux.

Disclosure: there is much more to this technology than what's written here.

Fibre Channel over Ethernet (FCoE)

Defined in the INCITS T11 FC-BB-5 standard, FCoE allows the encapsulation of Fibre Channel (FC) frames over Ethernet networks.

The common data center has a dedicated LAN (Local Area Network) and SAN (Storage Area Network), separated from each other with their specific configurations. The idea behind FCoE is to merge LAN and SAN onto a single, converged network infrastructure, which means fewer cables, ports, switches, and I/O cards, and lower power and cooling costs.

Data Center Bridging (DCB)

Data Center Bridging (DCB) is a set of Ethernet enhancements whose configuration is negotiated between peers using the Data Center Bridging eXchange (DCBX) protocol. It enables priority-based flow control for specific types of traffic, which enhances the reliability of Ethernet transport. Oversimplifying things, two important DCB functionalities are allocating bandwidth per traffic class and providing lossless Ethernet transport using priority-based flow control between endpoints and switches.

Link Layer Discovery Protocol (LLDP)

Link Layer Discovery Protocol (LLDP) is a vendor-neutral layer 2 protocol. It is used by network devices, such as switches and routers, to advertise their identity and capabilities to other devices on the network.
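One way to see LLDP in action is to query the TLVs advertised by the directly connected switch port with the lldptool command, which ships with the lldpad package (requires lldpad to be running; ethN below is a placeholder for your interface):

```
# lldptool get-tlv -n -i ethN
```

The output lists the neighbor's TLVs, such as its system name and capabilities.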

Virtual LAN (VLAN)

This technology allows network administrators to subdivide a network into Virtual LANs (VLANs), isolating the local network into more than one virtual network with distinct broadcast domains. It also allows switches or routers in different physical locations to communicate as if they were in the same broadcast domain. VLANs play an important part in FCoE: a VLAN dedicated only to FCoE traffic, also known as the FCoE VLAN, should be used, and most importantly, standard Ethernet and FCoE traffic must not be mixed on the same VLAN.

FCoE Initialization Protocol (FIP)

Fibre Channel over Ethernet (FCoE) Initialization Protocol (FIP) is responsible for establishing and maintaining virtual links between FCoE devices and Fibre Channel (FC) switches. It performs four important functions:

  • Discover the FCoE VLANs on which to transmit and receive traffic (FIP VLAN discovery).
  • Discover Fibre Channel (FC) switches to which they are able to connect (FIP discovery).
  • Perform fabric login and discovery to create a virtual link with the FC switch (initialization).
  • Keep the FCoE device and FC switch virtual link communication alive (maintenance).
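If you need to verify that FIP and FCoE frames are actually flowing, each protocol has a dedicated EtherType (0x8914 for FIP, 0x8906 for FCoE) that can be captured with tcpdump; ethN is a placeholder:

```
# tcpdump -i ethN ether proto 0x8914    # FIP frames
# tcpdump -i ethN ether proto 0x8906    # FCoE frames
```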

Deploying Fibre Channel over Ethernet on Red Hat Enterprise Linux

Disclosure: before beginning the configuration process, you must understand the hardware in use and its capabilities; for example, the differences between a rack-mounted and a blade server, between passthrough and virtual adapter enclosure switches, and so on. These are vendor-specific details well beyond the scope of this article, so please consult your hardware documentation.

Installing Packages

Two packages are needed to use FCoE on RHEL:

  • fcoe-utils - Fibre Channel over Ethernet utilities.
  • lldpad - Userspace daemon and configuration tool for LLDP.
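Both packages are available in the base repositories and can be installed with yum on RHEL 6 and 7:

```
# yum install fcoe-utils lldpad
```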

After verifying that these packages are installed, the network interface configuration is the next step toward the FCoE deployment.

Only one Ethernet interface will be used for the FCoE connection in this article. If you want to add more than one interface, the same process can be repeated for each newly added network interface.

Network Interface Configuration

  1. Identify the Ethernet device that supports FCoE and configure the new VLAN.

    The file /etc/fcoe/cfg-ethx provides the default configuration. Copy it to a file named after your FCoE-capable interface, and modify it as necessary:

    # cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ethN
    

    The default content of /etc/fcoe/cfg-ethx looks like this:

    ## Type: yes/no
    ## Default: no
    # Enable/Disable FCoE service at the Ethernet port
    # Normally set to "yes"
    FCOE_ENABLE="yes"
    
    ## Type: yes/no
    ## Default: no
    # Indicate if DCB service is required at the Ethernet port
    # Normally set to "yes"
    DCB_REQUIRED="yes"
    
    ## Type: yes/no
    ## Default:   no
    # Indicate if VLAN discovery should be handled by fcoemon
    # Normally set to "yes"
    AUTO_VLAN="yes"
    
    ## Type: fabric/vn2vn
    ## Default: fabric
    # Indicate the mode of the FCoE operation, either fabric or vn2vn
    # Normally set to "fabric"
    MODE="fabric"
    
    ## Type: yes/no
    ## Default: no
    # Indicate whether to run a FIP responder for VLAN discovery in vn2vn mode
    #FIP_RESP="yes"
    

    Important: If the network interface provides hardware DCB/DCBX capabilities, the field DCB_REQUIRED should be set to no. Usually, hardware DCB/DCBX can be set in the BIOS or firmware configuration; consult your hardware vendor's documentation for specific information about it. Also, refer to the topic Types of FCoE Cards for details.

    Warning: Do not run software-based DCB or LLDP on CNAs that implement DCB. Some Converged Network Adapters (CNAs) implement the DCB protocol in firmware. The DCB protocol assumes that there is just one originator of DCB on a particular network link, so any higher-level software implementation of DCB, or LLDP, must be disabled on CNAs that implement DCB.

    If you want the network interface to load at boot time, you must set ONBOOT=yes in the file corresponding to your network interface, which in this case is /etc/sysconfig/network-scripts/ifcfg-ethN. This is important: if a disk is attached to the interface and the interface is not started at boot, you will not be able to access the disk.

    The network interface configuration should look something like this:

    # cat /etc/sysconfig/network-scripts/ifcfg-ethN
    
    DEVICE=ethN
    HWADDR=00:1A:2B:3C:4D:5E
    ONBOOT=yes
    BOOTPROTO=none
    NM_CONTROLLED=no
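As a side note, the DCB_REQUIRED toggle mentioned above can be scripted. The sketch below operates on a temporary copy of a minimal config so it is safe to run anywhere; on a real system you would edit /etc/fcoe/cfg-ethN directly:

```shell
#!/bin/sh
# Sketch: flip DCB_REQUIRED to "no" for adapters with hardware DCB/DCBX.
# A temp file stands in for /etc/fcoe/cfg-ethN here.
cfg=$(mktemp)
printf 'FCOE_ENABLE="yes"\nDCB_REQUIRED="yes"\nAUTO_VLAN="yes"\n' > "$cfg"

sed -i 's/^DCB_REQUIRED="yes"/DCB_REQUIRED="no"/' "$cfg"

grep '^DCB_REQUIRED' "$cfg"    # prints: DCB_REQUIRED="no"
rm -f "$cfg"
```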
    
  2. Start the lldpad daemon using the following command:

    # service lldpad start
    
  3. For interfaces that require a software DCB/DCBX client, enable DCB on the network interface using the following commands:

    Reminder: If you are using hardware DCB/DCBX (and therefore set DCB_REQUIRED to no), this step can be skipped.

    First, enable DCB on the network interface:

    # dcbtool sc ethN dcb on
    

    Then, enable FCoE on the network interface by running:

    # dcbtool sc ethN app:fcoe e:1
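After enabling both, the resulting DCB state and FCoE application setting can be checked with dcbtool's gc (get config) subcommand; ethN is a placeholder:

```
# dcbtool gc ethN dcb
# dcbtool gc ethN app:fcoe
```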
    
  4. Start the network interface:

    # ifup ethN
    
  5. Start the fcoe and lldpad daemons:

    • RHEL 6:

      # service fcoe start
      # service lldpad start
      
    • RHEL 7:

      # systemctl start fcoe
      # systemctl start lldpad
      
  6. Assuming that all other settings on the fabric are correct, the FCoE device should appear shortly. Run the following command to view the FCoE devices:

    # fcoeadm -i
    

    The output should look similar to this:

    # fcoeadm -i
    Description: 10-Gigabit Network Connection
    Revision: 01
    Manufacturer: Vendor Corporation
    Serial Number: 001A1B112345
    Driver: xyz 3.0.8-k2
    Number of Ports: 1
    
    Symbolic Name: fcoe v0.1 over ethN.550-fcoe
    OS Device Name: host0
    Node Name: 0x1000001234567890
    Port Name: 0x2000001234567890
    FabricName: 0x2064000ABCDEFGHI
    Speed: 10 Gbit
    Supported Speed: 10 Gbit
    MaxFrameSize: 2048
    FC-ID (Port ID): 0x4A0065
    State: Online
    
  7. Make sure that the fcoe and lldpad daemons start at boot time:

    • RHEL 6:

      # chkconfig fcoe on
      # chkconfig lldpad on
      
    • RHEL 7:

      # systemctl enable fcoe
      # systemctl enable lldpad
      

    If everything is right, from this point on you can attach LUNs and use rescan-scsi-bus.sh, available by installing the sg3_utils package, to scan for new LUNs.
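For example, after mapping a new LUN on the storage side, a typical sequence to pick it up might look like this (the -a flag tells rescan-scsi-bus.sh to scan all targets):

```
# yum install sg3_utils
# rescan-scsi-bus.sh -a
```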

Useful Commands

Besides fcoeadm, commands like fipvlan and ip can give you some information about FCoE devices.

Using the ip command, you can quickly see the VLAN ID of your network interface:

# ip -o link | grep fcoe

3: ethN.550-fcoe@ethN: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP \ link/ether 00:1a:2b:3c:4d:5e brd ff:ff:ff:ff:ff:ff
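Since the FCoE VLAN interface follows the ethN.VLAN-fcoe naming convention, the VLAN ID can also be extracted from the interface name with plain shell parameter expansion; the interface name below is only an example:

```shell
#!/bin/sh
# Example name following the ethN.VLAN-fcoe convention
iface="eth1.550-fcoe"

vlan="${iface#*.}"     # strip up to the first dot  -> "550-fcoe"
vlan="${vlan%-fcoe}"   # strip the "-fcoe" suffix   -> "550"
echo "$vlan"           # prints: 550
```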

If you want more details, you can use the fipvlan command, as in the example below:

# fipvlan -a

Fibre Channel Forwarders Discovered
interface | VLAN | FCF MAC
------------------------------------------
ethN      | 550  | 00:1a:2b:3c:4d:5e

The fcoeadm command can also help you gather information. Some examples that might be helpful:

Show detailed information about the discovered SCSI LUNs associated with the FCoE instance on the specified network interface:

# fcoeadm -l
Interface: ethN.550-fcoe
Roles: FCP Target
Node Name: 0x1000001234567890
Port Name: 0x2000001234567890
Target ID: 0
MaxFrameSize: 2048
OS Device Name: rport-0:0-1
FC-ID (Port ID): 0x4A0065
State: Online

LUN #10 Information:
OS Device Name: /dev/sdb
Description: Vendor Model (revision)
Ethernet Port FCID: 0x4A0065
Target FCID: 0x4A06FF
Target ID: 0
LUN ID: 10
Capacity: 25.00 GB
Capacity in Blocks: 52428799
Block Size: 512 bytes
Status: Attached

Show information about the discovered targets associated with the FCoE instance on the specified network interface. If no network interface is specified, information about discovered targets from all FCoE instances will be shown.

# fcoeadm -t ethN.550
  Interface:        ethN.550
  Roles:            FCP Target 
  Node Name:        0x1000001234567890
  Port Name:        0x2000001234567890
  Target ID:        0 
  MaxFrameSize:     2048 
  OS Device Name:   rport-0:0-1 
  FC-ID (Port ID):  0x4A0065
  State:            Online  

LUN ID  Device Name   Capacity   Block Size  Description 
------  -----------  ----------  ----------  ----------------------------
    10  /dev/sdb        25.0 GB   512 bytes  Vendor Model (revision)

There are many more options available for the commands listed here; refer to their respective manual pages for detailed information.

Caveats

Types of FCoE Cards

There are three types of FCoE cards:

  • No offload: EE (Enhanced Ethernet) cards.

    • These cards support the FIP (0x8914) and FC-Frame mapping (0x8906) Ethernet packet types, but offer no offloading of any kind.
  • Partial offload: Some Qlogic cards, for example.

    • These cards support both the FIP and FC-Frame mapping packets. No offload is provided for FIP, so that portion of the setup/configuration is still required; once the FC gateway is found via FIP, the FC half of the card performs the FC functions. That is, a qla2xxx FC driver handles all disk I/O over FC, and the Ethernet side of the HBA automatically handles the FC-Frame mapping/encapsulation within 0x8906 Ethernet packets.
    • Requires software support to enable FCoE functionality.
  • Full offload: Emulex cards, for example.

    • These cards support both FIP and FC-Frame mapping packets.
    • The ethernet firmware supports full FIP processing, while the FC-Framing is handled within the FC portion of the card.
    • These cards act as a plug-and-play replacement for FC cards in that no additional -- or very limited -- FCoE specific configuration is required.

Attempting an EE-type card configuration on a partial-offload card (or vice versa) will result in issues.

Partition Scheme

  • Operating System: Per the documentation, if /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting. This limitation applies only to /usr or /var, not to directories below them. For example, a separate partition for /var/www will work without issues. More about it at 9.15.5. Recommended Partitioning Scheme

  • Other Partitions: This is not an FCoE-only caveat; it applies to SAN mount points in general. When a mount point depends on the network connection, it needs the _netdev option set in /etc/fstab. More about it at After system reboot logical volumes or file systems on the SAN are not accessible. How can I resolve this issue?.
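For illustration, an /etc/fstab entry for a SAN-backed file system with _netdev might look like this (the device path and mount point are placeholders):

```
/dev/mapper/mpathb1  /srv/san  ext4  defaults,_netdev  0 0
```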
