Release Notes

Red Hat Storage Console 2.0

Release Notes for Red Hat Storage Console 2.0

Rakesh Ghatvisave

Red Hat Customer Content Services

Abstract

The Release Notes document describes the key features introduced in Red Hat Storage Console 2.0 and the known issues in this release.

Chapter 1. About Red Hat Storage Console

Red Hat Storage Console 2.0 represents a substantial step forward in the management of Red Hat Ceph Storage. It builds on functionality from its predecessor, the Calamari API framework, which integrates tightly with Red Hat Ceph Storage and serves as a manageability API layer for Red Hat Ceph Storage 2.0 and later. Red Hat Storage Console 2.0 is designed as a modular, componentized service that takes much of its look and feel from the PatternFly open source web framework. The upstream code is based on the Skyrings GitHub project.

Chapter 2. Key Features

The primary value and purpose of Red Hat Storage Console 2.0 is to enable graphical management of Red Hat Ceph Storage 2.0 by administrators who need or prefer a graphical environment for installation, monitoring, management, and alerting. This lowers the total cost of ownership of operating Ceph software-defined storage.

Red Hat Storage Console can be run in the following ways:

  1. Installed directly on Red Hat Enterprise Linux 7.2 by using the yum utility.
  2. As a pre-built virtual machine, consumed as a qcow2 image. You can run the image on Red Hat Enterprise Virtualization, KVM, OpenStack, and other environments that support the qcow2 format.
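
For the virtual machine option, one minimal way to boot the qcow2 image on a plain KVM host is sketched below; the image path, virtual machine name, and resource sizes are placeholder values and are not taken from this document:

virt-install --name rhscon \
    --memory 4096 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/rhscon-2.0.qcow2,format=qcow2 \
    --import --noautoconsole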

Red Hat Storage Console supports the installation of Red Hat Ceph Storage 2.0 in a completely graphical manner, as well as the import of existing (or upgraded) Ceph 2.0 clusters. Other important features include:

  • Provisioning of RADOS storage pools and RADOS block devices
  • Monitoring resources using the enterprise dashboard
  • Email alerts based on configurable thresholds
  • Integration with LDAP and Active Directory environments
  • Automatic cluster expansion when new storage disks or new hosts are added to an existing cluster
  • Automatic Placement Group (PG) optimization in provisioning operations

Red Hat Storage Console allows the Ceph administrator to filter Object Storage Devices (OSDs) based on percentage utilization and other criteria so that the needed actions are easy to identify.

Chapter 3. Hardware Considerations

The ideal targets for Console-based installations are production Ceph deployments, specifically hardware environments that conform to the best practices established by Ceph storage experts at Red Hat. For example, it is recommended that Ceph OSD nodes have a small number of solid state disks (SSDs) to serve as shared journal devices.

The Ansible-based installer embedded with the Storage Console auto-detects these devices and makes intelligent layout decisions for journal placement and placement group (PG) counts given the hardware at hand. If no SSD devices are present, the installer uses half of the OSD devices for journals and half for OSDs. This might not be optimal for use cases where storage density, rather than performance, is the primary concern.

Red Hat Storage Console supports importing clusters configured with journals and OSDs on dedicated physical disks. Currently, the Storage Console does not support collocated journals and OSDs on the same physical disk. If a user imports a cluster with journals and OSDs collocated on the same physical disk, the following undesirable consequences can occur:

  • Incorrect journal size and journal path shown in the OSD tab of the Host details view
  • Provisioning and automatic cluster expansion operations performed by the Storage Console use dedicated physical disks for journals and OSDs, and therefore do not preserve the existing collocation configuration

For these reasons, importing clusters with collocated journals and OSDs is not recommended.
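
For reference, a non-collocated layout keeps each OSD data disk separate from its journal device. The following is a minimal sketch of how such a layout might be expressed in a ceph-ansible group_vars/osds.yml file of this generation; the device paths are placeholders and the variable names can differ between ceph-ansible versions:

# group_vars/osds.yml (illustrative values)
devices:                  # data disks, one OSD each
    - /dev/sdb
    - /dev/sdc
raw_multi_journal: true   # place journals on dedicated devices
raw_journal_devices:      # journal device for each entry in devices
    - /dev/sdf
    - /dev/sdf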

Chapter 4. Known Issues

This section documents known issues found in this release of Red Hat Storage Console.

Cannot create more than one cluster at a time

The Ceph installer currently supports creating only one cluster at a time. As a consequence, if more than one cluster creation is initiated concurrently, the create task times out with unpredictable results.

To avoid any unexpected behavior, create only one cluster at a time. (BZ#1341490)

Active Directory short form bind DN in the form of “User@dc” is not supported

As a workaround, use a connection string with a traditional bind Distinguished Name (DN) in the form of the full DN path, for example cn=userA,ou=users,dc=myDomain, and provide only the user name. (BZ#1311917)
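
For illustration, using the same values as the example above, the difference between the unsupported short form and the supported full bind DN is:

Unsupported short form:  userA@myDomain
Supported full bind DN:  cn=userA,ou=users,dc=myDomain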

LDAP configuration fails to save if the provider name in the configuration file is not changed

By design, Red Hat Storage Console reads the provider name from the authentication section of the skyring.conf configuration file, located in the /etc/skyring/ directory, and uses it to decide whether to save the given LDAP details in the database.
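
For illustration, the relevant fragment of /etc/skyring/skyring.conf might look like the following; only the "providerName" key is taken from this document, and the surrounding structure is an assumption:

"authentication": {
    "providerName": "ldapauthprovider"
}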

To change the LDAP provider, set the "providerName" attribute in skyring.conf, for example:

"providerName": "ldapauthprovider" (BZ#1350977)

Pools list in Console displays incorrect storage utilization and capacity data

Ceph does not calculate pool utilization values correctly if there are multiple CRUSH hierarchies. As a result:

  • Pool utilization values on the dashboard, clusters view, and pool listing page are displayed incorrectly
  • No alerts will be sent if the actual pool utilization surpasses the configured thresholds
  • False alerts might be generated for pool utilization

This issue occurs only when the user creates multiple storage profiles for a cluster, which in turn creates multiple CRUSH hierarchies. To avoid this problem, include all the OSDs in a single storage profile. (BZ#1355723)
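
To check whether a cluster is affected, the standard Ceph commands below can be run from a Ceph node; they are shown here only as a suggested check:

ceph osd tree    # more than one root in the output indicates multiple CRUSH hierarchies
ceph df          # pool utilization as reported by Ceph, for comparison with the Console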

Incorrect color in profile utilization graphs on the Console dashboard

Due to an issue with the PatternFly graphs, the most used storage profiles and most used pools utilization bars on the Console dashboard show an incorrect color code when the threshold value is reached.

The utilization bars show the correct color codes for values below and above the thresholds. This behavior is a defect and will be fixed in a future release. (BZ#1357460)

Red Hat Storage Console 2.0 unable to configure multiple networks during cluster creation

Red Hat Storage Console 2.0 allows configuration of only one cluster network and one access network during cluster creation. However, the cluster creation wizard allows the user to select more than one network, and doing so results in unpredictable behavior.

To avoid any unpredictable behavior, select only one cluster network and one access network during the cluster creation process. (BZ#1360643)

Ansible does not support adding encrypted OSDs

The current version of the ceph-ansible utility does not support adding encrypted OSD nodes. As a consequence, an attempt to perform asynchronous updates between releases by using the rolling_update playbook fails to upgrade encrypted OSD nodes. In addition, Ansible returns the following error during the disk activation task:

mount: unknown filesystem type 'crypto_LUKS'

To work around this issue, open the rolling_update.yml file located in the Ansible working directory and find all instances of the roles: list. Then remove or comment out all roles (ceph-mon, ceph-rgw, ceph-osd, or ceph-mds) except the ceph-common role from the lists, for example:

roles:
    - ceph-common
    #- ceph-mon
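    #- ceph-osd
    #- ceph-rgw
    #- ceph-mds
    # only the ceph-common role remains active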
Note

Make sure to edit all instances of the roles: list in the rolling_update.yml file.

Then run the rolling_update playbook to update the nodes. (BZ#1366808)
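
For example, from the Ansible working directory (the inventory path shown is the Ansible default and is only illustrative):

ansible-playbook rolling_update.yml -i /etc/ansible/hosts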

Red Hat Storage Console Agent installation fails on Ceph nodes

Red Hat Storage Console Agent setup from ceph-installer (performed by ceph-ansible) supports only installations via the CDN. Installations with an ISO or a local yum repository fail. For a temporary workaround, log in to your Red Hat account and see the corresponding solution in the Knowledgebase. (BZ#1403576)

Storage nodes show identical machine IDs

Using hardcoded machine IDs in templates creates multiple nodes with identical machine IDs. As a consequence, Red Hat Storage Console fails to recognize these nodes as distinct because their machine IDs are identical.

As a workaround, generate a unique machine ID on each node and update the /etc/machine-id file. This enables the Storage Console to identify each node as unique. (BZ#1270860)
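
One possible way to do this on a systemd-based system such as Red Hat Enterprise Linux 7, run as root on each affected node (this sequence is a sketch and is not taken from this document):

rm -f /etc/machine-id
systemd-machine-id-setup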

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.