Chapter 4. Known Issues

This section documents known issues found in this release of Red Hat Storage Console.

Cannot create more than one cluster at a time

The Ceph installer currently supports creating only one cluster at a time. As a consequence, if more than one cluster creation is initiated concurrently, the create tasks time out with unpredictable results.

To avoid any unexpected behavior, create only one cluster at a time. (BZ#1341490)

Active Directory short-form bind DN in the form “User@dc” is not supported

As a workaround, use a connection string with a traditional bind Distinguished Name (DN) in the form of the full DN path, for example cn=userA,ou=users,dc=myDomain, and provide only the user name. (BZ#1311917)

LDAP configuration fails to save if the provider name in the configuration file is not changed

As per the current design, Red Hat Storage Console looks for a provider name in the authentication section of the skyring.conf configuration file, located in the /etc/skyring/ directory, to decide whether to save the given LDAP details in the database.

To change the LDAP provider, set the "providerName" attribute in skyring.conf, for example:

"providerName": "ldapauthprovider" (BZ#1350977)

Pools list in Console displays incorrect storage utilization and capacity data

Pool utilization values are not calculated by Ceph appropriately if there are multiple CRUSH hierarchies. As a result of this:

  • Pool utilization values on the dashboard, the clusters view, and the pool listing page are displayed incorrectly
  • No alerts will be sent if the actual pool utilization surpasses the configured thresholds
  • False alerts might be generated for pool utilization

This issue occurs only when the user creates multiple storage profiles for a cluster, which in turn creates multiple CRUSH hierarchies. To avoid this problem, include all the OSDs in a single storage profile. (BZ#1355723)

Incorrect color in profile utilization graphs on the Console dashboard

The most used storage profiles and most used pools utilization bars on the Console dashboard show an incorrect color code when the threshold value is reached, due to an issue with the PatternFly graphs.

The utilization bars show the correct color codes for values below and above the thresholds; only the color at the threshold itself is wrong. This issue will be fixed in a future release. (BZ#1357460)

Red Hat Storage Console 2.0 unable to configure multiple networks during cluster creation

Red Hat Storage Console 2.0 supports the configuration of only one cluster network and one access network during cluster creation. However, the cluster creation wizard allows the user to select more than one network; doing so results in unpredictable behavior.

To avoid any unpredictable behavior, select only one cluster network and one access network during the cluster creation process. (BZ#1360643)

Ansible does not support adding encrypted OSDs

The current version of the ceph-ansible utility does not support adding encrypted OSD nodes. As a consequence, an attempt to perform asynchronous updates between releases by using the rolling-update playbook fails to upgrade encrypted OSD nodes. In addition, Ansible returns the following error during the disk activation task:

mount: unknown filesystem type 'crypto_LUKS'

To work around this issue, open the rolling_update.yml file located in the Ansible working directory and find all instances of the roles: list. Then remove or comment out all roles (ceph-mon, ceph-rgw, ceph-osd, or ceph-mds) except the ceph-common role, for example:

roles:
    - ceph-common
    #- ceph-mon
    #- ceph-osd
    #- ceph-rgw
    #- ceph-mds
Note

Make sure to edit all instances of the roles: list in the rolling_update.yml file.

Then run the rolling_update playbook to update the nodes. (BZ#1366808)
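If the file contains many roles: lists, the edit can be scripted. The following is a convenience sketch, assuming GNU sed and the indentation style shown above; always review the modified file by hand afterwards:

```shell
# Comment out every ceph-mon/osd/rgw/mds role entry in rolling_update.yml,
# leaving ceph-common untouched. A backup copy is kept as rolling_update.yml.bak.
sed -i.bak -E 's/^([[:space:]]*)- (ceph-(mon|osd|rgw|mds))[[:space:]]*$/\1#- \2/' rolling_update.yml
```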

Red Hat Storage Console Agent installation fails on Ceph nodes

Red Hat Storage Console Agent setup from ceph-installer (performed by ceph-ansible) supports installations only via the CDN. Installations from an ISO or a local yum repository fail. For a temporary workaround, log in to your Red Hat account and see this solution in the Knowledgebase. (BZ#1403576)

Storage nodes show identical machine IDs

Using hardcoded machine IDs in templates creates multiple nodes with identical machine IDs. As a consequence, Red Hat Storage Console fails to recognize such nodes as separate machines.

As a workaround, generate a unique machine ID on each node and update the /etc/machine-id file accordingly. This enables the Storage Console to identify each node as unique. (BZ#1270860)
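On a systemd-based node (such as RHEL 7), the usual approach is to remove /etc/machine-id and run systemd-machine-id-setup as root. The sketch below instead generates the ID by hand from the kernel's UUID source, with the target file parameterized so the sequence can be rehearsed safely first; MACHINE_ID_FILE is an illustrative variable, not part of any tool:

```shell
# Generate a fresh, unique machine ID and write it to the target file.
# On a real node, run as root with MACHINE_ID_FILE=/etc/machine-id (the default).
MACHINE_ID_FILE=${MACHINE_ID_FILE:-/etc/machine-id}
new_id=$(tr -d '-' < /proc/sys/kernel/random/uuid)   # 32 lowercase hex characters
printf '%s\n' "$new_id" > "$MACHINE_ID_FILE"
```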