Chapter 2. Ceph Dashboard installation and access

As a system administrator, you can install the dashboard and access it for the first time.

Red Hat Ceph Storage is installed graphically using the Cockpit web interface, or on the command line using the Ansible playbooks provided by the ceph-ansible RPM. Cockpit uses the same Ansible playbooks to install Ceph, and those playbooks install the dashboard by default. Therefore, whether you use the Ansible playbooks directly or use Cockpit to install Ceph, the dashboard will be installed.

Important

Change the default dashboard password. By default, the password for the dashboard is p@ssw0rd, which is insecure. You can change the default password by updating dashboard_admin_password in the all.yml Ansible playbook before using the playbooks to install Ceph, or after installation using the same playbook or the dashboard itself. For more information, see the Installation Guide, Changing the dashboard password using the dashboard, or Changing the dashboard password using Ansible.

2.1. Installing dashboard using Cockpit

Dashboard is installed by default when using the Cockpit web interface to install Red Hat Ceph Storage. You must assign the Metrics role to a host so that Grafana can be installed on it.

Prerequisites

  • Consult the Installation Guide for full prerequisites. This procedure only highlights the steps relevant to the dashboard install.

Procedure

  1. On the Hosts page, add a host and set the Metrics role.

    Cockpit add metrics host
  2. Click Add.
  3. Complete the remaining Cockpit Ceph Installer prompts.
  4. After the deploy process finishes, click the Complete button at the bottom right corner of the page. This opens a window which displays the output of the command ceph status, as well as dashboard access information.

    Complete button
  5. At the bottom of the Ceph Cluster Status window, the dashboard access information is displayed, including the URL, user name, and password. Take note of this information.

    Ceph Cluster Status window

2.2. Installing dashboard using Ansible

Dashboard is installed by default when installing Red Hat Ceph Storage using the Ansible playbooks provided by the ceph-ansible RPM.

Prerequisites

  • Consult the Installation Guide for full prerequisites. This procedure only highlights the steps relevant to the dashboard install.

Procedure

  1. Ensure a [grafana-server] group with a node defined under it exists in the Ansible inventory file. Grafana and Prometheus are installed on this node.

    [root@jb-ceph4-admin ~]# grep grafana-server -A 1 /etc/ansible/hosts
    [grafana-server]
    jb-ceph4-mon
  2. In the all.yml Ansible playbook, ensure dashboard_enabled: has not been set to False. There should be a comment indicating the default setting of True.

    [root@jb-ceph4-admin ~]# grep "dashboard_enabled" /usr/share/ceph-ansible/group_vars/all.yml
    #dashboard_enabled: True
  3. Complete the rest of the steps necessary to install Ceph as outlined in the Installation Guide.
  4. After running ansible-playbook site.yml for bare metal installs, or ansible-playbook site-docker.yml for container installs, Ansible will print the dashboard access information. Find the dashboard URL, username, and password towards the end of the playbook output:

    2019-12-13 15:31:17,871 p=11421 u=admin |  TASK [ceph-dashboard : print dashboard URL] ************************************************************
    2019-12-13 15:31:17,871 p=11421 u=admin |  task path: /usr/share/ceph-ansible/roles/ceph-dashboard/tasks/main.yml:5
    2019-12-13 15:31:17,871 p=11421 u=admin |  Friday 13 December 2019  15:31:17 -0500 (0:00:02.189)       0:04:25.380 *******
    2019-12-13 15:31:17,934 p=11421 u=admin |  ok: [jb-ceph4-mon] =>
      msg: The dashboard has been deployed! You can access your dashboard web UI at http://jb-ceph4-mon:8443/ as an 'admin' user with 'p@ssw0rd' password.

    Take note of the output: You can access your dashboard web UI at http://jb-ceph4-mon:8443/ as an 'admin' user with 'p@ssw0rd' password.

Note

The Ansible playbook does the following:

  • Enables the Prometheus module in ceph-mgr.
  • Enables the dashboard module in ceph-mgr and opens TCP port 8443.
  • Deploys the Prometheus node_exporter daemon to each node in the storage cluster.

    • Opens TCP port 9100.
    • Starts the node_exporter daemon.
  • Deploys Grafana and Prometheus containers under Docker/systemd on the node under [grafana-server] in the Ansible inventory file.

    • Configures Prometheus to gather data from the ceph-mgr nodes and the node-exporters running on each Ceph host.
    • Opens TCP port 3000.
    • Creates the dashboard, theme, and user accounts in Grafana.
    • Displays the Ceph Dashboard login page URL.

2.3. Network port requirements

The Ceph dashboard components use certain TCP network ports which must be accessible. By default, the network ports are automatically opened in firewalld during installation of Red Hat Ceph Storage.

Table 2.1. TCP Port Requirements

  • Port 8443: The dashboard web interface.
    Originating node: IP addresses that need access to the Ceph Dashboard UI.
    Destination node: The Ceph Manager nodes.
  • Port 3000: Grafana.
    Originating node: IP addresses that need access to the Grafana Dashboard UI, all MGR hosts, and the grafana-server or prometheus host.
    Destination node: The node under [grafana-server] in the Ansible inventory file.
  • Port 9090: Default Prometheus server for basic Prometheus graphs.
    Originating node: IP addresses that need access to the Prometheus UI, all MGR hosts, and the grafana-server or prometheus host.
    Destination node: The node under [grafana-server] in the Ansible inventory file.
  • Port 9092: Prometheus server for basic Prometheus graphs.
    Originating node: IP addresses that need access to the Prometheus UI, all MGR hosts, and the grafana-server or prometheus host.
    Destination node: The node under [grafana-server] in the Ansible inventory file.
  • Port 9093: Prometheus Alertmanager.
    Originating node: IP addresses that need access to the Alertmanager Web UI, all MGR hosts, and the grafana-server or prometheus host.
    Destination node: All Ceph Manager nodes and the node under [grafana-server] in the Ansible inventory file.
  • Port 9094: Prometheus Alertmanager for configuring a highly available cluster made from multiple instances.
    Originating node: IP addresses that need access to the Alertmanager Web UI, all MGR hosts, and the grafana-server or prometheus host.
    Destination node: All Ceph Manager nodes and the node under [grafana-server] in the Ansible inventory file.
  • Port 9100: The Prometheus node-exporter daemon.
    Originating node: IP addresses that need to view the Node Exporter metrics Web UI, all MGR nodes, and the grafana-server or prometheus host.
    Destination node: All storage cluster nodes, including MONs, OSDs, and the [grafana-server] host.
  • Port 9283: The Ceph Manager Prometheus exporter module.
    Originating node: IP addresses that need access to the Ceph Exporter metrics Web UI and the grafana-server or prometheus host.
    Destination node: All Ceph Manager nodes.
  • Port 9287: Ceph iSCSI gateway data.
    Originating node: All MGR hosts and the grafana-server or prometheus host.
    Destination node: All Ceph iSCSI gateway nodes.
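These ports are normally opened automatically, but if firewalld was reconfigured or a port was closed later, Table 2.1 can be translated into firewall-cmd invocations. The loop below is a sketch that only prints the commands rather than running them; review the list and run, as root on the relevant node, only the commands that match that node's role. The port list assumes all dashboard components are deployed.

```shell
# Print, without executing, the firewall-cmd calls that would open every
# dashboard-related TCP port from Table 2.1. Review the output and run
# only the commands relevant to each node's role.
DASHBOARD_PORTS="8443 3000 9090 9092 9093 9094 9100 9283 9287"
for port in $DASHBOARD_PORTS; do
    echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
echo "firewall-cmd --reload"
```

Running firewall-cmd --reload at the end applies the permanent rules to the running firewall.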

Additional Resources

2.4. Accessing dashboard

Accessing the dashboard allows you to administer and monitor your Red Hat Ceph Storage cluster.

Prerequisites

  • Successful installation of Red Hat Ceph Storage Dashboard.
  • NTP is synchronizing clocks properly.
Note

A time lag can occur between the dashboard node, cluster nodes, and a browser when the nodes are not properly synced. Ensure that all nodes, and the system where the browser runs, have their time synchronized by NTP. By default, when Red Hat Ceph Storage is deployed, Ansible configures NTP on all nodes. To verify the configuration, see Configuring NTP Using ntpd for Red Hat Enterprise Linux 7, or Using the Chrony suite to configure NTP for Red Hat Enterprise Linux 8. If you run your browser on another operating system, consult that operating system's vendor for NTP configuration information.

Note

When using OpenStack Platform (OSP) with Red Hat Ceph Storage, enable OSP Safe Mode using one of the following methods. With Ansible, edit the group_vars/all.yml Ansible playbook, set dashboard_admin_user_ro: true, and rerun ansible-playbook against site.yml or site-container.yml for bare-metal or container deployments, respectively. With the ceph command, run ceph dashboard ac-user-set-roles admin read-only; in this case, to ensure the change persists if you rerun the ceph-ansible playbook, also edit group_vars/all.yml and set dashboard_admin_user_ro: true.

Procedure

  1. Enter the following URL in a web browser:

    http://HOST_NAME:PORT

    Replace:

    • HOST_NAME with the host name of the dashboard node.
    • PORT with port 8443.

      For example:

      http://dashboard:8443
  2. On the login page, enter the username admin and the default password p@ssw0rd if you did not change the password during installation.

    Figure 2.1. Ceph Dashboard Login Page

    Ceph Dashboard Login Page
  3. After logging in, the dashboard default landing page is displayed, which provides a high-level overview of status, performance, and capacity metrics of the Red Hat Ceph Storage cluster.

    Figure 2.2. Ceph Dashboard Default Landing Page

    Ceph Dashboard Default Landing Page
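Before opening the browser, you can confirm that you have the access details right. The snippet below is a sketch: dashboard and 8443 are the placeholder host name and port from the example above, so substitute your own values; the curl probe is left commented out because it requires a reachable cluster.

```shell
# Build the dashboard URL from the host name and port noted during install.
DASH_HOST="dashboard"   # placeholder host name from the example above
DASH_PORT="8443"
DASH_URL="http://${DASH_HOST}:${DASH_PORT}"
echo "$DASH_URL"
# With a running cluster, you could then probe the endpoint, for example:
#   curl -k -s -o /dev/null -w '%{http_code}\n' "$DASH_URL"
```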

Additional Resources

2.5. Changing the dashboard password using Ansible

By default, the password for accessing dashboard is set to p@ssw0rd.

Important

For security reasons, change the password after installation.

You can change the dashboard password using Ansible.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to the Ansible administration node.

Procedure

  1. Open the Ansible playbook file /usr/share/ceph-ansible/group_vars/all.yml for editing.
  2. Uncomment and update the password on this line:

    #dashboard_admin_password: p@ssw0rd

    to:

    dashboard_admin_password: NEW_PASSWORD

    Replace NEW_PASSWORD with your preferred password.

  3. Rerun the Ansible playbook file which deploys or updates the Ceph cluster.

    1. For bare metal installs, use the site.yml playbook:

      [admin@admin ceph-ansible]$ ansible-playbook -v site.yml
    2. For container installs, use the site-docker.yml playbook:

      [admin@admin ceph-ansible]$ ansible-playbook -v site-docker.yml
  4. Log in using the new password.
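The edit in step 2 can also be scripted. The following sketch demonstrates the sed substitution on a throwaway copy of the file rather than on /usr/share/ceph-ansible/group_vars/all.yml itself; NewSecurePassword is a placeholder, and you should back up the real file before editing it.

```shell
# Demonstrate uncommenting and setting dashboard_admin_password with sed,
# using a temporary stand-in for group_vars/all.yml.
ALL_YML=$(mktemp)
echo '#dashboard_admin_password: p@ssw0rd' > "$ALL_YML"   # stand-in for the shipped default
sed -i 's/^#dashboard_admin_password:.*/dashboard_admin_password: NewSecurePassword/' "$ALL_YML"
cat "$ALL_YML"
```

After editing the real file the same way, rerun the appropriate playbook as described in step 3.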

Additional Resources

2.6. Changing the dashboard password using the dashboard

By default, the password for accessing dashboard is set to p@ssw0rd.

Important

For security reasons, change the password after installation.

To change the password using the dashboard, also change the dashboard password setting in Ansible to ensure the password does not revert to the default password if Ansible is used to reconfigure the Red Hat Ceph Storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.

Procedure

  1. Update the password in the group_vars/all.yml file to prevent the password from being reset to p@ssw0rd when Ansible is used to reconfigure the Ceph cluster.

    1. Open the Ansible playbook file /usr/share/ceph-ansible/group_vars/all.yml for editing.
    2. Uncomment and update the password on this line:

      #dashboard_admin_password: p@ssw0rd

      to:

      dashboard_admin_password: NEW_PASSWORD

      Replace NEW_PASSWORD with your preferred password.

  2. Change the password in the dashboard web user-interface.

    1. Log in to the dashboard:

      http://HOST_NAME:8443
    2. In the toolbar at the top right, click the dashboard settings icon and then click User management.

      user management
    3. Locate the admin user in the Username table and click on admin.

      user management
    4. Above the table title Username, click the Edit button.
    5. Enter the new password, confirm it by reentering it, and click Edit User.

      user management

      You will be logged out and taken to the login screen. A notification will appear confirming the password change.

  3. Log back in using the new password.
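If you prefer not to click through the web interface, the same change can usually be made from a Ceph Manager node with the dashboard module's account commands. This is a sketch: ac-user-set-password is the command name in the Nautilus-era releases, so confirm its availability with ceph dashboard -h before relying on it, and NewSecurePassword is a placeholder. The snippet only assembles and prints the command; it does not run it.

```shell
# Assemble the ceph CLI command for changing the dashboard admin password.
# Run the printed command on a Ceph Manager node; it is not executed here.
DASHBOARD_USER="admin"
NEW_PASSWORD="NewSecurePassword"   # placeholder; choose your own
CMD="ceph dashboard ac-user-set-password ${DASHBOARD_USER} ${NEW_PASSWORD}"
echo "$CMD"
```

As with the dashboard method, also update dashboard_admin_password in group_vars/all.yml so the password is not reverted by a later ceph-ansible run.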

Additional Resources

2.7. Changing the Grafana password using Ansible

By default, the password for Grafana, used by dashboard, is set to admin. Use this procedure to change the password.

Important

For security reasons, change the password from the default.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root access to all nodes in the cluster.

Procedure

  1. Optional: If you do not know which node the Grafana container is running on, find the node listed under [grafana-server] in the Ansible hosts file, usually located at /etc/ansible/hosts:

    Example

    [grafana-server]
    grafana

  2. On the node where the Grafana container is running, change the password:

    Syntax

    podman exec CONTAINER_ID grafana-cli admin reset-admin-password --homepath "/usr/share/grafana" NEW_PASSWORD

    Change CONTAINER_ID to the ID of the Grafana container. Change NEW_PASSWORD to the desired Grafana password.

    Example

    [root@grafana ~]# podman exec 3f28b0309aee grafana-cli admin reset-admin-password --homepath "/usr/share/grafana" NewSecurePassword
    t=2020-10-29T17:45:58+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3
    t=2020-10-29T17:45:58+0000 lvl=info msg="Starting DB migration" logger=migrator
    
    Admin password changed successfully ✔

  3. On the Ansible administration node, use ansible-vault to encrypt the Grafana password, and then add the encrypted password to group_vars/all.yml.

    1. Change to the /usr/share/ceph-ansible/ directory:

      [admin@admin ~]$ cd /usr/share/ceph-ansible/
    2. Run ansible-vault and create a new vault password:

      Example

      [admin@admin ceph-ansible]$ ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault'
      New Vault password:

    3. Re-enter the password to confirm it:

      Example

      [admin@admin ceph-ansible]$ ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault'
      New Vault password:
      Confirm New Vault password:

    4. Enter the Grafana password, press enter, and then enter CTRL+D to complete the entry:

      Syntax

      ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault'
      New Vault password:
      Confirm New Vault password:
      Reading plaintext input from stdin. (ctrl-d to end input)
      NEW_PASSWORD

      Replace NEW_PASSWORD with the Grafana password that was set earlier.

      Example

      [admin@admin ceph-ansible]$ ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault'
      New Vault password:
      Confirm New Vault password:
      Reading plaintext input from stdin. (ctrl-d to end input)
      NewSecurePassword

    5. Take note of the output that begins with grafana_admin_password_vault: !vault | and ends with a few lines of numbers, as it will be used in the next step:

      Example

      [admin@admin ceph-ansible]$ ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault'
      New Vault password:
      Confirm New Vault password:
      Reading plaintext input from stdin. (ctrl-d to end input)
      NewSecurePassword
      grafana_admin_password_vault: !vault |
                $ANSIBLE_VAULT;1.1;AES256
                38383639646166656130326666633262643836343930373836376331326437353032376165306234
                3161386334616632653530383231316631636462363761660a373338373334663434363865356633
                66383963323033303662333765383938353630623433346565363534636434643634336430643438
                6134306662646365370a343135316633303830653565633736303466636261326361333766613462
                39353365343137323163343937636464663534383234326531666139376561663532
      Encryption successful

    6. Open for editing group_vars/all.yml and paste the output from above into the file:

      Example

      grafana_admin_password_vault: !vault |
                $ANSIBLE_VAULT;1.1;AES256
                38383639646166656130326666633262643836343930373836376331326437353032376165306234
                3161386334616632653530383231316631636462363761660a373338373334663434363865356633
                66383963323033303662333765383938353630623433346565363534636434643634336430643438
                6134306662646365370a343135316633303830653565633736303466636261326361333766613462
                39353365343137323163343937636464663534383234326531666139376561663532

    7. Add a line below the encrypted password with the following:

      Example

      grafana_admin_password: "{{ grafana_admin_password_vault }}"

      Note

      Using two variables as seen above is required due to a bug in Ansible that breaks the string type when assigning the vault value directly to the Ansible variable.

    8. Save and close the file.
  4. Re-run ansible-playbook.

    1. For container based deployments:

      Example

      [admin@node1 ceph-ansible]$ ansible-playbook -v site-container.yml -i hosts

      Note that -i hosts is only necessary if you are not using the default Ansible hosts file location of /etc/ansible/hosts.

    2. For bare-metal, RPM based deployments:

      Example

      [admin@node1 ceph-ansible]$ ansible-playbook -v site.yml -i hosts

      Note that -i hosts is only necessary if you are not using the default Ansible hosts file location of /etc/ansible/hosts.

2.8. Syncing users using Red Hat Single Sign-On for the dashboard

Administrators can provide access to users on Red Hat Ceph Storage Dashboard using Red Hat Single Sign-on (SSO) with Lightweight Directory Access Protocol (LDAP) integration.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Dashboard is installed.
  • Admin level access to the dashboard.
  • Users are added to the dashboard.
  • Root-level access on all the nodes.
  • Red Hat Single Sign-On installed from a ZIP file. See Installing Red Hat Single Sign-On from a zip file for additional information.

Procedure

  1. Download the Red Hat Single Sign-On 7.4.0 Server on the system where Red Hat Ceph Storage is installed.
  2. Unzip the file:

    [root@cephuser]# unzip rhsso-7.4.0.zip
  3. Navigate to the standalone/configuration directory and open standalone.xml for editing:

    [root@cephuser]# cd standalone/configuration
    [root@cephuser configuration]# vi standalone.xml
  4. Replace three instances of localhost and two instances of 127.0.0.1 with the IP address of the machine where Red Hat Single Sign-On is installed.
  5. Optional: For Red Hat Enterprise Linux 8, users might encounter Certificate Authority (CA) issues. Import the custom certificates from the CA and add them to the keystore of the exact Java version in use.

    Example

    [root@cephuser]# keytool -import -noprompt -trustcacerts -alias ca -file ../ca.cer -keystore /etc/java/java-1.8.0-openjdk/java-1.8.0-openjdk-1.8.0.272.b10-3.el8_3.x86_64/lib/security/cacert

  6. To start the server, run the standalone boot script from the bin directory of the rh-sso-7.4 folder:

    [root@cephuser bin]# ./standalone.sh
  7. Create the admin account at http://IP_ADDRESS:8080/auth with a username and password:

    Create Admin User
    Note

    The admin account has to be created only the first time you log into the console.

  8. Log into the admin console with the credentials created:

    Admin Console
  9. To create a realm, click the Master drop-down. In this realm, administrators provide access to users and applications.

    Add realm drop-down
  10. In the Add Realm window, enter a name for the realm and set the parameter Enabled to ON and click Create:

    Add realm window
    Note

    The realm name is case-sensitive.

  11. In the Realm Settings tab, set the following parameters and click Save:

    1. Enabled - ON
    2. User-Managed Access - ON
    3. Copy the link address of SAML 2.0 Identity Provider Metadata

      Add realm settings window
  12. In the Clients tab, click Create:

    Add client
  13. In the Add Client window, set the following parameters and click Save:

    1. Client ID - BASE_URL:8443/auth/saml2/metadata

      Example

      https://magna082.ceph.redhat.com:8443/auth/saml2/metadata

    2. Client Protocol - saml

      Add client window
  14. In the Clients window, under Settings tab, set the following parameters and click Save:

    1. Client ID - BASE_URL:8443/auth/saml2/metadata

      Example

      https://magna082.ceph.redhat.com:8443/auth/saml2/metadata

    2. Enabled - ON
    3. Client Protocol - saml
    4. Include AuthnStatement - ON
    5. Sign Documents - ON
    6. Signature Algorithm - RSA_SHA1
    7. SAML Signature Key Name - KEY_ID
    8. Valid Redirect URLs - BASE_URL:8443/*

      Example

      https://magna082.ceph.redhat.com:8443/*

    9. Base URL - BASE_URL:8443

      Example

      https://magna082.ceph.redhat.com:8443/

    10. Master SAML Processing URL - http://localhost:8080/auth/realms/REALM_NAME/protocol/saml/descriptor

      Example

      http://localhost:8080/auth/realms/Ceph_LDAP/protocol/saml/descriptor

      Note

      Paste the link of SAML 2.0 Identity Provider Metadata from Realm Settings tab.

      Under Fine Grain SAML Endpoint Configuration, set the parameters:

    11. Assertion Consumer Service POST Binding URL - BASE_URL:8443/#/dashboard

      Example

      https://magna082.ceph.redhat.com:8443/#/dashboard

    12. Assertion Consumer Service Redirect Binding URL - BASE_URL:8443/#/dashboard

      Example

      https://magna082.ceph.redhat.com:8443/#/dashboard

    13. Logout Service Redirect Binding URL - BASE_URL:8443/

      Example

      https://magna082.ceph.redhat.com:8443/

      Client mappers upperpane
      Client mappers lowerpane
  15. In the Clients window, Mappers tab, set the following parameters and click Save:

    1. Protocol - saml
    2. Name - username
    3. Mapper Property - User Property
    4. Property - username
    5. SAML Attribute name - username

      Add Client Mappers
  16. In the Clients Scope tab, select role_list:

    1. In the Mappers tab, select role list and set Single Role Attribute to ON.

      Add client role list
  17. Select the User Federation tab:

    1. In User Federation window, select ldap from the drop-down:

      Add ldap as provider
  18. In the User Federation window, under the Settings tab, set the following parameters and click Save:

    1. Console Display Name - rh-ldap
    2. Import Users - ON
    3. Edit_Mode - READ_ONLY
    4. Username LDAP attribute - username
    5. RDN LDAP attribute - username
    6. UUID LDAP attribute - nsuniqueid
    7. User Object Classes - inetOrgPerson, organizationalPerson, rhatPerson
    8. Connection URL - ldap://myldap.example.com

      Example

      ldap://ldap.corp.redhat.com

      Click Test Connection.

      LDAP Test Connection

      You will get a notification that the LDAP connection is successful.

    9. Users DN - ou=users, dc=example, dc=com

      Example

      ou=users,dc=redhat,dc=com

    10. Bind Type - simple

      User Federation Upperpane
      User Federation Lowerpane
    11. Click Test authentication.

      LDAP Test Authentication

      You will get a notification that the LDAP authentication is successful.

  19. In the Mappers tab, select the first name row, edit the following parameter, and click Save:

    1. LDAP Attribute - givenName

      User Federation Mappers tab
      User Federation Mappers window
  20. In the User Federation tab, under the Settings tab, click Synchronize all users:

    User Federation Synchronize

    You will get a notification that the user sync has completed successfully.

    User Federation synchronize notification
  21. In the Users tab, search for the user added to the dashboard and click the Search icon:

    User search tab
  22. To view the user, click its row. You should see the federation link as the name provided for the User Federation.

    User details
    Important

    Do not add users manually. If added manually, delete the user by clicking Delete.

  23. Users added to the realm and the dashboard can access the Ceph dashboard with their email address and password.

    Example

    https://magna082.ceph.redhat.com:8443

    Dashboard link

Additional Resources

  • For adding users to the dashboard, see the Creating users on dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information.
  • For adding roles for users on the dashboard, see the Creating roles on dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information.

2.9. Enabling Single Sign-On for the Ceph Dashboard

The Ceph Dashboard supports external authentication of users with the Security Assertion Markup Language (SAML) protocol. Before using single sign-on (SSO) with the Ceph dashboard, create the dashboard user accounts and assign the desired roles. The Ceph Dashboard performs authorization of the users, while the authentication process is performed by an existing Identity Provider (IdP). Red Hat uses Keycloak to test the dashboard SSO feature.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Installation of the Ceph Dashboard software.
  • Launch the Dashboard.
  • Root-level access to the Ceph Manager nodes.
  • Installation of the following library packages on the Ceph Manager nodes:

    • python3-saml
    • python3-defusedxml
    • python3-isodate
    • python3-xmlsec
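The library packages listed above can be installed in one step. The sketch below only prints the dnf command; run it as root on each Ceph Manager node, and note that it assumes the repositories providing these packages are already enabled.

```shell
# Print the install command for the SAML library packages required on
# each Ceph Manager node (package names from the prerequisites above).
SAML_PKGS="python3-saml python3-defusedxml python3-isodate python3-xmlsec"
echo "dnf install -y ${SAML_PKGS}"
```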

Procedure

  1. To configure SSO on Ceph Dashboard, run the following command:

    1. Bare-metal deployments:

      Syntax

      ceph dashboard sso setup saml2 CEPH_DASHBOARD_BASE_URL IDP_METADATA IDP_USERNAME_ATTRIBUTE IDP_ENTITY_ID SP_X_509_CERT SP_PRIVATE_KEY

      Example

      [root@mon ~]# ceph dashboard sso setup saml2 http://dashboard_hostname.ceph.redhat.com:8443 idp-metadata.xml username http://10.70.59.125:8080/auth/realms/realm_name /home/certificate.txt /home/private-key.txt

    2. Container deployments:

      Syntax

      podman exec CEPH_MGR_NODE ceph dashboard sso setup saml2 CEPH_DASHBOARD_BASE_URL IDP_METADATA IDP_USERNAME_ATTRIBUTE IDP_ENTITY_ID SP_X_509_CERT SP_PRIVATE_KEY

      Example

      [root@mon ~]# podman exec ceph-mgr-hostname ceph dashboard sso setup saml2 http://dashboard_hostname.ceph.redhat.com:8443 idp-metadata.xml username http://10.70.59.125:8080/auth/realms/realm_name /home/certificate.txt /home/private-key.txt

    Replace

    • CEPH_MGR_NODE with Ceph mgr node. For example, ceph-mgr-hostname
    • CEPH_DASHBOARD_BASE_URL with the base URL where Ceph Dashboard is accessible.
    • IDP_METADATA with the URL to remote or local path or content of the IdP metadata XML. The supported URL types are http, https, and file.
    • Optional: IDP_USERNAME_ATTRIBUTE with the attribute used to get the username from the authentication response. Defaults to uid.
    • Optional: IDP_ENTITY_ID with the IdP entity ID when more than one entity ID exists on the IdP metadata.
    • Optional: SP_X_509_CERT with the file path of the certificate used by Ceph Dashboard for signing and encryption.
    • Optional: SP_PRIVATE_KEY with the file path of the private key used by Ceph Dashboard for signing and encryption.
  2. Verify the current SAML 2.0 configuration:

    1. Bare-metal deployments:

      Syntax

      ceph dashboard sso show saml2

    2. Container deployments:

      Syntax

      podman exec CEPH_MGR_NODE ceph dashboard sso show saml2

  3. To enable SSO, run the following command:

    1. Bare-metal deployments:

      Syntax

      ceph dashboard sso enable saml2
      SSO is "enabled" with "SAML2" protocol.

    2. Container deployments:

      Syntax

      podman exec CEPH_MGR_NODE ceph dashboard sso enable saml2
      SSO is "enabled" with "SAML2" protocol.

  4. Open your dashboard URL. For example:

    http://dashboard_hostname.ceph.redhat.com:8443
  5. On the SSO page, enter the login credentials. SSO redirects to the dashboard web interface.

Additional Resources

2.10. Disabling Single Sign-On for the Ceph Dashboard

You can disable single sign-on (SSO) for the Ceph Dashboard.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Installation of the Ceph Dashboard software.
  • Launch the Dashboard.
  • Root-level access to the Ceph Manager nodes.
  • Single sign-on enabled for Ceph Dashboard
  • Installation of the following library packages on the Ceph Manager nodes:

    • python3-saml
    • python3-defusedxml
    • python3-isodate
    • python3-xmlsec

Procedure

  1. To view the status of SSO, run the following command:

    1. Bare-metal deployments:

      Syntax

      ceph dashboard sso status
      SSO is "enabled" with "SAML2" protocol.

    2. Container deployments:

      Syntax

      podman exec CEPH_MGR_NODE ceph dashboard sso status
      SSO is "enabled" with "SAML2" protocol.

      Replace

      • CEPH_MGR_NODE with Ceph mgr node. For example, ceph-mgr-hostname
  2. To disable SSO, run the following command:

    1. Bare-metal deployments:

      Syntax

      ceph dashboard sso disable
      SSO is "disabled".

    2. Container deployments:

      Syntax

      podman exec CEPH_MGR_NODE ceph dashboard sso disable
      SSO is "disabled".

      Replace

      • CEPH_MGR_NODE with Ceph mgr node. For example, ceph-mgr-hostname

Additional Resources