Chapter 5. Ceph user management

As a storage administrator, you can manage the Ceph user base by providing authentication, keyring management and access control to objects in the Red Hat Ceph Storage cluster.


5.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph Monitor or Ceph client node.

5.2. Ceph user management background

When Ceph runs with authentication and authorization enabled, you must specify a user name and a keyring containing the secret key of the specified user. If you do not specify a user name, Ceph will use the client.admin administrative user as the default user name. If you do not specify a keyring, Ceph will look for a keyring by using the keyring setting in the Ceph configuration. For example, if you execute the ceph health command without specifying a user or keyring:

# ceph health

Ceph interprets the command like this:

# ceph -n client.admin --keyring=/etc/ceph/ceph.client.admin.keyring health

Alternatively, you may use the CEPH_ARGS environment variable to avoid re-entry of the user name and secret.
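
For example, a minimal sketch of using the CEPH_ARGS environment variable, assuming a hypothetical client.user1 user whose keyring is stored at /etc/ceph/ceph.client.user1.keyring:

# export CEPH_ARGS="--id user1 --keyring=/etc/ceph/ceph.client.user1.keyring"
# ceph health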

Irrespective of the type of Ceph client, for example, block device, object store, file system, native API, or the Ceph command line, Ceph stores all data as objects within pools. Ceph users must have access to pools in order to read and write data. Additionally, administrative Ceph users must have permissions to execute Ceph’s administrative commands.

The following concepts can help you understand Ceph user management.

Storage Cluster Users

A user of the Red Hat Ceph Storage cluster is either an individual or an application. Creating users allows you to control who can access the storage cluster, its pools, and the data within those pools.

Ceph has the notion of a type of user. For the purposes of user management, the type is always client. Ceph identifies users in period (.) delimited form consisting of the user type and the user ID, for example, TYPE.ID, client.admin, or client.user1. The reason for user typing is that Ceph Monitors and OSDs also use the Cephx protocol, but they are not clients. Distinguishing the user type helps to distinguish between client users and other users, streamlining access control, user monitoring, and traceability.

Sometimes Ceph’s user type may seem confusing, because the Ceph command line allows you to specify a user with or without the type, depending upon the command line usage. If you specify --user or --id, you can omit the type. So client.user1 can be entered simply as user1. If you specify --name or -n, you must specify the type and name, such as client.user1. Red Hat recommends using the type and name as a best practice wherever possible.
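
For example, the following are two equivalent ways to run the ceph health command as a hypothetical client.user1 user, the first omitting the type and the second using the fully qualified name:

# ceph --id user1 --keyring=/etc/ceph/ceph.client.user1.keyring health
# ceph -n client.user1 --keyring=/etc/ceph/ceph.client.user1.keyring health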

Note

A Red Hat Ceph Storage cluster user is not the same as a Ceph Object Gateway user. The object gateway uses a Red Hat Ceph Storage cluster user to communicate between the gateway daemon and the storage cluster, but the gateway has its own user management functionality for its end users.

Authorization capabilities

Ceph uses the term "capabilities" (caps) to describe authorizing an authenticated user to exercise the functionality of the Ceph Monitors and OSDs. Capabilities can also restrict access to data within a pool or a namespace within a pool. A Ceph administrative user sets a user’s capabilities when creating or updating a user. Capability syntax follows the form:

Syntax

DAEMON_TYPE 'allow CAPABILITY' [DAEMON_TYPE 'allow CAPABILITY']

  • Monitor Caps: Monitor capabilities include r, w, x, allow profile CAP, and profile rbd.

    Example

    mon 'allow rwx'
    mon 'allow profile osd'

  • OSD Caps: OSD capabilities include r, w, x, class-read, class-write, profile osd, profile rbd, and profile rbd-read-only. Additionally, OSD capabilities also allow for pool and namespace settings:

    osd 'allow CAPABILITY' [pool=POOL_NAME] [namespace=NAMESPACE_NAME]
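
    For example, the following capability strings grant read and write access limited to one pool, and to one namespace within that pool. The pool and namespace names are illustrative:

    Example

    osd 'allow rw pool=liverpool'
    osd 'allow rw pool=liverpool namespace=ns1'
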
Note

The Ceph Object Gateway daemon (radosgw) is a client of the Ceph storage cluster, so it isn’t represented as a Ceph storage cluster daemon type.

The following entries describe each capability.

allow

Precedes access settings for a daemon.

r

Gives the user read access. Required with monitors to retrieve the CRUSH map.

w

Gives the user write access to objects.

x

Gives the user the capability to call class methods (that is, both read and write) and to conduct auth operations on monitors.

class-read

Gives the user the capability to call class read methods. Subset of x.

class-write

Gives the user the capability to call class write methods. Subset of x.

*

Gives the user read, write and execute permissions for a particular daemon or pool, and the ability to execute admin commands.

profile osd

Gives a user permissions to connect as an OSD to other OSDs or monitors. Conferred on OSDs to enable OSDs to handle replication heartbeat traffic and status reporting.

profile bootstrap-osd

Gives a user permissions to bootstrap an OSD, so that the user can add keys when bootstrapping an OSD.

profile rbd

Gives a user read-write access to Ceph Block Devices.

profile rbd-read-only

Gives a user read-only access to Ceph Block Devices.

Pool

A pool defines a storage strategy for Ceph clients, and acts as a logical partition for that strategy.

In Ceph deployments, it is common to create a pool to support different types of use cases. For example, cloud volumes or images, object storage, hot storage, cold storage, and so on. When deploying Ceph as a back end for OpenStack, a typical deployment would have pools for volumes, images, backups and virtual machines, and users such as client.glance, client.cinder, and so on.

Namespace

Objects within a pool can be associated to a namespace, which is a logical group of objects within the pool. A user's access to a pool can be associated with a namespace so that reads and writes by the user take place only within that namespace. Objects written to a namespace within the pool can be accessed only by users who have access to the namespace.
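
For example, a minimal sketch of creating a user whose reads and writes are confined to one namespace. The user client.app1, pool mypool, and namespace app1 are illustrative names:

# ceph auth get-or-create client.app1 mon 'allow r' osd 'allow rw pool=mypool namespace=app1'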

Note

Currently, namespaces are only useful for applications written on top of librados. Ceph clients such as block device and object storage do not currently support this feature.

The rationale for namespaces is that pools can be a computationally expensive method of segregating data by use case, because each pool creates a set of placement groups that get mapped to OSDs. If multiple pools use the same CRUSH hierarchy and ruleset, OSD performance may degrade as load increases.

For example, a pool should have approximately 100 placement groups per OSD. So an exemplary cluster with 1000 OSDs would have 100,000 placement groups for one pool. Each pool mapped to the same CRUSH hierarchy and ruleset would create another 100,000 placement groups in the exemplary cluster. By contrast, writing an object to a namespace simply associates the namespace to the object name without the computational overhead of a separate pool. Rather than creating a separate pool for a user or set of users, you may use a namespace.



5.3. Managing Ceph users

As a storage administrator, you can manage Ceph users by creating, modifying, deleting, and importing users. Ceph client users can be either individuals or applications, which use Ceph clients to interact with the Red Hat Ceph Storage cluster daemons.

5.3.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph Monitor or Ceph client node.

5.3.2. Listing Ceph users

You can list the users in the storage cluster using the command-line interface.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To list the users in the storage cluster, execute the following:

    [root@mon ~]# ceph auth list

    Ceph will list out all users in the storage cluster. For example, in a two-node exemplary storage cluster, ceph auth list will output something that looks like this:

    Example

    installed auth entries:
    
    osd.0
        key: AQCvCbtToC6MDhAATtuT70Sl+DymPCfDSsyV4w==
        caps: [mon] allow profile osd
        caps: [osd] allow *
    osd.1
        key: AQC4CbtTCFJBChAAVq5spj0ff4eHZICxIOVZeA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
    client.admin
        key: AQBHCbtT6APDHhAA5W00cBchwkQjh3dkKsyPjw==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
    client.bootstrap-mds
        key: AQBICbtTOK9uGBAAdbe5zcIGHZL3T/u2g6EBww==
        caps: [mon] allow profile bootstrap-mds
    client.bootstrap-osd
        key: AQBHCbtT4GxqORAADE5u7RkpCN/oo4e5W0uBtw==
        caps: [mon] allow profile bootstrap-osd

Note

The TYPE.ID notation for users applies such that osd.0 is a user of type osd and its ID is 0, client.admin is a user of type client and its ID is admin, that is, the default client.admin user. Note also that each entry has a key: VALUE entry, and one or more caps: entries.

You may use the -o FILE_NAME option with ceph auth list to save the output to a file.
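
For example, a sketch that writes the listing to a hypothetical file path:

[root@mon ~]# ceph auth list -o /etc/ceph/auth_list.out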

5.3.3. Displaying Ceph user information

You can display a Ceph user's information using the command-line interface.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To retrieve a specific user, key and capabilities, execute the following:

    ceph auth get TYPE.ID

    Example

    [root@mon ~]# ceph auth get client.admin

  2. You can also use the -o FILE_NAME option with ceph auth get to save the output to a file. Developers can also execute the following:

    ceph auth export TYPE.ID

    Example

    [root@mon ~]# ceph auth export client.admin

The auth export command is identical to auth get, but also prints out the internal auid, which isn’t relevant to end users.

5.3.4. Adding a new Ceph user

Adding a user creates a username, that is, TYPE.ID, a secret key and any capabilities included in the command you use to create the user.

A user’s key enables the user to authenticate with the Ceph storage cluster. The user’s capabilities authorize the user to read, write, or execute on Ceph monitors (mon), Ceph OSDs (osd) or Ceph Metadata Servers (mds).

There are a few ways to add a user:

  • ceph auth add: This command is the canonical way to add a user. It will create the user, generate a key and add any specified capabilities.
  • ceph auth get-or-create: This command is often the most convenient way to create a user, because it returns a keyfile format with the user name (in brackets) and the key. If the user already exists, this command simply returns the user name and key in the keyfile format. You may use the -o FILE_NAME option to save the output to a file.
  • ceph auth get-or-create-key: This command is a convenient way to create a user and return the user’s key only. This is useful for clients that need the key only, for example, libvirt. If the user already exists, this command simply returns the key. You may use the -o FILE_NAME option to save the output to a file.

When creating client users, you may create a user with no capabilities. A user with no capabilities is useless beyond mere authentication, because the client cannot retrieve the cluster map from the monitor. However, you can create a user with no capabilities if you wish to defer adding capabilities until later, using the ceph auth caps command.
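
For example, a minimal sketch of deferring capabilities, assuming a hypothetical client.newuser user and pool mypool:

[root@mon ~]# ceph auth get-or-create client.newuser
[root@mon ~]# ceph auth caps client.newuser mon 'allow r' osd 'allow rw pool=mypool'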

A typical user has at least read capabilities on the Ceph monitor and read and write capability on Ceph OSDs. Additionally, a user's OSD permissions are often restricted to accessing a particular pool:

[root@mon ~]# ceph auth add client.john mon 'allow r' osd 'allow rw pool=liverpool'
[root@mon ~]# ceph auth get-or-create client.paul mon 'allow r' osd 'allow rw pool=liverpool'
[root@mon ~]# ceph auth get-or-create client.george mon 'allow r' osd 'allow rw pool=liverpool' -o george.keyring
[root@mon ~]# ceph auth get-or-create-key client.ringo mon 'allow r' osd 'allow rw pool=liverpool' -o ringo.key
Important

If you provide a user with capabilities to OSDs, but you DO NOT restrict access to particular pools, the user will have access to ALL pools in the cluster!

5.3.5. Modifying a Ceph user

The ceph auth caps command allows you to specify a user and change the user’s capabilities. Setting new capabilities will overwrite current capabilities. Therefore, view the current capabilities first and include them when you add new capabilities.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. View current capabilities:

    ceph auth get USERTYPE.USERID

    Example

    [root@mon ~]# ceph auth get client.john
    exported keyring for client.john
    [client.john]
    	key = AQAHjy1gkxhIMBAAxsaoFNuxlUhr/zKsmnAZOA==
    	caps mon = "allow r"
    	caps osd = "allow rw pool=liverpool"

  2. To add capabilities, use the form:

    ceph auth caps USERTYPE.USERID DAEMON 'allow [r|w|x|*|...] [pool=POOL_NAME] [namespace=NAMESPACE_NAME]'

    Example

    [root@mon ~]# ceph auth caps client.john mon 'allow r' osd 'allow rwx pool=liverpool'

    In the example, execute capabilities on the OSDs have been added.

  3. Verify the added capabilities:

    ceph auth get USERTYPE.USERID

    Example

    [root@mon ~]# ceph auth get client.john
    exported keyring for client.john
    [client.john]
    	key = AQAHjy1gkxhIMBAAxsaoFNuxlUhr/zKsmnAZOA==
    	caps mon = "allow r"
    	caps osd = "allow rwx pool=liverpool"

    In the example, execute capabilities on the OSDs can be seen.

  4. To remove a capability, set all the current capabilities except the ones you want to remove.

    ceph auth caps USERTYPE.USERID DAEMON 'allow [r|w|x|*|...] [pool=POOL_NAME] [namespace=NAMESPACE_NAME]'

    Example

    [root@mon ~]# ceph auth caps client.john mon 'allow r' osd 'allow rw pool=liverpool'

    In the example, execute capabilities on the OSDs were not included and thus will be removed.

  5. Verify the removed capabilities:

    ceph auth get USERTYPE.USERID

    Example

    [root@mon ~]# ceph auth get client.john
    exported keyring for client.john
    [client.john]
    	key = AQAHjy1gkxhIMBAAxsaoFNuxlUhr/zKsmnAZOA==
    	caps mon = "allow r"
    	caps osd = "allow rw pool=liverpool"

    In the example, execute capabilities on the OSDs are no longer listed.


5.3.6. Deleting a Ceph user

You can delete a user from the Ceph storage cluster using the command-line interface.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To delete a user, use ceph auth del:

    ceph auth del TYPE.ID

    Where TYPE is one of client, osd, mon, or mds, and ID is the user name or ID of the daemon.
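
    For example, assuming a client.john user exists in the storage cluster:

    Example

    [root@mon ~]# ceph auth del client.john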

5.3.8. Importing a Ceph user

You can import a Ceph user using the command-line interface.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To import one or more users, use ceph auth import and specify a keyring:

    ceph auth import -i /PATH/TO/KEYRING

    Example

    [root@mon ~]# ceph auth import -i /etc/ceph/ceph.keyring

Note

The Ceph storage cluster will add new users, their keys and their capabilities and will update existing users, their keys and their capabilities.

5.4. Managing Ceph keyrings

As a storage administrator, managing Ceph user keys is important for accessing the Red Hat Ceph Storage cluster. You can create keyrings, add users to keyrings, and modify users with keyrings.

5.4.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to a Ceph Monitor or Ceph client node.

5.4.2. Creating a keyring

You need to provide user keys to the Ceph clients so that the Ceph client can retrieve the key for the specified user and authenticate with the Ceph Storage Cluster. Ceph clients access keyrings to look up a user name and retrieve the user's key.

The ceph-authtool utility allows you to create a keyring.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To create an empty keyring, use --create-keyring or -C.

    Example

    [root@mon ~]# ceph-authtool --create-keyring /path/to/keyring

    When creating a keyring that will hold multiple users, we recommend using the cluster name for the keyring file name, for example, CLUSTER_NAME.keyring, and saving it in the /etc/ceph/ directory. The keyring configuration default setting will then pick up the file name without requiring you to specify it in the local copy of the Ceph configuration file.

  2. Create ceph.keyring by executing the following:

    [root@mon ~]# ceph-authtool -C /etc/ceph/ceph.keyring

When creating a keyring with a single user, we recommend using the cluster name, the user type and the user name and saving it in the /etc/ceph/ directory. For example, ceph.client.admin.keyring for the client.admin user.

To create a keyring in /etc/ceph/, you must do so as root. This means the file will have rw permissions for the root user only, which is appropriate when the keyring contains administrator keys. However, if you intend to use the keyring for a particular user or group of users, ensure that you execute chown or chmod to establish appropriate keyring ownership and access.
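
For example, a sketch of restricting a hypothetical per-user keyring to a hypothetical cephuser account:

[root@mon ~]# chown cephuser:cephuser /etc/ceph/ceph.client.user1.keyring
[root@mon ~]# chmod 600 /etc/ceph/ceph.client.user1.keyring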

5.4.3. Adding a user to the keyring

When you add a user to the Ceph storage cluster, you can retrieve the user, key, and capabilities, then save the user to a keyring file. When you want to use only one user per keyring, running the command in the Displaying Ceph user information procedure with the -o FILE_NAME option saves the output in the keyring file format.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To create a keyring for the client.admin user, execute the following:

    [root@mon ~]# ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring

    Notice that we use the recommended file format for an individual user.

  2. When you want to import users to a keyring, you can use ceph-authtool to specify the destination keyring and the source keyring.

    [root@mon ~]# ceph-authtool /etc/ceph/ceph.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

5.4.4. Creating a Ceph user with a keyring

Ceph provides the ability to create a user directly in the Red Hat Ceph Storage cluster. However, you can also create a user, keys and capabilities directly on a Ceph client keyring.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. Import a user into the keyring:

    Example

    [root@mon ~]# ceph-authtool -n client.ringo --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.keyring

  2. Create a keyring and add a new user to the keyring simultaneously:

    Example

    [root@mon ~]# ceph-authtool -C /etc/ceph/ceph.keyring -n client.ringo --cap osd 'allow rwx' --cap mon 'allow rwx' --gen-key

    In the foregoing scenarios, the new user client.ringo exists only in the keyring, not yet in the storage cluster.

  3. To add the new user to the Ceph storage cluster:

    [root@mon ~]# ceph auth add client.ringo -i /etc/ceph/ceph.keyring


5.4.5. Modifying a Ceph user with a keyring

You can modify a Ceph user and their keyring using the command-line interface.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To modify the capabilities of a user record in a keyring, specify the keyring and the user, followed by the capabilities:

    Example

    [root@mon ~]# ceph-authtool /etc/ceph/ceph.keyring -n client.ringo --cap osd 'allow rwx' --cap mon 'allow rwx'

  2. To update the user in the Red Hat Ceph Storage cluster, import the modified keyring so that the user entry in the storage cluster is updated:

    Example

    [root@mon ~]# ceph auth import -i /etc/ceph/ceph.keyring

You can also modify user capabilities directly in the storage cluster, save the results to a keyring file, and then import that keyring into the main ceph.keyring file.
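
For example, a sketch of that workflow for the client.ringo user. The capabilities and the temporary file path shown are illustrative:

[root@mon ~]# ceph auth caps client.ringo mon 'allow r' osd 'allow rw pool=liverpool'
[root@mon ~]# ceph auth get client.ringo -o /tmp/ringo.keyring
[root@mon ~]# ceph-authtool /etc/ceph/ceph.keyring --import-keyring /tmp/ringo.keyring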

Additional Resources

  • See Import user for details on updating a Red Hat Ceph Storage cluster user from a keyring.

5.4.6. Command-line usage for Ceph users

Ceph supports the following usage for user name and secret:

--id | --user

Description
Ceph identifies users with a type and an ID, for example, TYPE.ID or client.admin, client.user1. The --id and --user options enable you to specify the ID portion of the user name, for example, admin, user1, or foo, omitting the type. For example, to specify the user client.foo, enter the following:
[root@mon ~]# ceph --id foo --keyring /path/to/keyring health
[root@mon ~]# ceph --user foo --keyring /path/to/keyring health

--name | -n

Description
Ceph identifies users with a type and an ID, for example, TYPE.ID or client.admin, client.user1. The --name and -n options enable you to specify the fully qualified user name. You must specify the user type (typically client) with the user ID. For example:
[root@mon ~]# ceph --name client.foo --keyring /path/to/keyring health
[root@mon ~]# ceph -n client.foo --keyring /path/to/keyring health

--keyring

Description
The path to the keyring containing one or more user names and secrets. The --secret option provides the same functionality, but it does not work with Ceph RADOS Gateway, which uses --secret for another purpose. You may retrieve a keyring with ceph auth get-or-create and store it locally. This is a preferred approach, because you can switch user names without switching the keyring path. For example:
[root@mon ~]# rbd map --id foo --keyring /path/to/keyring rbd/myimage
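
For example, a sketch of retrieving the keyring for a hypothetical client.foo user with ceph auth get-or-create, storing it locally, and then using it to authenticate:

[root@mon ~]# ceph auth get-or-create client.foo -o /etc/ceph/ceph.client.foo.keyring
[root@mon ~]# ceph --id foo --keyring /etc/ceph/ceph.client.foo.keyring health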

5.4.7. Ceph user management limitations

The cephx protocol authenticates Ceph clients and servers to each other. It is not intended to handle authentication of human users or application programs run on their behalf. If that type of access control is required, you must use another mechanism, which is likely to be specific to the front end used to access the Ceph object store. That mechanism is responsible for ensuring that only acceptable users and programs are able to run on the machines that Ceph permits to access its object store.

The keys used to authenticate Ceph clients and servers are typically stored in a plain text file with appropriate permissions in a trusted host.

Important

Storing keys in plaintext files has security shortcomings, but this is difficult to avoid, given the basic authentication methods Ceph uses in the background. Those setting up Ceph systems should be aware of these shortcomings.

In particular, arbitrary user machines, especially portable machines, should not be configured to interact directly with Ceph, since that mode of use would require the storage of a plaintext authentication key on an insecure machine. Anyone who stole that machine or obtained surreptitious access to it could obtain the key that will allow them to authenticate their own machines to Ceph.

Rather than permitting potentially insecure machines to access a Ceph object store directly, users should be required to sign in to a trusted machine in the environment using a method that provides sufficient security for the purposes. That trusted machine will store the plaintext Ceph keys for the human users. A future version of Ceph may address these particular authentication issues more fully.

At the moment, none of the Ceph authentication protocols provide secrecy for messages in transit. As a result, an eavesdropper on the wire can read all data sent between clients and servers in Ceph, even if the eavesdropper cannot create or alter that data. Those storing sensitive data in Ceph should consider encrypting their data before providing it to the Ceph system.

For example, Ceph Object Gateway provides S3 API Server-side Encryption, which encrypts unencrypted data received from a Ceph Object Gateway client before storing it in the Ceph Storage cluster and similarly decrypts data retrieved from the Ceph Storage cluster before sending it back to the client. To ensure encryption in transit between the client and the Ceph Object Gateway, the Ceph Object Gateway should be configured to use SSL.