Chapter 1. The basics of Ceph configuration

As a storage administrator, you need to have a basic understanding of how to view the Ceph configuration, and how to set the Ceph configuration options for the Red Hat Ceph Storage cluster. You can view and set the Ceph configuration options at runtime.

1.1. Prerequisites

  • Installation of the Red Hat Ceph Storage software.

1.2. Ceph configuration

All Red Hat Ceph Storage clusters have a configuration, which defines:

  • Cluster Identity
  • Authentication settings
  • Ceph daemons
  • Network configuration
  • Node names and addresses
  • Paths to keyrings
  • Paths to OSD log files
  • Other runtime options

A deployment tool such as Ansible will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a Red Hat Ceph Storage cluster without using a deployment tool.

1.3. The Ceph configuration database

The Ceph Monitor manages a configuration database of Ceph options that centralizes configuration management by storing configuration options for the entire storage cluster. Centralizing the Ceph configuration in a database simplifies storage cluster administration. A handful of Ceph options can still be defined in the local Ceph configuration file, which is /etc/ceph/ceph.conf by default. These few configuration options control how the other Ceph components connect to the Ceph Monitors to authenticate and fetch the configuration information from the database.

Ceph allows you to make changes to the configuration of a daemon at runtime. This capability can be useful for increasing or decreasing logging output, for enabling or disabling debug settings, and even for runtime optimization.


When the same option exists in both the configuration database and the Ceph configuration file, the option in the configuration database has a lower priority than the one set in the Ceph configuration file.

Sections and Masks

Just as you can configure Ceph options globally, per daemon type, or by a specific daemon in the Ceph configuration file, you can also configure the Ceph options in the configuration database according to these sections. Ceph configuration options can have a mask associated with them. These masks can further restrict which daemons or clients the options apply to.

Masks have two forms:

type:location
The type is a CRUSH property, for example, rack or host. The location is a value for the property type. For example, host:foo limits the option only to daemons or clients running on a particular node, foo in this example.
class:device-class
The device-class is the name of the CRUSH device class, such as hdd or ssd. For example, class:ssd limits the option only to Ceph OSDs backed by solid state drives (SSD). This mask has no effect on non-OSD daemons or clients.
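When setting an option in the configuration database, a mask is appended to the daemon or client name with a slash. The following commands are a sketch of how the two mask forms might be applied; they assume a running storage cluster, and the node name foo is hypothetical:

```shell
# Limit debug logging to OSDs running on the node named foo (hypothetical name).
ceph config set osd/host:foo debug_osd 20

# Limit the backfill setting to OSDs backed by solid state drives.
ceph config set osd/class:ssd osd_max_backfills 3
```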

Administrative Commands

The Ceph configuration database can be administered with the sub-command ceph config ACTION. These are the actions you can use:

dump
Dumps the entire configuration database of options for the storage cluster.
get WHO
Dumps the configuration for a specific daemon or client. For example, WHO can be a daemon, like mds.a.
set WHO OPTION VALUE
Sets a configuration option in the Ceph configuration database.
show WHO
Shows the reported running configuration for a running daemon. These options might differ from those stored by the Ceph Monitors if there is a local configuration file in use, or if options have been overridden on the command line or at run time. The source of each option value is reported as part of the output.
assimilate-conf -i INPUT_FILE -o OUTPUT_FILE
Assimilates a configuration file from the INPUT_FILE and moves any valid options into the Ceph Monitors’ configuration database. Any options that are unrecognized, invalid, or that cannot be controlled by the Ceph Monitor are returned in an abbreviated configuration file stored in the OUTPUT_FILE. This command can be useful for transitioning from legacy configuration files to a centralized configuration database.
help OPTION -f json-pretty
Displays help for a particular OPTION using a JSON-formatted output.
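As a sketch, a typical administrative sequence against a running storage cluster might look like the following; the daemon names and option values are examples only:

```shell
# Dump every option stored in the configuration database.
ceph config dump

# Dump the options stored for a specific daemon.
ceph config get mds.a

# Set an option in the configuration database.
ceph config set osd.0 debug_osd 10

# Display help for an option as pretty-printed JSON.
ceph config help debug_osd -f json-pretty
```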

1.4. The Ceph configuration file

The Ceph configuration file configures the Ceph daemons at start time, overriding their default values.


Each Ceph daemon has a series of default values, which are set by the ceph/src/common/config_opts.h file.

The location of Ceph’s default configuration file is /etc/ceph/ceph.conf. You can specify a different path by:

  • Setting the path in the $CEPH_CONF environment variable.
  • Specifying the -c command line argument, for example, -c path/ceph.conf.

Ceph configuration files use an ini-style syntax. You can add comments by preceding them with a pound sign (#) or a semi-colon (;).


# <-- A pound sign (#) precedes a comment.
# Comments always follow a semi-colon (;) or a pound (#) on each line.
# The end of the line terminates a comment.
# We recommend that you provide comments in your configuration file(s).
; A comment may be anything.
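The comment and section syntax can be illustrated with a short, self-contained sketch; the file path and option values here are examples only, not a working cluster configuration:

```shell
# Write a minimal ini-style fragment (example values only).
cat > /tmp/demo-ceph.conf <<'EOF'
[global]
# cluster-wide settings follow
auth_cluster_required = cephx

[osd.1]
; a per-instance override
debug_osd = 0/5
EOF

# Extract the value of auth_cluster_required; prints "cephx".
awk -F' = ' '/^auth_cluster_required/ {print $2}' /tmp/demo-ceph.conf
```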

The configuration file can configure all Ceph daemons in a Ceph storage cluster, or all Ceph daemons of a particular type, at start time. To configure a series of daemons, the settings must be included under the sections for the processes that will receive the configuration, as follows:

[global]
Settings under [global] affect all daemons in a Ceph storage cluster. Example:
auth supported = cephx
[osd]
Settings under [osd] affect all ceph-osd daemons in the Ceph storage cluster, and override the same setting in [global].
[mon]
Settings under [mon] affect all ceph-mon daemons in the Ceph storage cluster, and override the same setting in [global]. Example:
mon host = hostname1,hostname2,hostname3
[client]
Settings under [client] affect all Ceph clients, for example, mounted Ceph block devices, Ceph object gateways, and so on. Example:
log file = /var/log/ceph/radosgw.log

Global settings affect all instances of all daemons in the Ceph storage cluster. Use the [global] heading for values that are common to all daemons in the Ceph storage cluster. You can override each [global] option by:

  • Changing the options for a particular process type:


    [osd], [mon]


  • Changing the options for a particular process:


    [osd.1]

Overriding a global setting affects all child processes, except those that you specifically override in a particular daemon.

A typical global setting involves activating authentication.


#Enable authentication between hosts within the cluster.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

You can specify settings that apply to a particular type of daemon. When you specify settings under [osd] or [mon] without specifying a particular instance, the setting applies to all OSD or Monitor daemons, respectively. One example of a daemon-wide setting is the osd memory target.


[osd]
osd_memory_target = 5368709120

You can specify settings for particular instances of a daemon. Specify an instance by entering its type, delimited by a period (.), followed by the instance ID. The instance ID for Ceph OSD daemons is always numeric, but it may be alphanumeric for Ceph Monitors.


[osd.1]
# settings affect osd.1 only.

[mon.a]
# settings affect mon.a only.

A typical Ceph configuration file has at least the following settings:

[global]
mon_initial_members = NODE_NAME[, NODE_NAME]

#All clusters have a front-side public network.
#If you have two NICs, you can configure a back side cluster
#network for OSD object replication, heart beats, backfilling,
#recovery, and so on
public_network = PUBLIC_NET[, PUBLIC_NET]
#cluster_network = PRIVATE_NET[, PRIVATE_NET]

#Clusters require authentication by default.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

#Choose reasonable numbers for your number of replicas
#and placement groups.
osd_pool_default_size = NUM  # Write an object n times.
osd_pool_default_min_size = NUM # Allow writing n copies in a degraded state.
osd_pool_default_pg_num = NUM
osd_pool_default_pgp_num = NUM

#Choose a reasonable crush leaf type.
#0 for a 1-node cluster.
#1 for a multi-node cluster in a single rack.
#2 for a multi-node, multi-chassis cluster with multiple hosts in a chassis.
#3 for a multi-node cluster with hosts across racks, and so on.
osd_crush_chooseleaf_type = NUM

1.5. Using the Ceph metavariables

Metavariables simplify Ceph storage cluster configuration dramatically. When a metavariable is set in a configuration value, Ceph expands the metavariable into a concrete value.

Metavariables are very powerful when used within the [global], [osd], [mon], or [client] sections of the Ceph configuration file. However, you can also use them with the administration socket. Ceph metavariables are similar to Bash shell expansion.

Ceph supports the following metavariables:

$cluster
Expands to the Ceph storage cluster name. Useful when running multiple Ceph storage clusters on the same hardware.
$type
Expands to one of osd or mon, depending on the type of the instance daemon.
$id
Expands to the daemon identifier. For osd.0, this would be 0.
$host
Expands to the host name of the instance daemon.
$name
Expands to $type.$id.
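For example, the default path of the Ceph administration socket combines two of these metavariables; for the daemon osd.0 in a cluster named ceph, the following setting expands to /var/run/ceph/ceph-osd.0.asok:

```ini
[global]
admin_socket = /var/run/ceph/$cluster-$name.asok
```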

1.6. Viewing the Ceph configuration at runtime

The Ceph configuration files can be viewed at boot time and run time.

Prerequisites
  • Root-level access to the Ceph node.
  • Access to admin keyring.

Procedure
  1. To view a runtime configuration, log in to a Ceph node running the daemon and execute:


    ceph daemon DAEMON_TYPE.ID config show

    To see the configuration for osd.0, you can log into the node containing osd.0 and execute this command:


    [root@osd ~]# ceph daemon osd.0 config show

  2. To see additional options, specify the daemon and help:


    [root@osd ~]# ceph daemon osd.0 help

1.7. Viewing a specific configuration at runtime

Configuration settings for Red Hat Ceph Storage can be viewed at runtime from the Ceph Monitor node.

Prerequisites
  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the Ceph Monitor node.

Procedure
  1. Log into a Ceph node and execute:


    ceph daemon DAEMON_TYPE.ID config get PARAMETER


    [root@mon ~]# ceph daemon osd.0 config get public_addr

1.8. Setting a specific configuration at runtime

There are two general ways to set a runtime configuration.

  • Using the Ceph Monitor.
  • Using the Ceph administration socket.

You can set a Ceph runtime configuration option by contacting the Ceph Monitor using the tell and injectargs commands.

Prerequisites
  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the Ceph Monitor or OSD nodes.

Procedure
  1. Using the Ceph Monitor by injecting options:

    ceph tell DAEMON_TYPE.DAEMON_ID or * injectargs --NAME VALUE [--NAME VALUE]

    Replace DAEMON_TYPE with one of osd or mon.

    You can apply the runtime setting to all daemons of a particular type with *, or specify a specific DAEMON_ID, which is a number or a name.

    For example, to change the debug logging for a ceph-osd daemon named osd.0 to 0/5, execute the following command:

    [root@osd ~]# ceph tell osd.0 injectargs '--debug-osd 0/5'

    The tell command takes multiple arguments, so each argument for tell must be within single quotes, and the configuration prepended with two dashes ('--NAME VALUE [--NAME VALUE]' ['--NAME VALUE [--NAME VALUE]']). The ceph tell command goes through the monitors.

    If you cannot bind to the monitor, you can still make the change by using the Ceph administration socket.

  2. Log into the node of the daemon whose configuration you want to change.

    1. Issue the configuration change directly to the Ceph daemon:

      [root@osd ~]# ceph daemon osd.0 config set debug_osd 0/5

      Quotes are not necessary for the daemon command, because it only takes one argument.

1.9. OSD Memory Target

BlueStore keeps OSD heap memory usage under a designated target size with the osd_memory_target configuration option.

The option osd_memory_target sets OSD memory based upon the available RAM in the system. By default, Ansible sets the value to 4 GB. You can change the value, expressed in bytes, in the /usr/share/ceph-ansible/group_vars/all.yml file when deploying the daemon. You can also use the Ceph overrides in the ceph.conf file to manually set the osd memory target, for example to 6 GB.


    osd memory target: 6442450944


When setting the option using Ceph overrides, use the option without the underscores.
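The values are plain byte counts. For example, the 6 GB target shown above is 6 multiplied by 1024 three times, which you can verify in a shell:

```shell
# 6 GiB expressed in bytes, the value used for the osd memory target above.
echo $((6 * 1024 * 1024 * 1024))   # prints 6442450944
```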

Ceph OSD memory caching is more important when the block device is slow, for example, with traditional hard drives, because the benefit of a cache hit is much higher than it would be with a solid state drive. However, this must be weighed against the decision to co-locate OSDs with other services, such as in a hyper-converged infrastructure (HCI), or with other applications.


The value of osd_memory_target depends on the number of OSDs per device: typically one OSD per device for traditional hard drives, and two OSDs per device for NVMe SSD devices. The osds_per_device option is defined in the group_vars/osds.yml file.


1.10. MDS Memory Cache Limit

MDS servers keep their metadata in a separate storage pool, named cephfs_metadata, and are the users of Ceph OSDs. For Ceph File Systems, MDS servers have to support an entire Red Hat Ceph Storage cluster, not just a single storage device within the storage cluster, so their memory requirements can be significant, particularly if the workload consists of small-to-medium-size files, where the ratio of metadata to data is much higher.

Example: Set the mds_cache_memory_limit to 2 GiB

    mds_cache_memory_limit: 2147483648

For a large Red Hat Ceph Storage cluster with a metadata-intensive workload, do not put an MDS server on the same node as other memory-intensive services. Dedicating a node to the MDS gives you the option to allocate more memory to it, for example, sizes greater than 100 GB.


1.11. Additional Resources

  • See the general Ceph configuration options in Appendix A for specific option descriptions and usage.