Chapter 3. Monitor Configuration Reference

Understanding how to configure a Ceph monitor is an important part of building a reliable Red Hat Ceph Storage cluster. All clusters have at least one monitor. A monitor configuration usually remains fairly consistent, but you can add, remove or replace a monitor in a cluster.

3.1. Background

Ceph monitors maintain a "master copy" of the cluster map. That means a Ceph client can determine the location of all Ceph monitors and Ceph OSDs just by connecting to one Ceph monitor and retrieving a current cluster map.

Before Ceph clients can read from or write to Ceph OSDs, they must connect to a Ceph monitor first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph client can compute the location for any object. The ability to compute object locations allows a Ceph client to talk directly to Ceph OSDs, which is a very important aspect of Ceph high scalability and performance.

The primary role of the Ceph monitor is to maintain a master copy of the cluster map. Ceph monitors also provide authentication and logging services. Ceph monitors write all changes in the monitor services to a single Paxos instance, and Paxos writes the changes to a key-value store for strong consistency. Ceph monitors can query the most recent version of the cluster map during synchronization operations. Ceph monitors leverage the key-value store’s snapshots and iterators (using the leveldb database) to perform store-wide synchronization.

3.1.1. Cluster Maps

The cluster map is a composite of maps, including the monitor map, the OSD map, and the placement group map. The cluster map tracks a number of important events:

  • Which processes are in the Red Hat Ceph Storage cluster.
  • Which of those processes are up and running or down.
  • Whether the placement groups are active or inactive, and clean or in some other state.
  • Other details that reflect the current state of the cluster, such as:

    • the total amount of storage space, and
    • the amount of storage used.

When there is a significant change in the state of the cluster, for example, a Ceph OSD goes down or a placement group falls into a degraded state, the cluster map gets updated to reflect the current state of the cluster. Additionally, the Ceph monitor maintains a history of the prior states of the cluster. The monitor map, OSD map, and placement group map each maintain a history of their map versions. Each version is called an epoch.

When operating the Red Hat Ceph Storage cluster, keeping track of these states is an important part of the cluster administration.
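
To keep track of these states from the command line, you can print each map and its current epoch. The following commands are a brief illustration; the exact output varies by release:

ceph mon dump    # print the monitor map and its epoch
ceph osd dump    # print the OSD map and its epoch
ceph pg stat     # print a summary of placement group states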

3.1.2. Monitor Quorum

A cluster will run sufficiently with a single monitor; however, a single monitor is a single point of failure. To ensure high availability in a production Ceph storage cluster, run Ceph with multiple monitors so that the failure of a single monitor will not cause a failure of the entire cluster.

When a Ceph storage cluster runs multiple Ceph monitors for high availability, Ceph monitors use the Paxos algorithm to establish consensus about the master cluster map. A consensus requires a majority of monitors running to establish a quorum for consensus about the cluster map (for example, 1; 2 out of 3; 3 out of 5; 4 out of 6; and so on).
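
To see which monitors currently form the quorum, you can query the monitors directly. The following commands are a brief illustration:

ceph mon stat
ceph quorum_status --format json-pretty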

mon_force_quorum_join

Description
Force the monitor to join the quorum, even if it has previously been removed from the map.
Type
Boolean
Default
False

3.1.3. Consistency

When you add monitor settings to the Ceph configuration file, you need to be aware of some of the architectural aspects of Ceph monitors. Ceph imposes strict consistency requirements for a Ceph monitor when discovering another Ceph monitor within the cluster. Whereas Ceph clients and other Ceph daemons use the Ceph configuration file to discover monitors, monitors discover each other using the monitor map (monmap), not the Ceph configuration file.

A Ceph monitor always refers to the local copy of the monitor map when discovering other Ceph monitors in the Red Hat Ceph Storage cluster. Using the monitor map instead of the Ceph configuration file avoids errors that could break the cluster (for example, typos in the Ceph configuration file when specifying a monitor address or port). Since monitors use monitor maps for discovery and they share monitor maps with clients and other Ceph daemons, the monitor map provides monitors with a strict guarantee that their consensus is valid.

Strict consistency also applies to updates to the monitor map. As with any other updates on the Ceph monitor, changes to the monitor map always run through a distributed consensus algorithm called Paxos. The Ceph monitors must agree on each update to the monitor map, such as adding or removing a Ceph monitor, to ensure that each monitor in the quorum has the same version of the monitor map. Updates to the monitor map are incremental so that Ceph monitors have the latest agreed upon version, and a set of previous versions. Maintaining a history enables a Ceph monitor that has an older version of the monitor map to catch up with the current state of the Red Hat Ceph Storage cluster.
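
To inspect the monitor map that the monitors share, including its current epoch, you can retrieve and print it. The following is a brief illustration; /tmp/monmap is only an example path:

ceph mon getmap -o /tmp/monmap
monmaptool --print /tmp/monmap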

If Ceph monitors discovered each other through the Ceph configuration file instead of through the monitor map, it would introduce additional risks because the Ceph configuration files are not updated and distributed automatically. Ceph monitors might inadvertently use an older Ceph configuration file, fail to recognize a Ceph monitor, fall out of a quorum, or develop a situation where Paxos is not able to determine the current state of the system accurately.

3.1.4. Bootstrapping Monitors

In most configuration and deployment cases, tools that deploy Ceph, for example Red Hat Storage Console or Ansible, can help bootstrap the Ceph monitors by generating a monitor map for you. A Ceph monitor requires a few explicit settings (illustrated in the example after this list):

  • File System ID: The fsid is the unique identifier for your object store. Since you can run multiple clusters on the same hardware, you must specify the unique ID of the object store when bootstrapping a monitor. Deployment tools such as Red Hat Storage Console or Ansible will generate a file system identifier for you, but you can also specify the fsid manually.
  • Monitor ID: A monitor ID is a unique ID assigned to each monitor within the cluster. It is an alphanumeric value, and by convention the identifier usually follows an alphabetical increment (for example, a, b, and so on). This can be set in the Ceph configuration file (for example, [mon.a], [mon.b], and so on), by a deployment tool, or using the ceph command.
  • Keys: The monitor must have secret keys.
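
When you are not using a deployment tool, a minimal sketch of generating these settings by hand might look like the following. The paths, the monitor ID a, and the IP address are placeholder values:

# Generate a unique fsid for the cluster
uuidgen

# Create a keyring containing a secret key for the monitors
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

# Create an initial monitor map for monitor ID 'a' using the generated fsid
monmaptool --create --add a 10.0.0.10:6789 --fsid <fsid> /tmp/monmap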

3.2. Configuring Monitors

To apply configuration settings to the entire cluster, enter the configuration settings under the [global] section. To apply configuration settings to all monitors in the cluster, enter the configuration settings under the [mon] section. To apply configuration settings to specific monitors, specify the monitor instance (for example, [mon.a]). By convention, monitor instance names use alpha notation.

[global]

[mon]

[mon.a]

[mon.b]

[mon.c]

3.2.1. Minimum Configuration

The bare minimum monitor settings for a Ceph monitor in the Ceph configuration file include a host name for each monitor, if it is not configured for DNS, and the monitor address. You can configure these under [mon] or under the entry for a specific monitor.

[mon]
mon_host = hostname1,hostname2,hostname3
mon_addr = 10.0.0.10:6789,10.0.0.11:6789,10.0.0.12:6789

Or

[mon.a]
host = hostname1
mon_addr = 10.0.0.10:6789
Note

This minimum configuration for monitors assumes that a deployment tool generates the fsid and the mon. key for you.

Important

Once you deploy a Ceph cluster, do not change the IP address of the monitors.

As of RHCS 2.4, Ceph does not require the mon_host setting when the cluster is configured to look up a monitor via the DNS server. To configure the Ceph cluster for DNS lookup, set the mon_dns_srv_name setting in the Ceph configuration file.

mon_dns_srv_name
Description
The service name used for querying the DNS for the monitor hosts/addresses.
Type
String
Default
ceph-mon
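
For example, a minimal sketch of this setting in the Ceph configuration file, shown here with the default value:

[global]
mon_dns_srv_name = ceph-mon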

Once set, configure the DNS. Create either IPv4 (A) or IPv6 (AAAA) records for the monitors in the DNS zone. For example:

#IPv4
mon1.example.com. A 192.168.0.1
mon2.example.com. A 192.168.0.2
mon3.example.com. A 192.168.0.3

#IPv6
mon1.example.com. AAAA 2001:db8::100
mon2.example.com. AAAA 2001:db8::200
mon3.example.com. AAAA 2001:db8::300

Where example.com is the DNS search domain.

Then, create SRV TCP records with the name set in the mon_dns_srv_name configuration setting, pointing to the three monitors. The following example uses the default ceph-mon value.

_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon2.example.com.
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon3.example.com.

Monitors run on port 6789 by default, and their priority and weight are set to 10 and 60, respectively, in the foregoing example.

3.2.2. Cluster ID

Each Red Hat Ceph Storage cluster has a unique identifier (fsid). If specified, it usually appears under the [global] section of the configuration file. Deployment tools usually generate the fsid and store it in the monitor map, so the value may not appear in a configuration file. The fsid makes it possible to run daemons for multiple clusters on the same hardware.
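
If you do set the fsid manually, it goes under the [global] section; the UUID below is only an example value. On a running cluster, the ceph fsid command prints the current value.

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993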

fsid
Description
The cluster ID. One per cluster.
Type
UUID
Required
Yes.
Default
N/A. May be generated by a deployment tool if not specified.
Note

Do not set this value if you use a deployment tool that does it for you.

3.2.3. Initial Members

Red Hat recommends running a production Red Hat Ceph Storage cluster with at least three Ceph monitors to ensure high availability. When you run multiple monitors, you can specify the initial monitors that must be members of the cluster in order to establish a quorum. This may reduce the time it takes for the cluster to come online.

[mon]
mon_initial_members = a,b,c
mon_initial_members
Description
The IDs of initial monitors in a cluster during startup. If specified, Ceph requires an odd number of monitors to form an initial quorum (for example, 3).
Type
String
Default
None
Note

A majority of monitors in your cluster must be able to reach each other in order to establish a quorum. You can decrease the initial number of monitors to establish a quorum with this setting.

3.2.4. Data

Ceph provides a default path where Ceph monitors store data. For optimal performance in a production Red Hat Ceph Storage cluster, Red Hat recommends running Ceph monitors on separate hosts and drives from Ceph OSDs. Ceph monitors call the fsync() function often, which can interfere with Ceph OSD workloads.

Ceph monitors store their data as key-value pairs. Using a data store prevents recovering Ceph monitors from running corrupted versions through Paxos, and it enables multiple modification operations in one single atomic batch, among other advantages.

Note

Red Hat does not recommend changing the default data location. If you modify the default location, make it uniform across Ceph monitors by setting it in the [mon] section of the configuration file.
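
If you do change the location, a minimal sketch of setting it uniformly for all monitors might look like the following; the path is only a placeholder:

[mon]
mon_data = /path/to/mon/data/$cluster-$id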

mon_data
Description
The monitor’s data location.
Type
String
Default
/var/lib/ceph/mon/$cluster-$id
mon_data_size_warn
Description
Ceph issues a HEALTH_WARN status in the cluster log when the monitor’s data store reaches this threshold. The default value is 15GB.
Type
Integer
Default
15*1024*1024*1024
mon_data_avail_warn
Description
Ceph issues a HEALTH_WARN status in the cluster log when the available disk space of the monitor’s data store is lower than or equal to this percentage.
Type
Integer
Default
30
mon_data_avail_crit
Description
Ceph issues a HEALTH_ERR status in the cluster log when the available disk space of the monitor’s data store is lower than or equal to this percentage.
Type
Integer
Default
5
mon_warn_on_cache_pools_without_hit_sets
Description
Ceph issues a HEALTH_WARN status in the cluster log if a cache pool does not have the hit_set_type parameter set. See Pool Values for more details.
Type
Boolean
Default
True
mon_warn_on_crush_straw_calc_version_zero
Description
Ceph issues a HEALTH_WARN status in the cluster log if the CRUSH straw_calc_version is zero. See CRUSH tunables for details.
Type
Boolean
Default
True
mon_warn_on_legacy_crush_tunables
Description
Ceph issues a HEALTH_WARN status in the cluster log if CRUSH tunables are too old (older than the version specified by mon_crush_min_required_version).
Type
Boolean
Default
True
mon_crush_min_required_version
Description
This setting defines the minimum tunable profile version required by the cluster. See CRUSH tunables for details.
Type
String
Default
firefly
mon_warn_on_osd_down_out_interval_zero
Description
Ceph issues a HEALTH_WARN status in the cluster log if the mon_osd_down_out_interval setting is zero, because the Leader behaves in a similar manner when the noout flag is set. Administrators find it easier to troubleshoot a cluster by setting the noout flag. Ceph issues the warning to ensure administrators know that the setting is zero.
Type
Boolean
Default
True
mon_cache_target_full_warn_ratio
Description
Ceph issues a warning when a cache pool reaches this ratio of its cache_target_full and target_max_object values.
Type
Float
Default
0.66
mon_health_data_update_interval
Description
How often (in seconds) a monitor in the quorum shares its health status with its peers. A negative number disables health updates.
Type
Float
Default
60
mon_health_to_clog
Description
This setting enables Ceph to send a health summary to the cluster log periodically.
Type
Boolean
Default
True
mon_health_to_clog_tick_interval
Description
How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. If the current health summary is empty or identical to the previous one, the monitor will not send it to the cluster log.
Type
Integer
Default
3600
mon_health_to_clog_interval
Description
How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. The monitor will always send the summary to the cluster log.
Type
Integer
Default
60

3.2.5. Storage Capacity

When a Red Hat Ceph Storage cluster gets close to its maximum capacity (specified by the mon_osd_full_ratio parameter), Ceph prevents you from writing to or reading from Ceph OSDs as a safety measure to prevent data loss. Therefore, letting a production Red Hat Ceph Storage cluster approach its full ratio is not a good practice, because it sacrifices high availability. The default full ratio is .95, or 95% of capacity. This is a very aggressive setting for a test cluster with a small number of OSDs.

Tip

When monitoring a cluster, be alert to warnings related to the nearfull ratio. A warning means that a failure of one or more OSDs could result in a temporary service disruption. Consider adding more OSDs to increase storage capacity.

A common scenario for test clusters involves a system administrator removing a Ceph OSD from the Red Hat Ceph Storage cluster to watch the cluster rebalance, then removing another Ceph OSD, and so on, until the cluster eventually reaches the full ratio and locks up.

Red Hat recommends a bit of capacity planning even with a test cluster. Planning enables you to gauge how much spare capacity you will need in order to maintain high availability. Ideally, you want to plan for a series of Ceph OSD failures where the cluster can recover to an active + clean state without replacing those Ceph OSDs immediately. You can run a cluster in an active + degraded state, but this is not ideal for normal operating conditions.

The following diagram depicts a simplistic Red Hat Ceph Storage cluster containing 33 Ceph nodes with one Ceph OSD per host, each Ceph OSD daemon reading from and writing to a 3 TB drive. So this exemplary Red Hat Ceph Storage cluster has a maximum actual capacity of 99 TB. With a mon_osd_full_ratio of 0.95, if the Red Hat Ceph Storage cluster falls to 5 TB of remaining capacity, the cluster will not allow Ceph clients to read or write data. So the Red Hat Ceph Storage cluster’s operating capacity is 95 TB, not 99 TB.

Storage Capacity

It is normal in such a cluster for one or two OSDs to fail. A less frequent but reasonable scenario involves a rack’s router or power supply failing, which brings down multiple OSDs simultaneously (for example, OSDs 7-12). In such a scenario, you should still strive for a cluster that can remain operational and achieve an active + clean state, even if that means adding a few hosts with additional OSDs in short order. If your capacity utilization is too high, you might not lose data, but you could still sacrifice data availability while resolving an outage within a failure domain if the cluster’s capacity utilization exceeds the full ratio. For this reason, Red Hat recommends at least some rough capacity planning.

Identify two numbers for your cluster:

  • the number of OSDs
  • the total capacity of the cluster

To determine the mean average capacity of an OSD within a cluster, divide the total capacity of the cluster by the number of OSDs in the cluster. Consider multiplying that number by the number of OSDs you expect to fail simultaneously during normal operations (a relatively small number). Finally, multiply the capacity of the cluster by the full ratio to arrive at a maximum operating capacity. Then, subtract the amount of data on the OSDs you expect to fail to arrive at a reasonable full ratio. Repeat the foregoing process with a higher number of OSD failures (for example, a rack of OSDs) to arrive at a reasonable number for a near full ratio.
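
As a rough, illustrative calculation using the 33-node, 3 TB-per-OSD cluster described above (the failure counts are assumptions; adjust them for your own hardware):

total capacity:              33 OSDs x 3 TB            = 99 TB
mean OSD capacity:           99 TB / 33 OSDs           = 3 TB
maximum operating capacity:  99 TB x 0.95 (full ratio) = 94.05 TB
allowing for 2 failed OSDs:  94.05 TB - (2 x 3 TB)     = 88.05 TB  (~0.89 of raw capacity)
allowing for a 6-OSD rack:   94.05 TB - (6 x 3 TB)     = 76.05 TB  (~0.77 of raw capacity)

A calculation along these lines is one reason the following example uses full and nearfull ratios that are more conservative than the defaults.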

[global]
...
mon_osd_full_ratio = .80
mon_osd_nearfull_ratio = .70
mon_osd_full_ratio
Description
The percentage of disk space used before an OSD is considered full.
Type
Float
Default
.95
mon_osd_nearfull_ratio
Description
The percentage of disk space used before an OSD is considered nearfull.
Type
Float
Default
.85
Tip

If some OSDs are nearfull, but others have plenty of capacity, you might have a problem with the CRUSH weight for the nearfull OSDs.
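
To investigate, you can compare the utilization and CRUSH weight of each OSD, and adjust a weight if needed. The OSD name and weight below are placeholder values:

ceph osd df tree
ceph osd crush reweight osd.7 2.72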

3.2.6. Heartbeat

Ceph monitors know about the cluster by requiring reports from each OSD, and by receiving reports from OSDs about the status of their neighboring OSDs. Ceph provides reasonable default settings for interaction between monitors and OSDs; however, you can modify them as needed.

3.2.7. Monitor Store Synchronization

When you run a production cluster with multiple monitors, which is recommended, each monitor checks to see if a neighboring monitor has a more recent version of the cluster map, for example, a map in a neighboring monitor with one or more epoch numbers higher than the most current epoch in the map of the instant monitor. Periodically, one monitor in the cluster might fall behind the other monitors to the point where it must leave the quorum, synchronize to retrieve the most current information about the cluster, and then rejoin the quorum. For the purposes of synchronization, monitors can assume one of three roles:

  • Leader: The Leader is the first monitor to achieve the most recent Paxos version of the cluster map.
  • Provider: The Provider is a monitor that has the most recent version of the cluster map, but was not the first to achieve the most recent version.
  • Requester: The Requester is a monitor that has fallen behind the leader and must synchronize in order to retrieve the most recent information about the cluster before it can rejoin the quorum.

These roles enable a leader to delegate synchronization duties to a provider, which prevents synchronization requests from overloading the leader and improves performance. In the following diagram, the requester has learned that it has fallen behind the other monitors. The requester asks the leader to synchronize, and the leader tells the requester to synchronize with a provider.

Monitor Synchronization

Synchronization always occurs when a new monitor joins the cluster. During runtime operations, monitors can receive updates to the cluster map at different times. This means the leader and provider roles may migrate from one monitor to another. If this happens while synchronizing (for example, a provider falls behind the leader), the provider can terminate synchronization with a requester.

Once synchronization is complete, Ceph requires trimming across the cluster. Trimming requires that the placement groups are active + clean.

mon_sync_trim_timeout
Type
Double
Default
30.0
mon_sync_heartbeat_timeout
Type
Double
Default
30.0
mon_sync_heartbeat_interval
Type
Double
Default
5.0
mon_sync_backoff_timeout
Type
Double
Default
30.0
mon_sync_timeout
Description
Number of seconds the monitor will wait for the next update message from its sync provider before it gives up and bootstraps again.
Type
Double
Default
30.0
mon_sync_max_retries
Type
Integer
Default
5
mon_sync_max_payload_size
Description
The maximum size for a sync payload (in bytes).
Type
32-bit Integer
Default
1045676
paxos_max_join_drift
Description
The maximum Paxos iterations before we must first sync the monitor data stores. When a monitor finds that its peer is too far ahead of it, it will first sync with data stores before moving on.
Type
Integer
Default
10
paxos_stash_full_interval
Description
How often (in commits) to stash a full copy of the PaxosService state. Currently this setting only affects the mds, mon, auth, and mgr PaxosServices.
Type
Integer
Default
25
paxos_propose_interval
Description
Gather updates for this time interval before proposing a map update.
Type
Double
Default
1.0
paxos_min
Description
The minimum number of paxos states to keep around
Type
Integer
Default
500
paxos_min_wait
Description
The minimum amount of time to gather updates after a period of inactivity.
Type
Double
Default
0.05
paxos_trim_min
Description
Number of extra proposals tolerated before trimming
Type
Integer
Default
250
paxos_trim_max
Description
The maximum number of extra proposals to trim at a time
Type
Integer
Default
500
paxos_service_trim_min
Description
The minimum amount of versions to trigger a trim (0 disables it)
Type
Integer
Default
250
paxos_service_trim_max
Description
The maximum amount of versions to trim during a single proposal (0 disables it)
Type
Integer
Default
500
mon_max_log_epochs
Description
The maximum amount of log epochs to trim during a single proposal
Type
Integer
Default
500
mon_max_pgmap_epochs
Description
The maximum amount of pgmap epochs to trim during a single proposal
Type
Integer
Default
500
mon_mds_force_trim_to
Description
Force the monitor to trim mdsmaps to this point (0 disables it; dangerous, use with care).
Type
Integer
Default
0
mon_osd_force_trim_to
Description
Force the monitor to trim osdmaps to this point, even if there are PGs that are not clean at the specified epoch (0 disables it; dangerous, use with care).
Type
Integer
Default
0
mon_osd_cache_size
Description
The size of the osdmap cache, so as not to rely on the underlying store’s cache.
Type
Integer
Default
10
mon_election_timeout
Description
On election proposer, maximum waiting time for all ACKs in seconds.
Type
Float
Default
5
mon_lease
Description
The length (in seconds) of the lease on the monitor’s versions.
Type
Float
Default
5
mon_lease_renew_interval_factor
Description
mon lease * mon lease renew interval factor will be the interval for the Leader to renew the other monitors’ leases. The factor should be less than 1.0.
Type
Float
Default
0.6
mon_lease_ack_timeout_factor
Description
The Leader will wait mon lease * mon lease ack timeout factor for the Providers to acknowledge the lease extension.
Type
Float
Default
2.0
mon_accept_timeout_factor
Description
The Leader will wait mon lease * mon accept timeout factor for the Requester(s) to accept a Paxos update. It is also used during the Paxos recovery phase for similar purposes.
Type
Float
Default
2.0
mon_min_osdmap_epochs
Description
Minimum number of OSD map epochs to keep at all times.
Type
32-bit Integer
Default
500
mon_max_pgmap_epochs
Description
Maximum number of PG map epochs the monitor should keep.
Type
32-bit Integer
Default
500
mon_max_log_epochs
Description
Maximum number of Log epochs the monitor should keep.
Type
32-bit Integer
Default
500

3.2.8. Clock

Ceph daemons pass critical messages to each other, which must be processed before daemons reach a timeout threshold. If the clocks in Ceph monitors are not synchronized, it can lead to a number of anomalies. For example:

  • Daemons ignoring received messages (for example, outdated timestamps).
  • Timeouts triggered too soon or too late when a message is not received in time.

See Monitor Store Synchronization for details.

Tip

Install NTP on the Ceph monitor hosts to ensure that the monitor cluster operates with synchronized clocks.

Clock drift may still be noticeable with NTP even though the discrepancy is not yet harmful. Ceph clock drift and clock skew warnings can get triggered even though NTP maintains a reasonable level of synchronization. Increasing the allowed clock drift may be tolerable under such circumstances. However, a number of factors such as workload, network latency, configuring overrides to default timeouts, and the Monitor Store Synchronization settings may influence the level of acceptable clock drift without compromising Paxos guarantees.
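
For example, a minimal sketch of loosening the allowed drift for all monitors; the value is only an illustration and should be chosen with care:

[mon]
mon_clock_drift_allowed = 0.10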

Ceph provides the following tunable options to allow you to find acceptable values.

clock_offset
Description
How much to offset the system clock. See Clock.cc for details.
Type
Double
Default
0
mon_tick_interval
Description
A monitor’s tick interval in seconds.
Type
32-bit Integer
Default
5
mon_clock_drift_allowed
Description
The clock drift in seconds allowed between monitors.
Type
Float
Default
.050
mon_clock_drift_warn_backoff
Description
Exponential backoff for clock drift warnings.
Type
Float
Default
5
mon_timecheck_interval
Description
The time check interval (clock drift check) in seconds for the leader.
Type
Float
Default
300.0
mon_timecheck_skew_interval
Description
The time check interval (clock drift check) in seconds for the Leader when a clock skew is present.
Type
Float
Default
30.0

3.2.9. Client

mon_client_hunt_interval
Description
The client will try a new monitor every N seconds until it establishes a connection.
Type
Double
Default
3.0
mon_client_ping_interval
Description
The client will ping the monitor every N seconds.
Type
Double
Default
10.0
mon_client_max_log_entries_per_message
Description
The maximum number of log entries a monitor will generate per client message.
Type
Integer
Default
1000
mon_client_bytes
Description
The amount of client message data allowed in memory (in bytes).
Type
64-bit Integer Unsigned
Default
100ul << 20

3.3. Miscellaneous

mon_max_osd
Description
The maximum number of OSDs allowed in the cluster.
Type
32-bit Integer
Default
10000
mon_globalid_prealloc
Description
The number of global IDs to pre-allocate for clients and daemons in the cluster.
Type
32-bit Integer
Default
100
mon_sync_fs_threshold
Description
Synchronize with the filesystem when writing the specified number of objects. Set it to 0 to disable it.
Type
32-bit Integer
Default
5
mon_subscribe_interval
Description
The refresh interval (in seconds) for subscriptions. The subscription mechanism enables obtaining the cluster maps and log information.
Type
Double
Default
300
mon_stat_smooth_intervals
Description
Ceph will smooth statistics over the last N PG maps.
Type
Integer
Default
2
mon_probe_timeout
Description
Number of seconds the monitor will wait to find peers before bootstrapping.
Type
Double
Default
2.0
mon_daemon_bytes
Description
The message memory cap for metadata server and OSD messages (in bytes).
Type
64-bit Integer Unsigned
Default
400ul << 20
mon_max_log_entries_per_event
Description
The maximum number of log entries per event.
Type
Integer
Default
4096
mon_osd_prime_pg_temp
Description
Enables or disables priming the PGMap with the previous OSDs when an out OSD comes back into the cluster. With the true setting, clients will continue to use the previous OSDs until the newly in OSDs for a placement group have peered.
Type
Boolean
Default
true
mon_osd_prime_pg_temp_max_time
Description
How much time in seconds the monitor should spend trying to prime the PGMap when an out OSD comes back into the cluster.
Type
Float
Default
0.5
mon_osd_prime_pg_temp_max_time_estimate
Description
Maximum estimate of time spent on each PG before we prime all PGs in parallel.
Type
Float
Default
0.25
mon_osd_allow_primary_affinity
Description
Allow primary_affinity to be set in the osdmap.
Type
Boolean
Default
False
mon_osd_pool_ec_fast_read
Description
Whether to turn on fast read on the pool or not. It will be used as the default setting for newly created erasure-coded pools if fast_read is not specified at creation time.
Type
Boolean
Default
False
mon_mds_skip_sanity
Description
Skip safety assertions on the FSMap, in case of bugs where we want to continue anyway. The monitor terminates if the FSMap sanity check fails, but you can disable the check by enabling this option.
Type
Boolean
Default
False
mon_max_mdsmap_epochs
Description
The maximum amount of mdsmap epochs to trim during a single proposal.
Type
Integer
Default
500
mon_config_key_max_entry_size
Description
The maximum size of config-key entry (in bytes)
Type
Integer
Default
4096
mon_scrub_interval
Description
How often (in seconds) the monitor scrubs its store by comparing the stored checksums with the computed ones for all stored keys.
Type
Integer
Default
3600*24
mon_scrub_max_keys
Description
The maximum number of keys to scrub each time.
Type
Integer
Default
100
mon_compact_on_start
Description
Compact the database used as the Ceph monitor store on ceph-mon start. A manual compaction helps to shrink the monitor database and improve its performance if the regular compaction fails to work.
Type
Boolean
Default
False
mon_compact_on_bootstrap
Description
Compact the database used as the Ceph monitor store on bootstrap. Monitors start probing each other to create a quorum after bootstrap. If a monitor times out before joining the quorum, it will start over and bootstrap itself again.
Type
Boolean
Default
False
mon_compact_on_trim
Description
Compact a certain prefix (including paxos) when we trim its old states.
Type
Boolean
Default
True
mon_cpu_threads
Description
Number of threads for performing CPU intensive work on monitor.
Type
Integer
Default
4
mon_osd_mapping_pgs_per_chunk
Description
We calculate the mapping from placement group to OSDs in chunks. This option specifies the number of placement groups per chunk.
Type
Integer
Default
4096
mon_osd_max_split_count
Description
Largest number of PGs per "involved" OSD to let split create. When we increase the pg_num of a pool, the placement groups will be split on all OSDs serving that pool. We want to avoid extreme multipliers on PG splits.
Type
Integer
Default
300
mon_session_timeout
Description
The monitor will terminate inactive sessions that stay idle over this time limit.
Type
Integer
Default
300
rados_mon_op_timeout
Description
Number of seconds that RADOS waits for a response from the Ceph Monitor before returning an error from a RADOS operation. A value of 0 means no limit.
Type
Double
Default
0