Chapter 3. Administration
Administrators can manage the Ceph Object Gateway using the radosgw-admin command-line interface.
3.1. Administrative Data Storage
A Ceph Object Gateway stores administrative data in a series of pools defined in an instance’s zone configuration. For example, the buckets, users, user quotas and usage statistics discussed in the subsequent sections are stored in pools in the Ceph Storage Cluster. By default, Ceph Object Gateway will create the following pools and map them to the default zone.
- .rgw
- .rgw.control
- .rgw.gc
- .log
- .intent-log
- .usage
- .users
- .users.email
- .users.swift
- .users.uid
You should consider creating these pools manually so that you can set the CRUSH ruleset and the number of placement groups. In a typical configuration, the pools that store the Ceph Object Gateway’s administrative data will often use the same CRUSH ruleset and use fewer placement groups, because there are 10 pools for the administrative data. See Pools and the Storage Strategies guide for Red Hat Ceph Storage 3 for additional details.
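For example, a minimal sketch of creating two of the administrative pools manually, assuming a replicated strategy and an existing CRUSH rule named rgw-admin-rule; the rule name and placement group counts here are illustrative, not prescriptive:
# ceph osd pool create .rgw 8 8 replicated rgw-admin-rule
# ceph osd pool create .rgw.control 8 8 replicated rgw-admin-rule
Repeat for the remaining administrative pools.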
Also see Ceph Placement Groups (PGs) per Pool Calculator for placement group calculation details. The mon_pg_warn_max_per_osd setting warns you if you assign too many placement groups per OSD (300 by default). You may adjust the value to suit your needs and the capabilities of your hardware, where n is the maximum number of PGs per OSD:
mon_pg_warn_max_per_osd = n
3.2. Creating Storage Policies
The Ceph Object Gateway stores the client bucket and object data by identifying placement targets, and storing buckets and objects in the pools associated with a placement target. If you don’t configure placement targets and map them to pools in the instance’s zone configuration, the Ceph Object Gateway will use default targets and pools, for example, default_placement.
Storage policies give Ceph Object Gateway clients a way of accessing a storage strategy, that is, the ability to target a particular type of storage, for example, SSDs, SAS drives, or SATA drives, or a particular way of ensuring durability, such as replication or erasure coding. For details, see the Storage Strategies guide for Red Hat Ceph Storage 3.
To create a storage policy, use the following procedure:
1. Create a new pool .rgw.buckets.special with the desired storage strategy, for example, a pool customized with erasure coding, a particular CRUSH ruleset, the number of replicas, and the pg_num and pgp_num count.
2. Get the zone group configuration and store it in a file, for example, zonegroup.json:
Syntax
[root@master-zone]# radosgw-admin zonegroup --rgw-zonegroup=<zonegroup_name> [--cluster <cluster_name>] get > zonegroup.json
Example
[root@master-zone]# radosgw-admin zonegroup --rgw-zonegroup=default get > zonegroup.json
3. Add a special-placement entry under placement_targets in the zonegroup.json file:
{
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "master_zone": "",
    "zones": [{
        "name": "default",
        "endpoints": [],
        "log_meta": "false",
        "log_data": "false",
        "bucket_index_max_shards": 5
    }],
    "placement_targets": [{
        "name": "default-placement",
        "tags": []
    }, {
        "name": "special-placement",
        "tags": []
    }],
    "default_placement": "default-placement"
}
4. Set the zone group with the modified zonegroup.json file:
[root@master-zone]# radosgw-admin zonegroup set < zonegroup.json
5. Get the zone configuration and store it in a file, for example, zone.json:
[root@master-zone]# radosgw-admin zone get > zone.json
6. Edit the zone file and add the new placement policy key under placement_pools:
{
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [{
        "key": "default-placement",
        "val": {
            "index_pool": ".rgw.buckets.index",
            "data_pool": ".rgw.buckets",
            "data_extra_pool": ".rgw.buckets.extra"
        }
    }, {
        "key": "special-placement",
        "val": {
            "index_pool": ".rgw.buckets.index",
            "data_pool": ".rgw.buckets.special",
            "data_extra_pool": ".rgw.buckets.extra"
        }
    }]
}
7. Set the new zone configuration:
[root@master-zone]# radosgw-admin zone set < zone.json
8. Update the zone group map:
[root@master-zone]# radosgw-admin period update --commit
The special-placement entry is now listed as a placement_target.
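To verify, you can re-export the zone group and check for the new entry; a quick grep against the same get command used earlier, assuming the default zone group:
[root@master-zone]# radosgw-admin zonegroup --rgw-zonegroup=default get | grep -A 1 special-placement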
To specify the storage policy when making a request:
Example:
$ curl -i http://10.0.0.1/swift/v1/TestContainer/file.txt -X PUT -H "X-Storage-Policy: special-placement" -H "X-Auth-Token: AUTH_rgwtxxxxxx"
3.3. Creating Indexless Buckets
It is possible to configure a placement target where created buckets do not use the bucket index to track objects; that is, indexless buckets. Placement targets that do not use data replication or listing may implement indexless buckets.
Indexless buckets provide a mechanism in which the placement target does not track objects in specific buckets. This removes the resource contention that happens whenever an object write occurs, and it reduces the number of round trips that the Ceph Object Gateway needs to make to the Ceph Storage Cluster. This can have a positive effect on concurrent operations and small object write performance.
To specify a placement target as indexless, use the following procedure:
1. Get the zone configuration:
$ radosgw-admin zone get --rgw-zone=<zone> > zone.json
2. Modify zone.json by adding a new placement target or by modifying an existing one to have "index_type": 1, for example:
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"data_pool": "default.rgw.buckets.data",
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0
}
},
{
"key": "indexeless",
"val": {
"index_pool": "default.rgw.buckets.index",
"data_pool": "default.rgw.buckets.data",
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 1
}
}
],
3. Set the zone configuration:
$ radosgw-admin zone set --rgw-zone=<zone> --infile zone.json
4. Make sure the zone group refers to the new placement target if you created a new placement target:
$ radosgw-admin zonegroup get --rgw-zonegroup=<zonegroup> > zonegroup.json
5. Modify the zonegroup.json file as needed. For example:
"placement_targets": [
{
"name": "default-placement",
"tags": []
},
{ "name": "indexless",
"tags": []
}
],
"default_placement": "default-placement",$ radosgw-admin zonegroup set --rgw-zonegroup=<zonegroup> < zonegroup.json
7. Update and commit the period if the cluster is in a multi-site configuration:
$ radosgw-admin period update --commit
In this example, the buckets created in the "indexless" target will be indexless buckets.
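For instance, following the Swift storage policy example in Section 3.2, a client could create a container in the indexless target by naming it in the X-Storage-Policy header; the host, container name, and token here are illustrative:
$ curl -i http://10.0.0.1/swift/v1/IndexlessContainer -X PUT -H "X-Storage-Policy: indexless" -H "X-Auth-Token: AUTH_rgwtxxxxxx"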
The bucket index will not reflect the correct state of the bucket, and listing these buckets will not correctly return their list of objects. This affects multiple features. Specifically, these buckets will not be synced in a multi-zone environment because the bucket index is not used to store change information. It is not recommended to use S3 object versioning on indexless buckets because the bucket index is necessary for this feature.
Using indexless buckets removes the limit of the max number of objects in a single bucket.
Indexless buckets cannot be viewed from NFS.
3.4. Configuring Bucket Sharding
The Ceph Object Gateway stores bucket index data in the index pool (index_pool), which defaults to .rgw.buckets.index. When the client puts many objects—hundreds of thousands to millions of objects—in a single bucket without having set quotas for the maximum number of objects per bucket, the index pool can suffer significant performance degradation.
Bucket index sharding helps prevent performance bottlenecks when allowing a high number of objects per bucket.
See Configuring Bucket Index Sharding for details on configuring bucket index sharding for new buckets.
See Bucket Index Resharding for details on changing the bucket index sharding on already existing buckets.
Configuring Bucket Index Sharding
To enable and configure bucket index sharding on all new buckets, use the rgw_override_bucket_index_max_shards setting.
Set the setting to:
- 0 to disable bucket index sharding. This is the default value.
- A value greater than 0 to enable bucket sharding and to set the maximum number of shards.
Use the following formula to calculate the recommended number of shards:
number of objects expected in a bucket / 100,000
Note that the maximum number of shards is 7877.
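For example, if you expect a bucket to hold 2 million objects, 2,000,000 / 100,000 = 20, so you would set the maximum number of shards to 20.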
Simple configurations
Add rgw_override_bucket_index_max_shards to the Ceph configuration file:
rgw_override_bucket_index_max_shards = 10
- To configure bucket index sharding for all instances of the Ceph Object Gateway, add rgw_override_bucket_index_max_shards under the [global] section.
- To configure bucket index sharding only for a particular instance of the Ceph Object Gateway, add rgw_override_bucket_index_max_shards under the instance.
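For example, a minimal ceph.conf sketch showing both placements; the instance name rgw.node1 and the shard counts are illustrative:
[global]
# Applies to all Ceph Object Gateway instances
rgw_override_bucket_index_max_shards = 10

[client.rgw.node1]
# Overrides the global value for this one instance
rgw_override_bucket_index_max_shards = 20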
Restart the Ceph Object Gateway:
$ sudo service radosgw restart id=rgw.<hostname>
Replace <hostname> with the short host name of the node where the Ceph Object Gateway is running.
Multi-site configurations
In multi-site configurations, each zone can have a different index_pool setting to manage failover. To configure a consistent shard count for zones in one zone group, set the rgw_override_bucket_index_max_shards setting in the configuration for that zone group. To do so:
Extract the zone group configuration to the zonegroup.json file:
$ radosgw-admin zonegroup get > zonegroup.json
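In the exported file, each zone entry carries a bucket_index_max_shards field, as in the zone group JSON shown in Section 3.2. A minimal excerpt might look like the following; the zone name and shard count are illustrative:
"zones": [{
    "name": "us-east",
    "endpoints": [],
    "bucket_index_max_shards": 10
}],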
In the zonegroup.json file, set the rgw_override_bucket_index_max_shards setting for each named zone. Then reset the zone group:
$ radosgw-admin zonegroup set < zonegroup.json
Update the period:
radosgw-admin period update --commit
Mapping the index pool (for each zone, if applicable) to a CRUSH ruleset of SSD-based OSDs might also help with bucket index performance.
Dynamic Bucket Index Resharding in RHCS 3
The process for dynamic bucket resharding periodically checks all the Ceph Object Gateway buckets and detects buckets that require resharding. If the number of objects per shard in a bucket exceeds rgw_max_objs_per_shard, the Ceph Object Gateway reshards the bucket dynamically in the background. The default value for rgw_max_objs_per_shard is 100,000 objects per shard.
Red Hat does not support dynamic_bucket_resharding in multisite configurations for RHCS 3.0.
To enable dynamic bucket index resharding, set the rgw_dynamic_resharding setting to true, which is the default value.
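For example, a minimal ceph.conf sketch that makes these settings explicit; the values shown are the defaults listed below:
[global]
rgw_dynamic_resharding = true
rgw_max_objs_per_shard = 100000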
Parameters for the Ceph configuration file include:
-
rgw_reshard_num_logs: The number of shards for the resharding log. The default value is16. -
rgw_reshard_bucket_lock_duration: The duration of the lock on a bucket during resharding. The default value is120seconds. -
rgw_dynamic_resharding: Enables or disables dynamic resharding. The default value istrue. -
rgw_max_objs_per_shard: The maximum number of objects per shard. The default value is100000objects per shard. -
rgw_reshard_thread_interval: The maximum time between rounds of reshard thread processing. The default value is600seconds.
To add a bucket to the resharding queue, execute the following:
# radosgw-admin reshard add --bucket <bucket_name> --num-shards <new number of shards>
To list the resharding queue, execute the following:
# radosgw-admin reshard list
To check bucket resharding status, execute the following:
# radosgw-admin reshard status --bucket <bucket_name>
To process a bucket resharding manually, execute:
# radosgw-admin reshard process
To cancel pending bucket resharding, execute:
# radosgw-admin reshard cancel --bucket <bucket_name>
Administrators may only cancel pending resharding operations. Administrators MAY NOT cancel ongoing resharding operations.
Bucket Index Resharding in RHCS 2
If a bucket has grown larger than the initial configuration was optimized for, reshard the bucket index pool by using the radosgw-admin bucket reshard command. This command:
- Creates a new set of bucket index objects for the specified bucket.
- Spreads all object entries across these index objects.
- Creates a new bucket instance.
- Links the new bucket instance with the bucket so that all new index operations go through the new bucket indexes.
- Prints the old and the new bucket ID to the command output.
To reshard the bucket index pool:
- Make sure that all operations to the bucket are stopped.
Back the original bucket index up:
radosgw-admin bi list --bucket=<bucket_name> > <bucket_name>.list.backup
For example, for a bucket named data, enter:
$ radosgw-admin bi list --bucket=data > data.list.backup
Reshard the bucket index:
radosgw-admin bucket reshard --bucket=<bucket_name> --num-shards=<new_shards_number>
For example, for a bucket named data and the required number of shards being 100, enter:
$ radosgw-admin bucket reshard --bucket=data --num-shards=100
As part of its output, this command also prints the new and the old bucket ID. Note the old bucket ID down; you will need it to purge the old bucket index objects.
- Verify that the objects are listed correctly by comparing the old bucket index listing with the new one.
Purge the old bucket index objects:
radosgw-admin bi purge --bucket=<bucket_name> --bucket-id=<old_bucket_id>
For example, for a bucket named data and the old bucket ID being 123456, enter:
$ radosgw-admin bi purge --bucket=data --bucket-id=123456
Consider the following limitations with caution. They have implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team.
- Maximum number of objects in one bucket before it needs sharding: Red Hat recommends a maximum of 102,400 objects per bucket index shard. To take full advantage of sharding, provide a sufficient number of OSDs in the Ceph Object Gateway bucket index pool to get maximum parallelism.
- Maximum number of objects when using sharding: Based on prior testing, the number of bucket index shards currently supported is 7877. Red Hat quality assurance has NOT performed full scalability testing on bucket sharding.
3.5. Enabling Compression
The Ceph Object Gateway supports server-side compression of uploaded objects using any of Ceph’s compression plugins. These include:
- zlib: Supported.
- snappy: Technology Preview.
- zstd: Technology Preview.
The snappy and zstd compression plugins are Technology Preview features and as such they are not fully supported, as Red Hat has not completed quality assurance testing on them yet.
Configuration
To enable compression on a zone’s placement target, provide the --compression=<type> option to the radosgw-admin zone placement modify command. The compression type refers to the name of the compression plugin to use when writing new object data.
Each compressed object stores the compression type. Changing the setting does not hinder the ability to decompress existing compressed objects, nor does it force the Ceph Object Gateway to recompress existing objects.
This compression setting applies to all new objects uploaded to buckets using this placement target.
To disable compression on a zone’s placement target, provide the --compression=<type> option to the radosgw-admin zone placement modify command and specify an empty string or none.
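For instance, a disable command for the default zone's default-placement target might look like this, mirroring the enable example below with none as the type:
$ radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=none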
For example, to enable zlib compression on the default zone's default-placement target:
$ radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib
{
...
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"data_pool": "default.rgw.buckets.data",
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0,
"compression": "zlib"
}
}
],
...
}
After enabling or disabling compression, restart the Ceph Object Gateway instance so the change will take effect.
Ceph Object Gateway creates a default zone and a set of pools. For production deployments, see the Ceph Object Gateway for Production guide, more specifically, the Creating a Realm section first. See also Multisite.
Statistics
While all existing commands and APIs continue to report object and bucket sizes based on their uncompressed data, the radosgw-admin bucket stats command includes compression statistics for a given bucket.
$ radosgw-admin bucket stats --bucket=<name>
{
...
"usage": {
"rgw.main": {
"size": 1075028,
"size_actual": 1331200,
"size_utilized": 592035,
"size_kb": 1050,
"size_kb_actual": 1300,
"size_kb_utilized": 579,
"num_objects": 104
}
},
...
}
The size_utilized and size_kb_utilized fields represent the total size of compressed data in bytes and kilobytes respectively.
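In the example above, the bucket's 1,075,028 bytes of object data occupy only 592,035 bytes after compression (592035 / 1075028 ≈ 0.55), so the stored data is roughly 55% of its original size.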
3.6. User Management
Ceph Object Storage user management refers to users that are client applications of the Ceph Object Storage service, not to the Ceph Object Gateway as a client application of the Ceph Storage Cluster. You must create a user, access key and secret to enable client applications to interact with the Ceph Object Gateway service.
There are two user types:
- User: The term 'user' reflects a user of the S3 interface.
- Subuser: The term 'subuser' reflects a user of the Swift interface. A subuser is associated with a user.
You can create, modify, view, suspend and remove users and subusers.
When managing users in a multi-site deployment, ALWAYS execute the radosgw-admin command on a Ceph Object Gateway node within the master zone of the master zone group to ensure that users synchronize throughout the multi-site cluster. DO NOT create, modify or delete users on a multi-site cluster from a secondary zone or a secondary zone group. This document uses [root@master-zone]# as a command line convention for a host in the master zone of the master zone group.
In addition to creating user and subuser IDs, you may add a display name and an email address for a user. You can specify a key and secret, or generate a key and secret automatically. When generating or specifying keys, note that user IDs correspond to an S3 key type and subuser IDs correspond to a swift key type. Swift keys also have access levels of read, write, readwrite and full.
User management command-line syntax generally follows the pattern user <command> <user-id> where <user-id> is either the --uid= option followed by the user’s ID (S3) or the --subuser= option followed by the user name (Swift). For example:
[root@master-zone]# radosgw-admin user <create|modify|info|rm|suspend|enable|check|stats> <--uid={id}|--subuser={name}> [other-options]
Additional options may be required depending on the command you execute.
3.6.1. Multi Tenancy
Beginning with Red Hat Ceph Storage 2, the Ceph Object Gateway supports multi-tenancy for both the S3 and Swift APIs, where each user and bucket lies under a "tenant." Multi-tenancy prevents namespace clashes when multiple tenants use common bucket names, such as "test", "main" and so forth.
Each user and bucket lies under a tenant. For backward compatibility, a "legacy" tenant with an empty name is added. Whenever referring to a bucket without explicitly specifying a tenant, the Swift API will assume the "legacy" tenant. Existing users are also stored under the legacy tenant, so they will access buckets and objects the same way as earlier releases.
Tenants as such do not have any operations on them. They appear and disappear as needed, when users are administered. To create, modify, and remove users with explicit tenants, either supply the additional --tenant option or use the syntax "<tenant>$<user>" in the parameters of the radosgw-admin command.
To create a user testx$tester for S3, execute the following:
[root@master-zone]# radosgw-admin --tenant testx --uid tester \
--display-name "Test User" --access_key TESTER \
--secret test123 user create
To create a user testx$tester for Swift, execute one of the following:
[root@master-zone]# radosgw-admin --tenant testx --uid tester \
--display-name "Test User" --subuser tester:test \
--key-type swift --access full user create
[root@master-zone]# radosgw-admin key create --subuser 'testx$tester:test' \
--key-type swift --secret test123
The subuser with an explicit tenant must be quoted in the shell.
3.6.2. Create a User
Use the user create command to create an S3-interface user. You MUST specify a user ID and a display name. You may also specify an email address. If you DO NOT specify a key or secret, radosgw-admin will generate them for you automatically. However, you may specify a key and/or a secret if you prefer not to use generated key/secret pairs.
[root@master-zone]# radosgw-admin user create --uid=<id> \
  [--key-type=<type>] [--gen-access-key|--access-key=<key>] \
  [--gen-secret | --secret=<key>] \
  [--email=<email>] --display-name=<name>
For example:
[root@master-zone]# radosgw-admin user create --uid=janedoe --display-name="Jane Doe" --email=jane@example.com
{ "user_id": "janedoe",
"display_name": "Jane Doe",
"email": "jane@example.com",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{ "user": "janedoe",
"access_key": "11BS02LGFB6AL6H1ADMW",
"secret_key": "vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY"}],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": { "enabled": false,
"max_size_kb": -1,
"max_objects": -1},
"user_quota": { "enabled": false,
"max_size_kb": -1,
"max_objects": -1},
"temp_url_keys": []}
Check the key output. Sometimes radosgw-admin generates a JSON escape (\) character, and some clients do not know how to handle JSON escape characters. Remedies include removing the JSON escape character (\), encapsulating the string in quotes, regenerating the key and ensuring that it does not have a JSON escape character, or specifying the key and secret manually.
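For instance, a fresh S3 key pair can be generated for the user with the key create command covered in Section 3.6.10:
[root@master-zone]# radosgw-admin key create --uid=janedoe --key-type=s3 --gen-access-key --gen-secret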
3.6.3. Create a Subuser
To create a subuser (Swift interface), you must specify the user ID (--uid={username}), a subuser ID and the access level for the subuser. If you DO NOT specify a key or secret, radosgw-admin will generate them for you automatically. However, you may specify a key and/or a secret if you prefer not to use generated key/secret pairs.
full is not readwrite, as it also includes the access control policy.
[root@master-zone]# radosgw-admin subuser create --uid={uid} --subuser={uid} --access=[ read | write | readwrite | full ]
For example:
[root@master-zone]# radosgw-admin subuser create --uid=janedoe --subuser=janedoe:swift --access=full
{ "user_id": "janedoe",
"display_name": "Jane Doe",
"email": "jane@example.com",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{ "id": "janedoe:swift",
"permissions": "full-control"}],
"keys": [
{ "user": "janedoe",
"access_key": "11BS02LGFB6AL6H1ADMW",
"secret_key": "vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY"}],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": { "enabled": false,
"max_size_kb": -1,
"max_objects": -1},
"user_quota": { "enabled": false,
"max_size_kb": -1,
"max_objects": -1},
"temp_url_keys": []}3.6.4. Get User Information
To get information about a user, you must specify user info and the user ID (--uid={username}).
# radosgw-admin user info --uid=janedoe
3.6.5. Modify User Information
To modify information about a user, you must specify the user ID (--uid={username}) and the attributes you want to modify. Typical modifications are to keys and secrets, email addresses, display names and access levels. For example:
[root@master-zone]# radosgw-admin user modify --uid=janedoe --display-name="Jane E. Doe"
To modify subuser values, specify subuser modify and the subuser ID. For example:
[root@master-zone]# radosgw-admin subuser modify --subuser=janedoe:swift --access=full
3.6.6. Enable and Suspend Users
When you create a user, the user is enabled by default. However, you may suspend user privileges and re-enable them at a later time. To suspend a user, specify user suspend and the user ID.
[root@master-zone]# radosgw-admin user suspend --uid=johndoe
To re-enable a suspended user, specify user enable and the user ID:
[root@master-zone]# radosgw-admin user enable --uid=johndoe
Disabling the user disables the subuser.
3.6.7. Remove a User
When you remove a user, the user and subuser are removed from the system. However, you may remove just the subuser if you wish. To remove a user (and subuser), specify user rm and the user ID.
[root@master-zone]# radosgw-admin user rm --uid=<uid> [--purge-keys] [--purge-data]
For example:
[root@master-zone]# radosgw-admin user rm --uid=johndoe --purge-data
To remove the subuser only, specify subuser rm and the subuser name.
[root@master-zone]# radosgw-admin subuser rm --subuser=johndoe:swift --purge-keys
Options include:
- Purge Data: The --purge-data option purges all data associated with the UID.
- Purge Keys: The --purge-keys option purges all keys associated with the UID.
3.6.8. Remove a Subuser
When you remove a subuser, you are removing access to the Swift interface. The user remains in the system. To remove the subuser, specify subuser rm and the subuser ID.
[root@master-zone]# radosgw-admin subuser rm --subuser=johndoe:test
Options include:
- Purge Keys: The --purge-keys option purges all keys associated with the UID.
3.6.9. Create a Key
To create a key for a user, you must specify key create. For a user, specify the user ID and the s3 key type. To create a key for a subuser, you must specify the subuser ID and the swift key type. For example:
[root@master-zone]# radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret
{ "user_id": "johndoe",
"rados_uid": 0,
"display_name": "John Doe",
"email": "john@example.com",
"suspended": 0,
"subusers": [
{ "id": "johndoe:swift",
"permissions": "full-control"}],
"keys": [
{ "user": "johndoe",
"access_key": "QFAMEDSJP5DEKJO0DDXY",
"secret_key": "iaSFLDVvDdQt6lkNzHyW4fPLZugBAI1g17LO0+87"}],
"swift_keys": [
{ "user": "johndoe:swift",
"secret_key": "E9T2rUZNu2gxUjcwUBO8n\/Ev4KX6\/GprEuH4qhu1"}]}3.6.10. Add and Remove Access Keys
Users and subusers must have access keys to use the S3 and Swift interfaces. When you create a user or subuser and you do not specify an access key and secret, the key and secret get generated automatically. You may create a key and either specify or generate the access key and/or secret. You may also remove an access key and secret. Options include:
- --secret=<key> specifies a secret key (e.g., manually generated).
- --gen-access-key generates a random access key (for an S3 user by default).
- --gen-secret generates a random secret key.
- --key-type=<type> specifies a key type. The options are: swift, s3.
To add a key, specify the user:
[root@master-zone]# radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret
You may also specify a key and a secret. To remove an access key, specify the user:
[root@master-zone]# radosgw-admin key rm --uid=johndoe
3.6.11. Add and Remove Admin Capabilities
The Ceph Storage Cluster provides an administrative API that enables users to execute administrative functions via the REST API. By default, users DO NOT have access to this API. To enable a user to exercise administrative functionality, provide the user with administrative capabilities.
To add administrative capabilities to a user, execute the following:
[root@master-zone]# radosgw-admin caps add --uid={uid} --caps={caps}
You can add read, write or all capabilities to users, buckets, metadata and usage (utilization). For example:
--caps="[users|buckets|metadata|usage|zone]=[*|read|write|read, write]"
For example:
[root@master-zone]# radosgw-admin caps add --uid=johndoe --caps="users=*"
To remove administrative capabilities from a user, execute the following:
[root@master-zone]# radosgw-admin caps remove --uid=johndoe --caps={caps}
3.7. Quota Management
The Ceph Object Gateway enables you to set quotas on users and buckets owned by users. Quotas include the maximum number of objects in a bucket and the maximum storage size in megabytes.
- Bucket: The --bucket option allows you to specify a quota for buckets the user owns.
- Maximum Objects: The --max-objects setting allows you to specify the maximum number of objects. A negative value disables this setting.
- Maximum Size: The --max-size option allows you to specify a quota for the maximum number of bytes. A negative value disables this setting.
- Quota Scope: The --quota-scope option sets the scope for the quota. The options are bucket and user. Bucket quotas apply to buckets a user owns. User quotas apply to a user.
Buckets with a large number of objects can cause serious performance issues. The recommended maximum number of objects in one bucket is 100,000. To increase this number, configure bucket index sharding. See Section 3.4, “Configuring Bucket Sharding” for details.
3.7.1. Set User Quotas
Before you enable a quota, you must first set the quota parameters. For example:
[root@master-zone]# radosgw-admin quota set --quota-scope=user --uid=<uid> [--max-objects=<num objects>] [--max-size=<max size>]
For example:
radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024
A negative value for num objects and / or max size means that the specific quota attribute check is disabled.
3.7.2. Enable and Disable User Quotas
Once you set a user quota, you may enable it. For example:
[root@master-zone]# radosgw-admin quota enable --quota-scope=user --uid=<uid>
You may disable an enabled user quota. For example:
[root@master-zone]# radosgw-admin quota disable --quota-scope=user --uid=<uid>
3.7.3. Set Bucket Quotas
Bucket quotas apply to the buckets owned by the specified uid. They are independent of the user.
[root@master-zone]# radosgw-admin quota set --uid=<uid> --quota-scope=bucket [--max-objects=<num objects>] [--max-size=<max size>]
A negative value for num objects and / or max size means that the specific quota attribute check is disabled.
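For example, mirroring the user quota example in Section 3.7.1, with illustrative values:
radosgw-admin quota set --quota-scope=bucket --uid=johndoe --max-objects=1024 --max-size=1024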
3.7.4. Enable and Disable Bucket Quotas
Once you set a bucket quota, you may enable it. For example:
[root@master-zone]# radosgw-admin quota enable --quota-scope=bucket --uid=<uid>
You may disable an enabled bucket quota. For example:
[root@master-zone]# radosgw-admin quota disable --quota-scope=bucket --uid=<uid>
3.7.5. Get Quota Settings
You may access each user’s quota settings via the user information API. To read user quota setting information with the CLI interface, execute the following:
# radosgw-admin user info --uid=<uid>
3.7.6. Update Quota Stats
Quota stats get updated asynchronously. You can update quota statistics for all users and all buckets manually to retrieve the latest quota stats.
[root@master-zone]# radosgw-admin user stats --uid=<uid> --sync-stats
3.7.7. Get User Quota Usage Stats
To see how much of the quota a user has consumed, execute the following:
# radosgw-admin user stats --uid=<uid>
You should execute radosgw-admin user stats with the --sync-stats option to receive the latest data.
3.7.8. Quota Cache
Quota statistics are cached for each Ceph Gateway instance. If there are multiple instances, then the cache can keep quotas from being perfectly enforced, as each instance will have a different view of the quotas. The options that control this are rgw bucket quota ttl, rgw user quota bucket sync interval and rgw user quota sync interval. The higher these values are, the more efficient quota operations are, but the more out-of-sync multiple instances will be. The lower these values are, the closer to perfect enforcement multiple instances will achieve. If all three are 0, then quota caching is effectively disabled, and multiple instances will have perfect quota enforcement. See Chapter 4, Configuration Reference for more details on these options.
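For example, a minimal ceph.conf sketch that effectively disables quota caching, per the description above, trading efficiency for perfect enforcement:
[global]
rgw bucket quota ttl = 0
rgw user quota bucket sync interval = 0
rgw user quota sync interval = 0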
3.7.9. Reading and Writing Global Quotas
You can read and write quota settings in a zonegroup map. To get a zonegroup map:
[root@master-zone]# radosgw-admin zonegroup-map get > zonegroup-map.json
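In the resulting zonegroup-map.json, the quota settings appear as bucket_quota and user_quota objects shaped like the ones in the user info output in Section 3.6.2. This excerpt is an assumption based on that output, and field names can vary by release:
"bucket_quota": {
    "enabled": false,
    "max_size_kb": -1,
    "max_objects": -1
},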
To set quota settings for the entire zone group, modify the quota settings in the zone group map. Then, use the zonegroup-map set command to update the zonegroup map:
[root@master-zone]# radosgw-admin zonegroup-map set < zonegroup-map.json
After updating the zonegroup map, you must restart the gateway.
3.8. Usage
The Ceph Object Gateway logs usage for each user. You can track user usage within date ranges too.
Options include:
-
Start Date: The
--start-dateoption allows you to filter usage stats from a particular start date (format:yyyy-mm-dd[HH:MM:SS]). -
End Date: The
--end-dateoption allows you to filter usage up to a particular date (format:yyyy-mm-dd[HH:MM:SS]). -
Log Entries: The
--show-log-entriesoption allows you to specify whether or not to include log entries with the usage stats (options:true|false).
You may specify time with minutes and seconds, but it is stored with 1 hour resolution.
3.8.1. Show Usage
To show usage statistics, specify usage show. To show usage for a particular user, you must specify a user ID. You may also specify a start date, end date, and whether or not to show log entries.
# radosgw-admin usage show \
--uid=johndoe --start-date=2012-03-01 \
--end-date=2012-04-01
You may also show a summary of usage information for all users by omitting a user ID.
# radosgw-admin usage show --show-log-entries=false
3.8.2. Trim Usage
With heavy use, usage logs can begin to take up storage space. You can trim usage logs for all users and for specific users. You may also specify date ranges for trim operations.
[root@master-zone]# radosgw-admin usage trim --start-date=2010-01-01 \
--end-date=2010-12-31
[root@master-zone]# radosgw-admin usage trim --uid=johndoe
[root@master-zone]# radosgw-admin usage trim --uid=johndoe --end-date=2013-12-31
3.8.3. Finding Orphan Objects
Normally, in a healthy storage cluster you should not have any leaking objects, but in some cases leaky objects can occur. For example, if the RADOS Gateway goes down in the middle of an operation, some RADOS objects may become orphans. Also, unknown bugs may cause orphan objects to occur. The radosgw-admin command provides a tool to search for these orphan objects and clean them up. With the --pool option, you can specify which pool to scan for leaky RADOS objects. With the --num-shards option, you may specify the number of shards to use for keeping temporary scan data.
Create a new log pool:
Example
# rados mkpool .log
Search for orphan objects:
Syntax
# radosgw-admin orphans find --pool=<data_pool> --job-id=<job_name> [--num-shards=<num_shards>] [--orphan-stale-secs=<seconds>]
Example
# radosgw-admin orphans find --pool=.rgw.buckets --job-id=abc123
Clean up the search data:
Syntax
# radosgw-admin orphans finish --job-id=<job_name>
Example
# radosgw-admin orphans finish --job-id=abc123
