Chapter 2. Configuration
2.1. Changing Your Default Port
Civetweb runs on port 7480 by default. To change the default port (e.g., to port 80), modify your Ceph configuration file in the /etc/ceph directory of your administration server. Add a section entitled [client.rgw.<gateway-node>], replacing <gateway-node> with the short node name of your Ceph Object Gateway node (i.e., the output of hostname -s).
In version 1.3, the Ceph Object Gateway does not support SSL. You may set up a reverse proxy web server with SSL to dispatch HTTPS requests as HTTP requests to Civetweb.
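As an illustration only, a minimal HAProxy sketch that terminates SSL and forwards plain HTTP to Civetweb on port 7480 might look like the following; the certificate path and the backend address are assumptions, not values from this guide:

frontend rgw_https
    bind *:443 ssl crt /etc/haproxy/rgw.pem
    default_backend rgw_http

backend rgw_http
    server gateway-node1 127.0.0.1:7480 check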
For example, if your node name is gateway-node1, add a section like this after the [global] section in the /etc/ceph/ceph.conf file:

[client.rgw.gateway-node1]
rgw_frontends = "civetweb port=80"
Ensure that you leave no whitespace between port=<port-number> in the rgw_frontends key/value pair. The [client.rgw.gateway-node1] heading identifies this portion of the Ceph configuration file as configuring a Ceph Storage Cluster client where the client type is a Ceph Object Gateway (i.e., rgw), and the name of the instance is gateway-node1.
Copy the updated configuration file from the /etc/ceph directory to the working directory of your administration server.
# scp /etc/ceph/ceph.conf <admin-server>:/etc/ceph
Then, copy the updated configuration file to your Ceph Object Gateway node and other Ceph nodes. From the working directory of your administration server, execute:
# ssh <admin-server>
# scp /etc/ceph/ceph.conf <ceph-node>:/etc/ceph
To make the new port setting take effect, from your Ceph Object Gateway node, restart the Ceph Object Gateway.
$ sudo service radosgw restart id=rgw.<short-hostname>
Finally, check to ensure that the port you selected is open on the node’s firewall (e.g., port 80). If it is not open, add the port and reload the firewall configuration. For example, on your Ceph Object Gateway node, execute:
$ sudo iptables --list
$ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
Replace <iface> and <ip-address>/<netmask> with the relevant values for your Ceph Object Gateway node.
Once you have finished configuring iptables, ensure that you make the change persistent so that it will be in effect when your Ceph Object Gateway node reboots.
Execute:
$ sudo apt-get install iptables-persistent
A terminal UI will open up. Select yes at the prompts to save the current IPv4 iptables rules to /etc/iptables/rules.v4 and the current IPv6 iptables rules to /etc/iptables/rules.v6.

The IPv4 iptables rule that you set in the earlier step will be saved in /etc/iptables/rules.v4 and will be persistent across reboots.
If you add a new IPv4 iptables rule after installing iptables-persistent, you will have to add it to the rule file. In that case, execute the following as the root user:
$ iptables-save > /etc/iptables/rules.v4
2.2. Migrating from Apache to Civetweb
If you’re running the Ceph Object Gateway on Apache and FastCGI with Red Hat Ceph Storage v1.2.x or above, you’re already running Civetweb: it starts with the ceph-radosgw daemon and runs on port 7480 by default so that it doesn’t conflict with your Apache and FastCGI installation and other commonly used web service ports. Migrating to Civetweb basically involves removing your Apache installation. Then, remove the Apache and FastCGI settings from your Ceph configuration file and reset rgw_frontends to Civetweb.
The documentation for installing a Ceph Object Gateway shows that the configuration file has an rgw_frontends setting, which enables you to specify Civetweb as a front end and change its port. Since you already have keys and a data directory, you will want to maintain those paths in your Ceph configuration file if you used something other than the default paths.
A typical Ceph Object Gateway configuration file for an Apache-based deployment looks something like this:
[client.rgw.gateway-node1]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = ""
log file = /var/log/radosgw/client.radosgw.gateway-node1.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
rgw print continue = false
To modify it for use with Civetweb, simply remove the Apache-specific settings such as rgw_socket_path and rgw_print_continue. Then, change the rgw_frontends setting to reflect Civetweb rather than the Apache FastCGI front end and specify the port number you intend to use. For example:
[client.rgw.gateway-node1]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
log file = /var/log/radosgw/client.radosgw.gateway-node1.log
rgw_frontends = civetweb port=80
Finally, on your Ceph Object Gateway node, execute the following to restart the Ceph Object Gateway:
$ sudo service radosgw restart id=rgw.<short-hostname>
If you used a port number that is not open, you will also need to open that port on your firewall.
2.3. Using SSL with Civetweb
In previous versions, Civetweb SSL support for the Ceph Object Gateway relied on HAProxy and keepalived. In Red Hat Ceph Storage 2, Civetweb can use the OpenSSL library to provide Transport Layer Security (TLS).
Production deployments MUST use HAProxy and keepalived to terminate the SSL connection at HAProxy. Using SSL with Civetweb is recommended ONLY for small-to-medium sized test and pre-production deployments.
To use SSL with Civetweb, obtain a certificate from a Certificate Authority (CA) that matches the hostname of the gateway node. Red Hat recommends obtaining a certificate from a CA that has Subject Alternative Name (SAN) fields and a wildcard for use with S3-style subdomains.

Civetweb requires the key, the server certificate, and any other certificate authority or intermediate certificates in a single .pem file.
A .pem file contains the secret key. Protect the .pem file from unauthorized access.
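For illustration, assuming hypothetical file names for the key and certificate chain, the combined file can be assembled and protected like this:

$ sudo sh -c 'cat server.key server.crt intermediate.crt > /etc/ceph/private/server.pem'  # example file names
$ sudo chmod 600 /etc/ceph/private/server.pem
$ sudo chown ceph:ceph /etc/ceph/private/server.pem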
To configure a port for SSL, add the port number to rgw_frontends and append an s to the port number. Additionally, add ssl_certificate with a path to the .pem file. For example:

[client.rgw.{hostname}]
rgw_frontends = "civetweb port=443s ssl_certificate=/etc/ceph/private/server.pem"
2.4. Civetweb Configuration Options
The following Civetweb configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value; where the table lists EMPTY, the default value is empty.
Option | Description | Default
---|---|---
access_log_file | Path to a file for access logs. Either full path, or relative to the current working directory. If absent (default), then accesses are not logged. | EMPTY
error_log_file | Path to a file for error logs. Either full path, or relative to the current working directory. If absent (default), then errors are not logged. | EMPTY
num_threads | Number of worker threads. Civetweb handles each incoming connection in a separate thread. Therefore, the value of this option is effectively the number of concurrent HTTP connections Civetweb can handle. | 50
request_timeout_ms | Timeout for network read and network write operations, in milliseconds. If a client intends to keep a long-running connection, either increase this value or (better) use keep-alive messages. | 30000
The following is an example of the /etc/ceph/ceph.conf file with some of these options set:

...
[client.rgw.node1]
rgw frontends = civetweb request_timeout_ms=30000 error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log
2.5. Adding a Wildcard to DNS
To use Ceph with S3-style subdomains, for example bucket-name.domain-name.com, add a wildcard to the DNS record of the DNS server the ceph-radosgw daemon uses to resolve domain names.

For dnsmasq, add the following address setting with a dot (.) prepended to the host name:
address=/.{hostname-or-fqdn}/{host-ip-address}
For example:
address=/.gateway-node1/192.168.122.75
For bind, add a wildcard to the DNS record. For example:
$TTL    604800
@       IN      SOA     gateway-node1. root.gateway-node1. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      gateway-node1.
@       IN      A       192.168.122.113
*       IN      CNAME   @
Restart your DNS server and ping your server with a subdomain to ensure that the ceph-radosgw daemon can process the subdomain requests:
ping mybucket.{hostname}
For example:
ping mybucket.gateway-node1
If the DNS server is on the local machine, you may need to modify /etc/resolv.conf by adding a nameserver entry for the local machine.
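For example, reusing the gateway address from the dnsmasq example above, the entry might look like this:

nameserver 192.168.122.75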
Finally, specify the host name or address of the DNS server in the appropriate [client.rgw.{instance}] section of the Ceph configuration file using the rgw_dns_name = {hostname} setting. For example:

[client.rgw.rgw1]
...
rgw_dns_name = {hostname}
As a best practice, make changes to the Ceph configuration file at a centralized location, such as an admin node or ceph-ansible, and redistribute the configuration file as necessary to ensure consistency across the cluster.

Finally, restart the Ceph Object Gateway so that the DNS setting takes effect.
2.6. Adjusting Logging and Debugging Output
Once you finish the setup procedure, check your logging output to ensure it meets your needs. Log files are located in /var/log/radosgw by default. If you encounter issues with your configuration, you can increase logging and debugging messages in the [global] section of your Ceph configuration file and restart the gateway(s) to help troubleshoot any configuration issues. For example:
[global]
# append the following in the global section.
debug ms = 1
debug rgw = 20
debug civetweb = 20
You may also modify these settings at runtime. For example:
ceph tell osd.0 injectargs --debug_civetweb 10/20
For general details on logging and debugging, see Logging and Debugging. For Ceph Object Gateway-specific details on logging settings, see Logging Settings in this guide.
2.7. Testing the Object Gateway
To use the REST interfaces, first create an initial Ceph Object Gateway user for the S3 interface. Then, create a subuser for the Swift interface. Finally, verify that the created users are able to access the gateway.
2.7.1. Creating a radosgw User for S3 Access

A radosgw user needs to be created and granted access. The command man radosgw-admin will provide information on additional command options.

To create the user, execute the following on the gateway host:
sudo radosgw-admin user create --uid="testuser" --display-name="First User"
The output of the command will be something like the following:
{ "user_id": "testuser", "display_name": "First User", "email": "", "suspended": 0, "max_buckets": 1000, "auid": 0, "subusers": [], "keys": [{ "user": "testuser", "access_key": "I0PJDPCIYZ665MW88W9R", "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA" }], "swift_keys": [], "caps": [], "op_mask": "read, write, delete", "default_placement": "", "placement_tags": [], "bucket_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1 }, "user_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1 }, "temp_url_keys": [] }
The values of keys→access_key and keys→secret_key are needed for access validation.
Check the key output. Sometimes radosgw-admin generates a JSON escape character \ in access_key or secret_key, and some clients do not know how to handle JSON escape characters. Remedies include removing the JSON escape character \, encapsulating the string in quotes, regenerating the key and ensuring that it does not have a JSON escape character, or specifying the key and secret manually. Also, if radosgw-admin generates a JSON escape character \ and a forward slash / together in a key, like \/, only remove the JSON escape character \. Do not remove the forward slash /, as it is a valid character in the key.
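For example, one way to regenerate the S3 key pair for the test user is the following sketch; verify the new output for escape characters afterward:

sudo radosgw-admin key create --uid=testuser --key-type=s3 --gen-access-key --gen-secret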
2.7.2. Creating a Swift User
A Swift subuser needs to be created if this kind of access is needed. Creating a Swift user is a two-step process. The first step is to create the user. The second is to create the secret key.

Execute the following steps on the gateway host:
Create the Swift user:
sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
The output will be something like the following:
{ "user_id": "testuser", "display_name": "First User", "email": "", "suspended": 0, "max_buckets": 1000, "auid": 0, "subusers": [{ "id": "testuser:swift", "permissions": "full-control" }], "keys": [{ "user": "testuser:swift", "access_key": "3Y1LNW4Q6X0Y53A52DET", "secret_key": "" }, { "user": "testuser", "access_key": "I0PJDPCIYZ665MW88W9R", "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA" }], "swift_keys": [], "caps": [], "op_mask": "read, write, delete", "default_placement": "", "placement_tags": [], "bucket_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1 }, "user_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1 }, "temp_url_keys": [] }
Create the secret key:
sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
The output will be something like the following:
{ "user_id": "testuser", "display_name": "First User", "email": "", "suspended": 0, "max_buckets": 1000, "auid": 0, "subusers": [{ "id": "testuser:swift", "permissions": "full-control" }], "keys": [{ "user": "testuser:swift", "access_key": "3Y1LNW4Q6X0Y53A52DET", "secret_key": "" }, { "user": "testuser", "access_key": "I0PJDPCIYZ665MW88W9R", "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA" }], "swift_keys": [{ "user": "testuser:swift", "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA" }], "caps": [], "op_mask": "read, write, delete", "default_placement": "", "placement_tags": [], "bucket_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1 }, "user_quota": { "enabled": false, "max_size_kb": -1, "max_objects": -1 }, "temp_url_keys": [] }
2.7.3. Testing S3 Access
You need to write and run a Python test script to verify S3 access. The S3 access test script will connect to the radosgw, create a new bucket and list all buckets. The values for aws_access_key_id and aws_secret_access_key are taken from the values of access_key and secret_key returned by the radosgw-admin command.
Execute the following steps:
You will need to install the python-boto package:

$ sudo apt-get install python-boto
Create the Python script:
vi s3test.py
Add the following contents to the file:
import boto
import boto.s3.connection

access_key = $access
secret_key = $secret

boto.config.add_section('s3')
boto.config.set('s3', 'use-sigv4', 'True')

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 's3.{zone}.hostname',
    port = {port},
    is_secure=False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )
Replace {zone} with the zone name, that is, the fully qualified domain name of the Ceph Object Gateway node. Ensure that the host setting resolves with DNS. Replace {port} with the port number of the gateway.

Run the script:
python s3test.py
The output will be something like the following:
my-new-bucket 2015-02-16T17:09:10.000Z
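You can also cross-check the result from the gateway host itself; for example, the following should list the newly created bucket:

sudo radosgw-admin bucket list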
2.7.4. Testing Swift Access
Swift access can be verified via the swift command line client. The command man swift will provide more information on available command line options.

To install the swift client, execute the following:

sudo apt-get install python-setuptools
sudo easy_install pip
sudo pip install --upgrade setuptools
sudo pip install --upgrade python-swiftclient
To test swift access, execute the following:
swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list
Replace {IP ADDRESS} with the public IP address of the gateway server and {swift_secret_key} with its value from the output of the radosgw-admin key create command executed for the swift user. Replace {port} with the port number you are using with Civetweb (e.g., 7480 is the default). If you do not replace the port, it will default to port 80.
For example:
swift -A http://10.19.143.116:7480/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list
The output should be:
my-new-bucket
During uploads of large objects to versioned Swift containers, use the --leave-segments option with python-swiftclient. Not using this option will lead to an overwrite of the manifest file, in which case an existing object is overwritten, leading to data loss.
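For example, an upload using this option might look like the following; the container and object names are placeholders:

swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' upload --leave-segments {container} {large-object}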
2.8. Configuring Gateways for Static Web Hosting
Traditional web hosting sometimes involves setting up a web server for each website, which can use resources inefficiently when content does not change dynamically. Ceph Object Gateway can host static web sites in S3 buckets, that is, sites that do not use server-side services like PHP, servlets, databases, Node.js and the like. This approach is substantially more economical than setting up virtual machines with web servers for each site.
2.8.1. Assumptions
Static web hosting requires at least one running Ceph Storage Cluster, and at least two Ceph Object Gateway instances for static web sites. Red Hat assumes that each zone will have multiple gateway instances load balanced by HAProxy/keepalived.
Red Hat DOES NOT support using a Ceph Object Gateway instance to support both standard S3/Swift APIs and static web hosting.
2.8.2. Requirements
Static web hosting functionality uses its own API, so configuring a gateway to use static web sites in S3 buckets requires the following:
- S3 static web hosting uses Ceph Object Gateway instances that are separate and distinct from instances used for standard S3/Swift API use cases.
- Gateway instances hosting S3 static web sites should have separate, non-overlapping domain names from the standard S3/Swift API gateway instances.
- Gateway instances hosting S3 static web sites should use separate public-facing IP addresses from the standard S3/Swift API gateway instances.
- Gateway instances hosting S3 static web sites should use HAProxy/keepalived for load balancing and, if necessary, SSL termination.
2.8.3. Setting Up the Gateway
To enable a gateway for static web hosting, edit the Ceph configuration file and add the following settings:
[client.rgw.<STATIC-SITE-HOSTNAME>]
...
rgw_enable_static_website = true
rgw_enable_apis = s3website
rgw_dns_name = objects-zonegroup.domain.com
rgw_dns_s3website_name = objects-website-zonegroup.domain.com
rgw_resolve_cname = true
...
The rgw_enable_static_website setting MUST be true. The rgw_enable_apis setting MUST enable the s3website API. The rgw_dns_name and rgw_dns_s3website_name settings must provide their fully qualified domain names. If the site will use canonical name extensions, set rgw_resolve_cname to true.

The FQDNs of rgw_dns_name and rgw_dns_s3website_name MUST NOT overlap.
2.8.4. Configuring the DNS
The following is an example of assumed DNS settings, where the first two lines specify the domains of the gateway instance using a standard S3 interface and point to the IPv4 and IPv6 addresses respectively. The third line provides a wildcard CNAME setting for S3 buckets using canonical name extensions. The fourth and fifth lines specify the domains for the gateway instance using the S3 website interface and point to their IPv4 and IPv6 addresses respectively.
objects-zonegroup.domain.com.           IN    A    192.0.2.10
objects-zonegroup.domain.com.           IN AAAA    2001:DB8::192:0:2:10
*.objects-zonegroup.domain.com.         IN CNAME   objects-zonegroup.domain.com.
objects-website-zonegroup.domain.com.   IN    A    192.0.2.20
objects-website-zonegroup.domain.com.   IN AAAA    2001:DB8::192:0:2:20
The IP addresses in the first two lines differ from the IP addresses in the fourth and fifth lines.
If using Ceph Object Gateway in a multi-site configuration, consider using a routing solution to route traffic to the gateway closest to the client.
Amazon Web Services (AWS) requires static web host buckets to match the host name. Ceph provides a few different ways to configure the DNS, and HTTPS will work if the proxy has a matching certificate.
Hostname to a Bucket on a Subdomain
To use AWS-style S3 subdomains, use a wildcard in the DNS entry, which can redirect requests to any bucket. A DNS entry might look like the following:
*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.
Access the bucket in the following manner:
http://bucket1.objects-website-zonegroup.domain.com
Where the bucket name is bucket1
.
Hostname to Non-Matching Bucket
Ceph supports mapping domain names to buckets without including the bucket name in the request, which is unique to Ceph Object Gateway. To use a domain name to access a bucket, map the domain name to the bucket name. A DNS entry might look like the following:
www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.
Where the bucket name is bucket2
.
Access the bucket in the following manner:
http://www.example.com
Hostname to Long Bucket with CNAME
AWS typically requires the bucket name to match the domain name. To configure the DNS for static web hosting using CNAME, the DNS entry might look like the following:
www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.
Access the bucket in the following manner:
http://www.example.com
Hostname to Long Bucket without CNAME
If the DNS name contains other non-CNAME records, such as SOA, NS, MX or TXT, the DNS record must map the domain name directly to the IP address. For example:
www.example.com. IN    A    192.0.2.20
www.example.com. IN AAAA    2001:DB8::192:0:2:20
Access the bucket in the following manner:
http://www.example.com
2.8.5. Creating a Site
To create a static website, perform the following steps:

- Create an S3 bucket. The bucket name MAY be the same as the website’s domain name. For example, mysite.com may have a bucket name of mysite.com. This is required for AWS, but it is NOT required for Ceph. See DNS Settings for details.
- Upload the static website content to the bucket. Contents may include HTML, CSS, client-side JavaScript, images, audio/video content and other downloadable files. A website MUST have an index.html file and MAY have an error.html file.
- Verify the website’s contents. At this point, only the creator of the bucket will have access to the contents.
- Set permissions on the files so that they are publicly readable. See the sketch after this list.
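A minimal boto sketch covering the upload and permission steps; the bucket and file names are illustrative, and the connection setup mirrors the s3test.py script above:

import boto
import boto.s3.connection

access_key = $access
secret_key = $secret

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 's3.{zone}.hostname',
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

# Bucket and file names below are examples only.
bucket = conn.create_bucket('mysite.com')
key = bucket.new_key('index.html')
key.set_contents_from_filename('index.html')  # upload the site content
key.set_acl('public-read')                    # make the file publicly readable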
2.9. Exporting the Namespace to NFS-Ganesha
In Red Hat Ceph Storage 2, the Ceph Object Gateway provides the ability to export S3 object namespaces by using NFS version 4.1 for production systems, and NFS version 3 as a Technology Preview only.
The NFS Ganesha feature is not for general use, but rather for migration to an S3 cloud only.
The implementation conforms to Amazon Web Services (AWS) hierarchical namespace conventions which map UNIX-style path names onto S3 buckets and objects. The top level of the attached namespace, which is subordinate to the NFSv4 pseudo root if present, consists of the Ceph Object Gateway S3 buckets, where buckets are represented as NFS directories. Objects within a bucket are presented as NFS file and directory hierarchies, following S3 conventions. Operations to create files and directories are supported.
Creating or deleting hard or soft links IS NOT supported. Performing rename operations on buckets or directories IS NOT supported via NFS, but rename on files IS supported within and between directories, and between a file system and an NFS mount. File rename operations are more expensive when conducted over NFS, as they change the target directory and typically force a full readdir to refresh it.
Editing files via the NFS mount IS NOT supported.
The Ceph Object Gateway with NFS is based on a new, in-process library packaging of the Gateway server and a new File System Abstraction Layer (FSAL) namespace driver for the NFS-Ganesha NFSv4 server. At runtime, an instance of the Ceph Object Gateway daemon with NFS combines a full Ceph Object Gateway daemon, albeit without the Civetweb HTTP service, with an NFS-Ganesha instance in a single process. To make use of this feature, deploy NFS-Ganesha version 2.3.2 or later.
Perform the steps in the Before you Start and Configuring an NFS-Ganesha Instance procedures on the host that will contain the NFS-Ganesha (nfs-ganesha-rgw) instance.
Running Multiple NFS Gateways
Each NFS-Ganesha instance acts as a full gateway endpoint, with the limitation that currently an NFS-Ganesha instance cannot be configured to export HTTP services. As with ordinary gateway instances, any number of NFS-Ganesha instances can be started, exporting the same or different resources from the cluster. This enables the clustering of NFS-Ganesha instances. However, this does not imply high availability.
When regular gateway instances and NFS-Ganesha instances overlap the same data resources, they will be accessible from both the standard S3 API and through the NFS-Ganesha instance as exported. You can co-locate the NFS-Ganesha instance with a Ceph Object Gateway instance on the same host.
Before you Start
- Disable any running kernel NFS service instances on any host that will run NFS-Ganesha before attempting to run NFS-Ganesha. NFS-Ganesha will not start if another NFS instance is running.
Make sure that the rpcbind service is running:

# systemctl start rpcbind

Note: The rpcbind package that provides rpcbind is usually installed by default. If that is not the case, install the package first.

For details on how NFS uses rpcbind, see the Required Services section in the Storage Administration Guide for Red Hat Enterprise Linux 7.

If the nfs-server service is running, stop and disable it:

# systemctl stop nfs-server.service
# systemctl disable nfs-server.service
Configuring an NFS-Ganesha Instance
NFSv4 is supported for production systems. NFSv3 is a Technology Preview and is not supported for production systems. Use NFSv3 with caution.
Install the nfs-ganesha-fsal package:

$ sudo apt-get install nfs-ganesha-fsal

Copy the Ceph configuration file from a Ceph Monitor node to the /etc/ceph/ directory of the NFS-Ganesha host, and edit it as necessary:

# scp <mon-host>:/etc/ceph/ceph.conf <nfs-ganesha-rgw-host>:/etc/ceph
Note: The Ceph configuration file must contain a valid [client.rgw.{instance-name}] section and corresponding parameters for the various required Gateway configuration variables, such as rgw_data, keyring, or rgw_frontends.

If exporting Swift containers that do not conform to valid S3 bucket naming requirements, set rgw_relaxed_s3_bucket_names to true in the [client.rgw] section of the Ceph configuration file. For example, if a Swift container name contains underscores, it is not a valid S3 bucket name and will not get synchronized unless rgw_relaxed_s3_bucket_names is set to true.

When adding objects and buckets outside of NFS, those objects will appear in the NFS namespace in the time set by rgw_nfs_namespace_expire_secs, which is about 5 minutes by default. Override the default value for rgw_nfs_namespace_expire_secs in the Ceph configuration file to change the refresh rate.
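For example, a sketch of these overrides in the Ceph configuration file; the values shown are illustrative:

[client.rgw.{instance-name}]
...
rgw_relaxed_s3_bucket_names = true
rgw_nfs_namespace_expire_secs = 120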
Copy the Object Gateway keyring from the Ceph Object Gateway host to the NFS-Ganesha host:

Create a directory to store the keyring:

# mkdir -p /var/lib/ceph/radosgw/ceph-rgw.<instance-name>/

Set the correct ownership for the directory:

# chown ceph.ceph /var/lib/ceph/radosgw/ceph-rgw.<instance-name>

Copy the keyring:

# scp <rgw-instance-host>:/var/lib/ceph/radosgw/ceph-rgw.<instance-name>/keyring <nfs-ganesha-rgw-host>:/var/lib/ceph/radosgw/ceph-rgw.<instance-name>/.
Open the NFS-Ganesha configuration file:
# vim /etc/ganesha/ganesha.conf
Configure the EXPORT section with an FSAL (File System Abstraction Layer) block. Provide an ID, S3 user ID, S3 access key, and secret. For NFSv4, it should look something like this:

EXPORT
{
        Export_ID={numeric-id};
        Path = "/";
        Pseudo = "/";
        Access_Type = RW;
        SecType = "sys";
        NFS_Protocols = 4;
        Transport_Protocols = TCP;
        Squash = No_Root_Squash;

        FSAL {
                Name = RGW;
                User_Id = {s3-user-id};
                Access_Key_Id ="{s3-access-key}";
                Secret_Access_Key = "{s3-secret}";
        }
}
NFSv3 is a Technology Preview and is NOT supported for production systems. Any EXPORT block which should support NFSv3 should include version 3 in the NFS_Protocols setting. Additionally, NFSv3 is the last major version to support the UDP transport. Early versions of the standard included UDP, but RFC 7530 forbids its use. To enable UDP, include it in the Transport_Protocols setting. For example:

EXPORT {
...
    NFS_Protocols = 3,4;
    Transport_Protocols = UDP,TCP;
...
}
Setting SecType = sys; allows clients to attach without Kerberos authentication.

Setting Squash = No_Root_Squash; enables a user to change directory ownership in the NFS mount.

NFS clients using a conventional OS-native NFS 4.1 client typically see a federated namespace of exported file systems defined by the destination server’s pseudofs root. Any number of these can be Ceph Object Gateway exports.

Each export has its own tuple of name, User_Id, Access_Key, and Secret_Access_Key and creates a proxy of the object namespace visible to the specified user.

An export in ganesha.conf can also contain an NFSV4 block. Red Hat Ceph Storage supports the Allow_Numeric_Owners and Only_Numeric_Owners parameters as an alternative to setting up the idmapper program.

NFSV4 {
    Allow_Numeric_Owners = true;
    Only_Numeric_Owners = true;
}
Configure the RGW section. Specify the name of the instance, provide a path to the Ceph configuration file, and specify any initialization arguments:

RGW {
    name = "client.rgw.{instance-name}";
    ceph_conf = "/etc/ceph/ceph.conf";
    init_args = "--{arg}={arg-value}";
}
Save the /etc/ganesha/ganesha.conf configuration file.

Start the nfs-ganesha service:

# systemctl start nfs-ganesha
Configuring NFSv4 clients
To access the namespace, mount the configured NFS-Ganesha export(s) into desired locations in the local POSIX namespace. As noted, this implementation has a few unique restrictions:
- Only the NFS 4.1 and higher protocol flavors are supported.
- To enforce write ordering, use the sync mount option.
To mount the NFS-Ganesha exports, add the following entry to the /etc/fstab
file on the client host:
<ganesha-host-name>:/ <mount-point> nfs noauto,soft,nfsvers=4.1,sync,proto=tcp 0 0
Specify the NFS-Ganesha host name and the path to the mount point on the client.
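Alternatively, a one-off mount can be performed manually; for example, a sketch assuming /mnt/rgw as the mount point:

# mkdir -p /mnt/rgw
# mount -t nfs -o soft,nfsvers=4.1,sync,proto=tcp <ganesha-host-name>:/ /mnt/rgw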
To successfully mount the NFS-Ganesha exports, the /sbin/mount.nfs file must exist on the client. The nfs-common package provides this file. In most cases, the package is installed by default. However, verify that the nfs-common package is installed on the client and, if not, install it.
For additional details on NFS, see the Network File System (NFS) chapter in the Storage Administration Guide for Red Hat Enterprise Linux 7.
Configuring NFSv3 clients (Technology Preview)
Linux clients can be configured to mount with NFSv3 by supplying nfsvers=3 and noacl as mount options. To use UDP as the transport, add proto=udp to the mount options. However, TCP is the preferred protocol.

<ganesha-host-name>:/ <mount-point> nfs noauto,noacl,soft,nfsvers=3,sync,proto=tcp 0 0
Configure the NFS-Ganesha EXPORT block NFS_Protocols setting with version 3 and the Transport_Protocols setting with UDP if the mount will use version 3 with UDP.
Since NFSv3 does not communicate client OPEN and CLOSE operations to file servers, RGW NFS cannot use these operations to mark the beginning and ending of file upload transactions. Instead, RGW NFS attempts to start a new upload when the first write is sent to a file at offset 0, and finalizes the upload when no new writes to the file have been seen for a period of time (by default, 10 seconds). To change this value, set a value for rgw_nfs_write_completion_interval_s in the RGW section(s) of the Ceph configuration file.
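For example, a sketch of the override in the Ceph configuration file; the 20-second value is illustrative:

[client.rgw.{instance-name}]
...
rgw_nfs_write_completion_interval_s = 20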