
Chapter 2. Configuration

2.1. The CivetWeb front end

By default, the Ceph Object Gateway exposes its RESTful interfaces over HTTP using the CivetWeb web server. CivetWeb is a C/C++ embeddable web server.

2.2. Changing the CivetWeb port

When the Ceph Object Gateway is installed using Ansible, Ansible configures CivetWeb to run on port 8080 by adding a line similar to the following to the Ceph configuration file:

rgw frontends = civetweb port=192.168.122.199:8080 num_threads=100
Important

If the Ceph configuration file does not include the rgw frontends = civetweb line, the Ceph Object Gateway listens on port 7480. If it includes an rgw frontends = civetweb line but no port is specified, the Ceph Object Gateway listens on port 80.

Important

Because Ansible configures the Ceph Object Gateway to listen on port 8080 and the supported way to install Red Hat Ceph Storage 3 is using ceph-ansible, port 8080 is considered the default port in the Red Hat Ceph Storage 3 documentation.

Prerequisites

  • A running Red Hat Ceph Storage 3.3 cluster.
  • A Ceph Object Gateway node.

Procedure

  1. On the gateway node, open the Ceph configuration file in the /etc/ceph/ directory.
  2. Find an RGW client section similar to the example:

    [client.rgw.gateway-node1]
    host = gateway-node1
    keyring = /var/lib/ceph/radosgw/ceph-rgw.gateway-node1/keyring
    log file = /var/log/ceph/ceph-rgw-gateway-node1.log
    rgw frontends = civetweb port=192.168.122.199:8080 num_threads=100

    The [client.rgw.gateway-node1] heading identifies this portion of the Ceph configuration file as configuring a Ceph Storage Cluster client whose client type is a Ceph Object Gateway, identified by rgw, and whose node name is gateway-node1.

  3. To change the default Ansible-configured port of 8080 to 80, edit the rgw frontends line:

    rgw frontends = civetweb port=192.168.122.199:80 num_threads=100

    Ensure that there is no whitespace between port= and the port number in the rgw_frontends key/value pair.

    Repeat this step on any other gateway nodes you want to change the port on.

  4. Restart the Ceph Object Gateway service from each gateway node to make the new port setting take effect:

    # systemctl restart ceph-radosgw.target
  5. Ensure the configured port is open on each gateway node’s firewall:

    # firewall-cmd --list-all
  6. If the port is not open, add the port and reload the firewall configuration:

    # firewall-cmd --zone=public --add-port 80/tcp --permanent
    # firewall-cmd --reload

2.3. Using SSL with Civetweb

In Red Hat Ceph Storage 1, Civetweb SSL support for the Ceph Object Gateway relied on HAProxy and keepalived. In Red Hat Ceph Storage 2 and later releases, Civetweb can use the OpenSSL library to provide Transport Layer Security (TLS).

Important

Production deployments MUST use HAProxy and keepalived to terminate the SSL connection at HAProxy. Using SSL with Civetweb is recommended ONLY for small-to-medium sized test and pre-production deployments.

To use SSL with Civetweb, obtain a certificate from a Certificate Authority (CA) that matches the hostname of the gateway node. Red Hat recommends obtaining a certificate from a CA that has subject alternate name fields and a wildcard for use with S3-style subdomains.

Civetweb requires the key, the server certificate, and any other certificate authority or intermediate certificates to be combined in a single .pem file.

Important

A .pem file contains the secret key. Protect the .pem file from unauthorized access.

To configure a port for SSL, add the port number to rgw_frontends and append an s to the port number to indicate that it is a secure port. Additionally, add ssl_certificate with a path to the .pem file. For example:

[client.rgw.{hostname}]
rgw_frontends = "civetweb port=443s ssl_certificate=/etc/ceph/private/server.pem"

2.4. Civetweb Configuration Options

The following Civetweb configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value; if no default value is listed, the default is empty.

Option: access_log_file
Description: Path to a file for access logs. Either a full path, or relative to the current working directory. If absent (the default), accesses are not logged.
Default: EMPTY

Option: error_log_file
Description: Path to a file for error logs. Either a full path, or relative to the current working directory. If absent (the default), errors are not logged.
Default: EMPTY

Option: num_threads
Description: Number of worker threads. Civetweb handles each incoming connection in a separate thread. Therefore, the value of this option is effectively the number of concurrent HTTP connections Civetweb can handle.
Default: 512

Option: request_timeout_ms
Description: Timeout for network read and network write operations, in milliseconds. If a client intends to keep a long-running connection, either increase this value or (better) use keep-alive messages.
Default: 30000

Important

If you set num_threads, it overrides rgw_thread_pool_size. Therefore, either set them both to the same value, or only set rgw_thread_pool_size and do not set num_threads. By default, ceph-ansible sets both variables to 512.

The following is an example of the /etc/ceph/ceph.conf file with some of these options set:

...

[client.rgw.node1]
rgw frontends = civetweb request_timeout_ms=30000 error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log
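
If you also tune the thread count, keep num_threads and rgw_thread_pool_size aligned as described in the Important box above. The following is a minimal sketch that sets both to 512, the ceph-ansible default; the port value is illustrative:

[client.rgw.node1]
rgw_thread_pool_size = 512
rgw frontends = civetweb port=8080 num_threads=512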

2.5. Using the Beast front end

The Ceph Object Gateway provides CivetWeb and Beast embedded HTTP servers as front ends. The Beast front end uses the Boost.Beast library for HTTP parsing and the Boost.Asio library for asynchronous network I/O. Because CivetWeb is the default front end, using the Beast front end requires specifying it in the rgw_frontends parameter in the Red Hat Ceph Storage configuration file.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Ceph Object Gateway is installed.

Procedure

  1. Modify the /etc/ceph/ceph.conf configuration file on the administration server:

    1. Add a section entitled [client.rgw.<gateway-node>], replacing <gateway-node> with the short node name of the Ceph Object Gateway node.
    2. Use hostname -s to retrieve the host shortname.
    3. For example, if the gateway node name is gateway-node1, add a section like the following after the [global] section in the /etc/ceph/ceph.conf file:

      [client.rgw.gateway-node1]
      rgw frontends = beast endpoint=192.168.0.100:80
  2. Copy the updated configuration file to the Ceph Object Gateway node and other Ceph nodes.

    # scp /etc/ceph/ceph.conf <ceph-node>:/etc/ceph
  3. Restart the Ceph Object Gateway to enable the Beast front end:

    # systemctl restart ceph-radosgw.target
  4. Ensure that the configured port is open on the node’s firewall. If it is not open, add the port and reload the firewall configuration. For example, on the Ceph Object Gateway node, execute:

    # firewall-cmd --list-all
    # firewall-cmd --zone=public --add-port 80/tcp --permanent
    # firewall-cmd --reload

2.6. Beast configuration options

The following Beast configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value; if no default value is listed, the default is empty.

Option: endpoint and ssl_endpoint
Description: Sets the listening address in the form address[:port], where the address is an IPv4 address string in dotted decimal form, or an IPv6 address in hexadecimal notation surrounded by square brackets. The optional port defaults to 80 for endpoint and 443 for ssl_endpoint. The option can be specified multiple times, as in endpoint=[::1] endpoint=192.168.0.100:8000.
Default: EMPTY

Option: ssl_certificate
Description: Path to the SSL certificate file used for SSL-enabled endpoints.
Default: EMPTY

Option: ssl_private_key
Description: Optional path to the private key file used for SSL-enabled endpoints. If one is not given, the file specified by ssl_certificate is used as the private key.
Default: EMPTY

Example /etc/ceph/ceph.conf file with Beast options using SSL:

...

[client.rgw.node1]
rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>
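
A non-SSL variant for the same node can list several listening addresses, as described for the endpoint option above:

[client.rgw.node1]
rgw frontends = beast endpoint=[::1] endpoint=192.168.0.100:8000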

2.7. Add a Wildcard to the DNS

To use Ceph with S3-style subdomains, for example bucket-name.domain-name.com, add a wildcard to the DNS record of the DNS server the ceph-radosgw daemon uses to resolve domain names.

For dnsmasq, add the following address setting with a dot (.) prepended to the host name:

address=/.{hostname-or-fqdn}/{host-ip-address}

For example:

address=/.gateway-node1/192.168.122.75

For bind, add a wildcard to the DNS record. For example:

$TTL    604800
@       IN      SOA     gateway-node1. root.gateway-node1. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      gateway-node1.
@       IN      A       192.168.122.113
*       IN      CNAME   @

Restart the DNS server and ping the server with a subdomain to ensure that the ceph-radosgw daemon can process the subdomain requests:

ping mybucket.{hostname}

For example:

ping mybucket.gateway-node1

If the DNS server is on the local machine, you may need to modify /etc/resolv.conf by adding a nameserver entry for the local machine.
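
For example, a minimal /etc/resolv.conf entry, assuming the DNS server listens on the loopback address:

nameserver 127.0.0.1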

Finally, specify the host name or address of the DNS server in the appropriate [client.rgw.{instance}] section of the Ceph configuration file using the rgw_dns_name = {hostname} setting. For example:

[client.rgw.rgw1]
...
rgw_dns_name = {hostname}
Note

As a best practice, make changes to the Ceph configuration file at a centralized location such as an admin node or ceph-ansible and redistribute the configuration file as necessary to ensure consistency across the cluster.

After updating the configuration, restart the Ceph Object Gateway so that the DNS setting takes effect.
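
For example, on each gateway node:

# systemctl restart ceph-radosgw.target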

2.8. Adjusting Logging and Debugging Output

Once you finish the setup procedure, check your logging output to ensure it meets your needs. If you encounter issues with your configuration, you can increase logging and debugging messages in the [global] section of your Ceph configuration file and restart the gateway(s) to help troubleshoot any configuration issues. For example:

[global]
#append the following in the global section.
debug ms = 1
debug rgw = 20
debug civetweb = 20

You may also modify these settings at runtime. For example:

# ceph tell osd.0 injectargs --debug_civetweb 10/20

The Ceph log files reside in /var/log/ceph by default.

For general details on logging and debugging, see the Logging Configuration Reference chapter of the Configuration Guide for Red Hat Ceph Storage 3. For details on logging specific to the Ceph Object Gateway, see The Ceph Object Gateway section in the Logging Configuration Reference chapter of this guide.

2.9. S3 API Server-side Encryption

The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 API. Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Ceph Storage Cluster in encrypted form.

Note

Red Hat does NOT support S3 object encryption of SLO (Static Large Object) and DLO (Dynamic Large Object).

Important

To use encryption, clients MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators can disable SSL by setting the rgw_crypt_require_ssl configuration setting to false at runtime, by setting it to false in the Ceph configuration file and restarting the gateway instance, or by setting it to false in the Ansible configuration files and replaying the Ansible playbooks for the Ceph Object Gateway.
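
For example, to disable the SSL requirement on a test gateway, the setting can be added to the Ceph configuration file and the gateway restarted. Placing it in the gateway client section, rather than [global], is an assumption for illustration:

[client.rgw.gateway-node1]
rgw_crypt_require_ssl = false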

There are two options for the management of encryption keys:

Customer-Provided Keys

When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer’s responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object.

Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification.

Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode.
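
As an illustration only, the following is a hedged sketch of an SSE-C upload using the python-boto client from the Test S3 Access section and the standard SSE-C request headers. The bucket name, object name, and 32-byte key are placeholders, and the request must travel over SSL unless rgw_crypt_require_ssl is set to false:

import base64
import hashlib

# conn is a boto S3 connection created as in s3test.py
key_material = b'0123456789abcdef0123456789abcdef'  # placeholder 256-bit key

headers = {
    'x-amz-server-side-encryption-customer-algorithm': 'AES256',
    'x-amz-server-side-encryption-customer-key': base64.b64encode(key_material),
    'x-amz-server-side-encryption-customer-key-MD5':
        base64.b64encode(hashlib.md5(key_material).digest()),
}

bucket = conn.get_bucket('my-new-bucket')
obj = bucket.new_key('encrypted-object')
# The same headers must accompany later reads of this object.
obj.set_contents_from_string('secret data', headers=headers)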

Key Management Service

When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data.

Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification.

Important

Currently, the only tested key management implementation uses OpenStack Barbican. However, using OpenStack Barbican is not fully supported yet. The only way to use it in production is to get a support exception. For more information, please contact technical support.

2.10. Testing the Gateway

To use the REST interfaces, first create an initial Ceph Object Gateway user for the S3 interface. Then, create a subuser for the Swift interface. Finally, verify that the created users can access the gateway.

2.10.1. Create an S3 User

To test the gateway, create an S3 user and grant the user access. The radosgw-admin man page provides information on additional command options.

Note

In a multi-site deployment, always create a user on a host in the master zone of the master zone group.

Prerequisites

  • root or sudo access
  • Ceph Object Gateway installed

Procedure

  1. Create an S3 user:

    radosgw-admin user create --uid=name --display-name="First User"

    Replace name with the name of the S3 user, for example:

    [root@master-zone]# radosgw-admin user create --uid="testuser" --display-name="First User"
    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [],
        "keys": [
            {
                "user": "testuser",
                "access_key": "CEP28KDIQXBKU4M15PDC",
                "secret_key": "MARoio8HFc8JxhEilES3dKFVj8tV3NOOYymihTLO"
            }
        ],
        "swift_keys": [],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "temp_url_keys": [],
        "type": "rgw"
    }
  2. Verify the output to ensure that the values of access_key and secret_key do not include a JSON escape character (\). These values are needed for access validation, but certain clients cannot handle the values if they include a JSON escape character. To fix this problem, perform one of the following actions:

    • Remove the JSON escape character.
    • Encapsulate the string in quotes.
    • Regenerate the key and ensure that it does not include a JSON escape character.
    • Specify the key and secret manually.

    Do not remove the forward slash / because it is a valid character.

2.10.2. Create a Swift User

To test the Swift interface, create a Swift subuser. Creating a Swift user is a two-step process. The first step is to create the user. The second step is to create the secret key.

Note

In a multi-site deployment, always create a user on a host in the master zone of the master zone group.

Prerequisites

  • root or sudo access
  • Ceph Object Gateway installed

Procedure

  1. Create the Swift user:

    radosgw-admin subuser create --uid=name --subuser=name:swift --access=full

    Replace name with the name of the Swift user, for example:

    [root@master-zone]# radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [
            {
                "id": "testuser:swift",
                "permissions": "full-control"
            }
        ],
        "keys": [
            {
                "user": "testuser",
                "access_key": "O8JDE41XMI74O185EHKD",
                "secret_key": "i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6"
            }
        ],
        "swift_keys": [
            {
                "user": "testuser:swift",
                "secret_key": "13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA"
            }
        ],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "temp_url_keys": [],
        "type": "rgw"
    }
  2. Create the secret key:

    radosgw-admin key create --subuser=name:swift --key-type=swift --gen-secret

    Replace name with the name of the Swift user, for example:

    [root@master-zone]# radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [
            {
                "id": "testuser:swift",
                "permissions": "full-control"
            }
        ],
        "keys": [
            {
                "user": "testuser",
                "access_key": "O8JDE41XMI74O185EHKD",
                "secret_key": "i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6"
            }
        ],
        "swift_keys": [
            {
                "user": "testuser:swift",
                "secret_key": "a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt"
            }
        ],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "temp_url_keys": [],
        "type": "rgw"
    }

2.10.3. Test S3 Access

Write and run a Python test script to verify S3 access. The script connects to the radosgw, creates a new bucket, and lists all buckets. The values for aws_access_key_id and aws_secret_access_key are taken from the values of access_key and secret_key returned by the radosgw-admin command.

Execute the following steps:

  1. Enable the common repository.

    # subscription-manager repos --enable=rhel-7-server-rh-common-rpms
  2. Install the python-boto package.

    sudo yum install python-boto
  3. Create the Python script:

    vi s3test.py
  4. Add the following contents to the file:

    import boto
    import boto.s3.connection
    
    access_key = $access
    secret_key = $secret
    
    boto.config.add_section('s3')
    
    conn = boto.connect_s3(
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_key,
            host = 's3.<zone>.hostname',
            port = <port>,
            is_secure=False,
            calling_format = boto.s3.connection.OrdinaryCallingFormat(),
            )
    
    bucket = conn.create_bucket('my-new-bucket')
    for bucket in conn.get_all_buckets():
    	print "{name}\t{created}".format(
    		name = bucket.name,
    		created = bucket.creation_date,
    )
    1. Replace <zone> with the zone name of the host where you have configured the gateway service. That is, the gateway host. Ensure that the host setting resolves with DNS. Replace <port> with the port number of the gateway.
    2. Replace $access and $secret with the access_key and secret_key values from the Create an S3 User section.
  5. Run the script:

    python s3test.py

    The output will be something like the following:

    my-new-bucket 2015-02-16T17:09:10.000Z

2.10.4. Test Swift Access

Swift access can be verified with the swift command-line client. The swift man page provides more information on available command-line options.

To install the swift client, execute the following:

sudo yum install python-setuptools
sudo easy_install pip
sudo pip install --upgrade setuptools
sudo pip install --upgrade python-swiftclient

To test swift access, execute the following:

swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list

Replace {IP ADDRESS} with the public IP address of the gateway server and {swift_secret_key} with its value from the output of the radosgw-admin key create command executed for the swift user. Replace {port} with the port number you are using with Civetweb (for example, 8080 is the default). If you do not replace the port, it defaults to port 80.

For example:

swift -A http://10.19.143.116:8080/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list

The output should be:

my-new-bucket

2.11. Configuring HAProxy/keepalived

The Ceph Object Gateway allows you to assign many instances of the object gateway to a single zone, that is, the same zone group and zone, so that you can scale out as load increases; however, you do not need a federated architecture to use HAProxy/keepalived. Since each object gateway instance has its own IP address, you can use HAProxy and keepalived to balance the load across Ceph Object Gateway servers.

Another use case for HAProxy and keepalived is to terminate HTTPS at the HAProxy server. Red Hat Ceph Storage (RHCS) 1.3.x uses Civetweb, and the Civetweb implementation in RHCS 1.3.x does not support HTTPS. You can use an HAProxy server to terminate HTTPS and use HTTP between the HAProxy server and the Civetweb gateway instances.

2.11.1. HAProxy/keepalived Prerequisites

To set up HAProxy with the Ceph Object Gateway, you must have:

  • A running Ceph cluster
  • At least two Ceph Object Gateway servers within the same zone configured to run on port 80. If you follow the simple installation procedure, the gateway instances are in the same zone group and zone by default. If you are using a federated architecture, ensure that the instances are in the same zone group and zone.
  • At least two servers for HAProxy and keepalived.
Note

This section assumes that you have at least two Ceph Object Gateway servers running, and that you get a valid response from each of them when running test scripts over port 80.

For a detailed discussion of HAProxy and keepalived, see Load Balancer Administration.

2.11.2. Preparing HAProxy Nodes

The following setup assumes two HAProxy nodes named haproxy and haproxy2 and two Ceph Object Gateway servers named rgw1 and rgw2. You may use any naming convention you prefer. Perform the following procedure on at least two HAProxy nodes:

  1. Install RHEL 7.x.
  2. Register the nodes.

    [root@haproxy]# subscription-manager register
  3. Enable the RHEL server repository.

    [root@haproxy]# subscription-manager repos --enable=rhel-7-server-rpms
  4. Update the server.

    [root@haproxy]# yum update -y
  5. Install administration tools, such as wget and vim, as needed.
  6. Open port 80.

    [root@haproxy]# firewall-cmd --zone=public --add-port 80/tcp --permanent
    [root@haproxy]# firewall-cmd --reload
  7. For HTTPS, open port 443.

    [root@haproxy]# firewall-cmd --zone=public --add-port 443/tcp --permanent
    [root@haproxy]# firewall-cmd --reload

2.11.3. Installing and Configuring keepalived

Perform the following procedure on at least two HAProxy nodes:

Prerequisites

  • A minimum of two HAProxy nodes.
  • A minimum of two Object Gateway nodes.

Procedure

  1. Install keepalived:

    [root@haproxy]# yum install -y keepalived
  2. Configure keepalived on both HAProxy nodes:

    [root@haproxy]# vim /etc/keepalived/keepalived.conf

    In the configuration file, there is a script to check the haproxy processes:

    vrrp_script chk_haproxy {
      script "killall -0 haproxy" # check the haproxy process
      interval 2 # every 2 seconds
      weight 2 # add 2 points if OK
    }

    Next, the VRRP instance on the master and backup load balancers uses eno1 as the network interface and assigns a virtual IP address, 192.168.1.20.

    Master load balancer node

    vrrp_instance RGW {
        state MASTER # might not be necessary. This is on the Master LB node.
        @main interface eno1
        priority 100
        advert_int 1
        interface eno1
        virtual_router_id 50
        @main unicast_src_ip 10.8.128.43 80
        unicast_peer {
               10.8.128.53
               }
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.1.20
        }
        track_script {
          chk_haproxy
        }
    }
    virtual_server 192.168.1.20 80 eno1 { #populate correct interface
        delay_loop 6
        lb_algo wlc
        lb_kind dr
        persistence_timeout 600
        protocol TCP
        real_server 10.8.128.43 80 { # ip address of rgw2 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar
            weight 100
            TCP_CHECK { # perhaps change these to a HTTP/SSL GET?
                connect_timeout 3
            }
        }
        real_server 10.8.128.53 80 { # ip address of rgw3 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar
            weight 100
            TCP_CHECK { # perhaps change these to a HTTP/SSL GET?
                connect_timeout 3
            }
        }
    }

    Backup load balancer node

    vrrp_instance RGW {
        state BACKUP # might not be necessary?
        priority 99
        advert_int 1
        interface eno1
        virtual_router_id 50
        unicast_src_ip 10.8.128.53 80
        unicast_peer {
               10.8.128.43
               }
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.1.20
        }
        track_script {
          chk_haproxy
        }
    }
    virtual_server 192.168.1.20 80 eno1 { #populate correct interface
        delay_loop 6
        lb_algo wlc
        lb_kind dr
        persistence_timeout 600
        protocol TCP
        real_server 10.8.128.43 80 { # ip address of rgw2 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar
            weight 100
            TCP_CHECK { # perhaps change these to a HTTP/SSL GET?
                connect_timeout 3
            }
        }
        real_server 10.8.128.53 80 { # ip address of rgw3 on physical interface, haproxy listens here, rgw listens to localhost:8080 or similar
            weight 100
            TCP_CHECK { # perhaps change these to a HTTP/SSL GET?
                connect_timeout 3
            }
        }
    }

  3. Enable and start the keepalived service:

    [root@haproxy]# systemctl enable keepalived
    [root@haproxy]# systemctl start keepalived

2.11.4. Installing and Configuring HAProxy

Perform the following procedure on at least two HAProxy nodes:

  1. Install haproxy.

    [root@haproxy]# yum install haproxy
  2. Configure haproxy for SELinux and HTTP.

    [root@haproxy]# vim /etc/firewalld/services/haproxy-http.xml

    Add the following lines:

    <?xml version="1.0" encoding="utf-8"?>
    <service>
    <short>HAProxy-HTTP</short>
    <description>HAProxy load-balancer</description>
    <port protocol="tcp" port="80"/>
    </service>

    As root, assign the correct SELinux context and file permissions to the haproxy-http.xml file.

    [root@haproxy]# cd /etc/firewalld/services
    [root@haproxy]# restorecon haproxy-http.xml
    [root@haproxy]# chmod 640 haproxy-http.xml
  3. If you intend to use HTTPS, configure haproxy for SELinux and HTTPS.

    [root@haproxy]# vim /etc/firewalld/services/haproxy-https.xml

    Add the following lines:

    <?xml version="1.0" encoding="utf-8"?>
    <service>
    <short>HAProxy-HTTPS</short>
    <description>HAProxy load-balancer</description>
    <port protocol="tcp" port="443"/>
    </service>

    As root, assign the correct SELinux context and file permissions to the haproxy-https.xml file.

    # cd /etc/firewalld/services
    # restorecon haproxy-https.xml
    # chmod 640 haproxy-https.xml
  4. If you intend to use HTTPS, generate keys for SSL. If you do not have a certificate, you may use a self-signed certificate. To generate a key, see the Generating a New Key and Certificate section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.

    Finally, put the certificate and key into a PEM file.

    [root@haproxy]# cat example.com.crt example.com.key > example.com.pem
    [root@haproxy]# cp example.com.pem /etc/ssl/private/
  5. Configure haproxy.

    [root@haproxy]# vim /etc/haproxy/haproxy.cfg

    The global and defaults sections may remain unchanged. After the defaults section, configure the frontend and backend sections. For example:

    frontend http_web *:80
        mode http
        default_backend rgw
    
    frontend rgw-https
      bind *:443 ssl crt /etc/ssl/private/example.com.pem
      default_backend rgw
    
    backend rgw
        balance roundrobin
        mode http
        server  rgw1 10.0.0.71:80 check
        server  rgw2 10.0.0.80:80 check

    For a detailed discussion of HAProxy configuration, refer to HAProxy Configuration.

  6. Enable and start haproxy.

    [root@haproxy]# systemctl enable haproxy
    [root@haproxy]# systemctl start haproxy
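
Optionally, verify that the configuration file parses cleanly; haproxy -c checks the file without affecting the running service:

[root@haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg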

2.11.5. Testing the HAProxy Configuration

On your HAProxy nodes, check that the virtual IP address from your keepalived configuration appears.

[root@haproxy]# ip addr show

On your calamari node, see if you can reach the gateway nodes via the load balancer configuration. For example:

[root@haproxy]# wget haproxy

This should return the same result as:

[root@haproxy]# wget rgw1

If it returns an index.html file with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
	<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
		<Owner>
			<ID>anonymous</ID>
			<DisplayName></DisplayName>
		</Owner>
		<Buckets>
		</Buckets>
	</ListAllMyBucketsResult>

Then, your configuration is working properly.

2.12. Configuring Gateways for Static Web Hosting

Traditional web hosting involves setting up a web server for each website, which can use resources inefficiently when content does not change dynamically. The Ceph Object Gateway can host static web sites in S3 buckets, that is, sites that do not use server-side services like PHP, servlets, databases, Node.js, and the like. This approach is substantially more economical than setting up VMs with web servers for each site.

2.12.1. Static Web Hosting Assumptions

Static web hosting requires at least one running Ceph Storage Cluster, and at least two Ceph Object Gateway instances for static web sites. Red Hat assumes that each zone will have multiple gateway instances load balanced by HAProxy/keepalived.

See Configuring HAProxy/keepalived for additional details on HAProxy/keepalived.

Note

Red Hat DOES NOT support using a Ceph Object Gateway instance to deploy both standard S3/Swift APIs and static web hosting simultaneously.

2.12.2. Static Web Hosting Requirements

Static web hosting functionality uses its own API, so configuring a gateway to use static web sites in S3 buckets requires the following:

  1. S3 static web hosting uses Ceph Object Gateway instances that are separate and distinct from instances used for standard S3/Swift API use cases.
  2. Gateway instances hosting S3 static web sites should have separate, non-overlapping domain names from the standard S3/Swift API gateway instances.
  3. Gateway instances hosting S3 static web sites should use separate public-facing IP addresses from the standard S3/Swift API gateway instances.
  4. Gateway instances hosting S3 static web sites load balance, and if necessary terminate SSL, using HAProxy/keepalived.

2.12.3. Static Web Hosting Gateway Setup

To enable a gateway for static web hosting, edit the Ceph configuration file and add the following settings:

[client.rgw.<STATIC-SITE-HOSTNAME>]
...
rgw_enable_static_website = true
rgw_enable_apis = s3, s3website
rgw_dns_name = objects-zonegroup.domain.com
rgw_dns_s3website_name = objects-website-zonegroup.domain.com
rgw_resolve_cname = true
...

The rgw_enable_static_website setting MUST be true. The rgw_enable_apis setting MUST enable the s3website API. The rgw_dns_name and rgw_dns_s3website_name settings must provide fully qualified domain names. If the site will use canonical name extensions, set rgw_resolve_cname to true.

Important

The FQDNs of rgw_dns_name and rgw_dns_s3website_name MUST NOT overlap.

2.12.4. Static Web Hosting DNS Configuration

The following is an example of assumed DNS settings, where the first two lines specify the domains of the gateway instance using a standard S3 interface and point to its IPv4 and IPv6 addresses respectively. The third line provides a wildcard CNAME setting for S3 buckets using canonical name extensions. The fourth and fifth lines specify the domains for the gateway instance using the S3 website interface and point to its IPv4 and IPv6 addresses respectively.

objects-zonegroup.domain.com. IN    A 192.0.2.10
objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10
*.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com.
objects-website-zonegroup.domain.com. IN    A 192.0.2.20
objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20
Note

The IP addresses in the first two lines differ from the IP addresses in the fourth and fifth lines.

If using Ceph Object Gateway in a multi-site configuration, consider using a routing solution to route traffic to the gateway closest to the client.

Amazon Web Services (AWS) requires static web host buckets to match the host name. Ceph provides a few different ways to configure the DNS, and HTTPS will work if the proxy has a matching certificate.

Hostname to a Bucket on a Subdomain

To use AWS-style S3 subdomains, use a wildcard in the DNS entry, which can redirect requests to any bucket. A DNS entry might look like the following:

*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.

Access the bucket in the following manner:

http://bucket1.objects-website-zonegroup.domain.com

Where the bucket name is bucket1.

Hostname to Non-Matching Bucket

Ceph supports mapping domain names to buckets without including the bucket name in the request, which is unique to Ceph Object Gateway. To use a domain name to access a bucket, map the domain name to the bucket name. A DNS entry might look like the following:

www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.

Where the bucket name is bucket2.

Access the bucket in the following manner:

http://www.example.com

Hostname to Long Bucket with CNAME

AWS typically requires the bucket name to match the domain name. To configure the DNS for static web hosting using CNAME, the DNS entry might look like the following:

www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.

Access the bucket in the following manner:

http://www.example.com

Hostname to Long Bucket without CNAME

If the DNS name contains other non-CNAME records such as SOA, NS, MX or TXT, the DNS record must map the domain name directly to the IP address. For example:

www.example.com. IN A 192.0.2.20
www.example.com. IN AAAA 2001:DB8::192:0:2:20

Access the bucket in the following manner:

http://www.example.com

2.12.5. Creating a Static Web Hosting Site

To create a static website perform the following steps:

  1. Create an S3 bucket. The bucket name MAY be the same as the website’s domain name. For example, mysite.com may have a bucket name of mysite.com. This is required for AWS, but it is NOT required for Ceph. See DNS Settings for details.
  2. Upload the static website content to the bucket, as shown in the example after this list. Contents may include HTML, CSS, client-side JavaScript, images, audio/video content and other downloadable files. A website MUST have an index.html file and MAY have an error.html file.
  3. Verify the website’s contents. At this point, only the creator of the bucket has access to the contents.
  4. Set permissions on the files so that they are publicly readable.
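
The following is a hedged sketch of steps 1, 2 and 4 using the python-boto connection from the Test S3 Access section; the bucket name and file paths are placeholders:

# conn is a boto S3 connection created as in s3test.py
bucket = conn.create_bucket('mysite.com')

# Upload the site index and make it publicly readable.
index = bucket.new_key('index.html')
index.set_contents_from_filename('/tmp/index.html', policy='public-read')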

2.13. Exporting the Namespace to NFS-Ganesha

In Red Hat Ceph Storage 3, the Ceph Object Gateway provides the ability to export S3 object namespaces by using NFS version 3 and NFS version 4.1 for production systems.

Note

The NFS Ganesha feature is not for general use, but rather for migration to an S3 cloud only.

The implementation conforms to Amazon Web Services (AWS) hierarchical namespace conventions which map UNIX-style path names onto S3 buckets and objects. The top level of the attached namespace, which is subordinate to the NFSv4 pseudo root if present, consists of the Ceph Object Gateway S3 buckets, where buckets are represented as NFS directories. Objects within a bucket are presented as NFS file and directory hierarchies, following S3 conventions. Operations to create files and directories are supported.

Note

Creating or deleting hard or soft links IS NOT supported. Performing rename operations on buckets or directories IS NOT supported via NFS, but rename on files IS supported within and between directories, and between a file system and an NFS mount. File rename operations are more expensive when conducted over NFS, as they change the target directory and typically force a full readdir to refresh it.

Note

Editing files via the NFS mount IS NOT supported.

Note

The Ceph Object Gateway requires applications to write sequentially from offset 0 to the end of a file. Attempting to write out of order causes the upload operation to fail. To work around this issue, use utilities like cp, cat, or rsync when copying files into NFS space. Always mount with the sync option.

The Ceph Object Gateway with NFS is based on an in-process library packaging of the Gateway server and a File System Abstraction Layer (FSAL) namespace driver for the NFS-Ganesha NFS server. At runtime, an instance of the Ceph Object Gateway daemon with NFS combines a full Ceph Object Gateway daemon, albeit without the Civetweb HTTP service, with an NFS-Ganesha instance in a single process. To make use of this feature, deploy NFS-Ganesha version 2.3.2 or later.

Perform the steps in the Before you Start and Configuring an NFS-Ganesha Instance procedures on the host that will contain the NFS-Ganesha (nfs-ganesha-rgw) instance.

Running Multiple NFS Gateways

Each NFS-Ganesha instance acts as a full gateway endpoint, with the current limitation that an NFS-Ganesha instance cannot be configured to export HTTP services. As with ordinary gateway instances, any number of NFS-Ganesha instances can be started, exporting the same or different resources from the cluster. This enables the clustering of NFS-Ganesha instances. However, this does not imply high availability.

When regular gateway instances and NFS-Ganesha instances overlap the same data resources, the data is accessible from both the standard S3 API and through the NFS-Ganesha export. You can co-locate the NFS-Ganesha instance with a Ceph Object Gateway instance on the same host.

Before you Start

  1. Disable any running kernel NFS service instances on any host that will run NFS-Ganesha before attempting to run NFS-Ganesha. NFS-Ganesha will not start if another NFS instance is running.
  2. As root, enable the Red Hat Ceph Storage 3 Tools repository:

    # subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms
  3. Make sure that the rpcbind service is running:

    # systemctl start rpcbind
    Note

    The rpcbind service is provided by the rpcbind package, which is usually installed by default. If that is not the case, install the package first.

    For details on how NFS uses rpcbind, see the Required Services section in the Storage Administration Guide for Red Hat Enterprise Linux 7.

  4. If the nfs-server service is running, stop and disable it:

    # systemctl stop nfs-server.service
    # systemctl disable nfs-server.service

Configuring an NFS-Ganesha Instance

  1. Install the nfs-ganesha-rgw package:

    # yum install nfs-ganesha-rgw
  2. Copy the Ceph configuration file from a Ceph Monitor node to the /etc/ceph/ directory of the NFS-Ganesha host, and edit it as necessary:

    # scp <mon-host>:/etc/ceph/ceph.conf <nfs-ganesha-rgw-host>:/etc/ceph
    Note

    The Ceph configuration file must contain a valid [client.rgw.{instance-name}] section and corresponding parameters for the various required Gateway configuration variables such as rgw_data, keyring, or rgw_frontends. If exporting Swift containers that do not conform to valid S3 bucket naming requirements, set rgw_relaxed_s3_bucket_names to true in the [client.rgw] section of the Ceph configuration file. For example, if a Swift container name contains underscores, it is not a valid S3 bucket name and will not get synchronized unless rgw_relaxed_s3_bucket_names is set to true. When adding objects and buckets outside of NFS, those objects will appear in the NFS namespace in the time set by rgw_nfs_namespace_expire_secs, which is about 5 minutes by default. Override the default value for rgw_nfs_namespace_expire_secs in the Ceph configuration file to change the refresh rate.

  3. Open the NFS-Ganesha configuration file:

    # vim /etc/ganesha/ganesha.conf
  4. Configure the EXPORT section with an FSAL (File System Abstraction Layer) block. Provide an ID, S3 user ID, S3 access key, and secret. For NFSv4, it should look something like this:

    EXPORT
    {
            Export_ID={numeric-id};
            Path = "/";
            Pseudo = "/";
            Access_Type = RW;
            SecType = "sys";
            NFS_Protocols = 4;
            Transport_Protocols = TCP;
            Squash = No_Root_Squash;
    
            FSAL {
                    Name = RGW;
                    User_Id = {s3-user-id};
                    Access_Key_Id ="{s3-access-key}";
                    Secret_Access_Key = "{s3-secret}";
            }
    }

    The Path option instructs Ganesha where to find the export. For the VFS FSAL, this is the location within the server’s namespace. For other FSALs, it may be the location within the filesystem managed by that FSAL’s namespace. For example, if the Ceph FSAL is used to export an entire CephFS volume, Path would be /.

    The Pseudo option instructs Ganesha where to place the export within NFS v4’s pseudo file system namespace. NFS v4 specifies the server may construct a pseudo namespace that may not correspond to any actual locations of exports, and portions of that pseudo filesystem may exist only within the realm of the NFS server and not correspond to any physical directories. Further, an NFS v4 server places all its exports within a single namespace. It is possible to have a single export exported as the pseudo filesystem root, but it is much more common to have multiple exports placed in the pseudo filesystem. With a traditional VFS, often the Pseudo location is the same as the Path location. Returning to the example CephFS export with / as the Path, if multiple exports are desired, the export would likely have something else as the Pseudo option. For example, /ceph.

    Any EXPORT block which should support NFSv3 should include version 3 in the NFS_Protocols setting. Additionally, NFSv3 is the last major version to support the UDP transport. Early versions of the standard included UDP, but RFC 7530 forbids its use. To enable UDP, include it in the Transport_Protocols setting. For example:

    EXPORT {
    ...
        NFS_Protocols = 3,4;
        Transport_Protocols = UDP,TCP;
    ...
    }

    Setting SecType = sys; allows clients to attach without Kerberos authentication.

    Setting Squash = No_Root_Squash; enables a user to change directory ownership in the NFS mount.

    NFS clients using a conventional OS-native NFS 4.1 client typically see a federated namespace of exported file systems defined by the destination server’s pseudofs root. Any number of these can be Ceph Object Gateway exports.

    Each export has its own tuple of name, User_Id, Access_Key, and Secret_Access_Key and creates a proxy of the object namespace visible to the specified user.

    An export in ganesha.conf can also contain an NFSV4 block. Red Hat Ceph Storage supports the Allow_Numeric_Owners and Only_Numeric_Owners parameters as an alternative to setting up the idmapper program.

    NFSV4 {
        Allow_Numeric_Owners = true;
        Only_Numeric_Owners = true;
    }
  5. Configure an NFS_CORE_PARAM block.

    NFS_CORE_PARAM {
        mount_path_pseudo = true;
    }

    When the mount_path_pseudo configuration setting is set to true, NFS v3 and NFS v4.x mounts use the same server-side path to reach an export, for example:

        mount -o vers=3 <IP ADDRESS>:/export /mnt
        mount -o vers=4 <IP ADDRESS>:/export /mnt
    Path            Pseudo          Tag     Mechanism   Mount
    /export/test1   /export/test1   test1   v3 Pseudo   mount -o vers=3 server:/export/test1
    /export/test1   /export/test1   test1   v3 Tag      mount -o vers=3 server:test1
    /export/test1   /export/test1   test1   v4 Pseudo   mount -o vers=4 server:/export/test1
    /               /export/ceph1   ceph1   v3 Pseudo   mount -o vers=3 server:/export/ceph1
    /               /export/ceph1   ceph1   v3 Tag      mount -o vers=3 server:ceph1
    /               /export/ceph1   ceph1   v4 Pseudo   mount -o vers=4 server:/export/ceph1
    /               /export/ceph2   ceph2   v3 Pseudo   mount -o vers=3 server:/export/ceph2
    /               /export/ceph2   ceph2   v3 Tag      mount -o vers=3 server:ceph2
    /               /export/ceph2   ceph2   v4 Pseudo   mount -o vers=4 server:/export/ceph2

    When the mount_path_pseudo configuration setting is set to false, NFS v3 mounts use the Path option and NFS v4.x mounts use the Pseudo option.

    Path            Pseudo          Tag     Mechanism   Mount
    /export/test1   /export/test1   test1   v3 Path     mount -o vers=3 server:/export/test1
    /export/test1   /export/test1   test1   v3 Tag      mount -o vers=3 server:test1
    /export/test1   /export/test1   test1   v4 Pseudo   mount -o vers=4 server:/export/test1
    /               /export/ceph1   ceph1   v3 Path     mount -o vers=3 server:/
    /               /export/ceph1   ceph1   v3 Tag      mount -o vers=3 server:ceph1
    /               /export/ceph1   ceph1   v4 Pseudo   mount -o vers=4 server:/export/ceph1
    /               /export/ceph2   ceph2   v3 Path     not accessible
    /               /export/ceph2   ceph2   v3 Tag      mount -o vers=3 server:ceph2
    /               /export/ceph2   ceph2   v4 Pseudo   mount -o vers=4 server:/export/ceph2
  6. Configure the RGW section. Specify the name of the instance, provide a path to the Ceph configuration file, and specify any initialization arguments:

    RGW {
        name = "client.rgw.{instance-name}";
        ceph_conf = "/etc/ceph/ceph.conf";
        init_args = "--{arg}={arg-value}";
    }
  7. Save the /etc/ganesha/ganesha.conf configuration file.
  8. Enable and start the nfs-ganesha service.

    # systemctl enable nfs-ganesha
    # systemctl start nfs-ganesha
  9. For very large pseudo directories, set the configurable parameter rgw_nfs_s3_fast_attrs to true in the ceph.conf file to make the namespace immutable and accelerated:

    rgw_nfs_s3_fast_attrs = true
  10. Restart the Ceph Object Gateway service from each gateway node:

    # systemctl restart ceph-radosgw.target

Configuring NFSv4 clients

To access the namespace, mount the configured NFS-Ganesha export(s) into desired locations in the local POSIX namespace. As noted, this implementation has a few unique restrictions:

  • Only the NFS 4.1 and higher protocol flavors are supported.
  • To enforce write ordering, use the sync mount option.

To mount the NFS-Ganesha exports, add the following entry to the /etc/fstab file on the client host:

<ganesha-host-name>:/ <mount-point> nfs noauto,soft,nfsvers=4.1,sync,proto=tcp 0 0

Specify the NFS-Ganesha host name and the path to the mount point on the client.
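
Alternatively, mount the export manually for a quick test. This sketch assumes the same options as the fstab entry above and an arbitrary mount point of /mnt/rgw-nfs:

# mkdir -p /mnt/rgw-nfs
# mount -t nfs -o soft,nfsvers=4.1,sync,proto=tcp <ganesha-host-name>:/ /mnt/rgw-nfs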

Note

To successfully mount the NFS-Ganesha exports, the /sbin/mount.nfs file must exist on the client. The nfs-utils package provides this file. In most cases, the package is installed by default. However, verify that the nfs-utils package is installed on the client, and if not, install it.

For additional details on NFS, see the Network File System (NFS) chapter in the Storage Administration Guide for Red Hat Enterprise Linux 7.

Configuring NFSv3 clients

Linux clients can be configured to mount with NFSv3 by supplying nfsvers=3 and noacl as mount options. To use UDP as the transport, add proto=udp to the mount options. However, TCP is the preferred protocol.

<ganesha-host-name>:/ <mount-point> nfs noauto,noacl,soft,nfsvers=3,sync,proto=tcp 0 0
Note

Configure the NFS Ganesha EXPORT block NFS_Protocols setting with version 3 and the Transport_Protocols setting with UDP if the mount will use version 3 with UDP.

Since NFSv3 does not communicate client OPEN and CLOSE operations to file servers, RGW NFS cannot use these operations to mark the beginning and ending of file upload transactions. Instead, RGW NFS attempts to start a new upload when the first write is sent to a file at offset 0, and finalizes the upload when no new writes to the file have been seen for a period of time, by default 10 seconds. To change this value, set a value for rgw_nfs_write_completion_interval_s in the RGW section(s) of the Ceph configuration file.
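
For example, a hedged sketch that raises the completion interval to 60 seconds; the value and the client section placement are illustrative assumptions:

[client.rgw.{instance-name}]
rgw_nfs_write_completion_interval_s = 60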