4.4. Configure the Object Storage Service

4.4.1. Create the Object Storage Service Identity Records

Create and configure Identity service records required by the Object Storage service. These entries provide authentication for the Object Storage service, and guide other OpenStack services attempting to locate and access the functionality provided by the Object Storage service.
This procedure assumes that you have already created an administrative user account and a services tenant.
Perform this procedure on the Identity service server, or on any machine onto which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.

Procedure 4.3. Creating Identity Records for the Object Storage Service

  1. Set up the shell to access keystone as the administrative user:
    # source ~/keystonerc_admin
  2. Create the swift user:
    [(keystone_admin)]# openstack user create --password PASSWORD swift
    +----------+----------------------------------+
    | Field    | Value                            |
    +----------+----------------------------------+
    | email    | None                             |
    | enabled  | True                             |
    | id       | 00916f794cec438ea7f14ee0769e6964 |
    | name     | swift                            |
    | username | swift                            |
    +----------+----------------------------------+
    Replace PASSWORD with a secure password that will be used by the Object Storage service when authenticating with the Identity service.
  3. Link the swift user and the admin role together within the context of the services tenant:
    [(keystone_admin)]# openstack role add --project services --user swift admin
  4. Create the swift Object Storage service entry:
    [(keystone_admin)]# openstack service create --name swift \
        --description "Swift Storage Service" \
        object-store
  5. Create the swift endpoint entry:
    [(keystone_admin)]# openstack endpoint create \
        --publicurl 'http://IP:8080/v1/AUTH_%(tenant_id)s' \
        --adminurl 'http://IP:8080/v1' \
        --internalurl 'http://IP:8080/v1/AUTH_%(tenant_id)s' \
        --region RegionOne \
        swift
    Replace IP with the IP address or fully qualified domain name of the server hosting the Object Storage Proxy service.
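The %(tenant_id)s placeholder in the public and internal URLs is expanded by the Identity service for each tenant. As an illustration only (the address and tenant ID below are hypothetical values, not ones you should use), the resulting public URL for one tenant would look like this:

```shell
# Illustration only: how the publicurl template expands for one tenant.
# Both values below are hypothetical examples.
IP=192.0.2.10
TENANT_ID=3cc1ba84e8e94f1dae2b4d852ea22a6b
PUBLICURL="http://${IP}:8080/v1/AUTH_${TENANT_ID}"
echo "$PUBLICURL"
```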

4.4.2. Configure the Object Storage Service Storage Nodes

The Object Storage service stores objects on the filesystem, usually on a number of connected physical storage devices. All of the devices that will be used for object storage must be formatted with ext4 or XFS and mounted under the /srv/node/ directory. All of the services that will run on a given node must be enabled, and their ports opened.
Although you can run the proxy service alongside the other services, the proxy service is not covered in this procedure.

Procedure 4.4. Configuring the Object Storage Service Storage Nodes

  1. Format your devices using the ext4 or XFS filesystem. Ensure that xattrs are enabled.
  2. Add your devices to the /etc/fstab file to ensure that they are mounted under /srv/node/ at boot time. Use the blkid command to find your device's unique ID, and mount the device using its unique ID.

    Note

    If using ext4, ensure that extended attributes are enabled by mounting the filesystem with the user_xattr option. (In XFS, extended attributes are enabled by default.)
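For example, a hypothetical /etc/fstab entry for an ext4 device might look like the following (the UUID shown is illustrative; use the value reported by blkid for your device):

```
UUID=467102eb-3f63-4b0f-86d0-d8c4f0e7f7f1  /srv/node/sdb  ext4  defaults,user_xattr  0 0
```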
  3. Configure the firewall to open the TCP ports used by each service running on each node. By default, the account service uses port 6202, the container service uses port 6201, and the object service uses port 6200.
    1. Open the /etc/sysconfig/iptables file in a text editor.
    2. Add an INPUT rule allowing TCP traffic on the ports used by the account, container, and object services, as well as port 873, which is used by rsync for replication. The new rule must appear before any reject-with icmp-host-prohibited rule:
      -A INPUT -p tcp -m multiport --dports 6200,6201,6202,873 -j ACCEPT
    3. Save the changes to the /etc/sysconfig/iptables file.
    4. Restart the iptables service for the firewall changes to take effect:
      # systemctl restart iptables.service
  4. Change the owner of the contents of /srv/node/ to swift:swift:
    # chown -R swift:swift /srv/node/
  5. Set the SELinux context correctly for all directories under /srv/node/:
    # restorecon -R /srv
  6. Add a hash prefix to the /etc/swift/swift.conf file:
    # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \
       $(openssl rand -hex 10)
  7. Add a hash suffix to the /etc/swift/swift.conf file:
    # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \
       $(openssl rand -hex 10)
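The prefix and suffix are mixed into the MD5 hash that determines where data is placed, which is why they must be set before any data is stored and must be identical across the cluster. A rough sketch of the hashing (the values below are placeholders, and this is an illustration rather than the exact Swift implementation):

```shell
# Sketch only: Swift mixes the secret prefix and suffix into the MD5 hash
# of each account/container/object path (placeholder values shown).
PREFIX=examplehashprefix
SUFFIX=examplehashsuffix
OBJECT_PATH=/AUTH_test/photos/cat.jpg
HASH=$(printf '%s' "${PREFIX}${OBJECT_PATH}${SUFFIX}" | md5sum | cut -d' ' -f1)
echo "$HASH"
```

Because these hash values drive data placement, changing the prefix or suffix after deployment makes existing data unreachable.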
  8. Set the IP address that the storage services will listen on. Run the following commands for every service on every node in your Object Storage cluster:
    # openstack-config --set /etc/swift/object-server.conf \
       DEFAULT bind_ip NODE_IP_ADDRESS
    # openstack-config --set /etc/swift/account-server.conf \
       DEFAULT bind_ip NODE_IP_ADDRESS
    # openstack-config --set /etc/swift/container-server.conf \
       DEFAULT bind_ip NODE_IP_ADDRESS
    Replace NODE_IP_ADDRESS with the IP address of the node you are configuring.
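After running these commands on a node with the hypothetical address 192.0.2.20, each of the three configuration files would contain a fragment like:

```ini
[DEFAULT]
bind_ip = 192.0.2.20
```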
  9. Copy /etc/swift/swift.conf from the node you are currently configuring to all of your Object Storage service nodes.

    Important

    The /etc/swift/swift.conf file must be identical on all of your Object Storage service nodes.
  10. Start the services that will run on the node:
    # systemctl start openstack-swift-account.service
    # systemctl start openstack-swift-container.service
    # systemctl start openstack-swift-object.service
  11. Configure the services to start at boot time:
    # systemctl enable openstack-swift-account.service
    # systemctl enable openstack-swift-container.service
    # systemctl enable openstack-swift-object.service

4.4.3. Configure the Object Storage Service Proxy Service

The Object Storage proxy service determines the node to which each GET and PUT request is directed.
Although you can run the account, container, and object services alongside the proxy service, only the proxy service is covered in the following procedure.

Note

Because the SSL capability built into the Object Storage service is intended primarily for testing, it is not recommended for use in production. In a production cluster, Red Hat recommends that you use a load balancer to terminate SSL connections.

Procedure 4.5. Configuring the Object Storage Service Proxy Service

  1. Update the configuration file for the proxy server with the correct authentication details for the appropriate service user:
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken auth_host IP
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_tenant_name services
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_user swift
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_password PASSWORD
    Replace the following values:
    • Replace IP with the IP address or host name of the Identity server.
    • Replace services with the name of the tenant that was created for the Object Storage service (previous examples set this to services).
    • Replace swift with the name of the service user that was created for the Object Storage service (previous examples set this to swift).
    • Replace PASSWORD with the password associated with the service user.
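With the example values above, the resulting section of /etc/swift/proxy-server.conf would look like the following (the Identity server address is hypothetical):

```ini
[filter:authtoken]
auth_host = 192.0.2.10
admin_tenant_name = services
admin_user = swift
admin_password = PASSWORD
```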
  2. Start the memcached and openstack-swift-proxy services:
    # systemctl start memcached.service
    # systemctl start openstack-swift-proxy.service
  3. Configure the memcached and openstack-swift-proxy services to start at boot time:
    # systemctl enable memcached.service
    # systemctl enable openstack-swift-proxy.service
  4. Allow incoming connections to the server hosting the Object Storage proxy service. Open the /etc/sysconfig/iptables file in a text editor, and add an INPUT rule allowing TCP traffic on port 8080. The new rule must appear before any INPUT rules that REJECT traffic:
    -A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT

    Important

    This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information regarding the creation of more restrictive firewall rules, see the Red Hat Enterprise Linux Security Guide.
  5. Restart the iptables service to ensure that the change takes effect:
    # systemctl restart iptables.service

4.4.4. Object Storage Service Rings

Rings determine where data is stored in a cluster of storage nodes. Ring files are generated using the swift-ring-builder tool. Three ring files are required, one each for the object, container, and account services.
Each storage device in a cluster is divided into partitions, with a recommended minimum of 100 partitions per device. Each partition is physically a directory on disk. A configurable number of bits from the MD5 hash of the filesystem path to the partition directory, known as the partition power, is used as a partition index for the device. The partition count of a cluster that has 1000 devices, where each device has 100 partitions on it, is 100,000.
The partition count is used to calculate the partition power: 2 raised to the partition power equals the partition count. If the calculated partition power is a fraction, it is rounded up. For a partition count of 100,000, the partition power is 17 (16.61 rounded up). This can be expressed mathematically as: 2^(partition power) = partition count.
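The rounding step can be checked with a short calculation. The following is a sketch using awk; any tool that computes a base-2 logarithm will do:

```shell
# Worked example: 1000 devices x 100 partitions each = 100,000 partitions.
# The partition power is ceil(log2(100000)) = ceil(16.61) = 17.
PART_POWER=$(awk 'BEGIN { n = 100000; p = log(n) / log(2); pp = int(p); if (pp < p) pp++; print pp }')
echo "$PART_POWER"
```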

4.4.5. Build Object Storage Service Ring Files

Three ring files need to be created: one to track the objects stored by the Object Storage Service, one to track the containers in which objects are placed, and one to track which accounts can access which containers. The ring files are used to deduce where a particular piece of data is stored.
Ring files are generated using four possible parameters: partition power, replica count, zone, and the amount of time that must pass between partition reassignments.

Table 4.1. Parameters Used when Building Ring Files

Ring File Parameter Description
part_power
2^(partition power) = partition count. The partition power is rounded up after calculation.
replica_count
The number of times that your data will be replicated in the cluster.
min_part_hours
The minimum number of hours before a partition can be moved. This parameter increases the availability of data by ensuring that no more than one copy of a given data item is moved within any min_part_hours period.
zone
Used when adding devices to rings (optional). Zones are a flexible abstraction; each zone should be as isolated from the other zones as possible in your deployment. You can use a zone to represent sites, cabinets, nodes, or even devices.

Procedure 4.6. Building Object Storage Service Ring Files

  1. Build one ring for each service. Provide a builder file, a partition power, a replica count, and the minimum hours between partition reassignment:
    # swift-ring-builder /etc/swift/object.builder create part_power replica_count min_part_hours
    # swift-ring-builder /etc/swift/container.builder create part_power replica_count min_part_hours
    # swift-ring-builder /etc/swift/account.builder create part_power replica_count min_part_hours
  2. When the rings are created, add devices to the account ring:
    # swift-ring-builder /etc/swift/account.builder add zX-SERVICE_IP:6202/dev_mountpt part_count
    Replace the following values:
    • Replace X with the corresponding integer of a specified zone (for example, z1 would correspond to Zone One).
    • Replace SERVICE_IP with the IP on which the account, container, and object services should listen. This IP should match the bind_ip value set during the configuration of the Object Storage service storage nodes.
    • Replace dev_mountpt with the /srv/node subdirectory under which your device is mounted.
    • Replace part_count with the partition count you used to calculate your partition power.

    Note

    Repeat this step for each device (on each node in the cluster) you want added to the ring.
  3. Add each device to both the container and object rings:
    # swift-ring-builder /etc/swift/container.builder add zX-SERVICE_IP:6201/dev_mountpt part_count
    # swift-ring-builder /etc/swift/object.builder add zX-SERVICE_IP:6200/dev_mountpt part_count
    Replace the variables with the same ones used in the previous step.

    Note

    Repeat these commands for each device (on each node in the cluster) you want added to the ring.
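The per-device add commands can be generated with a loop. The sketch below only echoes the commands for a hypothetical two-node, one-device-per-node cluster; the addresses, zones, device name, and partition count are assumptions for illustration:

```shell
# Hypothetical layout: zone 1 -> 192.0.2.20, zone 2 -> 192.0.2.21,
# one device mounted at /srv/node/sdb on each node, part_count 100.
PART_COUNT=100
for entry in "1 192.0.2.20" "2 192.0.2.21"; do
    set -- $entry
    zone=$1; ip=$2
    echo "swift-ring-builder /etc/swift/account.builder add z${zone}-${ip}:6202/sdb ${PART_COUNT}"
    echo "swift-ring-builder /etc/swift/container.builder add z${zone}-${ip}:6201/sdb ${PART_COUNT}"
    echo "swift-ring-builder /etc/swift/object.builder add z${zone}-${ip}:6200/sdb ${PART_COUNT}"
done
```

Review the echoed commands, then run them (or pipe the output to a shell) on the node that holds the builder files.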
  4. Distribute the partitions across the devices in the ring:
    # swift-ring-builder /etc/swift/account.builder rebalance
    # swift-ring-builder /etc/swift/container.builder rebalance
    # swift-ring-builder /etc/swift/object.builder rebalance
  5. Check to see that you now have three ring files in the directory /etc/swift:
    # ls /etc/swift/*gz 
    The files should be listed as follows:
    /etc/swift/account.ring.gz  /etc/swift/container.ring.gz  /etc/swift/object.ring.gz
    
  6. Restart the openstack-swift-proxy service:
    # systemctl restart openstack-swift-proxy.service
  7. Ensure that all files in the /etc/swift/ directory, including those that you have just created, are owned by the root user and the swift group:

    Important

    All mount points must be owned by root; all roots of mounted file systems must be owned by swift. Before running the following command, ensure that all devices are already mounted and owned by root.
    # chown -R root:swift /etc/swift
  8. Copy each ring builder file to each node in the cluster, storing them under /etc/swift/.