4.6. Configure the Object Storage Service

4.6.1. Create the Object Storage Service Identity Records

This section assumes that you have already created an administrator account and the services tenant.
In this procedure, you will:
  1. Create the swift user, who has the admin role in the services tenant.
  2. Create the swift service entry and assign it an endpoint.
To proceed, you must have already performed the following (using the Identity service):
  1. Created an Administrator role named admin (refer to Section 3.8, “Create an Administrator Account” for instructions)
  2. Created the services tenant (refer to Section 3.10, “Create the Services Tenant” for instructions)

Note

The Deploying OpenStack: Learning Environments guide uses one tenant for all service users. For more information, refer to Section 3.10, “Create the Services Tenant”.
You can perform this procedure from your Identity service server or on any machine to which you have copied the keystonerc_admin file (which contains administrator credentials) and on which the keystone command-line utility is installed.

Procedure 4.2. Configuring the Object Storage Service to authenticate through the Identity Service

  1. Set up the shell to access Keystone as the admin user:
    # source ~/keystonerc_admin
  2. Create the swift user and set its password by replacing PASSWORD with your chosen password:
    # keystone user-create --name swift --pass PASSWORD
  3. Add the swift user to the services tenant with the admin role:
    # keystone user-role-add --user swift --role admin --tenant services
  4. Create the swift Object Storage service entry:
    # keystone service-create --name swift --type object-store \
        --description "Swift Storage Service"
  5. Create the swift endpoint entry:
    # keystone endpoint-create \
        --service swift \
        --publicurl "http://IP:8080/v1/AUTH_\$(tenant_id)s" \
        --adminurl "http://IP:8080/v1" \
        --internalurl "http://IP:8080/v1/AUTH_\$(tenant_id)s"
    Replace IP with the IP address or fully qualified domain name of the system hosting the Object Storage Proxy service.
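    To confirm that the records were created, you can list them with the keystone command-line client:
    # keystone service-list
    # keystone endpoint-list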
You have configured the Identity service to work with the Object Storage service.

4.6.2. Configure the Object Storage Service Storage Nodes

The Object Storage Service stores objects on the filesystem, usually on a number of connected physical storage devices. All of the devices that will be used for object storage must be formatted with ext4 or XFS, and mounted under the /srv/node/ directory. All of the services that will run on a given node must be enabled, and the ports they use must be opened.
While you can run the proxy service alongside the other services, the proxy service is not covered in this procedure.

Procedure 4.3. Configuring the Object Storage Service storage nodes

  1. Format your devices using the ext4 or XFS filesystem. Make sure that xattrs are enabled.
  2. Add your devices to the /etc/fstab file to ensure that they are mounted under /srv/node/ at boot time.
Use the blkid command to find the device's universally unique identifier (UUID), and refer to the device by that UUID in its /etc/fstab entry.

    Note

    If using ext4, ensure that extended attributes are enabled by mounting the filesystem with the user_xattr option. (In XFS, extended attributes are enabled by default.)
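    For example, a hypothetical /etc/fstab entry for an ext4 device (substitute the UUID reported by blkid and your own mount point):
    UUID=a0a1a2a3-b4b5-c6c7-d8d9-e0e1e2e3e4e5 /srv/node/d1 ext4 noatime,user_xattr 0 0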
  3. For Red Hat Enterprise Linux 6-based systems, configure the firewall to open the TCP ports used by each service running on each node.
By default, the account service uses port 6002, the container service uses port 6001, and the object service uses port 6000. Port 873 is used by rsync, which handles replication between nodes.
    1. Open the /etc/sysconfig/iptables file in a text editor.
2. Add an INPUT rule allowing TCP traffic on the ports used by the account, container, and object services, as well as port 873 for rsync. The new rule must appear before any reject-with icmp-host-prohibited rule.
      -A INPUT -p tcp -m multiport --dports 6000,6001,6002,873 -j ACCEPT
    3. Save the changes to the /etc/sysconfig/iptables file.
    4. Restart the iptables service for the firewall changes to take effect.
      # service iptables restart
  4. For Red Hat Enterprise Linux 7-based systems, configure the firewall to open the TCP ports used by each service running on each node.
    By default, the account service uses port 6002, the container service uses port 6001, and the object service uses port 6000. Port 873 is used by rsync, which handles replication between nodes.
    1. Add rules allowing TCP traffic on the ports used by the account, container, and object services, as well as port 873 for rsync.
      # firewall-cmd --permanent --add-port=6000-6002/tcp
      # firewall-cmd --permanent --add-port=873/tcp
    2. For the change to take immediate effect, add the rules to the runtime mode:
      # firewall-cmd --add-port=6000-6002/tcp
      # firewall-cmd --add-port=873/tcp
  5. Change the owner of the contents of /srv/node/ to swift:swift with the chown command.
    # chown -R swift:swift /srv/node/
6. Set the SELinux context correctly for all directories under /srv/node/ with the restorecon command.
    # restorecon -R /srv
  7. Use the openstack-config command to add a hash prefix (swift_hash_path_prefix) to your /etc/swift/swift.conf:
    # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \
                  $(openssl rand -hex 10)
  8. Use the openstack-config command to add a hash suffix (swift_hash_path_suffix) to your /etc/swift/swift.conf:
    # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \
                  $(openssl rand -hex 10)
    These values are used when determining where data is placed across your nodes, so they must be identical on every node. Back up /etc/swift/swift.conf.
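    If your version of openstack-config supports the --get action, you can confirm the values that were written:
    # openstack-config --get /etc/swift/swift.conf swift-hash swift_hash_path_prefix
    # openstack-config --get /etc/swift/swift.conf swift-hash swift_hash_path_suffix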
  9. Use the openstack-config command to set the IP address your storage services will listen on. Run these commands for every service on every node in your Object Storage cluster.
    # openstack-config --set /etc/swift/object-server.conf \
        DEFAULT bind_ip node_ip_address
    # openstack-config --set /etc/swift/account-server.conf \
        DEFAULT bind_ip node_ip_address
    # openstack-config --set /etc/swift/container-server.conf \
        DEFAULT bind_ip node_ip_address
    The DEFAULT argument specifies the DEFAULT section of the service configuration file. Replace node_ip_address with the IP address of the node you are configuring.
  10. Copy /etc/swift/swift.conf from the node you are currently configuring, to all of your Object Storage Service nodes.

    Important

    The /etc/swift/swift.conf file must be identical on all of your Object Storage Service nodes.
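    For example, using scp (replace node_ip_address with the address of each remote node):
    # scp /etc/swift/swift.conf node_ip_address:/etc/swift/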
  11. Start the services which will run on your node.
    # service openstack-swift-account start
    # service openstack-swift-container start
    # service openstack-swift-object start
  12. Use the chkconfig command to make sure the services automatically start at boot time.
    # chkconfig openstack-swift-account on
    # chkconfig openstack-swift-container on
    # chkconfig openstack-swift-object on
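    You can confirm that all three services are registered to start at boot:
    # chkconfig --list | grep openstack-swift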
All of the devices that your node will provide as object storage have been formatted and mounted under /srv/node/. The services that run on the node have been enabled, and the ports they use have been opened.

4.6.3. Configure the Object Storage Service Proxy Service

The Object Storage proxy service determines the node to which GET and PUT requests are directed.
Although you can run the account, container, and object services alongside the proxy service, only the proxy service is covered in the following procedure.

Note

Because the SSL capability built into the Object Storage service is intended primarily for testing, it is not recommended for use in production. In a production cluster, Red Hat recommends that you use a load balancer to terminate SSL connections.

Procedure 4.4. Configuring the Object Storage Service proxy service

  1. Update the configuration file for the proxy server with the correct authentication details for the appropriate service user:
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken auth_host IP
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_tenant_name services
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_user swift
    # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_password PASSWORD
    Where:
    • IP - The IP address or host name of the Identity server.
    • services - The name of the tenant that was created for the use of the Object Storage service (previous examples set this to services).
    • swift - The name of the service user that was created for the Object Storage service (previous examples set this to swift).
    • PASSWORD - The password associated with the service user.
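    To review a value after setting it, you can read it back with openstack-config (assuming your version supports the --get action); for example:
    # openstack-config --get /etc/swift/proxy-server.conf filter:authtoken admin_user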
  2. Start the memcached and openstack-swift-proxy services using the service command:
    # service memcached start
    # service openstack-swift-proxy start
  3. Enable the memcached and openstack-swift-proxy services permanently using the chkconfig command:
    # chkconfig memcached on
    # chkconfig openstack-swift-proxy on
  4. For Red Hat Enterprise Linux 6-based systems, allow incoming connections to the Swift proxy server by adding this firewall rule to the /etc/sysconfig/iptables configuration file:
    -A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT

    Important

    This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information about creating more restrictive firewall rules, refer to the Red Hat Enterprise Linux Security Guide.
  5. For Red Hat Enterprise Linux 7-based systems, allow incoming connections to the Swift proxy server by adding this firewall rule:
    # firewall-cmd --permanent --add-port=8080/tcp

    Important

    This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information about creating more restrictive firewall rules, refer to the Red Hat Enterprise Linux Security Guide.
  6. For Red Hat Enterprise Linux 7-based systems, add the rule to the runtime mode so that the change takes immediate effect:
    # firewall-cmd --add-port=8080/tcp
  7. For Red Hat Enterprise Linux 6-based systems, use the service command to restart the iptables service so that the rule added to /etc/sysconfig/iptables takes effect:
    # service iptables restart
The Object Storage Service proxy service is now listening for HTTP PUT and GET requests on port 8080, and directing them to the appropriate nodes.

4.6.4. Object Storage Service Rings

Rings determine where data is stored in a cluster of storage nodes. Ring files are generated using the swift-ring-builder tool. Three ring files are required, one each for the object, container, and account services.
Each storage device in a cluster is divided into partitions, with a recommended minimum of 100 partitions per device. Each partition is physically a directory on disk.
A configurable number of bits from the MD5 hash of the filesystem path to the partition directory, known as the partition power, is used as a partition index for the device. The partition count of a cluster that has 1000 devices, where each device has 100 partitions on it, is 100 000.
The partition count is used to calculate the partition power, where 2 raised to the partition power equals the partition count. When the calculated partition power is a fraction, it is rounded up. For a partition count of 100 000, the partition power is 17 (log2(100 000) ≈ 16.610, rounded up).
Expressed mathematically: 2 ^ partition power = partition count.
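For example, you can compute the partition power for the cluster described above (1000 devices with 100 partitions each) from the shell, assuming Python is available:
    # python -c "import math; print(int(math.ceil(math.log(1000 * 100, 2))))"
    17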
Ring files are generated using three parameters: partition power, replica count, and the minimum amount of time that must pass between partition reassignments.
A fourth parameter, zone, is used when adding devices to rings. Zones are a flexible abstraction, where each zone should be as separated from other zones as possible in your specific deployment. You can use a zone to represent sites, cabinets, nodes, or even devices.

Table 4.2. Parameters used when building ring files

Ring File Parameter   Description
part_power            2 ^ partition power = partition count. The partition power is rounded up after calculation.
replica_count         The number of times that your data is replicated in the cluster.
min_part_hours        The minimum number of hours before a partition can be moved. This parameter increases the availability of data by ensuring that no more than one copy of a given data item is moved within any min_part_hours period.

4.6.5. Build Object Storage Service Ring Files

Three ring files need to be created: one to track the objects stored by the Object Storage Service, one to track the containers that objects are placed in, and one to track which accounts can access which containers. The ring files are used to deduce where a particular piece of data is stored.

Procedure 4.5. Building Object Storage service ring files

  1. Use the swift-ring-builder command to build one ring for each service. Provide a builder file, a partition power, a replica count, and the minimum number of hours between partition reassignments:
    # swift-ring-builder /etc/swift/object.builder create part_power replica_count min_part_hours
    # swift-ring-builder /etc/swift/container.builder create part_power replica_count min_part_hours
    # swift-ring-builder /etc/swift/account.builder create part_power replica_count min_part_hours
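    For example, with a partition power of 17, three replicas, and a minimum of one hour between partition reassignments (illustrative values; choose values appropriate to your cluster):
    # swift-ring-builder /etc/swift/account.builder create 17 3 1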
  2. When the rings are created, add devices to the account ring.
    # swift-ring-builder /etc/swift/account.builder add zX-SERVICE_IP:6002/dev_mountpt part_count
    Where:
    • X is the corresponding integer of a specified zone (for example, z1 would correspond to Zone One).
    • SERVICE_IP is the IP on which the account, container, and object services should listen. This IP should match the bind_ip value set during the configuration of the Object Storage service Storage Nodes.
    • dev_mountpt is the /srv/node subdirectory under which your device is mounted.
    • part_count is the partition count you used to calculate your partition power; swift-ring-builder uses this value as the device's relative weight in the ring.
    For example, if:
    • The account, container, and object services are configured to listen on 10.64.115.44, in Zone One,
    • Your device is mounted on /srv/node/accounts, and
    • You wish to set a partition count of 100.
    Then run:
    # swift-ring-builder /etc/swift/account.builder add z1-10.64.115.44:6002/accounts 100

    Note

    Repeat this step for each device (on each node in the cluster) you want added to the ring.
  3. In a similar fashion, add devices to both the container and object rings:
    # swift-ring-builder /etc/swift/container.builder add zX-SERVICE_IP:6001/dev_mountpt part_count
    # swift-ring-builder /etc/swift/object.builder add zX-SERVICE_IP:6000/dev_mountpt part_count
    Replace the variables with the same values used in the previous step.
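    Continuing the earlier example, with devices mounted at /srv/node/containers and /srv/node/objects (hypothetical mount points):
    # swift-ring-builder /etc/swift/container.builder add z1-10.64.115.44:6001/containers 100
    # swift-ring-builder /etc/swift/object.builder add z1-10.64.115.44:6000/objects 100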

    Note

    Repeat these commands for each device (on each node in the cluster) you want added to the ring.
  4. Distribute the partitions across the devices in the ring using the swift-ring-builder command's rebalance argument.
    # swift-ring-builder /etc/swift/account.builder rebalance
    # swift-ring-builder /etc/swift/container.builder rebalance
    # swift-ring-builder /etc/swift/object.builder rebalance
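    To review the state of a ring after rebalancing, run swift-ring-builder with only the builder file as an argument; for example:
    # swift-ring-builder /etc/swift/account.builder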
  5. Verify that three ring files now exist in the /etc/swift directory. The command:
    # ls /etc/swift/*gz 
    should reveal:
    /etc/swift/account.ring.gz  /etc/swift/container.ring.gz  /etc/swift/object.ring.gz
    
  6. Ensure that all files in the /etc/swift/ directory including those that you have just created are owned by the root user and swift group.

    Important

    All mount points must be owned by root; all roots of mounted file systems must be owned by swift. Before running the following command, ensure that all devices are already mounted and owned by root.
    # chown -R root:swift /etc/swift
  7. Copy each ring file to each node in the cluster, storing them under /etc/swift/.
    # scp /etc/swift/*.gz node_ip_address:/etc/swift
You have created rings for each of the services that require them, used the builder files to distribute partitions across the nodes in your cluster, and copied the resulting ring files to each node in the cluster.