4.6. Configure the Object Storage Service
4.6.1. Create the Object Storage Service Identity Records
This section assumes that you have already created an administrator account and a services tenant; the prerequisites below indicate where each is described.
In this procedure, you will:

- Create the `swift` user, who has the `admin` role in the `services` tenant.
- Create the `swift` service entry and assign it an endpoint.
In order to proceed, you should have already performed the following (using the Identity service):

- Created an administrator role named `admin` (refer to Section 3.8, “Create an Administrator Account” for instructions)
- Created the `services` tenant (refer to Section 3.10, “Create the Services Tenant” for instructions)
Note
The Deploying OpenStack: Learning Environments guide uses one tenant for all service users. For more information, refer to Section 3.10, “Create the Services Tenant”.
You can perform this procedure from your Identity service server, or from any machine where you have copied the `keystonerc_admin` file (which contains administrator credentials) and where the `keystone` command-line utility is installed.
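For orientation, a `keystonerc_admin` file is just a short shell fragment of exported credentials. The values below are placeholder assumptions, not taken from this guide; substitute the credentials and Identity endpoint from your own deployment.

```shell
# Minimal sketch of a keystonerc_admin file; every value is a placeholder.
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=ADMIN_PASSWORD                  # placeholder password
export OS_AUTH_URL=http://IDENTITY_IP:35357/v2.0/  # placeholder Identity endpoint
```

Sourcing this file with `source keystonerc_admin` exports the variables that the `keystone` client reads.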
Procedure 4.2. Configuring the Object Storage Service to authenticate through the Identity Service
- Set up the shell to access Keystone as the admin user:

      # source ~/keystonerc_admin

- Create the `swift` user and set its password by replacing PASSWORD with your chosen password:

      # keystone user-create --name swift --pass PASSWORD

- Add the `swift` user to the `services` tenant with the `admin` role:

      # keystone user-role-add --user swift --role admin --tenant services

- Create the `swift` Object Storage service entry:

      # keystone service-create --name swift --type object-store \
          --description "Swift Storage Service"

- Create the `swift` endpoint entry:

      # keystone endpoint-create \
          --service swift \
          --publicurl "http://IP:8080/v1/AUTH_\$(tenant_id)s" \
          --adminurl "http://IP:8080/v1" \
          --internalurl "http://IP:8080/v1/AUTH_\$(tenant_id)s"

  Replace IP with the IP address or fully qualified domain name of the system hosting the Object Storage Proxy service.
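One detail in the endpoint step is easy to miss: the backslash in `AUTH_\$(tenant_id)s`. Without it, the shell would attempt command substitution on `$(tenant_id)` before keystone ever saw the URL. The following sketch only builds the string (it does not call keystone), and the IP address is an assumed placeholder:

```shell
# The escaped \$ keeps $(tenant_id)s literal, so keystone receives it
# as a substitution template rather than the shell expanding it.
IP=192.0.2.10   # placeholder; use your proxy host's address
PUBLICURL="http://$IP:8080/v1/AUTH_\$(tenant_id)s"
echo "$PUBLICURL"   # prints: http://192.0.2.10:8080/v1/AUTH_$(tenant_id)s
```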
You have configured the Identity service to work with the Object Storage service.
4.6.2. Configure the Object Storage Service Storage Nodes
The Object Storage Service stores objects on the filesystem, usually on a number of connected physical storage devices. All of the devices that will be used for object storage must be formatted ext4 or XFS, and mounted under the `/srv/node/` directory. All of the services that will run on a given node must be enabled, and their ports opened.
While you can run the proxy service alongside the other services, the proxy service is not covered in this procedure.
Procedure 4.3. Configuring the Object Storage Service storage nodes
- Format your devices using the ext4 or XFS filesystem. Make sure that xattrs are enabled.

- Add your devices to the `/etc/fstab` file to ensure that they are mounted under `/srv/node/` at boot time. Use the `blkid` command to find your device's unique ID, and mount the device using its unique ID.

  Note
  If using ext4, ensure that extended attributes are enabled by mounting the filesystem with the `user_xattr` option. (In XFS, extended attributes are enabled by default.)

- For Red Hat Enterprise Linux 6-based systems, configure the firewall to open the TCP ports used by each service running on each node. By default, the account service uses port 6002, the container service uses port 6001, and the object service uses port 6000.

  - Open the `/etc/sysconfig/iptables` file in a text editor.

  - Add an `INPUT` rule allowing TCP traffic on the ports used by the account, container, and object services. The new rule must appear before any `reject-with icmp-host-prohibited` rule.

        -A INPUT -p tcp -m multiport --dports 6000,6001,6002,873 -j ACCEPT

  - Save the changes to the `/etc/sysconfig/iptables` file.

  - Restart the `iptables` service for the firewall changes to take effect.

        # service iptables restart
- For Red Hat Enterprise Linux 7-based systems, configure the firewall to open the TCP ports used by each service running on each node. By default, the account service uses port 6002, the container service uses port 6001, and the object service uses port 6000.

  - Add rules allowing TCP traffic on the ports used by the account, container, and object services:

        # firewall-cmd --permanent --add-port=6000-6002/tcp
        # firewall-cmd --permanent --add-port=873/tcp

  - For the change to take immediate effect, add the rules to the runtime mode:

        # firewall-cmd --add-port=6000-6002/tcp
        # firewall-cmd --add-port=873/tcp
- Change the owner of the contents of `/srv/node/` to `swift:swift` with the `chown` command:

      # chown -R swift:swift /srv/node/

- Set the SELinux context correctly for all directories under `/srv/node/` with the `restorecon` command:

      # restorecon -R /srv

- Use the `openstack-config` command to add a hash prefix (`swift_hash_path_prefix`) to your `/etc/swift/swift.conf`:

      # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \
          $(openssl rand -hex 10)

- Use the `openstack-config` command to add a hash suffix (`swift_hash_path_suffix`) to your `/etc/swift/swift.conf`:

      # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \
          $(openssl rand -hex 10)

  These details are required for finding and placing data on all of your nodes. Back up `/etc/swift/swift.conf`.

- Use the `openstack-config` command to set the IP address that your storage services will listen on. Run these commands for every service on every node in your Object Storage cluster:

      # openstack-config --set /etc/swift/object-server.conf \
          DEFAULT bind_ip node_ip_address
      # openstack-config --set /etc/swift/account-server.conf \
          DEFAULT bind_ip node_ip_address
      # openstack-config --set /etc/swift/container-server.conf \
          DEFAULT bind_ip node_ip_address

  The `DEFAULT` argument specifies the `DEFAULT` section of the service configuration file. Replace node_ip_address with the IP address of the node you are configuring.

- Copy `/etc/swift/swift.conf` from the node you are currently configuring to all of your Object Storage Service nodes.

  Important
  The `/etc/swift/swift.conf` file must be identical on all of your Object Storage Service nodes.

- Start the services that will run on your node:

      # service openstack-swift-account start
      # service openstack-swift-container start
      # service openstack-swift-object start

- Use the `chkconfig` command to make sure the services automatically start at boot time:

      # chkconfig openstack-swift-account on
      # chkconfig openstack-swift-container on
      # chkconfig openstack-swift-object on
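The format-and-mount steps can be sketched for a single device. The device name and mount point below are assumptions for illustration; the block only builds the fstab line (rather than writing it to `/etc/fstab`), so you can review it before applying it.

```shell
# Sketch: construct the fstab entry for one ext4 storage device.
DEVICE=/dev/sdb1   # hypothetical device; substitute your own
MOUNT_NAME=sdb1    # mounted as /srv/node/sdb1

# Prefer the filesystem UUID over the device node, as device names can change.
UUID=$(blkid -s UUID -o value "$DEVICE" 2>/dev/null || echo YOUR-DEVICE-UUID)

# user_xattr is required on ext4 so Swift can store object metadata;
# XFS enables extended attributes by default.
FSTAB_LINE="UUID=$UUID /srv/node/$MOUNT_NAME ext4 noatime,user_xattr 0 0"
echo "$FSTAB_LINE"
```

After appending the reviewed line to `/etc/fstab`, `mount /srv/node/sdb1` brings the device up, and the `chown`/`restorecon` steps above apply to it.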
All of the devices that your node will provide as object storage are now formatted and mounted under `/srv/node/`. Any service running on the node has been enabled, and any ports used by services on the node have been opened.
4.6.3. Configure the Object Storage Service Proxy Service
The Object Storage proxy service determines the node to which GET and PUT requests are directed.
Although you can run the account, container, and object services alongside the proxy service, only the proxy service is covered in the following procedure.
Note
Because the SSL capability built into the Object Storage service is intended primarily for testing, it is not recommended for use in production. In a production cluster, Red Hat recommends that you use the load balancer to terminate SSL connections.
Procedure 4.4. Configuring the Object Storage Service proxy service
- Update the configuration file for the proxy server with the correct authentication details for the appropriate service user:

      # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken auth_host IP
      # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_tenant_name services
      # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_user swift
      # openstack-config --set /etc/swift/proxy-server.conf \
          filter:authtoken admin_password PASSWORD

  Where:

  - IP is the IP address or host name of the Identity server.
  - services is the name of the tenant that was created for the use of the Object Storage service (previous examples set this to `services`).
  - swift is the name of the service user that was created for the Object Storage service (previous examples set this to `swift`).
  - PASSWORD is the password associated with the service user.
- Start the `memcached` and `openstack-swift-proxy` services using the `service` command:

      # service memcached start
      # service openstack-swift-proxy start

- Enable the `memcached` and `openstack-swift-proxy` services permanently using the `chkconfig` command:

      # chkconfig memcached on
      # chkconfig openstack-swift-proxy on

- For Red Hat Enterprise Linux 6-based systems, allow incoming connections to the Swift proxy server by adding this firewall rule to the `/etc/sysconfig/iptables` configuration file:

      -A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT

  Important
  This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information regarding the creation of more restrictive firewall rules, refer to the Red Hat Enterprise Linux Security Guide.

- For Red Hat Enterprise Linux 7-based systems, allow incoming connections to the Swift proxy server by adding this firewall rule:

      # firewall-cmd --permanent --add-port=8080/tcp

  Important
  This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information regarding the creation of more restrictive firewall rules, refer to the Red Hat Enterprise Linux Security Guide.

- For the change to take immediate effect, add the rule to the runtime mode:

      # firewall-cmd --add-port=8080/tcp

- Use the `service` command to restart the `iptables` service for the new rule to take effect:

      # service iptables save
      # service iptables restart
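For reference, the four `openstack-config` commands in the first step produce an ini fragment like the one below. IP and PASSWORD remain placeholders, and this sketch writes to a scratch file purely for inspection rather than to `/etc/swift/proxy-server.conf`.

```shell
# The [filter:authtoken] section as it should look after configuration,
# written to a temporary file so the result can be examined safely.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[filter:authtoken]
auth_host = IP
admin_tenant_name = services
admin_user = swift
admin_password = PASSWORD
EOF
grep '^admin_user' "$CONF"   # prints: admin_user = swift
```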
The Object Storage Service proxy service is now listening for HTTP PUT and GET requests on port 8080, and directing them to the appropriate nodes.
4.6.4. Object Storage Service Rings
Rings determine where data is stored in a cluster of storage nodes. Ring files are generated using the swift-ring-builder tool. Three ring files are required, one each for the object, container, and account services.
Each storage device in a cluster is divided into partitions, with a recommended minimum of 100 partitions per device. Each partition is physically a directory on disk.
A configurable number of bits from the MD5 hash of the filesystem path to the partition directory, known as the partition power, is used as a partition index for the device. The partition count of a cluster that has 1000 devices, where each device has 100 partitions on it, is 100 000.
The partition count is used to calculate the partition power, where 2 to the partition power is the partition count. When the partition power is a fraction, it is rounded up. If the partition count is 100 000, the partition power is 17 (16.610 rounded up).
Expressed mathematically: 2 ^ partition power = partition count.
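The worked example above (1000 devices with 100 partitions each) can be checked with a short calculation; the device and partition counts below are simply the figures used in the text.

```shell
# Sketch: derive the partition power from a planned cluster size.
DEVICES=1000
PARTS_PER_DEVICE=100
PART_COUNT=$((DEVICES * PARTS_PER_DEVICE))   # 100000

# partition power = log2(partition count), rounded up to the next integer
PART_POWER=$(awk -v n="$PART_COUNT" \
    'BEGIN { p = int(log(n)/log(2)); if (2^p < n) p++; print p }')
echo "$PART_POWER"   # prints: 17
```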
Ring files are generated using 3 parameters: partition power, replica count, and the amount of time that must pass between partition reassignments.
A fourth parameter, zone, is used when adding devices to rings. Zones are a flexible abstraction, where each zone should be as separated from other zones as possible in your specific deployment. You can use a zone to represent sites, cabinets, nodes, or even devices.
Table 4.2. Parameters used when building ring files

| Ring File Parameter | Description |
|---|---|
| part_power | 2 ^ partition power = partition count. The partition power is rounded up after calculation. |
| replica_count | The number of times that your data will be replicated in the cluster. |
| min_part_hours | Minimum number of hours before a partition can be moved. This parameter increases availability of data by not moving more than one copy of a given data item within that min_part_hours amount of time. |
4.6.5. Build Object Storage Service Ring Files
Three ring files need to be created: one to track the objects stored by the Object Storage Service, one to track the containers that objects are placed in, and one to track which accounts can access which containers. The ring files are used to deduce where a particular piece of data is stored.
Procedure 4.5. Building Object Storage service ring files
- Use the `swift-ring-builder` command to build one ring for each service. Provide a builder file, a partition power, a replica count, and the minimum hours between partition reassignments:

      # swift-ring-builder /etc/swift/object.builder create part_power replica_count min_part_hours
      # swift-ring-builder /etc/swift/container.builder create part_power replica_count min_part_hours
      # swift-ring-builder /etc/swift/account.builder create part_power replica_count min_part_hours

- When the rings are created, add devices to the account ring:

      # swift-ring-builder /etc/swift/account.builder add zX-SERVICE_IP:6002/dev_mountpt part_count

  Where:

  - X is the integer corresponding to a specified zone (for example, `z1` would correspond to Zone One).
  - SERVICE_IP is the IP on which the account, container, and object services should listen. This IP should match the `bind_ip` value set during the configuration of the Object Storage service storage nodes.
  - dev_mountpt is the `/srv/node` subdirectory under which your device is mounted.
  - part_count is the partition count you used to calculate your partition power.

  For example, if:

  - the account, container, and object services are configured to listen on 10.64.115.44 in Zone One,
  - your device is mounted on `/srv/node/accounts`, and
  - you wish to set a partition count of 100,

  then run:

      # swift-ring-builder /etc/swift/account.builder add z1-10.64.115.44:6002/accounts 100

  Note
  Repeat this step for each device (on each node in the cluster) that you want added to the ring.

- In a similar fashion, add devices to both the container and object rings:

      # swift-ring-builder /etc/swift/container.builder add zX-SERVICE_IP:6001/dev_mountpt part_count
      # swift-ring-builder /etc/swift/object.builder add zX-SERVICE_IP:6000/dev_mountpt part_count

  Replace the variables with the same values used in the previous step.

  Note
  Repeat these commands for each device (on each node in the cluster) that you want added to the ring.

- Distribute the partitions across the devices in the ring using the `swift-ring-builder` command's `rebalance` argument:

      # swift-ring-builder /etc/swift/account.builder rebalance
      # swift-ring-builder /etc/swift/container.builder rebalance
      # swift-ring-builder /etc/swift/object.builder rebalance

- Check that you now have three ring files in the `/etc/swift` directory. The command:

      # ls /etc/swift/*gz

  should reveal:

      /etc/swift/account.ring.gz
      /etc/swift/container.ring.gz
      /etc/swift/object.ring.gz

- Ensure that all files in the `/etc/swift/` directory, including those that you have just created, are owned by the `root` user and the `swift` group.

  Important
  All mount points must be owned by `root`; all roots of mounted file systems must be owned by `swift`. Before running the following command, ensure that all devices are already mounted and owned by `root`.

      # chown -R root:swift /etc/swift

- Copy each ring builder file to each node in the cluster, storing them under `/etc/swift/`:

      # scp /etc/swift/*.gz node_ip_address:/etc/swift
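As a convenience, the three `create` commands in the first step can be emitted from a single loop. The parameter values below are illustrative assumptions (partition power 17, three replicas, one hour between reassignments); the sketch only prints the commands so they can be reviewed before being run on a real node.

```shell
# Sketch: generate the ring-creation commands for all three services.
PART_POWER=17      # assumption; use the value calculated for your cluster
REPLICAS=3         # assumption; common default replica count
MIN_PART_HOURS=1   # assumption; minimum hours between partition moves

CMDS=$(for svc in object container account; do
    echo "swift-ring-builder /etc/swift/$svc.builder create $PART_POWER $REPLICAS $MIN_PART_HOURS"
done)
printf '%s\n' "$CMDS"
```

The same loop pattern extends naturally to the `add` and `rebalance` steps, which repeat identically across the three builder files.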
You have created rings for each of the services that require them. You have used the builder files to distribute partitions across the nodes in your cluster, and have copied the ring builder files themselves to each node in the cluster.