6.5. Managing Object Store
6.5.1. Architecture Overview
- OpenStack Object Storage environment. For detailed information on Object Storage, see the OpenStack Object Storage Administration Guide available at: http://docs.openstack.org/admin-guide-cloud/content/ch_admin-openstack-object-storage.html.
- Red Hat Gluster Storage environment. The Red Hat Gluster Storage environment consists of bricks that are used to build volumes. For more information on bricks and volumes, see Section 5.4, “Formatting and Mounting Bricks”.
Figure 6.2. Object Store Architecture
# firewall-cmd --get-active-zones
# firewall-cmd --zone=zone_name --add-port=6010/tcp --add-port=6011/tcp --add-port=6012/tcp --add-port=8080/tcp
# firewall-cmd --zone=zone_name --add-port=6010/tcp --add-port=6011/tcp --add-port=6012/tcp --add-port=8080/tcp --permanent
Add port 443 only if your Swift proxy server is configured with SSL. To add the port number, run the following commands:
# firewall-cmd --zone=zone_name --add-port=443/tcp
# firewall-cmd --zone=zone_name --add-port=443/tcp --permanent
6.5.2. Components of Object Store
- Authenticate Object Store against an external OpenStack Keystone server. Each Red Hat Gluster Storage volume is mapped to a single account. Each account can have multiple users with different privileges based on the group and role they are assigned to. After authenticating using accountname:username and a password, the user is issued a token, which is used for all subsequent REST requests.

Integration with Keystone
When you integrate Red Hat Gluster Storage Object Store with Keystone authentication, you must ensure that the Swift account name and the Red Hat Gluster Storage volume name are the same. It is common for Red Hat Gluster Storage volumes to be created before they are exposed through the Red Hat Gluster Storage Object Store. When working with Keystone, account names are defined by Keystone as the tenant id. You must create the Red Hat Gluster Storage volume using the Keystone tenant id as the name of the volume. This means that you must create the Keystone tenant before creating a Red Hat Gluster Storage volume.
Important

Red Hat Gluster Storage does not contain any Keystone server components. It only acts as a Keystone client. After you create a volume for Keystone, ensure that you export this volume for access through the object storage interface. For more information on exporting volumes, see Section 6.5.7.8, “Exporting the Red Hat Gluster Storage Volumes”.

Integration with GSwauth
GSwauth is a Web Server Gateway Interface (WSGI) middleware that uses a Red Hat Gluster Storage volume itself as its backing store to maintain its metadata. The benefit of this authentication service is that the metadata is available to all proxy servers while the data is saved to a Red Hat Gluster Storage volume. To protect the metadata, the Red Hat Gluster Storage volume should only be mountable by the systems running the proxy servers. For more information on mounting volumes, see Chapter 6, Creating Access to Volumes.

Integration with TempAuth
You can also use the TempAuth authentication service to test Red Hat Gluster Storage Object Store in the data center.
6.5.3. Advantages of using Object Store
- Default object size limit of 1 TiB
- Unified view of data across NAS and Object Storage technologies
- High availability
- Elastic Volume Management
6.5.4. Limitations

- Object Name

Object Store imposes the following constraints on the object name to maintain compatibility with network file access:
- Object names must not be prefixed or suffixed by a '/' character.
- Object names must not contain contiguous multiple '/' characters.
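These two naming constraints can be expressed as a short validation routine. The following sketch is illustrative only; the function name is hypothetical and not part of Object Store:

```python
def is_valid_object_name(name: str) -> bool:
    """Check an object name against the Object Store naming constraints."""
    # Names must not be prefixed or suffixed by a '/' character.
    if name.startswith('/') or name.endswith('/'):
        return False
    # Names must not contain contiguous multiple '/' characters.
    if '//' in name:
        return False
    return True

print(is_valid_object_name('dir1/dir2/object1'))  # True
print(is_valid_object_name('/dir1/object1'))      # False: leading '/'
print(is_valid_object_name('dir1//object1'))      # False: contiguous '/'
```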
- Account Management
- Object Store does not allow account management even though OpenStack Swift allows the management of accounts. This limitation exists because Object Store treats accounts as equivalent to Red Hat Gluster Storage volumes.
- Object Store does not support account names (that is, Red Hat Gluster Storage volume names) that contain an underscore.
- In Object Store, every account must map to a Red Hat Gluster Storage volume.
- Subdirectory Listing

Headers with X-Content-Length: 0 can be used to create subdirectory objects under a container, but a GET request on a subdirectory does not list all the objects under it.
6.5.5. Swift API Support Matrix
Table 6.10. Supported Features
| Feature | Status |
|---|---|
| Get Account Metadata | Supported |
| Get Container Metadata | Supported |
| Update Container Metadata | Supported |
| Delete Container Metadata | Supported |
| Create/Update an Object | Supported |
| Create Large Object | Supported |
| Get Object Metadata | Supported |
| Add/Update Object Metadata | Supported |
| Temp URL Operations | Supported |
| Cross-Origin Resource Sharing (CORS) | Supported |
6.5.6. Prerequisites

- Ensure that the openstack-swift-* and swiftonfile packages have matching version numbers.
# rpm -qa | grep swift
openstack-swift-container-1.13.1-6.el7ost.noarch
openstack-swift-object-1.13.1-6.el7ost.noarch
swiftonfile-1.13.1-6.el7rhgs.noarch
openstack-swift-proxy-1.13.1-6.el7ost.noarch
openstack-swift-doc-1.13.1-6.el7ost.noarch
openstack-swift-1.13.1-6.el7ost.noarch
openstack-swift-account-1.13.1-6.el7ost.noarch
- Ensure that SELinux is in permissive mode.
# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

If the Current mode and Mode from config file fields are not set to permissive, run the following commands to set SELinux into permissive mode persistently, and reboot to ensure that the configuration takes effect.

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# reboot
- Ensure that the gluster-swift services are owned by and run as the root user, not the swift user as in a typical OpenStack installation.

# cd /usr/lib/systemd/system
# sed -i s/User=swift/User=root/ openstack-swift-proxy.service openstack-swift-account.service openstack-swift-container.service openstack-swift-object.service openstack-swift-object-expirer.service
- Start the memcached service by running the following command:

# service memcached start
- Ensure that the ports for the Object, Container, Account, and Proxy servers are open. Note that the ports used for these servers are configurable. The ports listed in Table 6.11, “Ports required for Red Hat Gluster Storage Object Store” are the default values.
Table 6.11. Ports required for Red Hat Gluster Storage Object Store
| Server | Port |
|---|---|
| Object Server | 6010 |
| Container Server | 6011 |
| Account Server | 6012 |
| Proxy Server (HTTPS) | 443 |
| Proxy Server (HTTP) | 8080 |
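The default port assignments in Table 6.11 correspond to the firewall-cmd port flags shown earlier in this section. The small generator below is an illustrative convenience, not part of the product:

```python
# Default ports from Table 6.11 (HTTP proxy; HTTPS uses 443 instead).
DEFAULT_PORTS = {
    'Object Server': 6010,
    'Container Server': 6011,
    'Account Server': 6012,
    'Proxy Server (HTTP)': 8080,
}

def firewall_port_args(ports=DEFAULT_PORTS):
    """Build the --add-port flags for firewall-cmd from a port mapping."""
    return ' '.join('--add-port=%d/tcp' % p for p in ports.values())

print(firewall_port_args())
# --add-port=6010/tcp --add-port=6011/tcp --add-port=6012/tcp --add-port=8080/tcp
```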
6.5.7. Configuring the Object Store
After installation, the /etc/swift directory contains both *.conf and *.conf-gluster files. You must delete the *.conf files and create new configuration files based on the *.conf-gluster templates. Otherwise, inappropriate python packages will be loaded and the component may not work as expected.
When upgrading, new configuration files may be installed with a .rpmnew extension. You must delete the old .conf files and folders (account-server, container-server, and object-server) for a better understanding of the loaded configuration.
6.5.7.1. Configuring a Proxy Server
Create a new configuration file /etc/swift/proxy-server.conf by referencing the template file available at /etc/swift/proxy-server.conf-gluster.
6.5.7.1.1. Configuring a Proxy Server for HTTPS
- Create a self-signed certificate for SSL using the following commands:

# cd /etc/swift
# openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
- Add the following lines to the [DEFAULT] section of /etc/swift/proxy-server.conf:

bind_port = 443
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
Set the memcache_servers configuration option in the /etc/swift/proxy-server.conf file and list all memcached servers:

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211,192.168.1.22:11211
6.5.7.2. Configuring the Authentication Service
6.5.7.2.1. Integrating with the Keystone Authentication Service
- To configure Keystone, edit the /etc/swift/proxy-server.conf pipeline as shown below:

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache authtoken keystoneauth proxy-logging proxy-server
- Add the following sections to the /etc/swift/proxy-server.conf file by referencing the example below as a guideline. You must substitute the values according to your setup:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
signing_dir = /etc/swift
auth_host = keystone.server.com
auth_port = 35357
auth_protocol = http
auth_uri = http://keystone.server.com:5000
# if it is defined
admin_tenant_name = services
admin_user = swift
admin_password = adminpassword
delay_auth_decision = 1

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, SwiftOperator
is_admin = true
cache = swift.cache
Verify that the Red Hat Gluster Storage Object Store has been configured successfully by running the following command:
swift -V 2 -A http://keystone.server.com:5000/v2.0 -U tenant_name:user -K password stat
6.5.7.2.2. Integrating with the GSwauth Authentication Service
Perform the following steps to integrate GSwauth:
- Create and start a Red Hat Gluster Storage volume to store metadata.
# gluster volume create NEW-VOLNAME NEW-BRICK
# gluster volume start NEW-VOLNAME

For example:

# gluster volume create gsmetadata server1:/rhgs/brick1
# gluster volume start gsmetadata
- Run the gluster-swift-gen-builders tool with all the volumes that are to be accessed using the Swift client, including the gsmetadata volume:

# gluster-swift-gen-builders gsmetadata other volumes
- Edit the /etc/swift/proxy-server.conf pipeline as shown below:

[pipeline:main]
pipeline = catch_errors cache gswauth proxy-server
- Add the following section to the /etc/swift/proxy-server.conf file by referencing the example below as a guideline. You must substitute the values according to your setup.
[filter:gswauth]
use = egg:gluster_swift#gswauth
set log_name = gswauth
super_admin_key = gswauthkey
metadata_volume = gsmetadata
auth_type = sha1
auth_type_salt = swauthsalt
Important

You must secure the proxy-server.conf file and the super_admin_key option to prevent unprivileged access.
- Restart the proxy server by running the following command:
swift-init proxy restart
You can set the following advanced options for GSwauth WSGI filter:
- default-swift-cluster: The default storage URL for newly created accounts. When you attempt to authenticate for the first time, the access token and the storage URL where data for the given account is stored are returned.
- token_life: The default token life. The default value is 86400 seconds (24 hours).
- max_token_life: The maximum token life. You can request a specific token lifetime by passing the x-auth-token-lifetime header when requesting a new token. If the passed-in value is greater than max_token_life, then the max_token_life value is used.
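The interaction between token_life, max_token_life, and the x-auth-token-lifetime header described above can be sketched as follows. This is an illustrative model of the clamping rule, not gswauth's actual code:

```python
DEFAULT_TOKEN_LIFE = 86400  # 24 hours, the documented default

def effective_token_life(requested=None, token_life=DEFAULT_TOKEN_LIFE,
                         max_token_life=DEFAULT_TOKEN_LIFE):
    """Return the token lifetime that would be granted (illustrative)."""
    if requested is None:
        # No x-auth-token-lifetime header: use the configured default.
        return token_life
    # A requested lifetime is capped at max_token_life.
    return min(requested, max_token_life)

print(effective_token_life())                      # 86400
print(effective_token_life(requested=3600))        # 3600
print(effective_token_life(requested=172800,
                           max_token_life=86400))  # 86400
```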
GSwauth provides CLI tools to facilitate managing accounts and users. All tools have some options in common:
- -A, --admin-url: The URL to the auth subsystem. The default URL is http://127.0.0.1:8080/auth/.
- -U, --admin-user: The user with administrator rights to perform the action. The default user is .super_admin.
- -K, --admin-key: The key for the user with administrator rights to perform the action. There is no default value.
Prepare the Red Hat Gluster Storage volume for gswauth to save its metadata by running the following command:
# gswauth-prep [option]
For example:

# gswauth-prep -A http://10.20.30.40:8080/auth/ -K gswauthkey
6.5.7.2.2.1. Managing Account Services in GSwauth
Create an account for GSwauth. This account is mapped to a Red Hat Gluster Storage volume.
# gswauth-add-account [option] <account_name>
# gswauth-add-account -K gswauthkey <account_name>
You must ensure that all users pertaining to this account are deleted before deleting the account. To delete an account:
# gswauth-delete-account [option] <account_name>
# gswauth-delete-account -K gswauthkey test
Sets a service URL for an account. Only a user with the reseller admin role can set the service URL. This command can be used to change the default storage URL for a given account. All accounts have the same storage URL as the default value, which is set using the default-swift-cluster option.
# gswauth-set-account-service [options] <account> <service> <name> <value>
# gswauth-set-account-service -K gswauthkey test storage local http://newhost:8080/v1/AUTH_test
6.5.7.2.2.2. Managing User Services in GSwauth
The following user roles are supported in GSwauth:
- A regular user has no rights. Users must be given both read and write privileges using Swift ACLs.
- The admin user is a super-user at the account level. This user can create and delete users for that account. These members have both read and write privileges to all stored objects in that account.
- The reseller admin user is a super-user at the cluster level. This user can create and delete accounts and users, and has read and write privileges to all accounts under that cluster.
- GSwauth maintains its own Swift account to store all of its metadata on accounts and users. The .super_admin role provides access to GSwauth's own Swift account and has all privileges to act on any other account or user.
Table 6.12. User Role/Group with Allowed Actions
|.super_admin (username)|| |
|.reseller_admin (group)|| |
|.admin (group)|| |
|regular user (type)||No administrative actions.|
You can create a user for an account that does not exist. The account will be created before the user is created.
Use the -r flag to create a reseller admin user and the -a flag to create an admin user. To change the password or role of a user, run the same command with the new option.
# gswauth-add-user [option] <account_name> <user> <password>
# gswauth-add-user -K gswauthkey -a test ana anapwd
Delete a user by running the following command:
# gswauth-delete-user [option] <account_name> <user>
# gswauth-delete-user -K gswauthkey test ana
There are two methods to access data using the Swift client. The first and simplest method is to provide the user name and password every time; the Swift client acquires the token from gswauth.
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:ana -K anapwd upload container1 README.md
The second method is to authenticate once to obtain a token and the storage URL, and then use them directly:

# curl -v -H 'X-Storage-User: test:ana' -H 'X-Storage-Pass: anapwd' -k http://localhost:8080/auth/v1.0
...
< X-Auth-Token: AUTH_tk7e68ef4698f14c7f95af07ab7b298610
< X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
...
$ swift --os-auth-token=AUTH_tk7e68ef4698f14c7f95af07ab7b298610 --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test upload container1 README.md
README.md

$ swift --os-auth-token=AUTH_tk7e68ef4698f14c7f95af07ab7b298610 --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test list container1
README.md
Reseller admins must always use the second method to acquire a token to access accounts other than their own. The first method of using the username and password gives them access only to their own accounts.
6.5.7.2.2.3. Managing Accounts and Users Information
You can obtain account and user information, including the stored password.
# gswauth-list [options] [account] [user]
# gswauth-list -K gswauthkey test ana
+----------+
|  Groups  |
+----------+
| test:ana |
| test     |
| .admin   |
+----------+
- If [account] and [user] are omitted, all the accounts will be listed.
- If [account] is included but not [user], a list of users within that account will be listed.
- If [account] and [user] are included, a list of groups that the user belongs to will be listed.
- If the [user] is .groups, the active groups for that account will be listed.
The -p option provides the output in plain text format; the -j option provides the output in JSON format.
You can change the password of the user, account administrator, and reseller_admin roles.
- Change the password of a regular user by running the following command:
# gswauth-add-user -U account1:user1 -K old_passwd account1 user1 new_passwd
- Change the password of an account administrator by running the following command:

# gswauth-add-user -U account1:admin -K old_passwd -a account1 admin new_passwd
- Change the password of the reseller_admin by running the following command:

# gswauth-add-user -U account1:radmin -K old_passwd -r account1 radmin new_passwd
Users with the .super_admin role can delete expired tokens.
# gswauth-cleanup-tokens [options]
# gswauth-cleanup-tokens -K gswauthkey --purge test
- -t, --token-life: The expected life of tokens. Token objects modified before the given number of seconds will be checked for expiration (default: 86400).
- --purge: Purges all the tokens for a given account whether the tokens have expired or not.
- --purge-all: Purges all the tokens for all the accounts and users whether the tokens have expired or not.
6.5.7.2.3. Integrating with the TempAuth Authentication Service
The TempAuth authentication service stores user and password information as cleartext in a single proxy-server.conf file. In your /etc/swift/proxy-server.conf file, enable TempAuth in the pipeline and add user information in the TempAuth section by referencing the example below.
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache tempauth proxy-logging proxy-server

[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test_tester2 = testing2
The general format of a user entry is user_accountname_username = password [.admin], where accountname is the Red Hat Gluster Storage volume used to store objects.
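To make the entry format concrete, the sketch below parses a TempAuth user line into its parts. The parser is illustrative, not part of Swift; note that it relies on the account name containing no underscore, which matches the Object Store limitation on volume names:

```python
def parse_tempauth_user(key: str, value: str) -> dict:
    """Parse 'user_<account>_<user> = <password> [group ...]' (illustrative)."""
    prefix, account, user = key.split('_', 2)
    assert prefix == 'user', 'entry keys must start with user_'
    parts = value.split()
    return {'account': account, 'user': user,
            'password': parts[0], 'groups': parts[1:]}

entry = parse_tempauth_user('user_test_tester', 'testing .admin')
print(entry)
# {'account': 'test', 'user': 'tester', 'password': 'testing', 'groups': ['.admin']}
```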
6.5.7.3. Configuring Object Servers
Create a new configuration file /etc/swift/object-server.conf by referencing the template file available at /etc/swift/object-server.conf-gluster.
6.5.7.4. Configuring Container Servers
Create a new configuration file /etc/swift/container-server.conf by referencing the template file available at /etc/swift/container-server.conf-gluster.
6.5.7.5. Configuring Account Servers
Create a new configuration file /etc/swift/account-server.conf by referencing the template file available at /etc/swift/account-server.conf-gluster.
6.5.7.6. Configuring Swift Object and Container Constraints
Create a new configuration file /etc/swift/swift.conf by referencing the template file available at /etc/swift/swift.conf-gluster.
6.5.7.7. Configuring Object Expiration
Expired objects are not removed immediately; they remain on the volume until they are deleted by the object-expirer daemon. This is an expected behavior.
6.5.7.7.1. Setting Up Object Expiration
Object Store uses a special account, gsexpiring, for managing object expiration. Hence, you must create a Red Hat Gluster Storage volume and name it gsexpiring.
Create a new configuration file /etc/swift/object-expirer.conf by referencing the template file available at /etc/swift/object-expirer.conf-gluster.
6.5.7.7.2. Using Object Expiration
The X-Delete-At header requires a UNIX epoch timestamp, in integer form. For example, 1418884120 represents Thu, 18 Dec 2014 06:28:40 GMT. By setting the header to a specific epoch time, you indicate when you want the object to expire and stop being served, after which it is deleted completely from the Red Hat Gluster Storage volume. The current time in epoch notation can be found by running this command:
$ date +%s
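The conversion between an epoch timestamp and a human-readable GMT time can be double-checked in Python. This snippet is illustrative and uses only the standard library:

```python
import datetime

# The example timestamp used above for X-Delete-At.
ts = 1418884120
gmt = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
print(gmt.strftime('%a, %d %b %Y %H:%M:%S GMT'))  # Thu, 18 Dec 2014 06:28:40 GMT

# The reverse direction: one hour from now, usable as an X-Delete-At value.
future = datetime.datetime.now(tz=datetime.timezone.utc) + datetime.timedelta(hours=1)
print(int(future.timestamp()))
```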
- Set the object expiry time during an object PUT with X-Delete-At header using cURL:
# curl -v -X PUT -H 'X-Delete-At: 1392013619' http://127.0.0.1:8080/v1/AUTH_test/container1/object1 -T ./localfile

Set the object expiry time during an object PUT with the X-Delete-At header using the swift client:
# swift --os-auth-token=AUTH_tk99a39aecc3dd4f80b2b1e801d00df846 --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test upload container1 ./localfile --header 'X-Delete-At: 1392013619'
The X-Delete-After header takes an integer number of seconds that represents the amount of time from now when you want the object to be deleted.
- Set the object expiry time with an object PUT with X-Delete-After header using cURL:
# curl -v -X PUT -H 'X-Delete-After: 3600' http://127.0.0.1:8080/v1/AUTH_test/container1/object1 -T ./localfile

Set the object expiry time during an object PUT with the X-Delete-After header using the swift client:
# swift --os-auth-token=AUTH_tk99a39aecc3dd4f80b2b1e801d00df846 --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test upload container1 ./localfile --header 'X-Delete-After: 3600'
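Swift converts X-Delete-After into an absolute X-Delete-At time. A sketch of that arithmetic (illustrative, not Swift's code; the reference time below is hypothetical):

```python
import time

def delete_at_from_delete_after(delete_after: int, now: int = None) -> int:
    """Compute the absolute expiry epoch from a relative X-Delete-After value."""
    if now is None:
        now = int(time.time())
    return now + int(delete_after)

# With now fixed for reproducibility (hypothetical reference time):
print(delete_at_from_delete_after(3600, now=1392010019))  # 1392013619
```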
6.5.7.7.3. Running Object Expirer Service
The object-expirer service runs according to the settings in the /etc/swift/object-expirer.conf file. For every pass it makes, it queries the gsexpiring account for tracker objects. Based on the timestamp and path present in the name of tracker objects, object-expirer deletes the actual object and the corresponding tracker object.
To start the object-expirer service, run the following command:

# swift-init object-expirer start
To run the object-expirer once, run the following command:

# swift-object-expirer -o -v /etc/swift/object-expirer.conf
6.5.7.8. Exporting the Red Hat Gluster Storage Volumes
Red Hat Gluster Storage volumes are exported through the object interface using the Swift on File component.
# cd /etc/swift
# gluster-swift-gen-builders VOLUME [VOLUME...]
For example:

# cd /etc/swift
# gluster-swift-gen-builders testvol1 testvol2 testvol3
The tool expects the volumes to be mounted under the default mount point (/mnt/gluster-object). The default value can be changed to a different path by changing the devices configurable option across all account, container, and object configuration files. The path must contain Red Hat Gluster Storage volumes mounted under directories having the same names as the volume names. For example, if the devices option is set to /home, it is expected that the volume named testvol1 be mounted at /home/testvol1.
Note that the name of a volume must be passed to the gluster-swift-gen-builders tool even if it was previously added, because the gluster-swift-gen-builders tool creates new ring files every time it runs successfully. Run gluster-swift-gen-builders only with the volumes which are required to be accessed using the Swift interface.
For example, to remove the testvol2 volume, run the following command:

# gluster-swift-gen-builders testvol1 testvol3
6.5.7.9. Starting and Stopping Server
- To start the server, run the following command:
# swift-init main start
- To stop the server, run the following command:
# swift-init main stop
- To restart the server, run the following command:
# swift-init main restart
6.5.8. Starting the Services Automatically
# chkconfig memcached on
# chkconfig openstack-swift-proxy on
# chkconfig openstack-swift-account on
# chkconfig openstack-swift-container on
# chkconfig openstack-swift-object on
# chkconfig openstack-swift-object-expirer on
# systemctl enable openstack-swift-proxy.service
# systemctl enable openstack-swift-account.service
# systemctl enable openstack-swift-container.service
# systemctl enable openstack-swift-object.service
# systemctl enable openstack-swift-object-expirer.service
Enabling these services with the systemctl command may require additional configuration. Refer to https://access.redhat.com/solutions/2043773 for details if you encounter problems.
6.5.9. Working with the Object Store
6.5.9.1. Creating Containers and Objects
6.5.9.2. Creating Subdirectory under Containers
You can create a subdirectory object under a container by specifying the header Content-Length: 0. However, the current behavior of Object Store returns 200 OK on a GET request on the subdirectory, but this does not list all the objects under that subdirectory.