Red Hat Storage 2.1
Installing Red Hat Storage 2.1
Edition 1
Legal Notice
Copyright © 2013-2014 Red Hat Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
Abstract
This guide describes the prerequisites and provides step-by-step instructions for installing Red Hat Storage using different methods.
- Preface
- 1. Introduction
- 2. Obtaining Red Hat Storage
- 3. Planning Red Hat Storage Installation
- 4. Installing Red Hat Storage
- 5. Registration to the Red Hat Network (RHN)
- 6. Upgrading Red Hat Storage
- 6.1. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using an ISO
- 6.2. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using yum
- 6.3. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 Using Red Hat Satellite Server
- 6.4. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 in a Red Hat Enterprise Virtualization-Red Hat Storage Environment
- 7. Setting up Software Updates
- 8. Managing the glusterd Service
- 9. Using the Gluster Command Line Interface
- A. Revision History
Red Hat Storage is scale-out network attached storage (NAS) for data center or on-premise private cloud, public cloud, and hybrid cloud environments. It is software-only, open source, and designed to meet unstructured data storage requirements.
This installation guide describes the Red Hat Storage installation process, describes the minimum requirements, and provides step-by-step instructions to install the software and manage your storage environment.
1. Audience
This guide is intended for anyone responsible for installing Red Hat Storage. It assumes that you are familiar with the Linux operating system, file system concepts, and glusterFS concepts.
2. License
The Red Hat Storage End User License Agreement (EULA) is available at http://www.redhat.com/licenses/rhel_rha_eula.html.
3. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
3.1. Typographic Conventions
Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keys and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all distinguishable thanks to context.
Key combinations can be distinguished from an individual key by the plus sign that connects each part of a key combination. For example:
Press Enter to execute the command. Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first example highlights a particular key to press. The second example highlights a key combination: a set of three keys pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog-box text; labeled buttons; check-box and radio-button labels; menu titles and submenu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand). To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com. The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home. To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above: username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
Publican is a DocBook publishing system.
3.2. Pull-quote Conventions
Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books        Desktop    documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1   downloads      images  notes  scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
                struct kvm_assigned_pci_dev *assigned_dev)
{
        int r = 0;
        struct kvm_assigned_dev_kernel *match;

        mutex_lock(&kvm->lock);

        match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
                                      assigned_dev->assigned_dev_id);
        if (!match) {
                printk(KERN_INFO "%s: device hasn't been assigned before, "
                       "so cannot be deassigned\n", __func__);
                r = -EINVAL;
                goto out;
        }

        kvm_deassign_device(kvm, match);

        kvm_free_assigned_device(kvm, match);

out:
        mutex_unlock(&kvm->lock);
        return r;
}
3.3. Notes and Warnings
Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.
Note
Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled “Important” will not cause data loss but may cause irritation and frustration.
Warning
Warnings should not be ignored. Ignoring warnings will most likely cause data loss.
4. Getting Help and Giving Feedback
4.1. Do You Need Help?
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. Through the customer portal, you can:
- search or browse through a knowledgebase of technical support articles about Red Hat products.
- submit a support case to Red Hat Global Support Services (GSS).
- access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.
4.2. We Need Feedback!
If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Storage.
When submitting a bug report, be sure to mention the manual's identifier: doc-Installation_Guide
If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
Chapter 1. Introduction
Red Hat Storage is software only, scale-out storage that provides flexible and affordable unstructured data storage for the enterprise. Red Hat Storage 2.1 provides new opportunities to unify data storage and infrastructure, increase performance, and improve availability and manageability in order to meet a broader set of an organization’s storage challenges and needs.
glusterFS, a key building block of Red Hat Storage, is based on a stackable user space design and can deliver exceptional performance for diverse workloads. glusterFS aggregates various storage servers over network interconnects into one large parallel network file system. The POSIX compatible glusterFS servers, which use XFS file system format to store data on disks, can be accessed using industry standard access protocols including NFS and CIFS.
Red Hat Storage can be deployed in the private cloud or datacenter using Red Hat Storage Server for On-Premise. Red Hat Storage can be installed on commodity servers and storage hardware resulting in a powerful, massively scalable, and highly available NAS environment. Additionally, Red Hat Storage can be deployed in the public cloud using Red Hat Storage Server for Public Cloud, for example, within the Amazon Web Services (AWS) cloud. It delivers all the features and functionality possible in a private cloud or datacenter to the public cloud by providing massively scalable and highly available NAS in the cloud.
Red Hat Storage Server for On-Premise
Red Hat Storage Server for On-Premise enables enterprises to treat physical storage as a virtualized, scalable, and centrally managed pool of storage by using commodity server and storage hardware.
Red Hat Storage Server for Public Cloud
Red Hat Storage Server for Public Cloud packages glusterFS as an Amazon Machine Image (AMI) for deploying scalable NAS in the AWS public cloud. This powerful storage server provides a highly available, scalable, virtualized, and centrally managed pool of storage for Amazon users.
Chapter 2. Obtaining Red Hat Storage
This chapter details the steps to obtain the Red Hat Storage software.
2.1. Obtaining Red Hat Storage Server for On-Premise
Visit the Software & Download Center in the Red Hat Customer Service Portal (https://access.redhat.com/downloads) to obtain the Red Hat Storage Server for On-Premise installation ISO image files. Use a valid Red Hat Subscription to download the full installation files, obtain a free evaluation installation, or follow the links in this page to purchase a new Red Hat Subscription.
To download the Red Hat Storage Server installation files using a Red Hat Subscription or a Red Hat Evaluation Subscription:
- Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in.
- Click Downloads to visit the Software & Download Center.
- In the Red Hat Storage Server area, click the download link for the latest version of the software.
2.2. Obtaining Red Hat Storage Server for Public Cloud
Red Hat Storage Server for Public Cloud is a pre-integrated, pre-verified, and ready-to-run Amazon Machine Image (AMI). This AMI provides a fully POSIX-compatible, highly available, scale-out NAS and object storage solution for the Amazon Web Services (AWS) public cloud infrastructure.
For more information about obtaining access to the AMI, see https://access.redhat.com/knowledge/articles/145693.
Chapter 3. Planning Red Hat Storage Installation
This chapter outlines the minimum hardware and software installation requirements for a successful installation, configuration, and operation of a Red Hat Storage Server environment.
3.1. Prerequisites
Ensure that your environment meets the following requirements.
File System Requirements
XFS - Format the back-end file system using XFS for glusterFS bricks. XFS can journal metadata, resulting in faster crash recovery. The XFS file system can also be defragmented and expanded while mounted and active.
Note
Red Hat assists existing Gluster Storage Software Appliance customers using ext3 or ext4 to upgrade to a supported version of Red Hat Storage using the XFS back-end file system.
Logical Volume Manager
Format glusterFS bricks using XFS on the Logical Volume Manager to prepare for the installation.
Network Time Configuration
- Synchronize time across all Red Hat Storage servers using the Network Time Protocol (NTP) daemon.
3.1.1. Network Time Protocol Setup
Use a remote server over the Network Time Protocol (NTP) to synchronize the system clock. Set the ntpd daemon to automatically synchronize the time during the boot process as follows:
- Edit the NTP configuration file /etc/ntp.conf using a text editor such as vim or nano.
# nano /etc/ntp.conf
- Add or edit the list of public NTP servers in the ntp.conf file as follows:
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
The Red Hat Enterprise Linux 6 version of this file already contains the required information. Edit the contents of this file if customization is required.
- Optionally, increase the initial synchronization speed by appending the iburst directive to each line:
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
- After the list of servers is complete, set the required permissions in the same file. Ensure that only localhost has unrestricted access:
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
- Save all changes, exit the editor, and restart the NTP daemon:
# service ntpd restart
- Ensure that the ntpd daemon starts at boot time:
# chkconfig ntpd on
Use the ntpdate command for a one-time synchronization of NTP. For more information about this feature, see the Red Hat Enterprise Linux Deployment Guide.
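The steps above can also be scripted. The following is a minimal sketch that assumes the stock Red Hat Enterprise Linux 6 /etc/ntp.conf containing the rhel.pool.ntp.org servers shown above; adapt the pattern and the restrict lines to your environment before using it.
# sed -i 's/^\(server [0-9]\.rhel\.pool\.ntp\.org\)$/\1 iburst/' /etc/ntp.conf
# service ntpd restart
# chkconfig ntpd on
Running ntpq -p afterwards shows whether the daemon is synchronizing with the configured servers.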
3.2. Hardware Compatibility
Ensure hardware compatibility by referring to the list of supported hardware configurations at https://access.redhat.com/knowledge/articles/66206. Hardware specifications change rapidly and full compatibility is not guaranteed.
Hardware compatibility is especially relevant when an older or customized machine is in use.
3.3. Port Information
Red Hat Storage Server uses the listed ports. Ensure that firewall settings do not prevent access to these ports.
Table 3.1. TCP Port Numbers
| Port Number | Usage |
|---|---|
| 2049 | For glusterFS's NFS exports (nfsd process). |
| 38465 | For NFS mount protocol. |
| 38466 | For NFS mount protocol. |
| 38468 | For NFS's Lock Manager (NLM). |
| 38469 | For NFS's ACL support. |
| 24007 | For glusterd (for management). |
| 49152+ | For brick processes depending on the availability of the ports. The total number of ports required to be open depends on the total number of bricks exported on the machine. |
| 22 | For sshd used by geo-replication. |
| 111 | For rpc port mapper. |
| 445 | For CIFS protocol. |
Table 3.2. TCP Port Numbers used for Object Storage (Swift)
| Port Number | Usage |
|---|---|
| 8080 | For Proxy Server. |
| 6010 | For Object Server. |
| 6011 | For Container Server. |
| 6012 | For Account Server. |
| 443 | For HTTPS request. |
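As an illustration of the firewall requirement, the following iptables rules sketch how the ports from Table 3.1 could be opened on a Red Hat Enterprise Linux 6 based server. The brick port range 49152:49160 is an assumption of one port per brick for up to nine bricks; widen it to match the number of bricks exported on the machine, and merge these rules into your existing firewall policy rather than applying them verbatim.
# iptables -I INPUT -p tcp --dport 22 -j ACCEPT
# iptables -I INPUT -p tcp --dport 111 -j ACCEPT
# iptables -I INPUT -p tcp --dport 445 -j ACCEPT
# iptables -I INPUT -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 38465,38466,38468,38469 -j ACCEPT
# iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT
# service iptables save
Add the Object Store ports from Table 3.2 in the same way if the server also runs the Swift services.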
Chapter 4. Installing Red Hat Storage
Red Hat Storage can be installed in a data center using Red Hat Storage Server On-Premise.
This chapter describes the three different methods for installing Red Hat Storage Server: using an ISO image, using a PXE server, or using the Red Hat Satellite Server.
For information on launching Red Hat Storage Server for Public Cloud, see the Red Hat Storage Administration Guide.
Important
While cloning a Red Hat Storage Server installed on a virtual machine, the /var/lib/glusterd/glusterd.info file is cloned to the other virtual machines, causing all the cloned virtual machines to have the same UUID. Ensure that you remove the /var/lib/glusterd/glusterd.info file before the virtual machine is cloned. The file is automatically created with a new UUID on the initial start-up of the glusterd daemon on the cloned virtual machines.
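For example, the pre-clone cleanup suggested in the note above might look like the following sketch, run on the source virtual machine just before it is shut down for cloning:
# service glusterd stop
# rm -f /var/lib/glusterd/glusterd.info
glusterd then generates a new UUID in this file the first time it starts on each cloned machine.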
4.1. Installing from an ISO Image
To install Red Hat Storage Server from the ISO image:
- Download an ISO image file for Red Hat Storage Server as described in Chapter 2, Obtaining Red Hat Storage. The installation process launches automatically when you boot the system using the ISO image file. Press Enter to begin the installation process.
- The Configure TCP/IP screen displays. To configure your computer to support TCP/IP, accept the default values for Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) and click OK. Alternatively, you can manually configure network settings for both Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6).
- The Welcome screen displays. Click Next.
- The Language Selection screen displays. Select the preferred language for the installation and the system default and click Next.
- The Keyboard Configuration screen displays. Select the preferred keyboard layout for the installation and the system default and click Next.
- Click Next.
- The Hostname configuration screen displays. Enter the hostname for the computer. You can also configure network interfaces if required. Click Next.
- The Time Zone Configuration screen displays. Set your time zone by selecting the city closest to your computer's physical location.
- The Set Root Password screen displays. The root account's credentials will be used to install packages, upgrade RPMs, and perform most system maintenance. As such, setting up a root account and password is one of the most important steps in the installation process.
Note
The root user (also known as the superuser) has complete access to the entire system. For this reason, you should only log in as the root user to perform system maintenance or administration.
The Set Root Password screen prompts you to set a root password for your system. You cannot proceed to the next stage of the installation process without entering a root password. Enter the root password into the Root Password field. The characters you enter will be masked for security reasons. Then, type the same password into the Confirm field to ensure the password is set correctly. After you set the root password, click Next.
- The Partitioning Type screen displays. Partitioning allows you to divide your hard drive into isolated sections that each behave as their own hard drive. Partitioning is particularly useful if you run multiple operating systems. If you are unsure how to partition your system, see An Introduction to Disk Partitions in the Red Hat Enterprise Linux 6 Installation Guide for more information. In this screen you can choose to create the default partition layout in one of four different ways, or choose to partition storage devices manually to create a custom layout. If you do not feel comfortable partitioning your system, choose one of the first four options. These options allow you to perform an automated installation without having to partition your storage devices yourself. Depending on the option you choose, you can still control what data, if any, is removed from the system. Your options are:
- Use All Space
- Replace Existing Linux System(s)
- Shrink Current System
- Use Free Space
- Create Custom Layout
Choose the preferred partitioning method by clicking the radio button to the left of its description in the dialog box. Click Next once you have made your selection. For more information on disk partitioning, see Disk Partitioning Setup in the Red Hat Enterprise Linux 6 Installation Guide.
- The Boot Loader screen displays with the default settings. Click Next.
- The Minimal Selection screen displays. Click Next to retain the default selections and proceed with the installation.
- To customize your package set further, select the Customize now option and click Next. This will take you to the Customizing the Software Selection screen. Click Next to retain the default selections and proceed with the installation.
- The Package Installation screen displays. Red Hat Storage Server reports the progress on the screen as it installs the selected packages in the system.
- On successful completion, the Installation Complete screen displays.
- Click Reboot to reboot the system and complete the installation of Red Hat Storage Server. Ensure that you remove any installation media if it is not automatically ejected upon reboot. Congratulations! Your Red Hat Storage Server installation is now complete.
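After the reboot, a quick sanity check (a sketch, not a required step) confirms that the Red Hat Storage packages are installed and the management daemon is running:
# rpm -q redhat-storage-server
# gluster --version
# service glusterd status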
4.2. Installing from a PXE Server
To boot your computer using a PXE server, you need a properly configured server and a network interface in your computer that supports PXE.
Configure the computer to boot from the network interface. This option is in the BIOS, and may be labeled Network Boot or Boot Services. Once you properly configure PXE booting, the computer can boot the Red Hat Storage Server installation system without any other media.
To boot a computer from a PXE server:
- Ensure that the network cable is attached. The link indicator light on the network socket should be lit, even if the computer is not switched on.
- Switch on the computer.
- A menu screen appears. Press the number key that corresponds to the preferred option.
If your computer does not boot from the netboot server, ensure that the BIOS is configured so that the computer boots first from the correct network interface. Some BIOS systems specify the network interface as a possible boot device, but do not support the PXE standard. See your hardware documentation for more information.
Important
Check the Security-Enhanced Linux (SELinux) status on the Red Hat Storage Server after installation. You must ensure that SELinux is disabled if it is found to be enforcing or permissive. For more information on enabling and disabling SELinux, see https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Enabling_and_Disabling_SELinux-Disabling_SELinux.html.
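For example, the SELinux state can be checked and, if necessary, disabled with the following sketch; setting SELINUX=disabled in /etc/selinux/config only takes effect after a reboot.
# getenforce
Enforcing
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# reboot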
4.3. Installing from Red Hat Satellite Server
Ensure that the firewall settings are configured so that the required ports are open. For a list of port numbers, see Section 3.3, “Port Information”.
Creating the Activation Key
For more information on how to create an activation key, see Activation Keys in the Red Hat Network Satellite Reference Guide.
- In the Details tab of the Activation Keys screen, select RHEL EUS Server (v.6.4.z for 64-bit x86_64) from the Base Channels drop-down list.
- In the Child Channels tab of the Activation Keys screen, select the following child channels:
RHEL EUS Server Scalable File System (v.6.4.z for x86_64)
RHN Tools for RHEL EUS (v.6.4.z for 64-bit x86_64)
Red Hat Storage Server 2.1 (RHEL 6.4.z for x86_64)
- In the Packages tab of the Activation Keys screen, enter the following package name:
redhat-storage-server
Creating the Kickstart Profile
For more information on creating a kickstart profile, see Kickstart in the Red Hat Network Satellite Reference Guide.
- When creating a kickstart profile, the following Base Channel and Tree must be selected:
Base Channel: Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64)
Tree: ks-rhel-x86_64-server-6-6.4
- Do not associate any child channels with the kickstart profile.
- Associate the previously created activation key with the kickstart profile.
Important
- By default, the kickstart profile chooses md5 as the hash algorithm for user passwords. You must change this algorithm to sha512 by providing the following settings in the auth field of the Kickstart Details, Advanced Options page of the kickstart profile:
--enableshadow --passalgo=sha512
- After creating the kickstart profile, you must change the root password in the Kickstart Details, Advanced Options page of the kickstart profile and add a root password based on the prepared sha512 hash algorithm.
Installing Red Hat Storage Server using the Kickstart Profile
For more information on installing Red Hat Storage Server using a kickstart profile, see Kickstart in Red Hat Network Satellite Reference Guide.
Chapter 5. Registration to the Red Hat Network (RHN)
After you have successfully installed Red Hat Storage, you must register the server to the Red Hat Network and subscribe to the required software channels.
To subscribe to the Red Hat Storage channels using RHN Classic:
- Run the rhn_register command to register the system with Red Hat Network. To complete registration successfully, you will need to supply your Red Hat Network username and password. Follow the on-screen prompts to complete registration of the system.
# rhn_register
- Subscribe to the required channels. You must subscribe the system to the required channels using either the rhn-channel command from the command line or the web interface to the Red Hat Network.
- Using the rhn-channel command:
Run the rhn-channel command to subscribe the system to each of the required channels. The commands that need to be run are:
# rhn-channel --add --channel=rhel-x86_64-server-6-rhs-2.1
# rhn-channel --add --channel=rhel-x86_64-server-sfs-6.4.z
Run the following command to ensure the system is registered successfully.
# rhn-channel -l
rhel-x86_64-server-6-rhs-2.1
rhel-x86_64-server-6.4.z
rhel-x86_64-server-sfs-6.4.z
- Using the web interface to the Red Hat Network. To add a channel subscription to a system from the web interface:
- Log in to the Red Hat Network (http://rhn.redhat.com).
- Move the mouse pointer over the Subscriptions link at the top of the screen, and then click the Registered Systems link in the menu that appears.
- Select the system to which you are adding channels from the list presented on the screen by clicking the name of the system.
- Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
- Select the channels to be added from the list presented on the screen. Red Hat Storage requires: RHEL EUS Server Scalable File System (v. 6.4.z for x86_64).
- On the same page, expand the node for Additional Services Channels for Red Hat Enterprise Linux 6.4 for x86_64 and select Red Hat Storage Server 2.1 (RHEL 6.4.z for x86_64).
- Click the Change Subscription button to finalize the change. After the page refreshes, select the Details tab to verify that your system is subscribed to the appropriate channels.
The system is now registered with the Red Hat Network and subscribed to the channels required for Red Hat Storage installation.
Chapter 6. Upgrading Red Hat Storage
- 6.1. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using an ISO
- 6.2. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using yum
- 6.3. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 Using Red Hat Satellite Server
- 6.4. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 in a Red Hat Enterprise Virtualization-Red Hat Storage Environment
This chapter describes the procedure to upgrade to Red Hat Storage 2.1 from Red Hat Storage 2.0.
6.1. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using an ISO
This method re-images the storage server while keeping the data intact: the configuration files are backed up before the re-image and restored afterwards. Because it is quite invasive, use it only if a local yum repository or an Internet connection to access Red Hat Network is not available.
The preferred method to upgrade is using the yum command. For more information, refer to Section 6.2, “Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using yum”.
Note
- Ensure that you perform the steps listed in this section on all the servers.
- In the case of a geo-replication set-up, perform the steps listed in this section on all the master and slave servers.
- You cannot access data during the upgrade process, so downtime should be scheduled with applications, clients, and other end users.
- Get the volume information and peer status using the following commands:
# gluster volume info
The command displays the volume information similar to the following:
Volume Name: volname
Type: Distributed-Replicate
Volume ID: d6274441-65bc-49f4-a705-fc180c96a072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/rhs/brick1/brick1
Brick2: server2:/rhs/brick1/brick2
Brick3: server3:/rhs/brick1/brick3
Brick4: server4:/rhs/brick1/brick4
Options Reconfigured:
geo-replication.indexing: on
# gluster peer status
The command displays the peer status information similar to the following:
Number of Peers: 3
Hostname: server2
Port: 24007
Uuid: 2dde2c42-1616-4109-b782-dd37185702d8
State: Peer in Cluster (Connected)
Hostname: server3
Port: 24007
Uuid: 4224e2ac-8f72-4ef2-a01d-09ff46fb9414
State: Peer in Cluster (Connected)
Hostname: server4
Port: 24007
Uuid: 10ae22d5-761c-4b2e-ad0c-7e6bd3f919dc
State: Peer in Cluster (Connected)
Note
Make a note of this information to compare with the output after upgrading. - In case of a geo-replication set-up, stop the geo-replication session using the following command:
# gluster volume geo-replication master_volname slave_node::slave_volname stop
- In case of an object store set-up, turn off object store using the following commands:
# service gluster-swift-proxy stop
# service gluster-swift-account stop
# service gluster-swift-container stop
# service gluster-swift-object stop
- Stop all the gluster volumes using the following command:
# gluster volume stop volname
- Stop the glusterd services on all the nodes using the following command:
# service glusterd stop
- If there are any gluster processes still running, terminate them using the kill command.
- Ensure all gluster processes are stopped using the following command:
# pgrep gluster
- Back up the following configuration directories and files to a backup directory: /var/lib/glusterd, /etc/swift, /etc/samba, /etc/ctdb, /etc/glusterfs. Ensure that the backup directory is not on the operating system partition.
# cp -a /var/lib/glusterd /backup-disk/
# cp -a /etc/swift /backup-disk/
# cp -a /etc/samba /backup-disk/
# cp -a /etc/ctdb /backup-disk/
# cp -a /etc/glusterfs /backup-disk/
Also, back up any other files or configuration files that you might require to restore later. You can create a backup of everything in /etc/.
- Locate and unmount the data disk partition that contains the bricks using the following commands:
# mount | grep backend-disk
# umount /dev/device
For example, use the gluster volume info command to display the backend-disk information:
Volume Name: volname
Type: Distributed-Replicate
Volume ID: d6274441-65bc-49f4-a705-fc180c96a072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/rhs/brick1/brick1
Brick2: server2:/rhs/brick1/brick2
Brick3: server3:/rhs/brick1/brick3
Brick4: server4:/rhs/brick1/brick4
Options Reconfigured:
geo-replication.indexing: on
In the above example, the backend-disk is mounted at /rhs/brick1:
# findmnt /rhs/brick1
TARGET SOURCE FSTYPE OPTIONS
/rhs/brick1 /dev/mapper/glustervg-brick1 xfs rw,relatime,attr2,delaylog,no
# umount /rhs/brick1
- Insert the DVD with Red Hat Storage 2.1 ISO and reboot the machine. The installation starts automatically. You must install Red Hat Storage on the system with the same network credentials, IP address, and host name.
Warning
During installation, while creating a custom layout, ensure that you choose Create Custom Layout to proceed with installation. If you choose Replace Existing Linux System(s), it formats all disks on the system and erases existing data. Select Create Custom Layout. Click Next.
- Select the disk on which to install Red Hat Storage. Click Next. For Red Hat Storage to install successfully, you must select the same disk that contained the operating system data previously.
Warning
While selecting your disk, do not select the disks containing bricks. - After installation, ensure that the host name and IP address of the machine is the same as before.
Warning
If the IP address and host name are not the same as before, you will not be able to access the data present in your earlier environment. - After installation, the system automatically starts
glusterd. Stop the gluster service using the following command:# service glusterd stop Stopping glusterd: [OK]
- Add entries to /etc/fstab to mount data disks at the same path as before.
Note
Ensure that the mount points exist in your trusted storage pool environment. - Mount all data disks using the following command:
# mount -a
- Back up the latest glusterd using the following command:
# cp -a /var/lib/glusterd /var/lib/glusterd-backup
- Copy /var/lib/glusterd and /etc/glusterfs from your backup disk to the OS disk.
# cp -a /backup-disk/glusterd/* /var/lib/glusterd
# cp -a /backup-disk/glusterfs/* /etc/glusterfs
Note
Do not restore the swift, samba and ctdb configuration files from the backup disk. However, any changes in swift, samba, and ctdb must be applied separately in the new configuration files from the backup taken earlier. - Copy back the latest hooks scripts to
/var/lib/glusterd/hooks.# cp -a /var/lib/glusterd-backup/hooks /var/lib/glusterd
- Ensure you restore any other files from the backup that was created earlier.
- You must restart the glusterd management daemon using the following commands:
# glusterd --xlator-option *.upgrade=yes -N
# service glusterd start
Starting glusterd: [OK]
- Start the volume using the following command:
# gluster volume start volname force volume start: volname : success
Note
Repeat the above steps on all the servers in your trusted storage pool environment. - In case you have a pure replica volume (1*n) where n is the replica count, perform the following additional steps:
- Run the fix-layout command on the volume using the following command:
# gluster volume rebalance volname fix-layout start
- Wait for the fix-layout command to complete. You can check the status for completion using the following command:
# gluster volume rebalance volname status
- Stop the volume using the following command:
# gluster volume stop volname
- Force start the volume using the following command:
# gluster volume start volname force
- In case of an Object Store set-up, any configuration files that were edited should be renamed to end with a .rpmsave file extension, and other unedited files should be removed.
- Re-configure the Object Store. For information on configuring Object Store, refer to Section 18.5 in Chapter 18, Managing Object Store, of the Red Hat Storage Administration Guide.
- Get the volume information and peer status of the created volume using the following commands:
# gluster volume info # gluster peer status
Ensure that the output of these commands has the same values that they had before you started the upgrade.
Note
In Red Hat Storage 2.1, the gluster peer status output does not display the port number.
- Verify the upgrade.
- If all servers in the trusted storage pool are not upgraded, the gluster peer status command displays the peers as disconnected or rejected. The command displays the peer status information similar to the following:
# gluster peer status
Number of Peers: 3
Hostname: server2
Uuid: 2dde2c42-1616-4109-b782-dd37185702d8
State: Peer Rejected (Connected)
Hostname: server3
Uuid: 4224e2ac-8f72-4ef2-a01d-09ff46fb9414
State: Peer in Cluster (Connected)
Hostname: server4
Uuid: 10ae22d5-761c-4b2e-ad0c-7e6bd3f919dc
State: Peer Rejected (Disconnected)
- If all systems in the trusted storage pool are upgraded, the gluster peer status command displays peers as connected. The command displays the peer status information similar to the following:
# gluster peer status
Number of Peers: 3
Hostname: server2
Uuid: 2dde2c42-1616-4109-b782-dd37185702d8
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 4224e2ac-8f72-4ef2-a01d-09ff46fb9414
State: Peer in Cluster (Connected)
Hostname: server4
Uuid: 10ae22d5-761c-4b2e-ad0c-7e6bd3f919dc
State: Peer in Cluster (Connected)
- If all the volumes in the trusted storage pool are started, the gluster volume info command displays the volume status as started.
Volume Name: volname
Type: Distributed-Replicate
Volume ID: d6274441-65bc-49f4-a705-fc180c96a072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/rhs/brick1/brick1
Brick2: server2:/rhs/brick1/brick2
Brick3: server3:/rhs/brick1/brick3
Brick4: server4:/rhs/brick1/brick4
Options Reconfigured:
geo-replication.indexing: on
- If you have a geo-replication setup, re-establish the geo-replication session between the master and slave using the following steps:
- Run the following commands on any one of the master nodes:
# cd /usr/share/glusterfs/scripts/
# sh generate-gfid-file.sh localhost:${master-vol} $PWD/get-gfid.sh /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt
# scp /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt root@${slavehost}:/tmp/
- Run the following commands on a slave node:
# cd /usr/share/glusterfs/scripts/
# sh slave-upgrade.sh localhost:${slave-vol} /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt $PWD/gsync-sync-gfid
Note
If the SSH connection for your setup requires a password, you will be prompted for a password for all machines where the bricks are residing.
- Re-create and start the geo-replication sessions. For information on creating and starting geo-replication sessions, refer to Managing Geo-replication in the Red Hat Storage Administration Guide.
Note
It is recommended to add the child channel of Red Hat Enterprise Linux 6 containing the native client, so that you can refresh the clients and get access to all the new features in Red Hat Storage 2.1. For more information, refer to the Upgrading Native Client section in the Red Hat Storage Administration Guide.
- Remount the volume on the clients and verify data consistency. If the gluster volume information and gluster peer status information match the information collected before the upgrade, you have successfully upgraded your environment to Red Hat Storage 2.1.
6.2. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using yum
Pre-Upgrade Steps:
- Unmount the clients using the following command:
umount mount-point
- Stop the volumes using the following command:
gluster volume stop volname
- Unmount the data partition(s) on the servers using the following command:
umount mount-point
- To verify if the volume status is stopped, use the following command:
# gluster volume info
If there is more than one volume, stop all of the volumes. - Stop the
glusterdservices on all the servers using the following command:# service glusterd stop
yum Upgrade Steps:
Important
- You can upgrade to Red Hat Storage 2.1 only from Red Hat Storage 2.0 Update 6 release. If your current version is lower than Update 6, then upgrade it to Update 6 before upgrading to Red Hat Storage 2.1.
- Upgrade the servers before upgrading the clients.
- If the current version is lower than Update 6, first upgrade to Update 6 using the following command:
# yum update
- Change the base channel and the child channels using the following command:
# cd /usr/lib/glusterfs/.unsupported # python rhs_upgrade.py --username=username --password=password --rhs-version=2.1
Note
If you receive the following error while adding subscriptions, remove the old EUS entitlements before running rhs_upgrade.py.
Error Message: Insufficient Software Channel Entitlements: cfid9170 Red Hat Enterprise Linux Server (v. 6) Extended Update Support note: One of the above Software Channel Entitlement(s) are required to provides access to: cid17799 RHEL EUS Server (v. 6.4.z for 64-bit x86_64) Error Class Code: 70
To remove the old EUS entitlements, log on to http://rhn.redhat.com, locate your system, and delete it. You can also use alternate methods to change the channels, as mentioned in Chapter 6, Upgrading Red Hat Storage.
- To upgrade the server from Red Hat Storage 2.0 to 2.1, use the following command:
# yum update
The server is now upgraded from Red Hat Enterprise Linux 6.2 to Red Hat Enterprise Linux 6.4.
Note
It is recommended to add the child channel of Red Hat Enterprise Linux 6 that contains the native client to refresh the clients and access the new features in Red Hat Storage 2.1. For more information, refer to Installing Native Client in the Red Hat Storage Administration Guide.
- Reboot the servers. This is required as the kernel is updated to the latest version.
- After the yum update, start the glusterd service on all machines using the following command:
# service glusterd start
- Start the volume on one of the servers, using the following command:
# gluster volume start volname force
- In case you have a pure replica volume (1*n) where n is the replica count and is less than or equal to 3, perform the following additional steps:
- Run the fix-layout command on the volume using the following command:
# gluster volume rebalance volname fix-layout start
- Wait for the fix-layout command to complete. You can check the status for completion using the following command:
# gluster volume rebalance volname status
- Stop the volume using the following command:
# gluster volume stop volname
- Force start the volume using the following command:
# gluster volume start volname force
- If you have a geo-replication setup, then re-establish the geo-replication session between the master and slave using the following steps:
- Run the following commands on any one of the master nodes:
# cd /usr/share/glusterfs/scripts/
# sh generate-gfid-file.sh localhost:${master-vol} $PWD/get-gfid.sh /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt
# scp /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt root@${slavehost}:/tmp/
- Run the following commands on a slave node:
# cd /usr/share/glusterfs/scripts/
# sh slave-upgrade.sh localhost:${slave-vol} /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt $PWD/gsync-sync-gfid
Note
If the SSH connection for your setup requires a password, you will be prompted for a password for all machines where the bricks are residing.
- Re-create and start the geo-replication sessions. For information on creating and starting geo-replication sessions, refer to Managing Geo-replication in the Red Hat Storage Administration Guide.
Note
It is recommended to add the child channel of Red Hat Enterprise Linux 6 that contains the native client, so that you can refresh the clients and get access to all the new features in Red Hat Storage 2.1. For more information, refer to Upgrading Native Client in the Red Hat Storage Administration Guide.
6.3. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 Using Red Hat Satellite Server
- Unmount all the clients using the following command:
umount mount-name
- Stop the volumes using the following command:
# gluster volume stop volname
- Unmount the data partition(s) on the servers using the following command:
umount mount-point
- Ensure that the Red Hat Storage 2.0 server is updated to Red Hat Enterprise Linux 6.2.z and Red Hat Storage 2.0 Update 6, by running the following command:
# yum update
You must register the system to the Red Hat Storage 2.0 channels in Red Hat Network.
- Change into the /usr/share/rhn/ directory and get the SSL certificate using the following command:
# wget https://Red Hat Satellite server URL/pub/RHN-ORG-TRUSTED-SSL-CERT
- Edit the /etc/sysconfig/rhn/up2date file with the following information:
serverURL=https://Red Hat Satellite server URL/XMLRPC
noSSLServerURL=http://Red Hat Satellite server URL/XMLRPC
sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT
- Create an Activation Key at the Red Hat Satellite Server, and associate it with the following channels. For more information, refer to Section 4.3, “Installing from Red Hat Satellite Server”
Base Channel: RHEL EUS Server (v.6.4.z for 64-bit x86_64)
Child channels:
RHEL EUS Server Scalable File System (v.6.4.z for x86_64)
Red Hat Storage Server 2.1 (RHEL 6.4.z for x86_64)
- On the updated Red Hat Storage 2.0 Update 6 server, run the following command:
# rhnreg_ks --username username --password password --force --activationkey Activation Key ID
This uses the prepared Activation Key and registers the system to Red Hat Storage 2.1 channels on the Red Hat Satellite Server. - Verify if the channel subscriptions have changed to the following:
# rhn-channel -l rhel-x86_64-server-6-rhs-2.1 rhel-x86_64-server-6.4.z rhel-x86_64-server-sfs-6.4.z
- Run the following command to upgrade to Red Hat Enterprise Linux 6.4 and Red Hat Storage 2.1.
# yum update
- Reboot, and run volume and data integrity checks.
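As a final sanity check after the reboot (a sketch; package versions will differ in your environment), verify that the system is subscribed to the Red Hat Storage 2.1 channels and running the updated packages before starting the volumes:
# rhn-channel -l
# rpm -q redhat-storage-server glusterfs-server
# gluster volume info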
6.4. Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 in a Red Hat Enterprise Virtualization-Red Hat Storage Environment
This section describes the upgrade methods for a Red Hat Storage and Red Hat Enterprise Virtualization integrated environment. You can upgrade Red Hat Storage 2.0 to Red Hat Storage 2.1 using an ISO or
yum.
6.4.1. Upgrading using an ISO
- Using Red Hat Enterprise Virtualization Manager, stop all the virtual machine instances. The Red Hat Storage volume backing the instances will be stopped during the upgrade.
Note
Ensure you stop the volume, as rolling upgrade is not supported in Red Hat Storage. - Using Red Hat Enterprise Virtualization Manager, move the data domain of the data center to Maintenance mode.
- Using Red Hat Enterprise Virtualization Manager, stop the volume (the volume used for data domain) containing Red Hat Storage nodes in the data center.
- Using Red Hat Enterprise Virtualization Manager, move all Red Hat Storage nodes to Maintenance mode.
- Perform the ISO Upgrade as mentioned in Section 6.1, “Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using an ISO ”.
- Re-install the Red Hat Storage nodes from Red Hat Enterprise Virtualization Manager.
Note
- Re-installation for the Red Hat Storage nodes should be done from Red Hat Enterprise Virtualization Manager. The newly upgraded Red Hat Storage 2.1 nodes lose their network configuration and other configurations, such as iptables configuration, done earlier while adding the nodes to Red Hat Enterprise Virtualization Manager. Re-install the Red Hat Storage nodes to have the bootstrapping done.
- You can re-configure the Red Hat Storage nodes using the option provided under Action Items, as shown in Figure 6.4, “Red Hat Storage Node before Upgrade ”, and perform bootstrapping.
- Perform the steps above in all Red Hat Storage nodes.
- Start the volume once all the nodes are shown in Up status in Red Hat Enterprise Virtualization Manager.
- Upgrade the native client bits for Red Hat Enterprise Linux 6.4, if Red Hat Enterprise Linux 6.4 is used as hypervisor.
Note
If Red Hat Enterprise Virtualization Hypervisor is used as hypervisor, then install the suitable build of Red Hat Enterprise Virtualization Hypervisor containing the latest native client. - Using Red Hat Enterprise Virtualization Manager, activate the data domain and start all the virtual machine instances in the data center.
6.4.2. Upgrading using yum
- Using Red Hat Enterprise Virtualization Manager, stop all virtual machine instances in the data center.
- Using Red Hat Enterprise Virtualization Manager, move the data domain backed by gluster volume to Maintenance mode.
- Using Red Hat Enterprise Virtualization Manager, move all Red Hat Storage nodes to Maintenance mode.
- Perform
yumupdate as mentioned in Section 6.2, “Upgrading from Red Hat Storage 2.0 to Red Hat Storage 2.1 using yum”. - Once the Red Hat Storage nodes are rebooted and up, them using Red Hat Enterprise Virtualization Manager.
Note
Re-installation of Red Hat Storage nodes is required, as the network configurations and bootstrapping configurations done prior to upgrade are preserved, unlike ISO upgrade. - Using Red Hat Enterprise Virtualization Manager, start the volume.
- Upgrade the native client on Red Hat Enterprise Linux 6.4, in case Red Hat Enterprise Linux 6.4 is used as hypervisor.
Note
If Red Hat Enterprise Virtualization Hypervisor is used as hypervisor, reinstall Red Hat Enterprise Virtualization Hypervisor containing the latest version of Red Hat Storage native client. - Activate the data domain and start all the virtual machine instances.
Chapter 7. Setting up Software Updates
Red Hat strongly recommends you update your Red Hat Storage software regularly with the latest security patches and upgrades. Associate your system with a content server to update existing content or to install new content. This ensures that your system is up-to-date with security updates and upgrades.
To keep your Red Hat Storage system up-to-date, associate the system with the RHN or your locally-managed content service. This ensures your system automatically stays up-to-date with security patches and bug fixes.
Note
Asynchronous errata update releases of Red Hat Storage include all fixes that were released asynchronously since the last release as a cumulative update.
If you have a distributed volume, you should opt for an offline upgrade. Red Hat Storage supports in service software upgrade from Red Hat Storage 2.1 only for replicate and distributed-replicate volumes. For more information about in service software upgrade, see Section 7.2, “In Service Software Upgrade from Red Hat Storage 2.1”.
Important
Offline upgrade results in downtime because the volumes are offline during the upgrade.
To update Red Hat Storage in the offline mode, execute the following steps:
- Stop all applications that access the gluster volume, and unmount the volume using the following command:
# umount <mount-point>
- Stop all the volumes in the trusted storage pool using the following command:
# gluster volume stop <volname>
- Perform upgrade on all the Red Hat Storage nodes in the trusted storage pool using the following command:
# yum update
- Reboot the Red Hat Storage nodes.
- Start all the volumes in the trusted storage pool using the following command:
# gluster volume start <volname>
- Upgrade the native client on the client side using the following command:
# yum update
- Mount the volume.
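If the trusted storage pool hosts several volumes, the stop and start steps above can be looped over all of them. This is a sketch that assumes the gluster volume list command and the --mode=script option (which suppresses the interactive confirmation prompt) are available in your glusterfs version:
# for vol in $(gluster volume list); do gluster --mode=script volume stop $vol; done
After the yum update and reboot on all nodes, start the volumes again:
# for vol in $(gluster volume list); do gluster volume start $vol; done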
In service software upgrade refers to the ability to progressively update a Red Hat Storage Server cluster with a new version of the software without taking the volumes hosted on the cluster offline. In most cases, normal I/O operations on the volume can continue while the cluster is being updated. This method of updating the storage cluster is supported only for replicated and distributed-replicated volumes.
The in service software upgrade procedure is supported only for upgrading from Red Hat Storage 2.1 to Red Hat Storage 2.1 Update 1 and subsequent updates. If you are using Red Hat Storage 2.0, upgrade to Red Hat Storage 2.1 before proceeding with the following steps.
Ensure you perform the following steps based on the set-up before proceeding with the in service software upgrade process.
The following are the upgrade requirements to upgrade from Red Hat Storage 2.1 to Red Hat Storage 2.1 Update 1 and subsequent updates:
- In service software upgrade is supported only for nodes with replicate and distributed replicate volumes.
- When quorum is enabled, make sure that bringing one node down does not violate quorum. Add dummy peers to make sure the quorum will not be violated until the completion of rolling upgrade using the following command:
# gluster peer probe DummyNodeName
Example 1: When the quorum percentage is set to the default value (>50%), for a plain replicate volume with two nodes and one brick on each machine, a dummy node that does not contain any bricks must be added to the trusted storage pool to provide high availability of the volume using the command mentioned above.
Example 2: In a three node cluster, if the quorum percentage is set to 77%, then bringing down one node would violate quorum. In this scenario, you have to add two dummy nodes to meet quorum.
- In a Red Hat Storage and Red Hat Enterprise Virtualization integrated environment, disable client-side quorum, if enabled, using the following command:
# gluster volume reset <vol-name> cluster.quorum-type
Important
Split-brain issues can occur if client-side quorum is disabled. If client-side quorum is disabled for the in service software upgrade, you must re-enable it after the upgrade process is completed, using the following command:
# gluster volume set VOLNAME cluster.quorum-type auto
- Ensure the Red Hat Storage server is registered to the required channels:
rhel-x86_64-server-6-rhs-2.1
rhel-x86_64-server-6.4.z
rhel-x86_64-server-sfs-6.4.z
To subscribe to the channels, run the following command:
# rhn-channel --add --channel=<channel>
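For example, the three channels listed above can be added in one pass and then verified; this is a sketch of the rhn-channel invocation shown earlier, and rhn-channel prompts for your Red Hat Network credentials unless they are supplied with --user and --password.
# for chan in rhel-x86_64-server-6-rhs-2.1 rhel-x86_64-server-6.4.z rhel-x86_64-server-sfs-6.4.z; do rhn-channel --add --channel=$chan; done
# rhn-channel -l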
The following lists some of the restrictions for in service software upgrade:
- Do not perform in service software upgrade when the I/O or load is high on the Red Hat Storage server.
- Do not perform any volume operations on the Red Hat Storage server
- Do not change the hardware configurations.
- Do not run mixed versions of Red Hat Storage for an extended period of time. For example, do not have a mixed environment of Red Hat Storage 2.1 and Red Hat Storage 2.1 Update 1 for a prolonged time.
- Do not combine different upgrade methods.
If you are using the tech-preview version of Quota, then execute the following steps:
- Disable quota using the following command:
# gluster volume quota volname disable
- If there are any auxiliary mounts on the Red Hat Storage server, unmount those using the following command:
# umount mount-point
To configure the repo to upgrade using ISO, execute the following steps:
Note
Upgrading Red Hat Storage using ISO can be performed only from the previous release.
- Mount the ISO image file under any directory using the following command:
mount -o loop <ISO image file> <mount-point>
For example:
mount -o loop RHSS-2.1U1-RC3-20131122.0-RHS-x86_64-DVD1.iso /mnt
- Set the repo options in a file in the following location:
/etc/yum.repos.d/<file_name.repo>
- Add the following information to the repo file:
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
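Once the repo file is in place, a quick check (a sketch) confirms that yum can see the packages on the mounted ISO before the upgrade is started; the repo id local matches the [local] section shown above.
# yum clean all
# yum --disablerepo="*" --enablerepo="local" repolist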
Before proceeding with the in service software upgrade, prepare and monitor the following processes:
- Check the peer status using the following command:
# gluster peer status
For example:
# gluster peer status
Number of Peers: 2
Hostname: 10.70.42.237
Uuid: 04320de4-dc17-48ff-9ec1-557499120a43
State: Peer in Cluster (Connected)
Hostname: 10.70.43.148
Uuid: 58fc51ab-3ad9-41e1-bb6a-3efd4591c297
State: Peer in Cluster (Connected)
- Check the volume status using the following command:
# gluster volume status
For example:
# gluster volume status
Status of volume: r2
Gluster process                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.198:/brick/r2_0          49152   Y       32259
Brick 10.70.42.237:/brick/r2_1          49152   Y       25266
Brick 10.70.43.148:/brick/r2_2          49154   Y       2857
Brick 10.70.43.198:/brick/r2_3          49153   Y       32270
NFS Server on localhost                 2049    Y       25280
Self-heal Daemon on localhost           N/A     Y       25284
NFS Server on 10.70.43.148              2049    Y       2871
Self-heal Daemon on 10.70.43.148        N/A     Y       2875
NFS Server on 10.70.43.198              2049    Y       32284
Self-heal Daemon on 10.70.43.198        N/A     Y       32288
Task Status of Volume r2
------------------------------------------------------------------------------
There are no active volume tasks
- Check the rebalance status using the following command:
# gluster volume rebalance r2 status
        Node  Rebalanced-files      size   scanned  failures  skipped     status  run time in secs
   ---------  ----------------  --------  --------  --------  -------  ---------  ----------------
10.70.43.198                 0    0Bytes        99         0        0  completed              1.00
10.70.43.148                49  196Bytes       100         0        0  completed              3.00
- Ensure that there are no pending self-heals before proceeding with in service software upgrade using the following command:
# gluster volume heal volname info
The following example shows a self-heal completion:
# gluster volume heal drvol info
Gathering list of entries to be healed on volume drvol has been successful

Brick 10.70.37.51:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.78:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.51:/rhs/brick2/dir2
Number of entries: 0

Brick 10.70.37.78:/rhs/brick2/dir2
Number of entries: 0
In service software upgrade will impact the following services. Ensure you take the required precautionary measures.
SWIFT
ReST requests that are in transit will fail during the in service software upgrade. It is therefore recommended to stop all Swift services before the in service software upgrade, using the following commands:
# service openstack-swift-proxy stop
# service openstack-swift-account stop
# service openstack-swift-container stop
# service openstack-swift-object stop
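To confirm that the Swift services are stopped before proceeding, you can query their status; a minimal check, assuming the init scripts above support the standard status action:
# service openstack-swift-proxy status
# service openstack-swift-account status
# service openstack-swift-container status
# service openstack-swift-object status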
NFS
If a volume is mounted over NFS, any new or outstanding file operations on that file system hang during the in service software upgrade until the server has been upgraded.
Samba
Ongoing I/O on Samba shares will stop, as the Samba shares will be temporarily unavailable during the in service software upgrade. Hence, it is recommended to stop the smb service before any upgrade activity using the following command:
# service smb stop
Distribute Volume
In service software upgrade is not supported for distributed volumes. If you have a distributed volume in the cluster, stop that volume using the following command:
# gluster volume stop <VOLNAME>
Start the volume after in service software upgrade is complete using the following command:
# gluster volume start <VOLNAME>
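To identify which volumes in the cluster are plain distribute volumes and therefore must be stopped as described above, you can inspect the volume type; a minimal check using the standard volume info command:
# gluster volume info
Volumes whose Type field reads Distribute (rather than Replicate or Distributed-Replicate) need to be stopped before the upgrade and started again afterwards.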
Virtual Machine Store
Virtual machine images are likely to be modified constantly. A virtual machine image listed in the output of the volume heal command does not necessarily mean that self-heal of that image is incomplete; it may simply mean that the image is being modified continuously.
Hence, if you are using a gluster volume for storing virtual machine images (Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, and Red Hat OpenStack), it is recommended to power off all virtual machine instances before the in service software upgrade.
The following steps have to be performed on each node of the replica pair:
Note
- Ensure that you go through Section 7.2.2, “ Upgrade Process with Service Impact” and are aware of all the services that are impacted during an upgrade process before proceeding with any upgrade activity.
- If you have a Geo-replication environment, then to upgrade to Red Hat Storage 2.1 update 1, see Section 7.2.4.1, “In Service Software Upgrade for a Geo-Replication Setup”.
- If you have a CTDB environment, then to upgrade to Red Hat Storage 2.1 Update 1, see Section 7.2.4.2, “In Service Software Upgrade for a CTDB Setup”.
- Back up the following configuration directories and files to a backup directory:
/var/lib/glusterd, /etc/swift, /etc/samba, /etc/ctdb, /etc/glusterfs
# cp -a /var/lib/glusterd /backup-disk/
# cp -a /etc/swift /backup-disk/
# cp -a /etc/samba /backup-disk/
# cp -a /etc/ctdb /backup-disk/
# cp -a /etc/glusterfs /backup-disk/
- Stop the gluster services on the storage server using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Update the server using the following command:
# yum update
- To ensure that the Red Hat Storage Server node is healthy after reboot and can then be joined back to the cluster, it is recommended that you disable glusterd from starting at boot, using the following command:
# chkconfig glusterd off
- Reboot the server.
- Start the glusterd service using the following command:
# service glusterd start
- To automatically start the glusterd daemon every time the system boots, run the following command:
# chkconfig glusterd on
- To verify that you have upgraded to the latest version of the Red Hat Storage server, execute the following command:
# gluster --version
- Ensure that all the bricks are online. To check the status execute the following command:
# gluster volume status
For example:
# gluster volume status
Status of volume: r2
Gluster process                                    Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.198:/brick/r2_0                     49152   Y       32259
Brick 10.70.42.237:/brick/r2_1                     49152   Y       25266
Brick 10.70.43.148:/brick/r2_2                     49154   Y       2857
Brick 10.70.43.198:/brick/r2_3                     49153   Y       32270
NFS Server on localhost                            2049    Y       25280
Self-heal Daemon on localhost                      N/A     Y       25284
NFS Server on 10.70.43.148                         2049    Y       2871
Self-heal Daemon on 10.70.43.148                   N/A     Y       2875
NFS Server on 10.70.43.198                         2049    Y       32284
Self-heal Daemon on 10.70.43.198                   N/A     Y       32288

Task Status of Volume r2
------------------------------------------------------------------------------
There are no active volume tasks
- Ensure self-heal is complete on the replica using the following command:
# gluster volume heal volname info
The following example shows self-heal completion:
# gluster volume heal drvol info
Gathering list of entries to be healed on volume drvol has been successful

Brick 10.70.37.51:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.78:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.51:/rhs/brick2/dir2
Number of entries: 0

Brick 10.70.37.78:/rhs/brick2/dir2
Number of entries: 0
- If you were previously using the tech-preview version of quota, execute the following steps:
- Mount the volume using the following command:
# mount -t glusterfs server:VOLNAME mountpoint
- Execute the following script to ensure the older extended attributes maintained by the tech-preview version of quota are cleared:
# /usr/libexec/glusterfs/quota/quota-remove-xattr.sh <mountpoint>
- Repeat the above steps on the other node of the replica pair.
Note
In case of a distributed-replicate setup, repeat the above steps on all the replica pairs.
The following sections describe the in service software upgrade steps for a geo-replication and a CTDB setup.
The following steps have to be performed on each slave and master node:
- To upgrade from Red Hat Storage 2.1 to Red Hat Storage 2.1 Update 1 or Update 2, see Section 7.2.4.1.1, “In Service Software Upgrade from Red Hat Storage 2.1 to Red Hat Storage 2.1 Update 1 or Update 2” .
- To upgrade from Red Hat Storage 2.1 Update 1 to Red Hat Storage 2.1 Update 2, see Section 7.2.4.1.2, “In Service Software Upgrade from Red Hat Storage 2.1 Update 1 to Red Hat Storage 2.1 Update 2”.
- To upgrade from Red Hat Storage 2.1 Update 2 to Red Hat Storage 2.1 Update 4, see Section 7.2.4.1.3, “Geo-replication Upgrade from Red Hat Storage 2.1 Update 2 to Red Hat Storage 2.1 Update 4 or Later”
Important
- Always upgrade the slave node before upgrading the corresponding master node. After the slave node is upgraded, the geo-replication status shows one of the nodes as faulty; this is the corresponding master node that has to be upgraded next.
- The next slave node that has to be upgraded should be the replica pair of the slave node that was upgraded earlier.
Steps to Upgrade Slave Node
The following steps have to be performed to upgrade the slave node:
- Stop the gluster services and kill all the gsync processes on the storage server using the following commands:
# ps -aef | grep gluster | grep gsync | awk '{print $2}' | xargs kill -9
# pkill glusterfsd
# pkill glusterfs
# pkill glusterd
- Remove the gsyncd configuration files using the following commands:
# rm -f /var/lib/glusterd/geo-replication/gsyncd_template.conf
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf" | xargs rm -f
- To confirm that the gsyncd.conf files are removed, run the following command:
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf"
The above command should not display any file names. If it does, remove those files as well.
- Upgrade the node using the following command:
# yum update -y
- Reboot the node.
Steps to Upgrade the Master Node
The following steps have to be performed to upgrade the master node:
- Stop the gluster services and kill all the gsync processes on the storage server using the following commands:
# ps -aef | grep gluster | grep gsync | awk '{print $2}' | xargs kill -9
# pkill glusterfsd
# pkill glusterfs
# pkill glusterd
- Remove the gsyncd configuration files using the following commands:
# rm -f /var/lib/glusterd/geo-replication/gsyncd_template.conf
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf" | xargs rm -f
- To confirm that the gsyncd.conf files are removed, run the following command:
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf"
The above command should not display any file names. If it does, remove those files as well.
- Set the stime xattr on the export brick. With the new geo-replication, the xattr mark is changed from xtime to stime. To set the new xattr, execute the following steps:
- Find the correct xattr with trusted.glusterfs.<UUID>.<UUID>.xtime as the key, using the following command:
# getfattr -d -m . </path/to/brick/> 2>/dev/null | egrep "trusted\.glusterfs\.[a-fA-F0-9\-]+\.[a-fA-F0-9\-]+\.xtime"
For example:
# getfattr -d -m . /rhs/bricks/brick2/ 2>/dev/null | egrep "trusted\.glusterfs\.[a-fA-F0-9\-]+\.[a-fA-F0-9\-]+\.xtime"
trusted.glusterfs.2976a7bd-66d7-4eb1-b836-70ec026d5049.ee836754-0e43-49c6-864d-6c63adc867c6.xtime=0sUoxjIQAGL3Y=
If the above command returns nothing, there is no xtime mark to update and you can omit the remaining xattr steps.
- Replace xtime with stime in the key and set the same value on the brick using the following command:
# setfattr -n "trusted.glusterfs.<UUID>.<UUID>.stime" -v <same_value_as_above_output> </path/to/brick>
For example:
# setfattr -n "trusted.glusterfs.2976a7bd-66d7-4eb1-b836-70ec026d5049.ee836754-0e43-49c6-864d-6c63adc867c6.stime" -v "0sUoxjIQAGL3Y=" /rhs/bricks/brick2/
- To confirm if the xattr setting is correct, run the following command:
# getfattr -d -m . /rhs/bricks/brick2/ 2>/dev/null | egrep "trusted\.glusterfs\.[a-fA-F0-9\-]+\.[a-fA-F0-9\-]+\.[x|s]time"
For example:
# getfattr -d -m . /rhs/bricks/brick2/ 2>/dev/null | egrep "trusted\.glusterfs\.[a-fA-F0-9\-]+\.[a-fA-F0-9\-]+\.[x|s]time"
trusted.glusterfs.2976a7bd-66d7-4eb1-b836-70ec026d5049.ee836754-0e43-49c6-864d-6c63adc867c6.stime=0sUoxjIQAGL3Y=
trusted.glusterfs.2976a7bd-66d7-4eb1-b836-70ec026d5049.ee836754-0e43-49c6-864d-6c63adc867c6.xtime=0sUoxjIQAGL3Y=
In the above example, there are two keys with the same value. After the stime xattr is set, the output should look similar to the above.
Note
You can use the following single command to do all of the above. The command fails if the xtime xattr is not present, in which case there is no key to update.
# setfattr -n `getfattr -d -m . <abs_path_to_brick> 2> /dev/null | egrep "trusted\.glusterfs\.[a-fA-F0-9\-]+\.[a-fA-F0-9\-]+\.xtime" | awk -F= '{print $1}' | sed 's/xtime/stime/'` -v `getfattr -d -m . <abs_path_to_brick> 2> /dev/null | egrep "trusted\.glusterfs\.[a-fA-F0-9\-]+\.[a-fA-F0-9\-]+\.xtime" | sed 's/^[^=]*=//'` <abs_path_to_brick>
- Upgrade the node using the following command:
# yum update -y
- Reboot the node using the following command:
# reboot
- After reboot remove the config file using the following command:
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf" | xargs rm -f
- On the node that has a passwordless SSH connection to the slave node (the node from which you created the geo-replication session using geo-rep create), run the following commands:
# gluster volume geo-replication <master> <slave> create force
# gluster volume geo-replication <master> <slave> start force
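After restarting the session, you can monitor it with the geo-replication status command; a minimal check, using the same placeholders as above:
# gluster volume geo-replication <master> <slave> status
As noted below, the status may report faulty/defunct while the cluster still contains nodes on different glusterfs versions.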
Note
- During the upgrade process, nodes running the older version of glusterfs show status only for the other nodes running the older version, and nodes running the new version show status only for the other nodes running the new version. They are not displayed together until the upgrade of the whole cluster is complete.
- During this upgrade process, the status of geo-replication goes into faulty/defunct with a backtrace. This is expected because there are configuration changes between the two versions of glusterfs.
The following steps have to be performed on each slave and master node:
Important
- Always upgrade the slave node before upgrading the corresponding master node. After the slave node is upgraded, the geo-replication status shows one of the nodes as faulty; this is the corresponding master node that has to be upgraded next.
- The next slave node that has to be upgraded should be the replica pair of the slave node that was upgraded earlier.
Steps to Upgrade Slave Node
The following steps have to be performed to upgrade the slave node:
- Stop the gluster services and kill all the gsync processes on the storage server using the following commands:
# ps -aef | grep gluster | grep gsync | awk '{print $2}' | xargs kill -9
# pkill glusterfsd
# pkill glusterfs
# pkill glusterd
- Remove the gsyncd configuration files using the following commands:
# rm -f /var/lib/glusterd/geo-replication/gsyncd_template.conf
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf" | xargs rm -f
- To confirm that the gsyncd.conf files are removed, run the following command:
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf"
The above command should not display any file names. If it does, remove those files as well.
- Upgrade the node using the following command:
# yum update -y
- Reboot the node.
Steps to Upgrade the Master Node
The following steps have to be performed to upgrade the master node:
- Stop the gluster services and kill all the gsync processes on the storage server using the following commands:
# ps -aef | grep gluster | grep gsync | awk '{print $2}' | xargs kill -9
# pkill glusterfsd
# pkill glusterfs
# pkill glusterd
- Remove the gsyncd configuration files using the following commands:
# rm -f /var/lib/glusterd/geo-replication/gsyncd_template.conf
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf" | xargs rm -f
- To confirm that the gsyncd.conf files are removed, run the following command:
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf"
The above command should not display any file names. If it does, remove those files as well.
- Upgrade the node using the following command:
# yum update -y
- Reboot the node using the following command:
# reboot
- After reboot, kill any stale gsync processes which are running using the following command:
# ps -aef | grep gluster | grep gsync | awk '{print $2}' | xargs kill -9
- Remove the config file using the following command:
# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd.conf" | xargs rm -f
- On the node that has a passwordless SSH connection to the slave node (the node from which you created the geo-replication session using geo-rep create), run the following commands:
# gluster volume geo-replication <master> <slave> create force
# gluster volume geo-replication <master> <slave> start force
Note
- During this upgrade process, the status of geo-replication goes into faulty/defunct with a backtrace. This is expected because there are configuration changes between the two versions of glusterfs.
The following steps must be performed to upgrade the node:
- Stop the geo-replication session before you start the upgrade process, using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- Upgrade all the master and slave nodes of the trusted storage pool using the following command:
# yum update -y
Reboot the node.
- After the upgrade, enable the hash-range-gfid option on the slave volume, using the following command:
# gluster volume set SLAVE_VOL cluster.hash-range-gfid on
- Start the geo-replication session using the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
The following steps have to be performed on each node of the replica pair:
- To ensure that the CTDB does not start automatically after a reboot run the following command:
# chkconfig ctdb off
- Stop the CTDB service on the RHS node using the following command:
# service ctdb stop
- Stop the gluster services on the storage server using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- In /etc/fstab, comment out the line containing the volume used for the CTDB service, as shown in the following example:
# HostName:/volname /gluster/lock glusterfs defaults,transport=tcp 0 0
- Update the server using the following command:
# yum update
- To ensure the glusterd service does not start automatically after reboot, execute the following command:
# chkconfig glusterd off
- Reboot the server.
- Update the META=all value with the gluster volume information (the name of the volume used for CTDB) in the following scripts (a hedged sed sketch appears after this procedure):
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
- In /etc/fstab, uncomment the line containing the volume used for the CTDB service, as shown in the following example:
HostName:/volname /gluster/lock glusterfs defaults,transport=tcp 0 0
- To automatically start the glusterd daemon every time the system boots, run the following command:
# chkconfig glusterd on
- To automatically start the ctdb daemon every time the system boots, run the following command:
# chkconfig ctdb on
- Start the glusterd service using the following command:
# service glusterd start
- Mount the CTDB volume by running the following command:
# mount -a
- Start the CTDB service using the following command:
# service ctdb start
- To verify if CTDB is running successfully, execute the following commands:
# ctdb status
# ctdb ip
# ctdb ping -n all
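The META=all update mentioned earlier in this procedure can be done with an in-place edit. A minimal sketch, assuming the volume used for CTDB is named ctdb_vol (a hypothetical name) and that the META= assignment appears at the start of a line in both hook scripts; verify the exact syntax in your copy of the scripts before running it:
# sed -i 's/^META=.*/META=ctdb_vol/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
# sed -i 's/^META=.*/META=ctdb_vol/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh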
To verify that you have upgraded to the latest version of the Red Hat Storage server, execute the following command:
# gluster --version
All the clients must be of the same version. Red Hat strongly recommends that you upgrade the servers before you upgrade the clients. For more information about installing and upgrading the native client, refer to Section 9.2, Native Client, in the Red Hat Storage Administration Guide.
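To check which version of the native client is installed on a client machine, you can run the following on the client; a minimal check:
# glusterfs --version
Compare this against the gluster --version output on the upgraded servers.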
Important
You can roll back to the pre-upgrade state of the Red Hat Storage Server if you encounter an issue during or after an in service software upgrade.
The following lists some of the restrictions for in service software rollback:
- Do not perform in service software rollback to the previous upgrade version when the I/O or load is high on the Red Hat Storage server.
- Do not perform any volume operations on the Red Hat Storage server.
- Do not change the hardware configurations.
- Do not run mixed versions of Red Hat Storage for an extended period of time. For example, do not have a mixed environment of Red Hat Storage 2.1 and Red Hat Storage 2.1 Update 1 for a prolonged time.
In service software rollback will impact the following services. Ensure you take the required precautionary measures.
SWIFT
ReST requests that are in transit will fail during the in service software rollback. It is therefore recommended to stop all Swift services before the in service software rollback, using the following commands:
# service openstack-swift-proxy stop
# service openstack-swift-account stop
# service openstack-swift-container stop
# service openstack-swift-object stop
NFS
If a volume is mounted over NFS, any new or outstanding file operations on that file system hang during the in service software rollback until the server has been rolled back.
Samba
Ongoing I/O on Samba shares will stop, as the Samba shares will be temporarily unavailable during the in service software rollback.
Distribute Volume
In service software rollback is not supported for distributed volumes. If you have a distributed volume in the cluster, stop that volume using the following command:
# gluster volume stop <VOLNAME>
Start the volume after in service software rollback is complete using the following command:
# gluster volume start <VOLNAME>
Virtual Machine Store
Virtual machine images are likely to be modified constantly. A virtual machine image listed in the output of the volume heal command does not necessarily mean that self-heal of that image is incomplete; it may simply mean that the image is being modified continuously.
Hence, if you are using a gluster volume for storing virtual machine images (Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, and Red Hat OpenStack), it is recommended to power off all virtual machine instances before the in service software rollback.
The following steps have to be performed on each node of the replica pair:
- Back up the following configuration directory to a backup directory:
/var/lib/glusterd
# cp -a /var/lib/glusterd /backup-disk/
- Stop the gluster services on the storage server using the following commands:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- Find the transaction ID of the last successful yum transaction that you want to roll back to, using the following command:
# yum history list
For example:
# yum history list
Loaded plugins: product-id, rhnplugin, security, subscription-manager
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
     2 | root <root>              | 2014-02-12 16:16 | Update         |    5 EE
     1 | System <unset>           | 2014-02-12 15:57 | Install        |  569
- To roll back to the previous yum transaction, use the transaction ID from the previous step in the following command (a sketch for inspecting a transaction before rolling back follows this procedure):
# yum history rollback <transaction_ID>
- To ensure that the Red Hat Storage Server node is healthy after reboot and can then be joined back to the cluster, it is recommended that you disable glusterd from starting at boot, using the following command:
# chkconfig glusterd off
- Reboot the server.
- Restore /var/lib/glusterd from the backup directory.
- Start the glusterd service using the following command:
# service glusterd start
- To automatically start the glusterd daemon every time the system boots, run the following command:
# chkconfig glusterd on
- To verify that you have rolled back to the required version of the Red Hat Storage server, execute the following command:
# gluster --version
- Ensure that all the bricks are online. To check the status execute the following command:
# gluster volume status
For example:
# gluster volume status
Status of volume: r2
Gluster process                                    Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.198:/brick/r2_0                     49152   Y       32259
Brick 10.70.42.237:/brick/r2_1                     49152   Y       25266
Brick 10.70.43.148:/brick/r2_2                     49154   Y       2857
Brick 10.70.43.198:/brick/r2_3                     49153   Y       32270
NFS Server on localhost                            2049    Y       25280
Self-heal Daemon on localhost                      N/A     Y       25284
NFS Server on 10.70.43.148                         2049    Y       2871
Self-heal Daemon on 10.70.43.148                   N/A     Y       2875
NFS Server on 10.70.43.198                         2049    Y       32284
Self-heal Daemon on 10.70.43.198                   N/A     Y       32288

Task Status of Volume r2
------------------------------------------------------------------------------
There are no active volume tasks
- Ensure self-heal is complete on the replica using the following command:
# gluster volume heal volname info
The following example shows self-heal completion:
# gluster volume heal drvol info
Gathering list of entries to be healed on volume drvol has been successful

Brick 10.70.37.51:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.78:/rhs/brick1/dir1
Number of entries: 0

Brick 10.70.37.51:/rhs/brick2/dir2
Number of entries: 0

Brick 10.70.37.78:/rhs/brick2/dir2
Number of entries: 0
- Repeat the above steps on the nodes on which the upgrade has failed.
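As referenced in the rollback step above, you can inspect exactly which packages a yum transaction changed before rolling it back; a minimal sketch using the transaction ID obtained from yum history list:
# yum history info <transaction_ID>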
Chapter 8. Managing the glusterd Service
After installing Red Hat Storage, the
glusterd service automatically starts on all the servers in the trusted storage pool. The service can be manually started and stopped using the glusterd service commands.
Use Red Hat Storage to dynamically change the configuration of glusterFS volumes without restarting servers or remounting volumes on clients. The glusterFS daemon
glusterd also offers elastic volume management.
Use the
gluster CLI commands to decouple logical storage volumes from physical hardware. This allows the user to grow, shrink, and migrate storage volumes without any application downtime. As storage is added to the cluster, the volumes are distributed across the cluster. This distribution ensures that the cluster is always available despite changes to the underlying hardware.
8.1. Manually Starting and Stopping glusterd
Use the following instructions to manually start and stop the
glusterd service.
- Manually start glusterd as follows:
# /etc/init.d/glusterd start
or
# service glusterd start
- Manually stop glusterd as follows:
# /etc/init.d/glusterd stop
or
# service glusterd stop
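To check whether glusterd is currently running, you can query the init script; a minimal check, assuming the init script supports the standard status action:
# service glusterd status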
Chapter 9. Using the Gluster Command Line Interface
The Gluster command line interface (CLI) simplifies configuration and management of the storage environment. The Gluster CLI is similar to the LVM (Logical Volume Manager) CLI or the ZFS CLI, but operates across multiple storage servers. The Gluster CLI can be used when volumes are mounted (active) and not mounted (inactive). Red Hat Storage automatically synchronizes volume configuration information across all servers.
Use the Gluster CLI to create new volumes, start and stop existing volumes, add bricks to volumes, remove bricks from volumes, and change translator settings. Additionally, the Gluster CLI commands can be used in automation scripts and as an API to allow integration with third-party applications.
Note
Appending
--mode=script to any CLI command ensures that the command executes without confirmation prompts.
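For example, stopping a volume normally asks for confirmation; a minimal sketch of suppressing the prompt in a script, where VOLNAME is a placeholder for an existing volume:
# gluster --mode=script volume stop VOLNAME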
Running the Gluster CLI
Run the Gluster CLI on any Red Hat Storage Server by either invoking the commands directly or running the Gluster CLI in interactive mode. The gluster command can also be used remotely via SSH (a hedged example follows the status example below).
Run commands directly as follows, after replacing COMMAND with the required command:
# gluster peer COMMAND
The following is an example using the
status command:
# gluster peer status
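As mentioned above, the gluster command can also be invoked remotely over SSH; a minimal sketch, assuming passwordless SSH access to a hypothetical server rhs-server1.example.com:
# ssh root@rhs-server1.example.com gluster peer status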
Gluster CLI Interactive Mode
Alternatively, run the Gluster CLI in interactive mode using the following command:
# gluster
If successful, the prompt changes to the following:
gluster>
When the prompt appears, execute gluster commands from the CLI prompt as follows:
gluster> COMMAND
As an example, replace the COMMAND with a command such as
status to view the status of the peer server:
- Start Gluster CLI's interactive mode:
# gluster
- Request the peer server status:
gluster> status
- The peer server status displays.
The following is another example, replacing the COMMAND with a command such as
help to view the gluster help options.
- Start Gluster CLI's interactive mode:
# gluster
- Request the help options:
gluster> help
- A list of gluster commands and options displays.
| Revision History | |
|---|---|
| Revision 2.1-46 | Thu Nov 13 2014 |
| Revision 2.1-45 | Thu Sep 18 2014 |
| Revision 2.1-42 | Fri Sep 05 2014 |
| Revision 2.1-39 | Mon Apr 14 2014 |
| Revision 2.1-29 | Tue Feb 25 2014 |
| Revision 2.1-26 | Thu Feb 13 2014 |
| Revision 2.1-25 | Tue Feb 11 2014 |
| Revision 2.1-22 | Mon Dec 09 2013 |
| Revision 2.1-21 | Wed Nov 27 2013 |
| Revision 2.1-14 | Wed Nov 13 2013 |
| Revision 2.1-12 | Thu Nov 7 2013 |
| Revision 2.1-11 | Wed Oct 23 2013 |
| Revision 2.1-6 | Mon Sept 23 2013 |
| Revision 2.1-1 | Mon Sept 16 2013 |