Red Hat Storage 2.1

Administration Guide

Configuring and Managing Red Hat Storage Server

Edition 1

Divya Muntimadugu

Red Hat Engineering Content Services

Bhavana Mohanraj

Red Hat Engineering Content Services

Anjana Suparna Sriram

Red Hat Engineering Content Services

Legal Notice

Copyright © 2013-2014 Red Hat Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.

Abstract

The Red Hat Storage Administration Guide describes the configuration and management of Red Hat Storage Server for On-Premise and Public Cloud.
Preface
1. Audience
2. License
3. Document Conventions
3.1. Typographic Conventions
3.2. Pull-quote Conventions
3.3. Notes and Warnings
4. Getting Help and Giving Feedback
4.1. Do You Need Help?
4.2. We Need Feedback
I. Introduction
1. Platform Introduction
1.1. About Red Hat Storage
1.2. About glusterFS
1.3. On-premise Installation
1.4. Public Cloud Installation
2. Red Hat Storage Architecture
2.1. Red Hat Storage Server for On-premise Architecture
2.2. Red Hat Storage Server for Public Cloud Architecture
3. Key Features
3.1. Elasticity
3.2. No Metadata with the Elastic Hashing Algorithm
3.3. Scalability
3.4. High Availability
3.5. Flexibility
3.6. No Application Rewrites
3.7. Simple Management
3.8. Modular, Stackable Design
4. Use Case Examples
4.1. Use Case 1: Using Red Hat Storage for Data Archival
4.1.1. Key Features of Red Hat Storage Server for Nearline Storage and Archival
4.2. Use Case 2: Using Red Hat Storage for High Performance Computing
4.2.1. Key Features of Red Hat Storage Server for High Performance Computing
4.3. Use Case 3: Using Red Hat Storage for Content Clouds
4.3.1. Key Features of Red Hat Storage Server for Content Clouds
5. Storage Concepts
II. Red Hat Storage Administration On-Premise
6. The glusterd Service
6.1. Starting and Stopping the glusterd service
7. Trusted Storage Pools
7.1. Adding Servers to the Trusted Storage Pool
7.2. Removing Servers from the Trusted Storage Pool
8. Red Hat Storage Volumes
8.1. Formatting and Mounting Bricks
8.2. Encrypted Disk
8.3. Creating Distributed Volumes
8.4. Creating Replicated Volumes
8.5. Creating Distributed Replicated Volumes
8.6. Creating Striped Volumes
8.7. Creating Distributed Striped Volumes
8.8. Creating Striped Replicated Volumes
8.9. Creating Distributed Striped Replicated Volumes
8.10. Starting Volumes
9. Accessing Data - Setting Up Clients
9.1. Securing Red Hat Storage Client Access
9.2. Native Client
9.2.1. Installing Native Client
9.2.2. Upgrading Native Client
9.2.3. Mounting Red Hat Storage Volumes
9.3. NFS
9.3.1. Using NFS to Mount Red Hat Storage Volumes
9.3.2. Troubleshooting NFS
9.3.3. NFS Ganesha
9.4. SMB
9.4.1. Automatic and Manual Exporting
9.4.2. Mounting Volumes using SMB
9.5. Configuring Automated IP Failover for NFS and SMB
9.5.1. Setting Up CTDB
9.5.2. Starting and Verifying your Configuration
9.6. POSIX Access Control Lists
9.6.1. Setting POSIX ACLs
9.6.2. Retrieving POSIX ACLs
9.6.3. Removing POSIX ACLs
9.6.4. Samba and ACLs
10. Managing Red Hat Storage Volumes
10.1. Configuring Volume Options
10.2. Expanding Volumes
10.3. Shrinking Volumes
10.3.1. Stopping a remove-brick Operation
10.4. Migrating Volumes
10.4.1. Replacing a Subvolume on a Distribute or Distribute-replicate Volume
10.4.2. Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume
10.4.3. Replacing an Old Brick with a New Brick on a Distribute Volume
10.5. Replacing Hosts
10.5.1. Replacing a Host Machine with a Different IP Address
10.5.2. Replacing a Host Machine with the Same IP Address
10.6. Rebalancing Volumes
10.6.1. Displaying Status of a Rebalance Operation
10.6.2. Stopping a Rebalance Operation
10.7. Stopping Volumes
10.8. Deleting Volumes
10.9. Managing Split-brain
10.9.1. Preventing Split-brain
10.9.2. Recovering from File Split-brain
10.9.3. Triggering Self-Healing on Replicated Volumes
10.10. Non Uniform File Allocation (NUFA)
11. Configuring Red Hat Storage for Enhancing Performance
11.1. Brick Configuration
11.2. Network
11.3. RAM on the Nodes
11.4. Number of Clients
11.5. Replication
11.6. Hardware RAID
12. Managing Geo-replication
12.1. About Geo-replication
12.2. Replicated Volumes vs Geo-replication
12.3. Preparing to Deploy Geo-replication
12.3.1. Exploring Geo-replication Deployment Scenarios
12.3.2. Geo-replication Deployment Overview
12.3.3. Prerequisites
12.3.4. Configuring the Environment and Creating a Geo-replication Session
12.4. Starting Geo-replication
12.4.1. Starting a Geo-replication Session
12.4.2. Verifying a Successful Geo-replication Deployment
12.4.3. Displaying Geo-replication Status Information
12.4.4. Configuring a Geo-replication Session
12.4.5. Stopping a Geo-replication Session
12.4.6. Deleting a Geo-replication Session
12.5. Starting Geo-replication on a Newly Added Brick
12.5.1. Starting Geo-replication for a New Brick on a New Node
12.5.2. Starting Geo-replication for a New Brick on an Existing Node
12.6. Disaster Recovery
12.6.1. Promoting a Slave to Master
12.6.2. Failover and Failback
12.7. Example - Setting up Cascading Geo-replication
12.8. Recommended Practices
12.9. Troubleshooting Geo-replication
12.9.1. Locating Log Files
12.9.2. Synchronization Is Not Complete
12.9.3. Issues with File Synchronization
12.9.4. Geo-replication Status is Often Faulty
12.9.5. Intermediate Master is in a Faulty State
12.9.6. Remote gsyncd Not Found
13. Managing Directory Quotas
13.1. Enabling Quotas
13.2. Setting Limits
13.3. Setting the Default Soft Limit
13.4. Displaying Quota Limit Information
13.4.1. Displaying Quota Limit Information Using the df Utility
13.5. Setting Timeout
13.6. Setting Alert Time
13.7. Removing Disk Limits
13.8. Disabling Quotas
14. Monitoring Your Red Hat Storage Workload
14.1. Running the Volume Profile Command
14.1.1. Start Profiling
14.1.2. Displaying the I/O Information
14.1.3. Stop Profiling
14.2. Running the Volume Top Command
14.2.1. Viewing Open File Descriptor Count and Maximum File Descriptor Count
14.2.2. Viewing Highest File Read Calls
14.2.3. Viewing Highest File Write Calls
14.2.4. Viewing Highest Open Calls on a Directory
14.2.5. Viewing Highest Read Calls on a Directory
14.2.6. Viewing Read Performance
14.2.7. Viewing Write Performance
14.3. Listing Volumes
14.4. Displaying Volume Information
14.5. Performing Statedump on a Volume
14.6. Displaying Volume Status
15. Managing Red Hat Storage Volume Life-Cycle Extensions
15.1. Location of Scripts
15.2. Prepackaged Scripts
III. Red Hat Storage Administration on Public Cloud
16. Launching Red Hat Storage Server for Public Cloud
16.1. Launching Red Hat Storage Instances
16.2. Verifying that Red Hat Storage Instance is Running
17. Provisioning Storage
18. Stopping and Restarting Red Hat Storage Instance
IV. Data Access with Other Interfaces
19. Managing Object Store
19.1. Architecture Overview
19.2. Components of Object Storage
19.3. Advantages of using Object Store
19.4. Limitations
19.5. Prerequisites
19.6. Configuring the Object Store
19.6.1. Configuring a Proxy Server
19.6.2. Configuring the Authentication Service
19.6.3. Configuring an Object Server
19.6.4. Configuring a Container Server
19.6.5. Configuring an Account Server
19.6.6. Configuring Swift Object and Container Constraints
19.6.7. Exporting the Red Hat Storage Volumes
19.6.8. Starting and Stopping Server
19.7. Starting the Services Automatically
19.8. Working with the Object Store
19.8.1. Creating Containers and Objects
19.8.2. Creating Subdirectory under Containers
19.8.3. Working with Swift ACLs
V. Appendices
20. Troubleshooting
20.1. Managing Red Hat Storage Logs
20.1.1. Rotating Logs
20.2. Troubleshooting File Locks
A. Revision History

Preface

This guide introduces Red Hat Storage, describes the minimum requirements, and provides step-by-step instructions to install the software and manage the storage environment.

1. Audience

The Red Hat Storage Administration Guide is intended for system and storage administrators who need to configure, administer, and manage Red Hat Storage Server. To use this guide, you must be familiar with basic Linux operating system concepts and administrative procedures, concepts of the file system, and storage concepts.
See the Red Hat Storage Installation Guide for detailed instructions regarding installing the platform.

2. License

The Red Hat Storage End User License Agreement (EULA) is available at http://www.redhat.com/licenses/rhel_rha_eula.html.

3. Document Conventions

This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.

3.1. Typographic Conventions

Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keys and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all distinguishable thanks to context.
Key combinations can be distinguished from an individual key by the plus sign that connects each part of a key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first example highlights a particular key to press. The second example highlights a key combination: a set of three keys pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog-box text; labeled buttons; check-box and radio-button labels; menu titles and submenu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above: username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
Publican is a DocBook publishing system.

3.2. Pull-quote Conventions

Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
                 struct kvm_assigned_pci_dev *assigned_dev)
{
         int r = 0;
         struct kvm_assigned_dev_kernel *match;

         mutex_lock(&kvm->lock);

         match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
                                       assigned_dev->assigned_dev_id);
         if (!match) {
                 printk(KERN_INFO "%s: device hasn't been assigned before, "
                   "so cannot be deassigned\n", __func__);
                 r = -EINVAL;
                 goto out;
         }

         kvm_deassign_device(kvm, match);

         kvm_free_assigned_device(kvm, match);

out:
         mutex_unlock(&kvm->lock);
         return r;
}

3.3. Notes and Warnings

Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.

Note

Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.

Important

Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled “Important” will not cause data loss but may cause irritation and frustration.

Warning

Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

4. Getting Help and Giving Feedback

4.1. Do You Need Help?

If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. From the Customer Portal, you can:
  • Search or browse through a knowledge base of technical support articles about Red Hat products.
  • Submit a support case to Red Hat Global Support Services (GSS).
  • Access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click the name of any mailing list to subscribe to that list or to access the list archives.

4.2. We Need Feedback

If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Storage.
When submitting a bug report, be sure to mention the manual's identifier: doc-Administration_Guide
If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.

Part I. Introduction

Chapter 1. Platform Introduction

1.1. About Red Hat Storage

Red Hat Storage is a software-only, scale-out storage solution that provides flexible and agile unstructured data storage for the enterprise.
Red Hat Storage provides new opportunities to unify data storage and infrastructure, increase performance, and improve availability and manageability in order to meet a broader set of an organization’s storage challenges and needs.
The product can be installed and managed on-premise, or in a public cloud.

1.2. About glusterFS

glusterFS aggregates various storage servers over network interconnects into one large parallel network file system. Based on a stackable user space design, it delivers exceptional performance for diverse workloads and is a key building block of Red Hat Storage.
The POSIX compatible glusterFS servers, which use the XFS file system to store data on disk, can be accessed using industry-standard access protocols including Network File System (NFS) and Server Message Block (SMB) (also known as CIFS).
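For example, a glusterFS volume can be mounted on a client either through the Native Client or over NFS. The server and volume names below are placeholders; see Chapter 9, Accessing Data - Setting Up Clients for the complete mount options and prerequisites.
    # mount -t glusterfs server1:/test-volume /mnt/glusterfs
    # mount -t nfs -o vers=3 server1:/test-volume /mnt/nfs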

1.3. On-premise Installation

Red Hat Storage Server for On-Premise allows physical storage to be utilized as a virtualized, scalable, and centrally managed pool of storage.
Red Hat Storage can be installed on commodity servers and storage hardware resulting in a powerful, massively scalable, and highly available NAS environment.
See Part II, “Red Hat Storage Administration On-Premise” for detailed information about this deployment type.

1.4. Public Cloud Installation

Red Hat Storage Server for Public Cloud packages glusterFS as an Amazon Machine Image (AMI) for deploying scalable NAS in the Amazon Web Services (AWS) public cloud. This powerful storage server provides all the features of On-Premise deployment, within a highly available, scalable, virtualized, and centrally managed pool of NAS storage hosted off-premise.
Additionally, Red Hat Storage can be deployed in the public cloud using Red Hat Storage Server for Public Cloud, for example, within the Amazon Web Services (AWS) cloud. It delivers all the features and functionality possible in a private cloud or datacenter to the public cloud by providing massively scalable and highly available NAS in the cloud.
See Part III, “Red Hat Storage Administration on Public Cloud” for detailed information about this deployment type.

Chapter 2. Red Hat Storage Architecture

This chapter provides an overview of Red Hat Storage architecture.
At the core of the Red Hat Storage design is a completely new method of architecting storage. The result is a system that has immense scalability, is highly resilient, and offers extraordinary performance.
In a scale-out system, one of the biggest challenges is keeping track of the logical and physical locations of data and metadata. Most distributed systems solve this problem by creating a metadata server to track the location of data and metadata. As traditional systems add more files, more servers, or more disks, the central metadata server becomes a performance bottleneck, as well as a central point of failure.
Unlike other traditional storage solutions, Red Hat Storage does not need a metadata server, and locates files algorithmically using an elastic hashing algorithm. This no-metadata server architecture ensures better performance, linear scalability, and reliability.

Figure 2.1. Red Hat Storage Architecture

2.1. Red Hat Storage Server for On-premise Architecture

Red Hat Storage Server for On-premise enables enterprises to treat physical storage as a virtualized, scalable, and centrally managed storage pool by using commodity storage hardware.
It supports multi-tenancy by partitioning users or groups into logical volumes on shared storage. It enables users to eliminate, decrease, or manage their dependence on high-cost, monolithic and difficult-to-deploy storage arrays.
You can add capacity in a matter of minutes across a wide variety of workloads without affecting performance. Storage can also be centrally managed across a variety of workloads, thus increasing storage efficiency.

Figure 2.2. Red Hat Storage Server for On-premise Architecture

Red Hat Storage Server for On-premise is based on glusterFS, an open source distributed file system with a modular, stackable design, and a unique no-metadata server architecture. This no-metadata server architecture ensures better performance, linear scalability, and reliability.

2.2. Red Hat Storage Server for Public Cloud Architecture

Red Hat Storage Server for Public Cloud packages glusterFS as an Amazon Machine Image (AMI) for deploying scalable network attached storage (NAS) in the AWS public cloud. This powerful storage server provides a highly available, scalable, virtualized, and centrally managed pool of storage for Amazon users. Red Hat Storage Server for Public Cloud provides highly available storage within AWS. Synchronous n-way replication across AWS Availability Zones provides high availability within an AWS Region. Asynchronous geo-replication provides continuous data replication to ensure high availability across AWS regions. The glusterFS global namespace capability aggregates disk and memory resources into a unified storage volume that is abstracted from the physical hardware.
Red Hat Storage Server for Public Cloud is the only high availability (HA) storage solution available for AWS. It simplifies the task of managing unstructured file data whether you have a few terabytes of storage or multiple petabytes. This unique HA solution is enabled by the synchronous file replication capability built into glusterFS.

Figure 2.3. Red Hat Storage Server for Public Cloud Architecture

Chapter 3. Key Features

This chapter lists the key features of Red Hat Storage.

3.1. Elasticity

Storage volumes are abstracted from the underlying hardware and can grow, shrink, or be migrated across physical systems as necessary. Storage system servers can be added or removed as needed with data rebalanced across the trusted storage pool. Data is always online and there is no application downtime. File system configuration changes are accepted at runtime and propagated throughout the trusted storage pool, allowing changes to be made dynamically for performance tuning, or as workloads fluctuate.

3.2. No Metadata with the Elastic Hashing Algorithm

Unlike other storage systems with a distributed file system, Red Hat Storage does not create, store, or use a separate metadata index. Instead, Red Hat Storage places and locates files algorithmically. All storage node servers in the trusted storage pool can locate any piece of data without looking it up in an index or querying another server. Red Hat Storage uses an elastic hashing algorithm to locate data in the storage pool, removing a common source of I/O bottlenecks and a single point of failure. Data access is fully parallelized and performance scales linearly.

3.3. Scalability

Red Hat Storage is designed to scale for both performance and capacity. Aggregating the disk, CPU, and I/O resources of large numbers of commodity servers creates a single large, high-performing storage pool. Capacity is increased by adding more disks, while performance is improved by distributing those disks across more server nodes.

3.4. High Availability

Synchronous n-way file replication ensures high data availability and local recovery. Asynchronous geo-replication ensures resilience across data centers and regions. Both synchronous n-way replication and asynchronous geo-replication are supported in private cloud, data center, public cloud, and hybrid cloud environments. Within the AWS cloud, Red Hat Storage supports n-way synchronous replication across Availability Zones and asynchronous geo-replication across AWS Regions. In fact, Red Hat Storage is the only way to ensure high availability for NAS storage within the AWS infrastructure.

3.5. Flexibility

Red Hat Storage runs in the user space, eliminating the need for complex kernel patches or dependencies. You can also reconfigure storage performance characteristics to meet changing storage needs.

3.6. No Application Rewrites

Red Hat Storage servers are POSIX compatible and use the XFS file system to store data on disk. The data can be accessed using industry-standard access protocols including NFS and SMB, so there is no need to rewrite applications when moving data to the cloud, as is the case with traditional cloud-based object storage solutions.

3.7. Simple Management

Red Hat Storage allows you to build a scale-out storage system that is highly secure within minutes. It provides a very simple, single command for storage management. It also includes performance monitoring and analysis tools like Top and Profile. Top provides visibility into workload patterns, while Profile provides performance statistics over a user-defined time period for metrics including latency and amount of data read or written.
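For example, both the Profile and Top tools are run through the same gluster command. VOLNAME, SERVER, and the brick path below are placeholders; see Chapter 14, Monitoring Your Red Hat Storage Workload for the full set of options.
    # gluster volume profile VOLNAME start
    # gluster volume profile VOLNAME info
    # gluster volume top VOLNAME read brick SERVER:/brick list-cnt 10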

3.8. Modular, Stackable Design

Users can configure and tune Red Hat Storage Servers to deliver high performance for a wide range of workloads. This stackable design allows users to combine modules as needed depending on storage requirements and workload profiles.

Chapter 4. Use Case Examples

This chapter provides use case examples that describe various deployment environments using Red Hat Storage.

Note

The use cases described in this chapter attempt to show common situations particular to a specific use of Red Hat Storage. Actual needs and requirements will vary from the use cases presented in this chapter.

4.1. Use Case 1: Using Red Hat Storage for Data Archival

Enterprises are facing an explosion of data which is being driven by various applications such as virtualization, collaboration, business intelligence, data warehousing, e-mail, ERP/CRM, and media. Data retention requirements add to the problem, as retaining multiple versions of data for prolonged periods of time requires more storage.
Using Red Hat Storage Server for data archival addresses the challenges associated with rapid archive growth, highly distributed users, and siloed storage pools. Red Hat Storage Server provides an open source, scalable network storage solution which is designed to work seamlessly with industry standard x86 servers. This provides freedom of choice to customers by allowing them to deploy cost-effective, scalable, and highly available storage, without compromising on scale or performance.
Red Hat Storage Server can be deployed in private, public, or hybrid cloud environments. It is optimized for storage intensive enterprise workloads, including high-performance computing, nearline storage, and rich media content delivery.

Figure 4.1. Use Case 1: Red Hat Storage for Data Archival and Nearline Storage

4.1.1. Key Features of Red Hat Storage Server for Nearline Storage and Archival

  • Elastic Scalability
    Storage volumes are abstracted from the hardware, allowing each volume to be managed independently. Volumes can be expanded or shrunk by adding or removing systems from the storage pool, or by adding or removing storage from individual systems in the pool. Data remains available during these changes, with no downtime or application interruption.
  • Compatibility
    Red Hat Storage Server has native POSIX compatibility, and also supports SMB, NFS, and HTTP protocols. As a result, Red Hat Storage Server is compatible with industry standard storage management and backup software.
  • High Availability
    In the event of hardware failure, automatic replication ensures high levels of data protection and resiliency. Red Hat Storage Server has self-healing capabilities, which restore data to a correct state following recovery.
  • Unified Global Namespace
    A unified global namespace aggregates disk and memory resources into a common pool, simplifying management of the storage environment and eliminating data silos. Namespaces can be expanded or shrunk dynamically, with no interruption to client access.
  • Efficient Data Access
    Red Hat Storage Server provides fast and efficient random access, enabling quick data recovery.

4.2. Use Case 2: Using Red Hat Storage for High Performance Computing

High Performance Computing (HPC) has gained popularity in recent years, especially in data-intensive industries such as financial services, energy, and life sciences. These industries continue to face several challenges as they use HPC to gain an edge in their respective industries.
The key challenges that enterprises face are scalability and performance. Red Hat Storage Server provides an open source, scalable network storage solution which is designed to work seamlessly with industry standard x86 servers.
Red Hat Storage Server is built on Red Hat Enterprise Linux, and provides freedom of choice to customers by allowing them to deploy cost-effective, scalable, and highly available storage without compromising on scale or performance. Red Hat Storage Server can be deployed in private, public, or hybrid cloud environments, and is optimized for high performance and high bandwidth computing workloads.

Figure 4.2. Use Case 2: Using Red Hat Storage for High Performance Computing

4.2.1. Key Features of Red Hat Storage Server for High Performance Computing

  • Petabyte Scalability
    Red Hat Storage Server’s fully distributed architecture and advanced file management algorithms allow it to support multi-petabyte repositories.
  • High Performance with no bottlenecks
    Red Hat Storage Server enables fast file access by spreading files evenly throughout the system without a centralized metadata server. Nodes can access storage nodes directly, and as a result, hot spots, choke points, and other I/O bottlenecks are eliminated. Data contention is reduced, and there is no single point of failure.
  • Elastic Scalability
    Storage volumes are abstracted from the hardware, allowing each volume to be managed independently. Volumes can be expanded or shrunk by adding or removing systems from the storage pool, or by adding or removing storage from individual systems in the pool. Data can be migrated within the system to rebalance capacity, or to add and remove systems without downtime, allowing HPC environments to scale seamlessly.
  • Infiniband Support
    Red Hat Storage Server supports IP over Infiniband (IPoIB). Using Infiniband as a back-end interconnect for the storage pool is recommended, as it provides additional options for maximizing performance. Using RDMA as a mount protocol for its native client is a technology preview feature.
  • Compatibility
    Red Hat Storage Server has native POSIX compatibility, and also supports SMB, NFS, and HTTP protocols. Red Hat Storage Server supports most existing applications with no code changes required.

4.3. Use Case 3: Using Red Hat Storage for Content Clouds

Advances in technology have empowered consumers and enterprise users to create and consume content at an astounding rate. The resulting deluge of digital content has created challenges for traditionally media-intensive industries, such as entertainment and internet services, as well as for the large number of non-media organizations that are seeking competitive advantage through content. Media repositories of 100 TB are becoming increasingly common, and multi-petabyte repositories are a reality for many organizations. The cost and complexity of building and running content storage systems based on traditional NAS and SAN technologies is often not feasible.
Red Hat Storage Server provides an open source, scalable network storage solution which is designed to work seamlessly with industry standard x86 servers. Red Hat Storage Server can be deployed in private, public, or hybrid cloud environments. It is optimized for storage intensive enterprise workloads, including high-performance computing, nearline storage, and rich media content delivery.

Figure 4.3. Use Case 3: Using Red Hat Storage for Content Clouds

4.3.1. Key Features of Red Hat Storage Server for Content Clouds

  • Elasticity
    Storage volumes are abstracted from the hardware, allowing each volume to be managed independently. Volumes can be expanded or shrunk by adding or removing systems from the storage pool, or by adding or removing storage from individual systems in the pool. Data can be migrated within the system to rebalance capacity, or to add and remove systems without downtime, allowing environments to scale seamlessly.
  • Petabyte Scalability
    Red Hat Storage Server’s fully distributed architecture and advanced file management algorithms allow it to support multi-petabyte repositories.
  • High Performance
    Red Hat Storage Server enables fast file access by spreading files evenly throughout the system without a centralized metadata server. Nodes can access storage nodes directly, and as a result, hot spots, choke points, and other I/O bottlenecks are eliminated. Data contention is reduced, and there is no single point of failure.
  • Compatibility
    Red Hat Storage Server has native POSIX compatibility, and also supports SMB, NFS, and HTTP protocols. Red Hat Storage Server supports most existing applications with no code changes required.
  • Reliability
    In the event of hardware failure, automatic replication ensures high levels of data protection and resiliency. Red Hat Storage Server has self-healing capabilities, which restore data to a correct state following recovery.

Chapter 5. Storage Concepts

This chapter defines common terms relating to file systems and storage used throughout the Red Hat Storage Administration Guide.
Brick
The glusterFS basic unit of storage, represented by an export directory on a server in the trusted storage pool. A brick is expressed by combining a server with an export directory in the following format:
SERVER:EXPORT
For example:
myhostname:/exports/myexportdir/
Block Storage
Block special files, or block devices, correspond to devices through which the system moves data in the form of blocks. These device nodes often represent addressable devices such as hard disks, CD-ROM drives, or memory regions. Red Hat Storage supports the XFS file system with extended attributes.
Cluster
A trusted pool of linked computers working together, resembling a single computing resource. In Red Hat Storage, a cluster is also referred to as a trusted storage pool.
Client
The machine that mounts a volume (this may also be a server).
Distributed File System
A file system that allows multiple clients to concurrently access data which is spread across servers/bricks in a trusted storage pool. Data sharing among multiple locations is fundamental to all distributed file systems.
File System
A method of storing and organizing computer files. A file system organizes files into a database for the storage, manipulation, and retrieval by the computer's operating system.
Source: Wikipedia
FUSE
Filesystem in Userspace (FUSE) is a loadable kernel module for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running the file system code in user space while the FUSE module provides only a "bridge" to the kernel interfaces.
Source: Wikipedia
Geo-Replication
Geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LAN), Wide Area Networks (WAN), and the Internet.
glusterd
glusterd is the glusterFS management daemon which must run on all servers in the trusted storage pool.
Metadata
Metadata is data providing information about other pieces of data.
N-way Replication
Local synchronous data replication that is typically deployed across campus or Amazon Web Services Availability Zones.
Namespace
An abstract container or environment that is created to hold a logical grouping of unique identifiers or symbols. Each Red Hat Storage trusted storage pool exposes a single namespace as a POSIX mount point which contains every file in the trusted storage pool.
Petabyte
A petabyte is a unit of information equal to one quadrillion bytes, or 1000 terabytes. The unit symbol for the petabyte is PB. The prefix peta- (P) indicates a power of 1000:
1 PB = 1,000,000,000,000,000 B = 1000^5 B = 10^15 B.
The term "pebibyte" (PiB), using a binary prefix, is used for the corresponding power of 1024.
Source: Wikipedia
POSIX
Portable Operating System Interface (for Unix) (POSIX) is the name of a family of related standards specified by the IEEE to define the application programming interface (API), as well as shell and utilities interfaces, for software that is compatible with variants of the UNIX operating system. Red Hat Storage exports a fully POSIX compatible file system.
RAID
Redundant Array of Independent Disks (RAID) is a technology that provides increased storage reliability through redundancy. It combines multiple low-cost, less-reliable disk drive components into a logical unit where all drives in the array are interdependent.
RRDNS
Round Robin Domain Name Service (RRDNS) is a method to distribute load across application servers. RRDNS is implemented by creating multiple A records with the same name and different IP addresses in the zone file of a DNS server.
Server
The machine (virtual or bare metal) that hosts the file system in which data is stored.
Scale-Up Storage
Increases the capacity of the storage device in a single dimension. For example, adding additional CPUs, RAM, and disk capacity to a single computer in a trusted storage pool.
Scale-Out Storage
Increases the capability of a storage device in multiple dimensions. For example, adding more systems of the same size, or adding servers to a trusted storage pool, increases the CPU, disk capacity, and throughput of the trusted storage pool.
Subvolume
A brick after being processed by at least one translator.
Translator
A translator connects to one or more subvolumes, does something with them, and offers a subvolume connection.
Trusted Storage Pool
A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of only that server.
User Space
Applications running in user space do not directly interact with hardware, instead using the kernel to moderate access. User space applications are generally more portable than applications in kernel space. glusterFS is a user space application.
Virtual File System (VFS)
VFS is a kernel software layer that handles all system calls related to the standard Linux file system. It provides a common interface to several kinds of file systems.
Volfile
Volfile is a configuration file used by the glusterFS process. Volfile is usually located at /var/lib/glusterd/vols/VOLNAME.
Volume
A volume is a logical collection of bricks. Most of the Red Hat Storage management operations happen on the volume.

Part II. Red Hat Storage Administration On-Premise

Table of Contents

6. The glusterd Service
6.1. Starting and Stopping the glusterd service
7. Trusted Storage Pools
7.1. Adding Servers to the Trusted Storage Pool
7.2. Removing Servers from the Trusted Storage Pool
8. Red Hat Storage Volumes
8.1. Formatting and Mounting Bricks
8.2. Encrypted Disk
8.3. Creating Distributed Volumes
8.4. Creating Replicated Volumes
8.5. Creating Distributed Replicated Volumes
8.6. Creating Striped Volumes
8.7. Creating Distributed Striped Volumes
8.8. Creating Striped Replicated Volumes
8.9. Creating Distributed Striped Replicated Volumes
8.10. Starting Volumes
9. Accessing Data - Setting Up Clients
9.1. Securing Red Hat Storage Client Access
9.2. Native Client
9.2.1. Installing Native Client
9.2.2. Upgrading Native Client
9.2.3. Mounting Red Hat Storage Volumes
9.3. NFS
9.3.1. Using NFS to Mount Red Hat Storage Volumes
9.3.2. Troubleshooting NFS
9.3.3. NFS Ganesha
9.4. SMB
9.4.1. Automatic and Manual Exporting
9.4.2. Mounting Volumes using SMB
9.5. Configuring Automated IP Failover for NFS and SMB
9.5.1. Setting Up CTDB
9.5.2. Starting and Verifying your Configuration
9.6. POSIX Access Control Lists
9.6.1. Setting POSIX ACLs
9.6.2. Retrieving POSIX ACLs
9.6.3. Removing POSIX ACLs
9.6.4. Samba and ACLs
10. Managing Red Hat Storage Volumes
10.1. Configuring Volume Options
10.2. Expanding Volumes
10.3. Shrinking Volumes
10.3.1. Stopping a remove-brick Operation
10.4. Migrating Volumes
10.4.1. Replacing a Subvolume on a Distribute or Distribute-replicate Volume
10.4.2. Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume
10.4.3. Replacing an Old Brick with a New Brick on a Distribute Volume
10.5. Replacing Hosts
10.5.1. Replacing a Host Machine with a Different IP Address
10.5.2. Replacing a Host Machine with the Same IP Address
10.6. Rebalancing Volumes
10.6.1. Displaying Status of a Rebalance Operation
10.6.2. Stopping a Rebalance Operation
10.7. Stopping Volumes
10.8. Deleting Volumes
10.9. Managing Split-brain
10.9.1. Preventing Split-brain
10.9.2. Recovering from File Split-brain
10.9.3. Triggering Self-Healing on Replicated Volumes
10.10. Non Uniform File Allocation (NUFA)
11. Configuring Red Hat Storage for Enhancing Performance
11.1. Brick Configuration
11.2. Network
11.3. RAM on the Nodes
11.4. Number of Clients
11.5. Replication
11.6. Hardware RAID
12. Managing Geo-replication
12.1. About Geo-replication
12.2. Replicated Volumes vs Geo-replication
12.3. Preparing to Deploy Geo-replication
12.3.1. Exploring Geo-replication Deployment Scenarios
12.3.2. Geo-replication Deployment Overview
12.3.3. Prerequisites
12.3.4. Configuring the Environment and Creating a Geo-replication Session
12.4. Starting Geo-replication
12.4.1. Starting a Geo-replication Session
12.4.2. Verifying a Successful Geo-replication Deployment
12.4.3. Displaying Geo-replication Status Information
12.4.4. Configuring a Geo-replication Session
12.4.5. Stopping a Geo-replication Session
12.4.6. Deleting a Geo-replication Session
12.5. Starting Geo-replication on a Newly Added Brick
12.5.1. Starting Geo-replication for a New Brick on a New Node
12.5.2. Starting Geo-replication for a New Brick on an Existing Node
12.6. Disaster Recovery
12.6.1. Promoting a Slave to Master
12.6.2. Failover and Failback
12.7. Example - Setting up Cascading Geo-replication
12.8. Recommended Practices
12.9. Troubleshooting Geo-replication
12.9.1. Locating Log Files
12.9.2. Synchronization Is Not Complete
12.9.3. Issues with File Synchronization
12.9.4. Geo-replication Status is Often Faulty
12.9.5. Intermediate Master is in a Faulty State
12.9.6. Remote gsyncd Not Found
13. Managing Directory Quotas
13.1. Enabling Quotas
13.2. Setting Limits
13.3. Setting the Default Soft Limit
13.4. Displaying Quota Limit Information
13.4.1. Displaying Quota Limit Information Using the df Utility
13.5. Setting Timeout
13.6. Setting Alert Time
13.7. Removing Disk Limits
13.8. Disabling Quotas
14. Monitoring Your Red Hat Storage Workload
14.1. Running the Volume Profile Command
14.1.1. Start Profiling
14.1.2. Displaying the I/O Information
14.1.3. Stop Profiling
14.2. Running the Volume Top Command
14.2.1. Viewing Open File Descriptor Count and Maximum File Descriptor Count
14.2.2. Viewing Highest File Read Calls
14.2.3. Viewing Highest File Write Calls
14.2.4. Viewing Highest Open Calls on a Directory
14.2.5. Viewing Highest Read Calls on a Directory
14.2.6. Viewing Read Performance
14.2.7. Viewing Write Performance
14.3. Listing Volumes
14.4. Displaying Volume Information
14.5. Performing Statedump on a Volume
14.6. Displaying Volume Status
15. Managing Red Hat Storage Volume Life-Cycle Extensions
15.1. Location of Scripts
15.2. Prepackaged Scripts

Chapter 6. The glusterd Service

The Red Hat Storage glusterFS daemon glusterd enables dynamic configuration changes to Red Hat Storage volumes, without needing to restart servers or remount storage volumes on clients.
Using the gluster command-line interface, logical storage volumes can be decoupled from physical hardware. Decoupling allows storage volumes to be grown, resized, and shrunk without application or server downtime.
The trusted storage pool remains available while changes are made to the underlying hardware. As storage is added to the trusted storage pool, volumes are rebalanced across the pool to accommodate the added storage capacity.
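For example, adding a brick to a running volume and rebalancing the data across the pool takes only a few commands. VOLNAME and the brick path below are placeholders; see Section 10.2, “Expanding Volumes” and Section 10.6, “Rebalancing Volumes” for the complete procedures.
    # gluster volume add-brick VOLNAME server5:/exp5
    # gluster volume rebalance VOLNAME start
    # gluster volume rebalance VOLNAME status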

6.1. Starting and Stopping the glusterd service

The glusterd service is started automatically on all servers in the trusted storage pool. The service can also be manually started and stopped as required.
  • Run the following command to start glusterd manually.
    # service glusterd start 
  • Run the following command to stop glusterd manually.
    # service glusterd stop
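On a Red Hat Enterprise Linux 6 based Red Hat Storage server, you can also check whether the daemon is running and ensure it starts automatically at boot. The following is a general RHEL 6 service-management sketch rather than a step from this guide.
    # service glusterd status
    # chkconfig glusterd on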

Chapter 7. Trusted Storage Pools

A storage pool is a network of storage servers.
When the first server starts, the storage pool consists of that server alone. Adding additional storage servers to the storage pool is achieved using the probe command from a running, trusted storage server.

7.1. Adding Servers to the Trusted Storage Pool

The gluster peer probe [server] command is used to add servers to the trusted storage pool.

Adding Three Servers to a Trusted Storage Pool

Create a trusted storage pool consisting of three storage servers, which comprise a volume.

Prerequisites

  • The glusterd service must be running on all storage servers requiring addition to the trusted storage pool. See Chapter 6, The glusterd Service for service start and stop commands.
  • Server1, the trusted storage server, is started.
  • The host names of the target servers must be resolvable by DNS.
  1. Run gluster peer probe [server] from Server 1 to add additional servers to the trusted storage pool.

    Note

    Self-probing Server1 will result in an error because it is part of the trusted storage pool by default.
    # gluster peer probe server2
    Probe successful
    
    # gluster peer probe server3
    Probe successful
    
    # gluster peer probe server4
    Probe successful
    
  2. Verify the peer status from all servers using the following command:
    # gluster peer status
    Number of Peers: 3
    
    Hostname: server2
    Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
    State: Peer in Cluster (Connected)
    
    Hostname: server3
    Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
    State: Peer in Cluster (Connected)
    
    Hostname: server4
    Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7
    State: Peer in Cluster (Connected)

7.2. Removing Servers from the Trusted Storage Pool

Run gluster peer detach server to remove a server from the storage pool.

Removing One Server from the Trusted Storage Pool

Remove one server from the Trusted Storage Pool, and check the peer status of the storage pool.

Prerequisites

  • The glusterd service must be running on the server targeted for removal from the storage pool. See Chapter 6, The glusterd Service for service start and stop commands.
  • The host names of the target servers must be resolvable by DNS.
  1. Run gluster peer detach [server] to remove the server from the trusted storage pool.
    # gluster peer detach server4
    Detach successful
  2. Verify the peer status from all servers using the following command:
    # gluster peer status
    Number of Peers: 2
    
    Hostname: server2
    Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
    State: Peer in Cluster (Connected)
    
    Hostname: server3
     Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
     State: Peer in Cluster (Connected)
    

Chapter 8. Red Hat Storage Volumes

A Red Hat Storage volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool. Most of the Red Hat Storage Server management operations are performed on the volume.

Warning

Red Hat does not support writing data directly into the bricks. Read and write data only through the Native Client, or through NFS or SMB mounts.

Note

Red Hat Storage supports IP over Infiniband (IPoIB). Install the Infiniband packages on all Red Hat Storage servers and clients to support this feature. Run the yum groupinstall "Infiniband Support" command to install the Infiniband packages.
Red Hat Storage support for RDMA over Infiniband is a technology preview feature.

Volume Types

Distributed
Distributes files across bricks in the volume.
Use this volume type where scaling and redundancy requirements are not important, or provided by other hardware or software layers.
See Section 8.3, “Creating Distributed Volumes” for additional information about this volume type.
Replicated
Replicates files across bricks in the volume.
Use this volume type in environments where high-availability and high-reliability are critical.
See Section 8.4, “Creating Replicated Volumes ” for additional information about this volume type.
Distributed Replicated
Distributes files across replicated bricks in the volume.
Use this volume type in environments where high-reliability and scalability are critical. This volume type offers improved read performance in most environments.
See Section 8.5, “Creating Distributed Replicated Volumes ” for additional information about this volume type.

Important

Striped, Striped-Replicated, Distributed-Striped, and Distributed-Striped-Replicated volume types are under technology preview. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

Striped
Stripes data across bricks in the volume.
Use this volume type only in high-concurrency environments where accessing very large files is required.
See Section 8.6, “Creating Striped Volumes” for additional information about this volume type.
Striped Replicated
Stripes data across replicated bricks in the trusted storage pool.
Use this volume type only in highly-concurrent environments, where there is parallel access to very large files, and performance is critical.
This volume type is supported for Map Reduce workloads only. See Section 8.8, “Creating Striped Replicated Volumes” for additional information about this volume type and its restrictions.
Distributed Striped
Stripes data across two or more nodes in the trusted storage pool.
Use this volume type where storage must be scalable, and in high-concurrency environments where accessing very large files is critical.
See Section 8.7, “Creating Distributed Striped Volumes ” for additional information about this volume type.
Distributed Striped Replicated
Distributes striped data across replicated bricks in the trusted storage pool.
Use this volume type only in highly-concurrent environments where performance, and parallel access to very large files is critical.
This volume type is supported for Map Reduce workloads only. See Section 8.9, “Creating Distributed Striped Replicated Volumes ” for additional information about this volume type.

8.1. Formatting and Mounting Bricks

To create a Red Hat Storage volume, specify the bricks that comprise the volume. After creating the volume, the volume must be started before it can be mounted.

Important

Red Hat supports formatting a Logical Volume using the XFS file system on the bricks. Formatting a raw disk partition with the XFS file system and using it as a brick is not supported by Red Hat.

Formatting and Mounting Bricks

Format bricks using the supported XFS configuration, mount the bricks, and verify the bricks are mounted correctly.

Important

Allocate 15% - 20% free space to take advantage of the Red Hat Storage volume snapshot feature, which is planned for a future release of Red Hat Storage.
  1. Run # mkfs.xfs -i size=512 DEVICE to format the bricks to the supported XFS file system format. The inode size is set to 512 bytes to accommodate the extended attributes used by Red Hat Storage.
  2. Run # blkid DEVICE to obtain the Universally Unique Identifier (UUID) of the device.
  3. Run # mkdir /mountpoint to create a directory to use as the brick mount point.
  4. Add an entry in /etc/fstab using the obtained UUID from the blkid command:
    UUID=uuid    /mountpoint      xfs     defaults   1  2
  5. Run # mount /mountpoint to mount the brick.
  6. Run the df -h command to verify the brick is successfully mounted:
    # df -h
    /dev/vg_bricks/lv_exp1   16G  1.2G   15G   7% /exp1
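As an illustration, the individual steps above can be combined as follows for the /dev/vg_bricks/lv_exp1 device and the /exp1 mount point used in this example; substitute your own device and mount point, and replace the UUID placeholder with the value printed by blkid.
    # mkfs.xfs -i size=512 /dev/vg_bricks/lv_exp1
    # blkid /dev/vg_bricks/lv_exp1
    # mkdir /exp1
    # echo "UUID=<uuid-from-blkid>  /exp1  xfs  defaults  1 2" >> /etc/fstab
    # mount /exp1
    # df -h /exp1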

Using Subdirectory as the Brick for Volume

You can create an XFS file system, mount it, and point to the mount point as a brick while creating a Red Hat Storage volume. If the mount point later becomes unavailable, data is written directly into the unmounted directory on the root file system.
For example, if the /exp directory is the mounted file system used as the brick and the mount point becomes unavailable, any writes continue to land in the /exp directory, but are now stored on the root file system.
To overcome this issue, perform the following procedure.
During Red Hat Storage setup, create an XFS file system and mount it. After mounting it, create a subdirectory and use this subdirectory as the brick for volume creation. Here, the XFS file system is mounted as /bricks. After the file system is available, create a directory called /bricks/bricksrv1 and use it for volume creation. This approach has the following advantages:
  • When the /bricks file system is unavailable, the brick subdirectory (for example, /bricks/bricksrv1) no longer exists, so writes fail instead of being silently redirected to a different location, and no data is misplaced.
  • This does not require any additional file system for nesting.
Perform the following to use subdirectories as bricks for creating a volume:
  1. Run the pvcreate command to initialize the partition.
    The following example initializes the partition /dev/sdb1 as an LVM (Logical Volume Manager) physical volume for later use as part of an LVM logical volume.
    # pvcreate /dev/sdb1
  2. Run the following command to create the volume group:
    # vgcreate datavg /dev/sdb1
  3. Create the logical volume lvol1 from the volume group datavg.
    # lvcreate -l <number of extents> --name lvol1 datavg
  4. Create an XFS file system on the logical volume.
    # mkfs -t xfs -i size=512 /dev/mapper/datavg-lvol1
  5. Create /bricks mount point using mkdir command.
    # mkdir /bricks
  6. Mount the XFS file system.
    # mount -t xfs /dev/datavg/lvol1 /bricks
  7. Create the bricksrv1 subdirectory in the mounted file system.
    # mkdir /bricks/bricksrv1
    Repeat the above steps on all nodes.
  8. Create the Red Hat Storage volume using the subdirectories as bricks.
    # gluster volume create distdata01 ad-rhs-srv1:/bricks/bricksrv1 ad-rhs-srv2:/bricks/bricksrv2
  9. Start the Red Hat Storage volume.
    # gluster volume start distdata01
  10. Verify the status of the volume.
    # gluster  volume status distdata01

Reusing a Brick from a Deleted Volume

Bricks can be reused from deleted volumes, however some steps are required to make the brick reusable.
Brick with a File System Suitable for Reformatting (Optimal Method)
Run # mkfs.xfs -f -i size=512 device to reformat the brick to supported requirements, and make it available for immediate reuse in a new volume.

Note

All data will be erased when the brick is reformatted.
File System on a Parent of a Brick Directory
If the file system cannot be reformatted, remove the whole brick directory and create it again.

Procedure 8.1. Cleaning An Unusable Brick

If the file system associated with the brick cannot be reformatted, and the brick directory cannot be removed, perform the following steps:
  1. Delete all previously existing data in the brick, including the .glusterfs subdirectory.
  2. Run # setfattr -x trusted.glusterfs.volume-id brick and # setfattr -x trusted.gfid brick to remove the attributes from the root of the brick.
  3. Run # getfattr -d -m . brick to examine the attributes set on the volume. Take note of the attributes.
  4. Run # setfattr -x attribute brick to remove the attributes relating to the glusterFS file system.
    The trusted.glusterfs.dht attribute on a distributed volume is one example of an attribute that needs to be removed.
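For example, assuming a hypothetical brick directory /exp1 from which all data has already been deleted, and on which trusted.glusterfs.dht is the only remaining glusterFS attribute, the sequence might look as follows:
# rm -rf /exp1/.glusterfs
# setfattr -x trusted.glusterfs.volume-id /exp1
# setfattr -x trusted.gfid /exp1
# getfattr -d -m . /exp1
# setfattr -x trusted.glusterfs.dht /exp1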

8.2. Encrypted Disk

Red Hat Storage provides the ability to create bricks on encrypted devices to restrict data access. Encrypted bricks can be used to create Red Hat Storage volumes.
For information on creating encrypted disk, refer to the Disk Encryption Appendix of the Red Hat Enterprise Linux 6 Installation Guide.

8.3. Creating Distributed Volumes

This type of volume spreads files across the bricks in the volume.
Illustration of a distributed volume consisting of two servers. Two files are shown on the server1 brick, and one file is shown on the server2 brick. The distributed volume is set to a single mount point.

Figure 8.1. Illustration of a Distributed Volume

Warning

Distributed volumes can suffer significant data loss during a disk or server failure because directory contents are spread randomly across the bricks in the volume.
Use distributed volumes where the requirement is to scale storage, and redundancy is either not important or is provided by other hardware or software layers.

Create a Distributed Volume

Use gluster volume create to create different types of volumes, and gluster volume info to verify successful volume creation.

Pre-requisites

  1. Run the gluster volume create command to create the distributed volume.
    The syntax is gluster volume create NEW-VOLNAME [transport tcp | rdma | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be passed. See Section 10.1, “Configuring Volume Options” for a full list of parameters.

    Example 8.1. Distributed Volume with Two Storage Servers

    # gluster volume create test-volume server1:/exp1 server2:/exp2 
    Creation of test-volume has been successful
    Please start the volume to access data.

    Example 8.2. Distributed Volume over InfiniBand with Four Servers

    # gluster volume create test-volume transport rdma server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    Creation of test-volume has been successful
    Please start the volume to access data.
  2. Run # gluster volume start VOLNAME to start the volume.
    # gluster volume start test-volume
    Starting test-volume has been successful
  3. Optionally, run the gluster volume info command to display the volume information.
    # gluster volume info
    Volume Name: test-volume
    Type: Distribute
    Status: Created
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: server1:/exp1
    Brick2: server2:/exp2

8.4. Creating Replicated Volumes

Important

Creating a replicated volume with rep_count > 2 is under technology preview. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use.
Technology Preview features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.
As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
This type of volume creates copies of files across multiple bricks in the volume. Use replicated volumes in environments where high-availability and high-reliability are critical.

Note

The number of bricks should be equal to the replica count for a replicated volume. To protect against server and disk failures, it is recommended that the bricks of the volume are from different servers.
Illustration of a Replicated Volume

Figure 8.2. Illustration of a Replicated Volume

Create a Replicated Volume

Use gluster volume create to create different types of volumes, and gluster volume info to verify successful volume creation.

Pre-requisites

  1. Run the gluster volume create command to create the replicated volume.
    The syntax is # gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be passed. See Section 10.1, “Configuring Volume Options” for a full list of parameters.

    Example 8.3. Replicated Volume with Two Storage Servers

    The order in which bricks are specified determines how bricks are mirrored with each other: the first n bricks, where n is the replica count, form a replica set. In this scenario, the first two bricks specified mirror each other. If more bricks were specified, the next two bricks in sequence would mirror each other.
    # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
    Creation of test-volume has been successful
    Please start the volume to access data.
  2. Run # gluster volume start VOLNAME to start the volume.
    # gluster volume start test-volume
    Starting test-volume has been successful
  3. Optionally, run the gluster volume info command to display the volume information.
    # gluster volume info
    Volume Name: test-volume
    Type: Replicate
    Status: Created
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: server1:/exp1
    Brick2: server2:/exp2

8.5. Creating Distributed Replicated Volumes

Important

Creating distributed-replicated volume with rep_count > 2 is under technology preview. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
Use distributed replicated volumes in environments where both scalable storage and high reliability are critical. Distributed replicated volumes also offer improved read performance in most environments.

Note

The number of bricks should be a multiple of the replica count for a distributed replicated volume. Also, the order in which bricks are specified has a great effect on data protection. Each set of replica_count consecutive bricks in the list forms a replica set, and all replica sets are combined into a distribute set. To ensure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on, as shown in the example below.
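As a hedged illustration of the recommended listing order (hypothetical server and brick names), the following command creates a distributed replicated volume in which each replica pair spans both servers:
# gluster volume create test-volume replica 2 server1:/brick1 server2:/brick1 server1:/brick2 server2:/brick2
Here server1:/brick1 and server2:/brick1 form the first replica set, and server1:/brick2 and server2:/brick2 form the second; losing either server still leaves one copy of every file available.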
Illustration of a Distributed Replicated Volume

Figure 8.3. Illustration of a Distributed Replicated Volume

Create a Distributed Replicated Volume

Use gluster volume create to create a distributed replicated volume, and gluster volume info to verify successful volume creation.

Pre-requisites

  1. Run the gluster volume create command to create the distributed replicated volume.
    The syntax is # gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be passed. See Section 10.1, “Configuring Volume Options” for a full list of parameters.

    Example 8.4. Four Node Distributed Replicated Volume with a Two-way Mirror

    The order in which bricks are specified determines how bricks are mirrored with each other: the first n bricks, where n is the replica COUNT, form a replica set. In this scenario, the first two bricks specified mirror each other. If more bricks were specified, the next two bricks in sequence would mirror each other.
    # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    Creation of test-volume has been successful
    Please start the volume to access data.

    Example 8.5. Six Node Distributed Replicated Volume with a Two-way Mirror

    The order in which bricks are specified determines how bricks are mirrored with each other: the first n bricks, where n is the replica COUNT, form a replica set. In this scenario, the first two bricks specified mirror each other. If more bricks were specified, the next two bricks in sequence would mirror each other.
    # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
    Creation of test-volume has been successful
    Please start the volume to access data.
  2. Run # gluster volume start VOLNAME to start the volume.
    # gluster volume start test-volume
    Starting test-volume has been successful
  3. Optionally, run the gluster volume info command to display the volume information.

8.6. Creating Striped Volumes

Important

Striped volume is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.
This type of volume stripes data across bricks in the volume. Use striped volumes only in high-concurrency environments where very large files are accessed.

Note

The number of bricks should equal the stripe count for a striped volume.
Illustration of a Striped Volume

Figure 8.4. Illustration of a Striped Volume

Create a Striped Volume

Use gluster volume create to create a striped volume, and gluster volume info to verify successful volume creation.

Pre-requisites

  1. Run the gluster volume create command to create the striped volume.
    The syntax is # gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be passed. See Section 10.1, “Configuring Volume Options” for a full list of parameters.

    Example 8.6. Striped Volume Across Two Servers

    # gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
    Creation of test-volume has been successful
    Please start the volume to access data.
  2. Run # gluster volume start VOLNAME to start the volume.
    # gluster volume start test-volume
    Starting test-volume has been successful
  3. Optionally, run the gluster volume info command to display the volume information.

8.7. Creating Distributed Striped Volumes

Important

Distributed-Striped volume is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.
This type of volume stripes files across two or more nodes in the trusted storage pool. Use distributed striped volumes when the requirement is to scale storage, and in high-concurrency environments where access to very large files is critical.

Note

The number of bricks should be a multiple of the stripe count for a distributed striped volume.
Illustration of a Distributed Striped Volume

Figure 8.5. Illustration of a Distributed Striped Volume

Create a Distributed Striped Volume

Use gluster volume create to create a distributed striped volume, and gluster volume info to verify successful volume creation.

Pre-requisites

  1. Run the gluster volume create command to create the distributed striped volume.
    The syntax is # gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be passed. See Section 10.1, “Configuring Volume Options” for a full list of parameters.

    Example 8.7. Distributed Striped Volume Across Two Storage Servers

    # gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server1:/exp2 server2:/exp3 server2:/exp4
    Creation of test-volume has been successful
    Please start the volume to access data.
  2. Run # gluster volume start VOLNAME to start the volume.
    # gluster volume start test-volume
    Starting test-volume has been successful
  3. Optionally, run the gluster volume info command to display the volume information.

8.8. Creating Striped Replicated Volumes

Important

Striped-Replicated volume is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

Note

The number of bricks should be a multiple of the replica count and the stripe count for a striped replicated volume.
This type of volume stripes data across replicated bricks in the trusted storage pool. For best results, use striped replicated volumes in highly concurrent environments where there is parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
Illustration of a Striped Replicated Volume

Figure 8.6. Illustration of a Striped Replicated Volume

Create a Striped Replicated Volume

Use gluster volume create to create a striped replicated volume, and gluster volume info to verify successful volume creation.

Pre-requisites

  1. Run the gluster volume create command to create the striped replicated volume.
    The syntax is # gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be passed. See Section 10.1, “Configuring Volume Options” for a full list of parameters.

    Example 8.8. Striped Replicated Volume Across Four Servers

    The order in which bricks are specified determines how bricks are mirrored with each other: the first n bricks, where n is the replica COUNT, form a replica set. In this scenario, the first two bricks specified mirror each other. If more bricks were specified, the next two bricks in sequence would mirror each other.
    # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp3 server3:/exp2 server4:/exp4
    Creation of test-volume has been successful
    Please start the volume to access data.

    Example 8.9. Striped Replicated Volume Across Six Servers

    The order in which bricks are specified determines how bricks are mirrored with each other: the first n bricks, where n is the replica COUNT, form a replica set. In this scenario, the first two bricks specified mirror each other. If more bricks were specified, the next two bricks in sequence would mirror each other.
    # gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
    Creation of test-volume has been successful
    Please start the volume to access data.
  2. Run # gluster volume start VOLNAME to start the volume.
    # gluster volume start test-volume
    Starting test-volume has been successful
  3. Optionally, run the gluster volume info command to display the volume information.

8.9. Creating Distributed Striped Replicated Volumes

Important

Distributed-Striped-Replicated volume is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.
This type of volume distributes striped data across replicated bricks in the trusted storage pool. Use distributed striped replicated volumes in highly concurrent environments where parallel access to very large files and performance is critical. In this release, this volume type is supported only for Map Reduce workloads.

Note

The number of bricks should be a multiple of the stripe count and replica count.
Illustration of a Distributed Striped Replicated Volume

Figure 8.7. Illustration of a Distributed Striped Replicated Volume

Create a Distributed Striped Replicated Volume

Use gluster volume create to create a distributed striped replicated volume, and gluster volume info to verify successful volume creation.

Pre-requisites

  1. Run the gluster volume create command to create the distributed striped replicated volume.
    The syntax is # gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
    The default value for transport is tcp. Other options, such as auth.allow or auth.reject, can also be passed. See Section 10.1, “Configuring Volume Options” for a full list of parameters.

    Example 8.10. Distributed Striped Replicated Volume Across Four Servers

    The order in which bricks are specified determines how bricks are mirrored with each other: the first n bricks, where n is the replica COUNT, form a replica set. In this scenario, the first two bricks specified mirror each other. If more bricks were specified, the next two bricks in sequence would mirror each other.
    # gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server1:/exp2 server2:/exp3 server2:/exp4 server3:/exp5 server3:/exp6 server4:/exp7 server4:/exp8
    Creation of test-volume has been successful
    Please start the volume to access data.
  2. Run # gluster volume start VOLNAME to start the volume.
    # gluster volume start test-volume
    Starting test-volume has been successful
  3. Optionally, run the gluster volume info command to display the volume information.

8.10. Starting Volumes

Volumes must be started before they can be mounted.
To start a volume, run # gluster volume start VOLNAME.
For example, to start test-volume:
# gluster volume start test-volume
Starting test-volume has been successful

Chapter 9. Accessing Data - Setting Up Clients

Red Hat Storage volumes can be accessed using a number of technologies: Native Client, NFS, SMB, and object storage.
Cross Protocol Data Access
Although a Red Hat Storage trusted pool can be configured to support multiple protocols simultaneously, a single volume cannot be freely accessed by different protocols due to differences in locking semantics. The table below defines which protocols can safely access the same volume concurrently.

Table 9.1. Cross Protocol Data Access Matrix

                SMB   NFS   Native Client   Object
SMB             Yes   No    No              No
NFS             No    Yes   Yes             Yes
Native Client   No    Yes   Yes             Yes
Object          No    Yes   Yes             Yes

9.1. Securing Red Hat Storage Client Access

Red Hat Storage Server uses the listed ports. Ensure that firewall settings do not prevent access to these ports.

Table 9.2. TCP Port Numbers

Port Number Usage
22 For sshd used by geo-replication.
111 For rpc port mapper.
139 For netbios service.
445 For CIFS protocol.
965 For NLM.
2049 For glusterFS's NFS exports (nfsd process).
24007 For glusterd (for management).
24009 - 24108 For client communication with Red Hat Storage 2.0.
38465 For NFS mount protocol.
38466 For NFS mount protocol.
38468 For NFS's Lock Manager (NLM).
38469 For NFS's ACL support.
39543 For oVirt (Red Hat Storage-Console).
49152 - 49251 For client communication with Red Hat Storage 2.1 and for brick processes depending on the availability of the ports. The total number of ports required to be open depends on the total number of bricks exported on the machine.
55863 For oVirt (Red Hat Storage-Console).

Table 9.3. TCP Port Numbers used for Object Storage (Swift)

Port Number Usage
443 For HTTPS request.
6010 For Object Server.
6011 For Container Server.
6012 For Account Server.
8080 For Proxy Server.

Table 9.4. TCP Port Numbers for Nagios Monitoring

Port Number Usage
80 For HTTP protocol (required only if Nagios server is running on a Red Hat Storage node).
443 For HTTPS protocol (required only for Nagios server).
5667 For NSCA service (required only if Nagios server is running on a Red Hat Storage node).
5666 For NRPE service (required in all Red Hat Storage nodes).

Table 9.5. UDP Port Numbers

Port Number Usage
111 For RPC Bind.
963 For NFS's Lock Manager (NLM).
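As a minimal sketch of one way to open these ports with iptables on Red Hat Enterprise Linux 6 (the port ranges are taken from the tables above; adapt them to the services actually deployed in your environment):
# iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
# iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 111,2049,38465:38469 -j ACCEPT
# iptables -I INPUT -p udp --dport 111 -j ACCEPT
# service iptables save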

9.2. Native Client

Native Client is a FUSE-based client running in user space. Native Client is the recommended method for accessing Red Hat Storage volumes when high concurrency and high write performance is required.
This section introduces Native Client and explains how to install the software on client machines. This section also describes how to mount Red Hat Storage volumes on clients (both manually and automatically) and how to verify that the Red Hat Storage volume has mounted successfully.

Table 9.6. Red Hat Storage Server Support Matrix

Red Hat Enterprise Linux version Red Hat Storage Server version Native client version
6.5 3.0 3.0, 2.1*
6.6 3.0.2 3.0, 2.1*

Note

*If an existing Red Hat Storage 2.1 cluster is upgraded to Red Hat Storage 3.0, older 2.1 based clients can mount the new 3.0 volumes, however, clients must be upgraded to Red Hat Storage 3.0 to run rebalance. For more information, see Section 9.2.3, “Mounting Red Hat Storage Volumes”

9.2.1. Installing Native Client

After installing the client operating system, register the target system to Red Hat Network and subscribe to the Red Hat Enterprise Linux Server channel.

Important

All clients must be of the same version. Red Hat strongly recommends upgrading the servers before upgrading the clients.

Use the Command Line to Register and Subscribe a System.

Register the system using the command line, and subscribe to the correct channels.

Prerequisites

  • Know the user name and password of the Red Hat Network (RHN) account with Red Hat Storage entitlements.
  1. Run the rhn_register command to register the system with Red Hat Network.
    # rhn_register
  2. In the Operating System Release Version screen, select All available updates and follow the prompts to register the system to the standard base channel of the respective Red Hat Enterprise Linux Server version.
  3. Run the rhn-channel --add --channel command to subscribe the system to the correct Red Hat Storage Native Client channel:
    • For Red Hat Enterprise Linux 7.x clients using Red Hat Satellite Server:
      # rhn-channel --add --channel=rhel-x86_64-server-rh-common-7
    • For Red Hat Enterprise Linux 6.x clients:
      # rhn-channel --add --channel=rhel-x86_64-server-rhsclient-6
    • For Red Hat Enterprise Linux 5.x clients:
      # rhn-channel --add --channel=rhel-x86_64-server-rhsclient-5
  4. For Red Hat Enterprise Linux clients using Subscription Manager, execute the following commands.
    1. Run the following command and enter your Red Hat Network user name and password to register the system with the Red Hat Network.
      # subscription-manager register --auto-attach
    2. Run the following command to enable the channels required to install Red Hat Storage Native Client:
      • For Red Hat Enterprise Linux 7.x clients:
        # subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rh-common-rpms
      • For Red Hat Enterprise Linux 6.1 and later clients:
        # subscription-manager repos --enable=rhel-6-server-rpms --enable=rhel-6-server-rhs-client-1-rpms
      • For Red Hat Enterprise Linux 5.7 and later clients:
        # subscription-manager repos --enable=rhel-5-server-rpms --enable=rhel-5-server-rhs-client-1-rpms
      For more information, see Section 3.2 Registering from the Command Line in the Red Hat Subscription Management guide.
  5. Run the following command to verify if the system is subscribed to the required channels.
    # yum repolist

Use the Web Interface to Register and Subscribe a System.

Register the system using the web interface, and subscribe to the correct channels.

Prerequisites

  • Know the user name and password of the Red Hat Network (RHN) account with Red Hat Storage entitlements.
  1. Log on to Red Hat Network (http://rhn.redhat.com).
  2. Move the mouse cursor over the Subscriptions link at the top of the screen, and then click the Registered Systems link.
  3. Click the name of the system to which the Red Hat Storage Native Client channel must be appended.
  4. Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
  5. Expand the node for Additional Services Channels for Red Hat Enterprise Linux 6 for x86_64 or for Red Hat Enterprise Linux 5 for x86_64 depending on the client platform.
  6. Click the Change Subscriptions button to finalize the changes.
    When the page refreshes, select the Details tab to verify the system is subscribed to the appropriate channels.

Install Native Client Packages

Install Native Client packages from Red Hat Network
  1. Run the yum install command to install the native client RPM packages.
    # yum install glusterfs glusterfs-fuse
  2. For Red Hat Enterprise Linux 5.x client systems, run the modprobe command to load FUSE modules before mounting Red Hat Storage volumes.
    # modprobe fuse
    For more information on loading modules at boot time, see https://access.redhat.com/knowledge/solutions/47028 .

9.2.2. Upgrading Native Client

Before updating the Native Client, subscribe the clients to the channels mentioned in Use the Command Line to Register and Subscribe a System.
Run the yum update command to upgrade the native client:
# yum update glusterfs glusterfs-fuse

9.2.3. Mounting Red Hat Storage Volumes

After installing Native Client, the Red Hat Storage volumes must be mounted to access data. Two methods are available: mounting volumes manually, or mounting them automatically each time the system starts.
After mounting a volume, test the mounted volume using the procedure described in Section 9.2.3.4, “Testing Mounted Volumes”.

Note

  • When a new volume is created in Red Hat Storage 3.0, it cannot be accessed by older (Red Hat Storage 2.1.x) clients, because the readdir-ahead translator is enabled by default on newly created Red Hat Storage 3.0 volumes, making them incompatible with older clients. To resolve this issue, disable readdir-ahead on the newly created volume using the following command:
    # gluster volume set volname readdir-ahead off
  • Server names selected during volume creation should be resolvable in the client machine. Use appropriate /etc/hosts entries, or a DNS server to resolve server names to IP addresses.

9.2.3.1. Mount Commands and Options

The following options are available when using the mount -t glusterfs command. All options must be separated with commas.
# mount -t glusterfs -o backup-volfile-servers=volfile_server2:volfile_server3:...:volfile_serverN,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
backup-volfile-servers=<volfile_server2>:<volfile_server3>:...:<volfile_serverN>
List of the backup volfile servers to mount the client. If this option is specified while mounting the fuse client, then when the first volfile server fails, the servers specified in the backup-volfile-servers option are used as volfile servers to mount the client until the mount is successful.

Note

This option was earlier specified as backupvolfile-server, which is no longer valid.
log-level
Logs only messages at or above the specified severity level in the log file.
log-file
Logs the messages in the specified file.
ro
Mounts the file system as read only.
acl
Enables POSIX Access Control List on mount.
background-qlen=n
Sets the number of requests (n) that FUSE can queue before subsequent requests are denied. The default value of n is 64.
enable-ino32
Enables the file system to present 32-bit inodes instead of 64-bit inodes.
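For example, several of these options can be combined in a single mount command. A hypothetical invocation that mounts test-volume read-only with ACL support, backup volfile servers, and a custom log file:
# mount -t glusterfs -o ro,acl,backup-volfile-servers=server2:server3,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs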

9.2.3.2. Mounting Volumes Manually

Manually Mount a Red Hat Storage Volume

Create a mount point and run the mount -t glusterfs HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR command to manually mount a Red Hat Storage volume.

Note

The server specified in the mount command is used to fetch the glusterFS configuration volfile, which describes the volume name. The client then communicates directly with the servers mentioned in the volfile (which may not actually include the server used for mount).
  1. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
    # mkdir /mnt/glusterfs
  2. Run the mount -t glusterfs command, using the key in the task summary as a guide.
    # mount -t glusterfs server1:/test-volume /mnt/glusterfs

9.2.3.3. Mounting Volumes Automatically

Volumes can be mounted automatically each time the system starts.
The server specified in the mount command is used to fetch the glusterFS configuration volfile, which describes the volume name. The client then communicates directly with the servers mentioned in the volfile (which may not actually include the server used for mount).

Mounting a Volume Automatically

Mount a Red Hat Storage Volume automatically at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR glusterfs defaults,_netdev 0 0
    Using the example server names, the entry contains the following replaced values.
    server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0

9.2.3.4. Testing Mounted Volumes

Testing Mounted Red Hat Storage Volumes

Using the command-line, verify the Red Hat Storage volumes have been successfully mounted. All three commands can be run in the order listed, or used independently to verify a volume has been successfully mounted.
  1. Run the mount command to check whether the volume was successfully mounted.
    # mount
    server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
  2. Run the df command to display the aggregated storage space from all the bricks in a volume.
    # df -h /mnt/glusterfs 
    Filesystem           Size  Used  Avail  Use%  Mounted on
    server1:/test-volume  28T  22T   5.4T   82%   /mnt/glusterfs
  3. Move to the mount directory using the cd command, and list the contents.
    # cd /mnt/glusterfs 
    # ls

9.3. NFS

Linux and other operating systems that support the NFSv3 standard can use NFS to access Red Hat Storage volumes.
Differences in implementation of the NFSv3 standard in operating systems may result in some operational issues. If issues are encountered when using NFSv3, contact Red Hat support to receive more information on Red Hat Storage Server client operating system compatibility, and information about known issues affecting NFSv3.
NFS ACL v3 is supported, which allows getfacl and setfacl operations on NFS clients. Access Control Lists (ACLs) are configured in the glusterFS NFS server with the nfs.acl option. For example:
  • To set nfs.acl ON, run the following command:
    # gluster volume set <volname> nfs.acl on
  • To set nfs.acl OFF, run the following command:
    # gluster volume set <volname> nfs.acl off

Note

ACL is ON by default.
Red Hat Storage includes Network Lock Manager (NLM) v4. The NLM protocol allows NFSv3 clients to lock files across the network. NLM is required to allow applications running on top of NFSv3 mount points to use the standard fcntl() (POSIX) and flock() (BSD) lock system calls to synchronize access across clients.
This section describes how to use NFS to mount Red Hat Storage volumes (both manually and automatically) and how to verify that the volume has been mounted successfully.

9.3.1. Using NFS to Mount Red Hat Storage Volumes

You can use either of the following methods to mount Red Hat Storage volumes: manually, or automatically each time the system starts.

Note

Currently, the glusterFS NFS server supports only version 3 of the NFS protocol. As a preferred option, always configure version 3 as the default version in the /etc/nfsmount.conf file by adding the following text to the file:
Defaultvers=3
If the file is not modified, ensure that vers=3 is added manually to all mount commands.
# mount nfsserver:export -o vers=3 /MOUNTPOINT
After mounting a volume, you can test the mounted volume using the procedure described in Section 9.3.1.4, “Testing Volumes Mounted Using NFS”.

9.3.1.1. Manually Mounting Volumes Using NFS

Manually Mount a Red Hat Storage Volume using NFS

Create a mount point and run the mount command to manually mount a Red Hat Storage volume using NFS.
  1. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
    # mkdir /mnt/glusterfs
  2. Run the correct mount command for the system.
    For Linux
    # mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
    For Solaris
    # mount -o vers=3 nfs://server1:38467/test-volume /mnt/glusterfs

Manually Mount a Red Hat Storage Volume using NFS over TCP

Create a mount point and run the mount command to manually mount a Red Hat Storage volume using NFS over TCP.

Note

The glusterFS NFS server does not support UDP. If an NFS client, such as a Solaris client, connects using UDP by default, the following message appears:
requested NFS version or transport protocol is not supported
The nfs.mount-udp option is supported for mounting a volume; it is disabled by default. The following are its limitations:
  • If nfs.mount-udp is enabled, the MOUNT protocol needed for NFSv3 can handle requests from NFS-clients that require MOUNT over UDP. This is useful for at least some versions of Solaris, IBM AIX and HP-UX.
  • Currently, MOUNT over UDP does not have support for mounting subdirectories on a volume. Mounting server:/volume/subdir exports is only functional when MOUNT over TCP is used.
  • MOUNT over UDP does not currently support the authentication options that MOUNT over TCP honours. Enabling nfs.mount-udp may therefore give NFS clients more permissions than intended via authentication options such as nfs.rpc-auth-allow, nfs.rpc-auth-reject, and nfs.export-dir.
  1. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
    # mkdir /mnt/glusterfs
  2. Run the correct mount command for the system, specifying the TCP protocol option for the system.
    For Linux
    # mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs
    For Solaris
    # mount -o proto=tcp nfs://server1:38467/test-volume /mnt/glusterfs

9.3.1.2. Automatically Mounting Volumes Using NFS

Red Hat Storage volumes can be mounted automatically using NFS, each time the system starts.

Note

In addition to the tasks described below, Red Hat Storage supports the standard method of auto-mounting NFS mounts used by Linux, UNIX, and similar operating systems.
Update the /etc/auto.master and /etc/auto.misc files, and restart the autofs service. Whenever a user or process attempts to access the directory, it will be mounted in the background on demand, as sketched in the example below.
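A minimal autofs sketch, assuming the test-volume and server names used elsewhere in this chapter and a hypothetical direct-map configuration:
1. Add a direct map to /etc/auto.master:
   /-    /etc/auto.misc
2. Add the mount entry to /etc/auto.misc:
   /mnt/glusterfs    -fstype=nfs,vers=3    server1:/test-volume
3. Restart the autofs service:
   # service autofs restart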

Mounting a Volume Automatically using NFS

Mount a Red Hat Storage Volume automatically using NFS at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev 0 0
    Using the example server names, the entry contains the following replaced values.
    server1:/test-volume /mnt/glusterfs nfs defaults,_netdev 0 0

Mounting a Volume Automatically using NFS over TCP

Mount a Red Hat Storage Volume automatically using NFS over TCP at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0
    Using the example server names, the entry contains the following replaced values.
    server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0

9.3.1.3. Authentication Support for Subdirectory Mount

The nfs.export-dir option has been extended to provide client authentication during sub-directory mounts. The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated with an IP address, a host name, or a Classless Inter-Domain Routing (CIDR) range.
  • nfs.export-dirs: By default, all NFS sub-volumes are exported as individual exports. This option allows you to manage this behaviour. When this option is turned off, none of the sub-volumes are exported and hence the sub-directories cannot be mounted. This option is on by default.
    To set this option to off, run the following command:
    # gluster volume set <volname> nfs.export-dirs off
    To set this option to on, run the following command:
    # gluster volume set <volname> nfs.export-dirs on
  • nfs.export-dir: This option allows you to export specified subdirectories on the volume. You can export a particular subdirectory, for example:
    # gluster volume set <volname> nfs.export-dir /d1,/d2/d3/d4,/d6
    where d1, d2, d3, d4, d6 are the sub-directories.
    You can also control the access to mount these subdirectories based on the IP address, host name or a CIDR. For example:
    # gluster volume set <volname> nfs.export-dir "/d1(<ip address>),/d2/d3/d4(<host name>|<ip address>),/d6(<CIDR>)"
    The directories /d1, /d2, and /d6 are directories inside the volume. The volume name must not be added to the path. For example, if the volume vol1 has the directories d1 and d2, export them using the following command:
    # gluster volume set vol1 nfs.export-dir "/d1(192.0.2.2),/d2(192.0.2.34)"
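Once exported, an authenticated client can mount a sub-directory directly. A hypothetical example using the vol1 volume and the d1 sub-directory above, mounted over TCP:
# mount -t nfs -o vers=3 server1:/vol1/d1 /mnt/d1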

9.3.1.4. Testing Volumes Mounted Using NFS

You can confirm that Red Hat Storage directories are mounting successfully.

Testing Mounted Red Hat Storage Volumes

Using the command-line, verify the Red Hat Storage volumes have been successfully mounted. All three commands can be run in the order listed, or used independently to verify a volume has been successfully mounted.
  1. Run the mount command to check whether the volume was successfully mounted.
    # mount
    server1:/test-volume on /mnt/glusterfs type nfs (rw,addr=server1)
  2. Run the df command to display the aggregated storage space from all the bricks in a volume.
    # df -h /mnt/glusterfs 
    Filesystem              Size Used Avail Use% Mounted on 
    server1:/test-volume    28T  22T  5.4T  82%  /mnt/glusterfs
  3. Move to the mount directory using the cd command, and list the contents.
    # cd /mnt/glusterfs 
    # ls

9.3.2. Troubleshooting NFS

Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
Q: The NFS server glusterfsd starts but the initialization fails with nfsrpc-service: portmap registration of program failed error message in the log.
Q: The NFS server start-up fails with the message Port is already in use in the log file.
Q: The mount command fails with an NFS server failed error:
Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
Q: The application fails with Invalid argument or Value too large for defined data type
Q: After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
Q: The mount command fails with No such file or directory.
Q:
The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
  • The NFS server is not running. You can check the status using the following command:
    # gluster volume status
  • The volume is not started. You can check the status using the following command:
    # gluster volume info
  • rpcbind is restarted. To check if rpcbind is running, execute the following command:
    # ps ax | grep rpcbind
A:
  • If the NFS server is not running, then restart the NFS server using the following command:
    # gluster volume start <volname>
  • If the volume is not started, then start the volume using the following command:
    # gluster volume start <volname>
  • If both rpcbind and the NFS server are running, then restart the NFS server using the following commands:
    # gluster volume stop <volname>
    # gluster volume start <volname>
Q:
The rpcbind service is not running on the NFS client. This could be due to the following reasons:
  • The portmap is not running.
  • Another instance of kernel NFS server or glusterNFS server is running.
A:
Start the rpcbind service by running the following command:
# service rpcbind start
Q:
The NFS server glusterfsd starts but the initialization fails with nfsrpc-service: portmap registration of program failed error message in the log.
A:
NFS start-up succeeds but the initialization of the NFS service can still fail preventing clients from accessing the mount points. Such a situation can be confirmed from the following error messages in the log file:
[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could not register with portmap 
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
  1. Start the rpcbind service on the NFS server by running the following command:
    # service rpcbind start
    After starting rpcbind service, glusterFS NFS server needs to be restarted.
  2. Stop another NFS server running on the same machine.
    Such an error is also seen when there is another NFS server running on the same machine but it is not the glusterFS NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the glusterFS NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server's exports.
    On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use:
    # service nfs-kernel-server stop
    # service nfs stop
  3. Restart glusterFS NFS server.
Q:
The NFS server start-up fails with the message Port is already in use in the log file.
A:
This error can arise in case there is already a glusterFS NFS server running on the same machine. This situation can be confirmed from the log file, if the following error lines exist:
[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed: Address already in use
[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use 
[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection 
[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed 
[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 
[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed 
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
In this release, the glusterFS NFS server does not support running multiple NFS servers on the same machine. To resolve the issue, one of the glusterFS NFS servers must be shut down.
Q:
The mount command fails with an NFS server failed error:
A:
mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
Review and apply the suggested solutions to correct the issue.
  • Disable name lookup requests from NFS server to a DNS server.
    The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match host names in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server, or the DNS server is taking too long to respond to DNS requests. These delays can result in delayed replies from the NFS server to the NFS client, resulting in the timeout error.
    The NFS server provides a workaround that disables DNS requests, relying instead only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations:
    option nfs.addr.namelookup off

    Note

    Remember that disabling name lookup forces the NFS server to authenticate clients using only their IP addresses. If the authentication rules in the volume file use host names, those authentication rules will fail and client mounting will fail.
  • The NFS version used by the NFS client is other than version 3.
    The glusterFS NFS server supports version 3 of the NFS protocol by default. In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the glusterFS NFS server because it is using version 4 messages, which the glusterFS NFS server does not understand. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to the mount command is used for this purpose:
    # mount nfsserver:export -o vers=3 /MOUNTPOINT
Q:
The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
  • The firewall might have blocked the port.
  • rpcbind might not be running.
A:
Check the firewall settings, and open port 111 for portmap requests/replies and the glusterFS NFS server ports for requests/replies. The glusterFS NFS server operates over the following port numbers: 38465, 38466, and 38467.
Q:
The application fails with Invalid argument or Value too large for defined data type
A:
These two errors generally happen for 32-bit NFS clients, or applications that do not support 64-bit inode numbers or large files.
Use the following option from the command-line interface to make glusterFS NFS return 32-bit inode numbers instead:
nfs.enable-ino32 <on | off>
This option is off by default, which means NFS returns 64-bit inode numbers.
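For example, the option can be enabled per volume with the gluster command line, assuming the test-volume used elsewhere in this chapter:
# gluster volume set test-volume nfs.enable-ino32 on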
Applications that will benefit from this option include those that are:
  • built and run on 32-bit machines, which do not support large files by default,
  • built to 32-bit standards on 64-bit systems.
It is recommended that applications which can be rebuilt from source be rebuilt using the following gcc flag:
-D_FILE_OFFSET_BITS=64
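For example, a hypothetical application myapp.c would be rebuilt as:
# gcc -D_FILE_OFFSET_BITS=64 -o myapp myapp.c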
Q:
After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
A:
The Network Status Monitor (NSM) service daemon (rpc.statd) is started before the gluster NFS server. Therefore, NSM sends a notification to the client to reclaim the locks. When the client sends the reclaim request, the NFS server does not respond, as it has not started yet. Therefore, the client request fails.
Solution: To resolve the issue, prevent the NSM daemon from starting when the server starts.
Run chkconfig --list nfslock to check if NSM is configured during OS boot.
If any of the entries are on, run chkconfig nfslock off to disable NSM clients during boot, which resolves the issue.
Q:
The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
A:
gluster NFS supports only NFS version 3. When nfs-utils mounts a volume without the version being specified, it tries to negotiate using version 4 before falling back to version 3. This causes the messages in both the server log and the nfs.log file.
[2013-06-25 00:03:38.160547] W [rpcsvc.c:180:rpcsvc_program_actor] 0-rpc-service: RPC program version not available (req 100003 4)
[2013-06-25 00:03:38.160669] E [rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
To resolve the issue, declare NFS version 3 and the noacl option in the mount command as follows:
# mount -t nfs -o vers=3,noacl server1:/test-volume /mnt/glusterfs
Q:
The mount command fails with No such file or directory.
A:
This problem is encountered when the specified volume does not exist.

9.3.3. NFS Ganesha

Important

  • nfs-ganesha is a technology preview feature. Technology preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of technology preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
  • Red Hat Storage currently does not support NFSv4 delegations, multi-head NFS, or High Availability. These will be added in upcoming releases of Red Hat Storage nfs-ganesha. nfs-ganesha is not recommended for production deployment in its current form. However, Red Hat Storage volumes can be exported via nfs-ganesha for consumption by both NFSv3 and NFSv4 clients.
nfs-ganesha is a user space file server for the NFS protocol with support for NFSv3, NFSv4, NFSv4.1, and pNFS.
Red Hat Storage volumes are supported with the community's V2.1-RC1 release of nfs-ganesha. The current release of Red Hat Storage offers nfs-ganesha for use with Red Hat Storage volumes as an early beta technology preview feature. This community release of nfs-ganesha has improved NFSv4 protocol support and stability. With this technology preview feature, Red Hat Storage volumes can be exported via nfs-ganesha for consumption by both NFSv3 and NFSv4 clients.

9.3.3.1. Installing nfs-ganesha

nfs-ganesha can be installed using any of the following methods:
  • Installing nfs-ganesha using yum
  • Installing nfs-ganesha during an ISO Installation
  • Installing nfs-ganesha using RHN / Red Hat Satellite
9.3.3.1.1. Installing using yum
The nfs-ganesha package can be installed using the following command:
# yum install nfs-ganesha
The package installs the following:
# rpm -qlp nfs-ganesha-2.1.0.2-4.el6rhs.x86_64.rpm
/etc/glusterfs-ganesha/README
/etc/glusterfs-ganesha/nfs-ganesha.conf
/etc/glusterfs-ganesha/org.ganesha.nfsd.conf
/usr/bin/ganesha.nfsd
/usr/lib64/ganesha
/usr/lib64/ganesha/libfsalgluster.so
/usr/lib64/ganesha/libfsalgluster.so.4
/usr/lib64/ganesha/libfsalgluster.so.4.2.0
/usr/lib64/ganesha/libfsalgpfs.so
/usr/lib64/ganesha/libfsalgpfs.so.4
/usr/lib64/ganesha/libfsalgpfs.so.4.2.0
/usr/lib64/ganesha/libfsalnull.so
/usr/lib64/ganesha/libfsalnull.so.4
/usr/lib64/ganesha/libfsalnull.so.4.2.0
/usr/lib64/ganesha/libfsalproxy.so
/usr/lib64/ganesha/libfsalproxy.so.4
/usr/lib64/ganesha/libfsalproxy.so.4.2.0
/usr/lib64/ganesha/libfsalvfs.so
/usr/lib64/ganesha/libfsalvfs.so.4
/usr/lib64/ganesha/libfsalvfs.so.4.2.0
/usr/share/doc/nfs-ganesha
/usr/share/doc/nfs-ganesha/ChangeLog
/usr/share/doc/nfs-ganesha/LICENSE.txt
/usr/bin/ganesha.nfsd is the nfs-ganesha daemon.
9.3.3.1.2. Installing nfs-ganesha During an ISO Installation
For more information about installing Red Hat Storage from an ISO image, refer to Installing from an ISO Image in the Red Hat Storage 3.0 Installation Guide.
  1. When installing Red Hat Storage using the ISO, in the Customizing the Software Selection screen, select Red Hat Storage Tools Group and click Optional Packages.
  2. From the list of packages, select nfs-ganesha and click Close.
    Installing nfs-ganesha

    Figure 9.1. Installing nfs-ganesha

  3. Proceed with the remaining installation steps for Red Hat Storage. For more information refer to Installing from an ISO Image in the Red Hat Storage 3.0 Installation Guide.
9.3.3.1.3. Installing from Red Hat Satellite Server or Red Hat Network
Ensure that your system is subscribed to the required channels. For more information refer to Subscribing to the Red Hat Storage Server Channels in the Red Hat Storage 3.0 Installation Guide.
  1. Install nfs-ganesha by executing the following command:
    # yum install nfs-ganesha
  2. Verify the installation by running the following command:
    # yum list nfs-ganesha
    
    Installed Packages
    nfs-ganesha.x86_64      2.1.0.2-4.el6rhs      rhs-3-for-rhel-6-server-rpms

9.3.3.2. Pre-requisites to run nfs-ganesha

Note

  • Red Hat does not recommend running nfs-ganesha in mixed-mode and/or hybrid environments. This includes multi-protocol environments where NFS and CIFS shares are used simultaneously, or running nfs-ganesha together with gluster-nfs, kernel-nfs or gluster-fuse clients.
  • Only one of nfs-ganesha, gluster-nfs server, or kernel-nfs can be enabled on a given machine/host, as all NFS implementations use port 2049 and only one can be active at a given time. Hence you must disable gluster-nfs (it is enabled by default on a volume) and kernel-nfs before nfs-ganesha is started; see the sketch at the end of this section.
Ensure that the following pre-requisites are met before running nfs-ganesha:
  • A Red Hat Storage volume must be available for export, and the nfs-ganesha rpms must be installed.
  • IPv6 should be enabled on the host interface that is used by the nfs-ganesha daemon. To enable IPv6 support, perform the following steps:
    1. Comment or remove the line options ipv6 disable=1 in the /etc/modprobe.d/ipv6.conf file.
    2. Reboot the system.
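As a minimal sketch of satisfying the single-NFS-server requirement above, stop the kernel NFS server and disable gluster-nfs on each volume before starting nfs-ganesha; volname is a placeholder for each volume name:
# service nfs stop
# gluster volume set volname nfs.disable on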

9.3.3.3. Exporting and Unexporting Volumes through nfs-ganesha

This release supports gluster CLI commands to export or unexport Red Hat Storage volumes via nfs-ganesha. These commands use the DBus interface to add or remove exports dynamically.
Before using the CLI options for nfs-ganesha, execute the following steps:
  1. Copy the org.ganesha.nfsd.conf file into the /etc/dbus-1/system.d/ directory. The org.ganesha.nfsd.conf file can be found in /etc/glusterfs-ganesha/ on installation of nfs-ganesha rpms.
  2. Execute the following command:
    # service messagebus restart

Note

The connection to the DBus server is only made once at server initialization. nfs-ganesha must be restarted if the file was not copied prior to starting nfs-ganesha.
Exporting Volumes through nfs-ganesha
Volume set options can be used to export or unexport a Red Hat Storage volume via nfs-ganesha. Use the following volume options to export a Red Hat Storage volume.
  1. Disable gluster-nfs on all Red Hat Storage volumes.
    # gluster volume set volname nfs.disable on
    gluster-nfs and nfs-ganesha cannot run simultaneously. Hence, gluster-nfs must be disabled on all Red Hat Storage volumes before exporting them via nfs-ganesha.
  2. To set the host IP, execute the following command:
    # gluster vol set volname nfs-ganesha.host IP
    This command sets the host IP used to start nfs-ganesha. In a multi-node volume environment, it is recommended that all nfs-ganesha related commands and operations be run on only one of the nodes. Hence, the IP address provided must be the IP of that node. If a Red Hat Storage volume is already exported, setting a different host IP takes immediate effect.
  3. To start nfs-ganesha, execute the following command:
    # gluster volume set volname nfs-ganesha.enable on
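Taken together, the export sequence for a hypothetical volume named testvol, managed from a node with the hypothetical address 10.70.43.10, might look like the following:
# gluster volume set testvol nfs.disable on
# gluster vol set testvol nfs-ganesha.host 10.70.43.10
# gluster volume set testvol nfs-ganesha.enable on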
Unexporting Volumes through nfs-ganesha
To unexport a Red Hat Storage volume, execute the following command:
# gluster vol set volname nfs-ganesha.enable off
This command unexports the Red Hat Storage volume without affecting other exports.
Restarting nfs-ganesha
Before restarting nfs-ganesha, unexport all Red Hat Storage volumes by executing the following command:
# gluster vol set volname nfs-ganesha.enable off
Execute each of the following steps on all the volumes to be exported.
  1. To set the host IP, execute the following command:
    # gluster vol set volname nfs-ganesha.host IP
  2. To restart nfs-ganesha, execute the following command:
    # gluster volume set volname nfs-ganesha.enable on
Verifying the Status
To verify the status of the volume set options, follow the guidelines mentioned below:
  • Check whether nfs-ganesha has started by executing the following command:
    # ps aux | grep ganesha
  • Check whether the volume is exported:
    # showmount -e localhost
  • The logs of the ganesha.nfsd daemon are written to /tmp/ganesha.log. Check this log file if you notice any unexpected behaviour. Note that this file is lost on a system reboot. Illustrative output of the export check is shown below.
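For instance, for an exported volume named testvol (hypothetical), showmount lists an entry similar to the following; the exact output depends on your exports:
# showmount -e localhost
Export list for localhost:
/testvol (everyone)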

9.3.3.4. Supported Features of nfs-ganesha

Dynamic Exports of Volumes
Previous versions of nfs-ganesha required a restart of the server whenever the administrator had to add or remove exports. nfs-ganesha now supports addition and removal of exports dynamically. Dynamic exports are managed by the DBus interface. DBus is a system-local IPC mechanism for system management and peer-to-peer application communication.

Note

Modifying an export in place is currently not supported.
Exporting Multiple Entries
With this version of nfs-ganesha, multiple Red Hat Storage volumes or sub-directories can now be exported simultaneously.
Pseudo File System
This version of nfs-ganesha creates and maintains an NFSv4 pseudo file system, which provides clients with seamless access to all exported objects on the server.
Access Control List
The nfs-ganesha NFSv4 protocol includes integrated support for Access Control Lists (ACLs), which are similar to those used by Windows. These ACLs can be used to identify a trustee and specify the access rights allowed or denied for that trustee. This feature is disabled by default.

Note

AUDIT and ALARM ACE types are not currently supported.

9.3.3.5. Manually Configuring nfs-ganesha Exports

It is recommended to use gluster CLI options to export or unexport volumes through nfs-ganesha. However, this section provides some information on changing configurable parameters in nfs-ganesha. Such parameter changes require nfs-ganesha to be started manually.
To start nfs-ganesha manually, execute the following command:
# /usr/bin/ganesha.nfsd -f location of nfs-ganesha.conf file -L location of log file -N log level -d
For example:
# /usr/bin/ganesha.nfsd -f nfs-ganesha.conf -L nfs-ganesha.log -N NIV_DEBUG -d
where:
  • nfs-ganesha.conf is the configuration file that is available by default on installation of nfs-ganesha rpms. This file is located at /etc/glusterfs-ganesha.
  • nfs-ganesha.log is the log file for the ganesha.nfsd process.
  • NIV_DEBUG is the log level.
Sample export configuration file:
To export any Red Hat Storage volume or directory, copy the EXPORT block into a .conf file, for example export.conf. Edit the parameters appropriately and include the export.conf file in nfs-ganesha.conf. This can be done by adding the line below at the end of nfs-ganesha.conf.
%include "export.conf"
The following are the minimal set of parameters required to export any entry. The values given here are the default values used by the CLI options to start or stop nfs-ganesha.
# cat export.conf 

EXPORT{    
 Export_Id = 1 ;   # Export ID unique to each export
 Path = "volume_path";  # Path of the volume to be exported. Eg: "/test_volume"

 FSAL { 
  name = GLUSTER;
  hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
  volume = "volume_name";  # Volume name. Eg: "test_volume"
 }

 Access_type = RW;  # Access permissions
 Squash = No_root_squash; # To enable/disable root squashing
 Disable_ACL = TRUE;  # To enable/disable ACL
 Pseudo = "pseudo_path";  # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
 Protocols = "3,4" ;  # NFS protocols supported
 Transports = "UDP,TCP" ; # Transport protocols supported
 SecType = "sys";  # Security flavors supported
}
The following section describes various configurations possible via nfs-ganesha. Minor changes have to be made to the export.conf file to see the expected behaviour.
Exporting Subdirectories
To export subdirectories within a volume, edit the following parameters in the export.conf file.
Path = "path_to_subdirectory";  # Path of the volume to be exported. Eg: "/test_volume/test_subdir"

 FSAL { 
  name = GLUSTER;
  hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
  volume = "volume_name";  # Volume name. Eg: "test_volume"
  volpath = "path_to_subdirectory_with_respect_to_volume"; #Subdirectory path from the root of the volume. Eg: "/test_subdir"
 }
Exporting Multiple Entries
To export multiple entries, define a separate EXPORT block in the export.conf file for each of the entries, with a unique export ID.
For example:
# cat export.conf 
EXPORT{    
 Export_Id = 1 ;   # Export ID unique to each export
 Path = "test_volume";  # Path of the volume to be exported. Eg: "/test_volume"

 FSAL { 
  name = GLUSTER;
  hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
  volume = "test_volume";  # Volume name. Eg: "test_volume"
 }

 Access_type = RW;  # Access permissions
 Squash = No_root_squash; # To enable/disable root squashing
 Disable_ACL = TRUE;  # To enable/disable ACL
 Pseudo = "/test_volume";  # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
 Protocols = "3,4" ;  # NFS protocols supported
 Transports = "UDP,TCP" ; # Transport protocols supported
 SecType = "sys";  # Security flavors supported
}

EXPORT{    
 Export_Id = 2 ;   # Export ID unique to each export
 Path = "test_volume/test_subdir";  # Path of the volume to be exported. Eg: "/test_volume"

 FSAL { 
  name = GLUSTER;
  hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
  volume = "test_volume";  # Volume name. Eg: "test_volume"
  volpath = "/test_subdir"
 }

 Access_type = RW;  # Access permissions
 Squash = No_root_squash; # To enable/disable root squashing
 Disable_ACL = "FALSE;  # To enable/disable ACL
 Pseudo = "/test_subdir";  # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
 Protocols = "3,4" ;  # NFS protocols supported
 Transports = "UDP,TCP" ; # Transport protocols supported
 SecType = "sys";  # Security flavors supported
}


# showmount -e localhost
Export list for localhost:
/test_volume (everyone)
/test_volume/test_subdir (everyone)
/        (everyone)
Providing Permissions for Specific Clients
The parameter values and permission values given in the EXPORT block apply to any client that mounts the exported volume. To provide specific permissions to specific clients, introduce a client block inside the EXPORT block.
For example, to assign specific permissions for the client 10.70.43.92, add the following block inside the EXPORT block.
client {
        clients = "10.xx.xx.xx";  # IP of the client.
        allow_root_access = true;
        access_type = "RO"; # Read-only permissions
        Protocols = "3"; # Allow only NFSv3 protocol.
        anonymous_uid = 1440;
        anonymous_gid = 72;
  }
All the other clients inherit the permissions that are declared outside the client block.
Enabling and Disabling NFSv4 ACLs
To enable NFSv4 ACLs, edit the following parameter:
Disable_ACL = FALSE;
Providing Pseudo Path for NFSv4 Mount
To set the NFSv4 pseudo path, edit the following parameter:
Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
This path has to be used while mounting the export entry in NFSv4 mode, as shown in the example below.
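For instance, assuming the pseudo path /test_volume_pseudo and a server address of 10.xx.xx.xx (placeholder), the NFSv4 mount would look like the following:
# mount -t nfs -o vers=4 10.xx.xx.xx:/test_volume_pseudo /mnt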
Adding and Removing Export Entries Dynamically
The org.ganesha.nfsd.conf file is installed in /etc/glusterfs-ganesha/ as part of the nfs-ganesha rpms. To export entries dynamically without restarting nfs-ganesha, execute the following steps:
  1. Copy the file org.ganesha.nfsd.conf into the directory /etc/dbus-1/system.d/.
  2. Execute the following command:
    service messagebus restart
  • Adding an export dynamically
    To add an export dynamically, add an export block as explained in section Exporting Multiple Entries, and execute the following command:
    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/path-to-export.conf string:'EXPORT(Path=/path-in-export-block)'
    For example, to add testvol1 dynamically:
    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/home/nfs-ganesha/export.conf string:'EXPORT(Path=/testvol1)'
       
    method return sender=:1.35 -> dest=:1.37 reply_serial=2
  • Removing an export dynamically
    To remove an export dynamically, execute the following command:
    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport int32:export-id-in-the-export-block
    For example:
    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport int32:79
       
    method return sender=:1.35 -> dest=:1.37 reply_serial=2

9.3.3.6. Accessing nfs-ganesha Exports

nfs-ganesha exports can be accessed by mounting them in either NFSv3 or NFSv4 mode.
Mounting exports in NFSv3 mode
To mount an export in NFSv3 mode, execute the following command:
mount -t nfs -o vers=3 ip:/volname /mountpoint
For example:
mount -t nfs -o vers=3 10.70.0.0:/testvol /mnt
Mounting exports in NFSv4 mode
To mount an export in NFSv4 mode, execute the following command:
mount -t nfs -o vers=4 ip:/volname /mountpoint
For example:
mount -t nfs -o vers=4 10.70.0.0:/testvol /mnt

9.3.3.7. Troubleshooting

  • Situation
    nfs-ganesha fails to start.
    Solution
    Follow the listed steps to fix the issue:
    1. Review the /tmp/ganesha.log to understand the cause of failure.
    2. Ensure the kernel and gluster nfs services are inactive.
    3. Ensure you execute both the nfs-ganesha.host and nfs-ganesha.enable volume set options.
    For more information, see Section 9.3.3.5, “Manually Configuring nfs-ganesha Exports”.
  • Situation
    nfs-ganesha has started and fails to export a volume.
    Solution
    Follow the listed steps to fix the issue:
    1. Ensure the org.ganesha.nfsd.conf file was copied into /etc/dbus-1/system.d/ before starting nfs-ganesha.
    2. If the file had not been copied, copy it and restart nfs-ganesha. For more information, see Section 9.3.3.3, “Exporting and Unexporting Volumes through nfs-ganesha”.
  • Situation
    nfs-ganesha fails to stop.
    Solution
    Execute the following steps:
    1. Check the status of the nfs-ganesha process.
    2. If it is still running, issue a kill -9 signal on its PID.
    3. Run the following command to check whether the nfs, mountd, nlockmgr, and rquotad services are unregistered cleanly:
      rpcinfo -p
      • If the services are not unregistered, then delete these entries using the following command:
        rpcinfo -d

        Note

        You can also restart the rpcbind service instead of using rpcinfo -d on individual entries.
    4. Force start the volume by using the following command:
      # gluster volume start <volname> force
  • Situation
    Permission issues.
    Solution
    By default, the root squash option is disabled when you start nfs-ganesha using the CLI. If you encounter any permission issues, check the UNIX permissions of the exported entry.

9.4. SMB

The Server Message Block (SMB) protocol can be used to access Red Hat Storage volumes by exporting the glusterFS mount point as an SMB share on the server.
This section describes how to mount SMB shares on Microsoft Windows-based clients (both manually and automatically) and how to verify the volume has been mounted successfully.

Note

SMB access using the Mac OS X Finder is not supported.
The Mac OS X command line can be used to access Red Hat Storage volumes using SMB.

9.4.1. Automatic and Manual Exporting

SMB Prerequisites

The following configuration items need to be implemented before using SMB with Red Hat Storage; a combined sketch follows the list.
  1. Run gluster volume set VOLNAME stat-prefetch off to disable stat-prefetch for the volume.
  2. Run gluster volume set VOLNAME server.allow-insecure on to permit insecure ports.

    Note

    This allows Samba to communicate with brick processes even with untrusted ports.
  3. Edit the /etc/glusterfs/glusterd.vol in each Red Hat Storage node, and add the following setting:
    option rpc-auth-allow-insecure on

    Note

    This allows Samba to communicate with glusterd even with untrusted ports.
  4. Restart the glusterd service on each Red Hat Storage node.
  5. Run the following command to ensure proper lock and I/O coherency.
    # gluster volume set VOLNAME storage.batch-fsync-delay-usec 0
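As a rough sketch, assuming a hypothetical volume named testvol, the preparation sequence looks like the following on one Red Hat Storage node; repeat the glusterd.vol edit and the glusterd restart on every node:
# gluster volume set testvol stat-prefetch off
# gluster volume set testvol server.allow-insecure on
# vi /etc/glusterfs/glusterd.vol        (add: option rpc-auth-allow-insecure on)
# service glusterd restart
# gluster volume set testvol storage.batch-fsync-delay-usec 0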

Automatically Exporting Red Hat Storage Volumes Through Samba

When a volume is started using the gluster volume start VOLNAME command, the volume is automatically exported through Samba on all Red Hat Storage servers running Samba. Disabling volume export through Samba requires changes to the S30samba-start.sh script.
  1. With elevated privileges, navigate to /var/lib/glusterd/hooks/1/start/post
  2. Rename the S30samba-start.sh to K30samba-start.sh.
    For more information about these scripts, see Section 15.2, “Prepackaged Scripts”.
  3. Run # smbstatus -S on the server to display the status of the volume:
    Service        pid     machine             Connected at
    -------------------------------------------------------------------
    gluster-<VOLNAME> 11967   __ffff_192.168.1.60  Mon Aug  6 02:23:25 2012

Note

To enable Samba to start on boot, run the following command:
# chkconfig smb on

Manually Exporting Red Hat Storage Volumes Through Samba

Use Samba to export Red Hat Storage volumes through the Server Message Block (SMB) protocol.

Note

To be able to mount from any server in the trusted storage pool, repeat these steps on each Red Hat Storage node. For more advanced configurations, refer to the Samba documentation.
  1. Open the /etc/samba/smb.conf file in a text editor and add the following lines for a simple configuration:
    [gluster-<VOLNAME>]
    comment = For samba share of volume VOLNAME
    vfs objects = glusterfs
    glusterfs:volume = VOLNAME
    glusterfs:logfile = /var/log/samba/VOLNAME.log
    glusterfs:loglevel = 7
    path = /
    read only = no
    guest ok = yes
    The configuration options are described in the following table:

    Table 9.7. Configuration Options

    Configuration Options Required? Default Value Description
    Path Yes n/a It represents the path that is relative to the root of the gluster volume that is being shared. Hence / represents the root of the gluster volume. Exporting a subdirectory of a volume is supported and /subdir in path exports only that subdirectory of the volume.
    glusterfs:volume Yes n/a The volume name that is shared.
    glusterfs:logfile No NULL Path to the log file that will be used by the gluster modules that are loaded by the vfs plugin. Standard Samba variable substitutions as mentioned in smb.conf are supported.
    glusterfs:loglevel No 7 This option is equivalent to the client-log-level option of gluster. 7 is the default value and corresponds to the INFO level.
    glusterfs:volfile_server No localhost The gluster server to be contacted to fetch the volfile for the volume.
  2. Run service smb [re]start to start or restart the smb service.
  3. Run smbpasswd to set the SMB password.
    # smbpasswd -a username
    Specify the SMB password. This password is used during the SMB mount.

Note

To enable Samba to start on boot, run the following command:
# chkconfig smb on

9.4.2. Mounting Volumes using SMB

Samba follows the permissions on the shared directory, and uses the logged-in username to perform access control.
To allow a non-root user to read and write to the mounted volume, execute the following steps (a combined example follows the list):
  1. Add the user on all the Samba servers, based on your configuration:
    # adduser <username>
  2. Add the user to the list of Samba users on all Samba servers and assign a password by executing the following command:
    # smbpasswd -a <username>
  3. Perform a FUSE mount of the gluster volume on any one of the Samba servers and provide the required permissions to the user by executing the following commands:
    # mount -t glusterfs -o acl <ip address>:<volname> <mountpoint>
    # setfacl -m user:<username>:rwx <mountpoint>
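For instance, with a hypothetical user smbuser, a hypothetical volume testvol, and a hypothetical server address 10.70.43.10, the sequence might look like the following:
# adduser smbuser
# smbpasswd -a smbuser
# mount -t glusterfs -o acl 10.70.43.10:testvol /mnt/gluster
# setfacl -m user:smbuser:rwx /mnt/gluster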

9.4.2.1. Manually Mounting Volumes Using SMB on Red Hat Enterprise Linux and Windows

Mounting a Volume Manually using SMB on Red Hat Enterprise Linux

Mount a Red Hat Storage volume manually using Server Message Block (SMB) on Red Hat Enterprise Linux.
  1. Install the cifs-utils package on the client.
    # yum install cifs-utils
  2. Run mount -t cifs to mount the exported SMB share, using the syntax example as guidance.

    Example 9.1. mount -t cifs Command Syntax

    # mount -t cifs \\\\Samba_Server_IP_Address\\Share_Name Mount_Point -o user=<username>,pass=<password>
    Run # mount -t cifs \\\\SAMBA_SERVER_IP\\gluster-VOLNAME /mnt/smb -o user=<username>,pass=<password> for a Red Hat Storage volume exported through SMB, which uses the /etc/samba/smb.conf file with the following configuration.
    [gluster-<VOLNAME>]
    comment = For samba share of volume VOLNAME
    vfs objects = glusterfs
    glusterfs:volume = VOLNAME
    glusterfs:logfile = /var/log/samba/VOLNAME.log
    glusterfs:loglevel = 7
    path = /
    read only = no
    guest ok = yes
  3. Run # smbstatus -S on the server to display the status of the volume:
    Service        pid     machine             Connected at
    -------------------------------------------------------------------
    gluster-<VOLNAME> 11967   __ffff_192.168.1.60  Mon Aug  6 02:23:25 2012

Mounting a Volume Manually using SMB through Microsoft Windows Explorer

Mount a Red Hat Storage volume manually using Server Message Block (SMB) on Microsoft Windows using Windows Explorer.
  1. In Windows Explorer, click Tools → Map Network Drive… to open the Map Network Drive screen.
  2. Choose the drive letter using the Drive drop-down list.
  3. In the Folder text box, specify the path of the server and the shared resource in the following format: \\SERVER_NAME\VOLNAME.
  4. Click Finish to complete the process, and display the network drive in Windows Explorer.
  5. If the Windows Security screen pops up, enter the username and password and click OK.
  6. Navigate to the network drive to verify it has mounted correctly.

Mounting a Volume Manually using SMB on the Microsoft Windows Command Line

Mount a Red Hat Storage volume manually using Server Message Block (SMB) on Microsoft Windows using the command line.
  1. Click Start → Run, and then type cmd.
  2. Enter net use z: \\SERVER_NAME\VOLNAME <password> /USER:<username> where z: is the drive letter to assign to the shared volume.
    For example, net use y: \\server1\test-volume test-password /USER:test-user
  3. Navigate to the network drive to verify it has mounted correctly.

9.4.2.2. Automatically Mounting Volumes Using SMB on Red Hat Enterprise Linux and Windows

You can configure your system to automatically mount Red Hat Storage volumes using SMB on Red Hat Enterprise Linux and Microsoft Windows-based clients each time the system starts.

Mounting a Volume Automatically using SMB on Red Hat Enterprise Linux

Mount a Red Hat Storage volume automatically using SMB at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    You must specify the path of the file that contains the user name and/or password in the credentials option in the /etc/fstab file. See the mount.cifs man page for more information.
    \\HOSTNAME|IPADDRESS\SHARE_NAME MOUNTDIR cifs OPTIONS 0 0
    Using the example server names, the entry contains the following replaced values.
    \\server1\test-volume /mnt/glusterfs cifs credentials=/etc/samba/passwd,_netdev 0 0
  3. Run # smbstatus -S on the server to display the status of the volume:
    Service        pid     machine             Connected at
    -------------------------------------------------------------------
    gluster-<VOLNAME> 11967   __ffff_192.168.1.60  Mon Aug  6 02:23:25 2012

Mounting a Volume Automatically on Server Start using SMB through Microsoft Windows Explorer

Mount a Red Hat Storage volume automatically at each logon using Server Message Block (SMB) on Microsoft Windows using Windows Explorer.
  1. In Windows Explorer, click Tools → Map Network Drive… to open the Map Network Drive screen.
  2. Choose the drive letter using the Drive drop-down list.
  3. In the Folder text box, specify the path of the server and the shared resource in the following format: \\SERVER_NAME\VOLNAME.
  4. Click the Reconnect at logon check box.
  5. Click Finish to complete the process, and display the network drive in Windows Explorer.
  6. Navigate to the network drive to verify it has mounted correctly.

9.5. Configuring Automated IP Failover for NFS and SMB

In a replicated volume environment, Cluster Trivial Database (CTDB) can be configured to provide high availability for NFS and SMB exports. CTDB adds virtual IP addresses (VIPs) and a heartbeat service to Red Hat Storage Server.
When a node in the trusted storage pool fails, CTDB enables a different node to take over the IP address of the failed node. This ensures the IP addresses for the services provided are always available.

Note

  • Amazon Elastic Compute Cloud (EC2) does not support VIPs and is therefore not compatible with this solution.

9.5.1. Setting Up CTDB

Configuring CTDB on Red Hat Storage Server

Red Hat Storage supports a maximum of four nodes running clustered Samba.

Note

  • If you already have an older version of CTDB, then remove CTDB by executing the following command:
    # yum remove ctdb
    After removing the older version, proceed with installing the latest CTDB.
  • Install CTDB on all the nodes that are used as Samba servers to the latest version using the following command:
    # yum install ctdb2.5
  • In a CTDB based high availability environment of NFS and SMB, the locks will not be migrated on failover.
  • Ensure that port 4379 is open between the Red Hat Storage servers.
  1. Create a replicate volume. The bricks must be on different machines. This volume will host the lock file, so choose the brick size accordingly. To create a replicate volume, run the following command:
    # gluster volume create <volname> replica <n> <ipaddress>:/<brick path>.......N times
    where,
    N: The number of nodes that are used as Samba servers.
    For example:
    # gluster volume create ctdb replica 4 10.16.157.75:/rhs/brick1/ctdb/b1 10.16.157.78:/rhs/brick1/ctdb/b2 10.16.157.81:/rhs/brick1/ctdb/b3 10.16.157.84:/rhs/brick1/ctdb/b4
  2. In the following files, replace all in the statement META=all with the name of the newly created volume (a sed sketch follows this procedure):
    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
    For example:
    META=all
    to
    META=ctdb
  3. Start the volume.
    The S29CTDBsetup.sh script runs on all Red Hat Storage servers and adds the following lines to the [global] section of your Samba configuration file at /etc/samba/smb.conf.
    clustering = yes
    idmap backend = tdb2
    The script stops the Samba server, modifies the Samba configuration, adds an entry in /etc/fstab for the mount, and mounts the volume at /gluster/lock on all the nodes with a Samba server. It also enables automatic start of the CTDB service on reboot.

    Note

    When you stop a volume, the S29CTDB-teardown.sh script runs on all Red Hat Storage servers and removes the following lines from the [global] section of your Samba configuration file at /etc/samba/smb.conf.
    clustering = yes
    idmap backend = tdb2
    It also removes the entry for the mount from /etc/fstab and unmounts the volume at /gluster/lock.
  4. Verify that the file /etc/sysconfig/ctdb exists on all the nodes that are used as Samba servers. This file contains the Red Hat Storage recommended CTDB configurations.
  5. Create the /etc/ctdb/nodes file on all the nodes that are used as Samba servers and add the IPs of these nodes to the file.
    10.16.157.0
    10.16.157.3
    10.16.157.6
    10.16.157.9
    The IPs listed here are the private IPs of Samba servers.
  6. On all the nodes used as Samba servers that require IP failover, create the /etc/ctdb/public_addresses file and add the virtual IPs that CTDB should create to this file. Add these IP addresses in the following format:
    <Virtual IP>/<routing prefix> <node interface>
    For example:
    192.168.1.20/24 eth0
    192.168.1.21/24 eth0
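As referenced in step 2 above, the META=all substitution can be scripted. The sketch below assumes the lock volume is named ctdb and that the scripts contain the literal statement META=all; run it on every node:
# sed -i 's/META=all/META=ctdb/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
# sed -i 's/META=all/META=ctdb/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh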

9.5.2. Starting and Verifying your Configuration

Perform the following to start and verify your configuration:

Start CTDB and Verify the Configuration

Start the CTDB service and verify the virtual IP (VIP) addresses of a shut down server are carried over to another server in the replicated volume.
  1. Run # service ctdb start to start the CTDB service.
  2. Run # chkconfig smb off to prevent CTDB starting Samba automatically when the server is restarted.
  3. Verify that CTDB is running using the following commands:
    # ctdb status
    # ctdb ip
    # ctdb ping -n all
  4. Mount a Red Hat Storage volume using any one of the VIPs.
  5. Run # ctdb ip to locate the physical server serving the VIP.
  6. Shut down the CTDB VIP server to verify successful configuration.
    When the Red Hat Storage Server serving the VIP is shut down there will be a pause for a few seconds, then I/O will resume.

9.6. POSIX Access Control Lists

POSIX Access Control Lists (ACLs) allow different permissions for different users or groups to be assigned to files or directories, independent of the original owner or the owning group.
For example, the user John creates a file. He does not allow anyone in the group to access the file, except for another user, Antony (even if there are other users who belong to the group john).
This means, in addition to the file owner, the file group, and others, additional users and groups can be granted or denied access by using POSIX ACLs.

9.6.1. Setting POSIX ACLs

Two types of POSIX ACLs are available: access ACLs, and default ACLs.
Use access ACLs to grant permission to a specific file or directory.
Use default ACLs to set permissions at the directory level for all files in the directory. If a file inside that directory does not have an ACL, it inherits the permissions of the default ACLs of the directory.
ACLs can be configured for each file:
  • Per user
  • Per group
  • Through the effective rights mask
  • For users not in the user group for the file

9.6.1.1. Setting Access ACLs

Access ACLs grant permission for both files and directories.
The # setfacl -m entry_type file_name command sets and modifies access ACLs.

setfacl entry_type Options

The ACL entry_type translates to the POSIX ACL representations of owner, group, and other.
Permissions must be a combination of the characters r (read), w (write), and x (execute). Specify the ACL entry_type as described below, separating multiple entry types with commas.
u:user_name:permissions
Sets the access ACLs for a user. Specify the user name, or the UID.
g:group_name:permissions
Sets the access ACLs for a group. Specify the group name, or the GID.
m:permission
Sets the effective rights mask. The mask is the combination of all access permissions of the owning group, and all user and group entries.
o:permissions
Sets the access ACLs for users other than the ones in the group for the file.
If a file or directory already has a POSIX ACL, and the setfacl command is used, the additional permissions are added to the existing POSIX ACLs or the existing rule is modified.
For example, to give read and write permissions to user antony:
# setfacl -m u:antony:rw /mnt/gluster/data/testfile

9.6.1.2. Setting Default ACLs

New files and directories inherit ACL information from their parent directory, if that parent has an ACL that contains default entries. Default ACL entries can only be set on directories.
The # setfacl -d --set entry_type directory command sets default ACLs for files and directories.

setfacl entry_type Options

The ACL entry_type translates to the POSIX ACL representations of owner, group, and other.
Permissions must be a combination of the characters r (read), w (write), and x (execute). Specify the ACL entry_type as described below, separating multiple entry types with commas.
u:user_name:permissions
Sets the access ACLs for a user. Specify the user name, or the UID.
g:group_name:permissions
Sets the access ACLs for a group. Specify the group name, or the GID.
m:permission
Sets the effective rights mask. The mask is the combination of all access permissions of the owning group, and all user and group entries.
o:permissions
Sets the access ACLs for users other than the ones in the group for the file.
For example, run # setfacl -d --set o::r /mnt/gluster/data to set the default ACLs for the /data directory to read-only for users not in the user group.

Note

An access ACL set for an individual file can override the default ACL permissions.
Effects of a Default ACL
The following are the ways in which the permissions of a directory's default ACLs are passed to the files and subdirectories in it, as illustrated in the example below:
  • A subdirectory inherits the default ACLs of the parent directory both as its default ACLs and as its access ACLs.
  • A file inherits the default ACLs as its access ACLs.
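A minimal sketch of this behaviour, assuming a glusterFS mount at /mnt/gluster/data and a hypothetical user john, is shown below. The -m flag is used here to add a single default entry rather than replacing the whole default ACL with --set; a file created inside the directory then picks up a user:john entry as part of its access ACLs, which getfacl lists:
# setfacl -d -m u:john:rwx /mnt/gluster/data
# touch /mnt/gluster/data/newfile
# getfacl /mnt/gluster/data/newfile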

9.6.2. Retrieving POSIX ACLs

Run the # getfacl command to view the existing POSIX ACLs for a file or directory.
# getfacl path/filename
View the existing access ACLs of the sample.jpg file using the following command.
# getfacl /mnt/gluster/data/test/sample.jpg
# owner: antony
# group: antony
user::rw-
group::rw-
other::r--
# getfacl directory name
View the default ACLs of the /doc directory using the following command.
# getfacl /mnt/gluster/data/doc
# owner: antony
# group: antony
user::rw-
user:john:r--
group::r--
mask::r--
other::r--
default:user::rwx
default:user:antony:rwx
default:group::r-x
default:mask::rwx
default:other::r-x

9.6.3. Removing POSIX ACLs

Run the # setfacl -x entry_type file command to remove all permissions for a user, a group, or others.

setfacl entry_type Options

The ACL entry_type translates to the POSIX ACL representations of owner, group, and other.
Permissions must be a combination of the characters r (read), w (write), and x (execute). Specify the ACL entry_type as described below, separating multiple entry types with commas.
u:user_name
Removes the access ACLs for a user. Specify the user name, or the UID.
g:group_name
Removes the access ACLs for a group. Specify the group name, or the GID.
m:permission
Removes the effective rights mask. The mask is the combination of all access permissions of the owning group, and all user and group entries.
o:permissions
Removes the access ACLs for users other than the ones in the group for the file.
For example, to remove all permissions from the user antony:
# setfacl -x u:antony /mnt/gluster/data/test-file

9.6.4. Samba and ACLs

POSIX ACLs are enabled by default when using Samba to access a Red Hat Storage volume. Samba is compiled with the --with-acl-support option, so no special flags are required when accessing or mounting a Samba share.

Chapter 10. Managing Red Hat Storage Volumes

This chapter describes how to perform common volume management operations on the Red Hat Storage volumes.

10.1. Configuring Volume Options

Note

Volume options can be configured while the trusted storage pool is online.
The current settings for a volume can be viewed using the following command:
# gluster volume info VOLNAME
Volume options can be configured using the following command:
# gluster volume set VOLNAME OPTION PARAMETER
For example, to specify the performance cache size for test-volume:
# gluster volume set test-volume performance.cache-size 256MB
Set volume successful
The following table lists available volume options along with their description and default value.

Note

The default values are subject to change, and may not be the same for all Red Hat Storage versions.
Option Value Description Allowed Values Default Value
auth.allow IP addresses or hostnames of the clients which are allowed to access the volume. Valid hostnames or IP addresses, which includes wild card patterns including *. For example, 192.168.1.*. A list of comma separated addresses is acceptable, but a single hostname must not exceed 256 characters. * (allow all)
auth.reject IP addresses or hostnames of the clients which are denied access to the volume. Valid hostnames or IP addresses, which includes wild card patterns including *. For example, 192.168.1.*. A list of comma separated addresses is acceptable, but a single hostname must not exceed 256 characters. none (reject none)

Note

Using auth.allow and auth.reject options, you can control access of only glusterFS FUSE-based clients. Use nfs.rpc-auth-* options for NFS access control.
cluster.min-free-disk Specifies the percentage of disk space that must be kept free. This may be useful for non-uniform bricks. Percentage of required minimum free disk space. 10%
cluster.op-version Allows you to set the operating version of the cluster. The op-version number cannot be downgraded and is set for all the volumes. Also the op-version does not appear when you execute the gluster volume info command. 2 | 30000 Default value is 2 after an upgrade from RHS 2.1. Value is set to 30000 for a new cluster deployment.
cluster.self-heal-daemon Specifies whether proactive self-healing on replicated volumes is activated. on | off on
cluster.server-quorum-type If set to server, this option enables the specified volume to participate in the server-side quorum. For more information on configuring the server-side quorum, see Section 10.9.1.1, “Configuring Server-Side Quorum” none | server none
cluster.server-quorum-ratio Sets the quorum percentage for the trusted storage pool. 0 - 100 >50%
cluster.quorum-type If set to fixed, this option allows writes to a file only if the number of active bricks in that replica set (to which the file belongs) is greater than or equal to the count specified in the cluster.quorum-count option. If set to auto, this option allows writes to the file only if the percentage of active replicate bricks is more than 50% of the total number of bricks that constitute that replica. If there are only two bricks in the replica group, the first brick must be up and running to allow modifications. fixed | auto none
cluster.quorum-count The minimum number of bricks that must be active in a replica-set to allow writes. This option is used in conjunction with cluster.quorum-type =fixed option to specify the number of bricks to be active to participate in quorum. The cluster.quorum-type = auto option will override this value. 1 - replica-count 0
diagnostics.brick-log-level Changes the log-level of the bricks. INFO | DEBUG | WARNING | ERROR | CRITICAL | NONE | TRACE info
diagnostics.client-log-level Changes the log-level of the clients. INFO | DEBUG | WARNING | ERROR | CRITICAL | NONE | TRACE info
diagnostics.brick-sys-log-level Depending on the value defined for this option, log messages at and above the defined level are generated in the syslog and the brick log files. INFO | WARNING | ERROR | CRITICAL CRITICAL
diagnostics.client-sys-log-level Depending on the value defined for this option, log messages at and above the defined level are generated in the syslog and the client log files. INFO | WARNING | ERROR | CRITICAL CRITICAL
diagnostics.client-log-format Allows you to configure the log format to log either with a message id or without one on the client. no-msg-id | with-msg-id with-msg-id
diagnostics.brick-log-format Allows you to configure the log format to log either with a message id or without one on the brick. no-msg-id | with-msg-id with-msg-id
diagnostics.brick-log-flush-timeout The length of time for which the log messages are buffered, before being flushed to the logging infrastructure (gluster or syslog files) on the bricks. 30 - 300 seconds (30 and 300 included) 120 seconds
diagnostics.brick-log-buf-size The maximum number of unique log messages that can be suppressed until the timeout or buffer overflow, whichever occurs first on the bricks. 0 - 20 (0 and 20 included) 5
diagnostics.client-log-flush-timeout The length of time for which the log messages are buffered, before being flushed to the logging infrastructure (gluster or syslog files) on the clients. 30 - 300 seconds (30 and 300 included) 120 seconds
diagnostics.client-log-buf-size The maximum number of unique log messages that can be suppressed until the timeout or buffer overflow, whichever occurs first on the clients. 0 - 20 (0 and 20 included) 5
features.quota-deem-statfs When this option is set to on, it takes the quota limits into consideration while estimating the filesystem size. The limit will be treated as the total size instead of the actual size of filesystem. on | off off
features.read-only Specifies whether to mount the entire volume as read-only for all the clients accessing it. on | off off
group small-file-perf This option enables the open-behind and quick-read translators on the volume, and can be done only if all the clients of the volume are using Red Hat Storage 2.1. NA -
network.ping-timeout The time the client waits for a response from the server. If a timeout occurs, all resources held by the server on behalf of the client are cleaned up. When the connection is reestablished, all resources need to be reacquired before the client can resume operations on the server. Additionally, locks are acquired and the lock tables are updated. A reconnect is a very expensive operation and must be avoided. 42 seconds 42 seconds
nfs.acl Disabling nfs.acl will remove support for the NFSACL sideband protocol. This is enabled by default. enable | disable enable
nfs.enable-ino32 For NFS clients or applications that do not support 64-bit inode numbers, use this option to make NFS return 32-bit inode numbers instead. Disabled by default, so NFS returns 64-bit inode numbers. enable | disable disable
nfs.export-dir By default, all NFS volumes are exported as individual exports. This option allows you to export specified subdirectories on the volume. The path must be an absolute path. Along with the path, a list of IP addresses or hostnames can be associated with each subdirectory. None
nfs.export-dirs By default, all NFS sub-volumes are exported as individual exports. This option allows any directory on a volume to be exported separately. on | off on

Note

The value set for nfs.export-dirs and nfs.export-volumes options are global and applies to all the volumes in the Red Hat Storage trusted storage pool.
nfs.export-volumes Enables or disables exporting entire volumes. If disabled and used in conjunction with nfs.export-dir, you can set subdirectories as the only exports. on | off on
nfs.mount-rmtab Path to the cache file that contains a list of NFS-clients and the volumes they have mounted. Change the location of this file to a mounted (with glusterfs-fuse, on all storage servers) volume to gain a trusted pool wide view of all NFS-clients that use the volumes. The contents of this file provide the information that can be obtained with the showmount command. Path to a directory /var/lib/glusterd/nfs/rmtab
nfs.mount-udp Enable UDP transport for the MOUNT sideband protocol. By default, UDP is not enabled, and MOUNT can only be used over TCP. Some NFS-clients (certain Solaris, HP-UX and others) do not support MOUNT over TCP and enabling nfs.mount-udp makes it possible to use NFS exports provided by Red Hat Storage. disable | enable disable
nfs.nlm By default, the Network Lock Manager (NLMv4) is enabled. Use this option to disable NLM. Red Hat does not recommend disabling this option. on | off on
nfs.rpc-auth-allow IP_ADDRESSES A comma separated list of IP addresses allowed to connect to the server. By default, all clients are allowed. Comma separated list of IP addresses accept all
nfs.rpc-auth-reject IP_ADDRESSES A comma separated list of addresses not allowed to connect to the server. By default, all connections are allowed. Comma separated list of IP addresses reject none
nfs.ports-insecure Allows client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting for allowing insecure ports for all exports using a single option. on | off off
nfs.addr-namelookup Specifies whether to lookup names for incoming client connections. In some configurations, the name server can take too long to reply to DNS queries, resulting in timeouts of mount requests. This option can be used to disable name lookups during address authentication. Note that disabling name lookups will prevent you from using hostnames in nfs.rpc-auth-* options. on | off on
nfs.port Associates glusterFS NFS with a non-default port. 1025-65535 38465-38467
nfs.disable Specifies whether to disable NFS exports of individual volumes. on | off off
nfs.server-aux-gids When enabled, the NFS server will resolve the groups of the user accessing the volume. NFSv3 is restricted by the RPC protocol (AUTH_UNIX/AUTH_SYS header) to 16 groups. By resolving the groups on the NFS server, this limit can be bypassed. on|off off
open-behind Improves the application's ability to read data from a file by sending success notifications to the application whenever it receives an open call. on | off off
performance.io-thread-count The number of threads in the IO threads translator. 0 - 65 16
performance.cache-max-file-size Sets the maximum file size cached by the io-cache translator. Can be specified using the normal size descriptors of KB, MB, GB, TB, or PB (for example, 6GB). Size in bytes, or specified using size descriptors. 2 ^ 64-1 bytes
performance.cache-min-file-size Sets the minimum file size cached by the io-cache translator. Can be specified using the normal size descriptors of KB, MB, GB, TB, or PB (for example, 6GB). Size in bytes, or specified using size descriptors. 0
performance.cache-refresh-timeout The number of seconds cached data for a file will be retained. After this timeout, data re-validation will be performed. 0 - 61 seconds 1 second
performance.cache-size Size of the read cache. Size in bytes, or specified using size descriptors. 32 MB
performance.md-cache-timeout The time period in seconds which controls when metadata cache has to be refreshed. If the age of cache is greater than this time-period, it is refreshed. Every time cache is refreshed, its age is reset to 0. 0-60 seconds 1 second
performance.quick-read.priority Sets the priority of the order in which files get flushed from the cache when it is full. min/max -
performance.quick-read.max-file-size Maximum size of the file that can be fetched using the get interface. Files larger than this size are not cached by quick-read. 0-1MB 64KB
performance.quick-read.cache-timeout Timeout for the validation of a cached file. After this timeout, the properties of the file is compared with that of the cached copy. If the file has changed after it has been cached, the cache is flushed. 1-60 seconds 1 second
performance.quick-read.cache-size The size of the quick-read cache that is used to cache all files. Can be specified using the normal size descriptors of KB, MB, GB. 0-32GB 128MB
performance.use-anonymous-fd This option requires open-behind to be on. For read operations, use anonymous FD when the original FD is open-behind and not yet opened in the backend. Yes | No Yes
performance.lazy-open This option requires open-behind to be on. Perform an open in the backend only when a necessary FOP arrives (for example, write on the FD, unlink of the file). When this option is disabled, perform backend open immediately after an unwinding open. Yes/No Yes
server.allow-insecure Allows client connections from unprivileged ports. By default, only privileged ports are allowed. This is a global setting for allowing insecure ports to be enabled for all exports using a single option. on | off off

Important

Setting server.allow-insecure to on allows the server to accept messages from insecure ports. Enable this option only if your deployment requires it, for example if there are too many bricks in each volume, or if there are too many services which have already utilized all the privileged ports in the system. You can control access of only glusterFS FUSE-based clients. Use nfs.rpc-auth-* options for NFS access control.
server.root-squash Prevents root users from having root privileges, and instead assigns them the privileges of nfsnobody. This squashes the power of the root users, preventing unauthorized modification of files on the Red Hat Storage Servers. on | off off
server.anonuid Value of the UID used for the anonymous user when root-squash is enabled. When root-squash is enabled, all the requests received from the root UID (that is 0) are changed to have the UID of the anonymous user. 0 - 4294967295 65534 (this UID is also known as nfsnobody)
server.anongid Value of the GID used for the anonymous user when root-squash is enabled. When root-squash is enabled, all the requests received from the root GID (that is 0) are changed to have the GID of the anonymous user. 0 - 4294967295 65534 (this GID is also known as nfsnobody)
server.gid-timeout The time period in seconds which controls when cached groups has to expire. This is the cache that contains the groups (GIDs) where a specified user (UID) belongs to. This option is used only when server.manage-gids is enabled. 0-4294967295 seconds 2 seconds
server.manage-gids Resolve groups on the server-side. By enabling this option, the groups (GIDs) a user (UID) belongs to gets resolved on the server, instead of using the groups that were send in the RPC Call by the client. This option makes it possible to apply permission checks for users that belong to bigger group lists than the protocol supports (approximately 93). on|off off
server.statedump-path Specifies the directory in which the statedump files must be stored. Path to a directory /var/run/gluster (for a default installation)
storage.health-check-interval Sets the time interval in seconds for a filesystem health check. You can set it to 0 to disable. The POSIX translator on the bricks performs a periodic health check. If this check fails, the filesystem exported by the brick is not usable anymore and the brick process (glusterfsd) logs a warning and exits. 0-4294967295 seconds 30 seconds
storage.owner-uid Sets the UID for the bricks of the volume. This option may be required when some of the applications need the brick to have a specific UID to function correctly. Example: For QEMU integration the UID/GID must be qemu:qemu, that is, 107:107 (107 is the UID and GID of qemu). Any integer greater than or equal to -1. The UID of the bricks is not changed. This is denoted by -1.
storage.owner-gid Sets the GID for the bricks of the volume. This option may be required when some of the applications need the brick to have a specific GID to function correctly. Example: For QEMU integration the UID/GID must be qemu:qemu, that is, 107:107 (107 is the UID and GID of qemu). Any integer greater than or equal to -1. The GID of the bricks is not changed. This is denoted by -1.

10.2. Expanding Volumes

Volumes can be expanded while the trusted storage pool is online and available. For example, you can add a brick to a distributed volume, which increases distribution and adds capacity to the Red Hat Storage volume. Similarly, you can add a group of bricks to a distributed replicated volume, which increases the capacity of the Red Hat Storage volume.

Note

When expanding distributed replicated or distributed striped volumes, the number of bricks being added must be a multiple of the replica or stripe count. For example, to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2 (such as 4, 6, 8, etc.).

Procedure 10.1. Expanding a Volume

  1. From any server in the trusted storage pool, use the following command to probe the server on which you want to add a new brick:
    # gluster peer probe HOSTNAME
    For example:
    # gluster peer probe server4
    Probe successful
  2. Add the brick using the following command:
    # gluster volume add-brick VOLNAME NEW_BRICK
    For example:
    # gluster volume add-brick test-volume server4:/exp4
    Add Brick successful
    If you want to change the replica/stripe count, you must add the replica/stripe count to the add-brick command.
    For example,
    # gluster volume add-brick test-volume replica 3 server4:/exp4
    When increasing the replica/stripe count of a distribute replicate/stripe volume, the number of replica/stripe bricks to be added must be equal to the number of distribute subvolumes; see the example after this procedure.
  3. Check the volume information using the following command:
    # gluster volume info
    The command output displays information similar to the following:
    Volume Name: test-volume
    Type: Distribute
    Status: Started
    Number of Bricks: 4
    Bricks:
    Brick1: server1:/exp1
    Brick2: server2:/exp2
    Brick3: server3:/exp3
    Brick4: server4:/exp4
  4. Rebalance the volume to ensure that files will be distributed to the new brick. Use the rebalance command as described in Section 10.6, “Rebalancing Volumes”.
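To make the brick-count rule concrete: assume a hypothetical distributed replicated volume named dr-volume with two distribute subvolumes and a replica count of 2 (four bricks in total). Increasing the replica count to 3 requires adding one brick per distribute subvolume, that is, two bricks. The server names below are hypothetical:
# gluster volume add-brick dr-volume replica 3 server5:/exp5 server6:/exp6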

10.3. Shrinking Volumes

You can shrink volumes while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible in a distributed volume because of a hardware or network failure.

Note

When shrinking distributed replicated or distributed striped volumes, the number of bricks being removed must be a multiple of the replica or stripe count. For example, to shrink a distributed striped volume with a stripe count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, 8, etc.). In addition, the bricks you are removing must be from the same sub-volume (the same replica or stripe set). In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove brick operation. In a replicated volume, at least one of the bricks in the replica must be available.

Procedure 10.2. Shrinking a Volume

  1. Remove a brick using the following command:
    # gluster volume remove-brick VOLNAME BRICK start
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 start
    Remove Brick start successful

    Note

    If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick.
  2. You can view the status of the remove brick operation using the following command:
    # gluster volume remove-brick VOLNAME BRICK status
    For example:
    # gluster volume remove-brick test-volume server2:/exp2 status
          Node    Rebalanced-files          size       scanned      failures         status
     ---------         -----------   -----------   -----------   -----------   ------------
     localhost                  16      16777216            52             0    in progress
    192.168.1.1                 13      16723211            47             0    in progress
  3. When the data migration shown in the previous status command is complete, run the following command to commit the brick removal:
    # gluster volume remove-brick VOLNAME BRICK commit
    For example,
    # gluster volume remove-brick test-volume server2:/exp2 commit
  4. After the brick removal, you can check the volume information using the following command:
    # gluster volume info 
    The command displays information similar to the following:
    # gluster volume info
    Volume Name: test-volume
    Type: Distribute
    Status: Started
    Number of Bricks: 3
    Bricks:
    Brick1: server1:/exp1
    Brick3: server3:/exp3
    Brick4: server4:/exp4

10.3.1. Stopping a remove-brick Operation

Important

Stopping a remove-brick operation is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
A remove-brick operation that is in progress can be stopped by using the stop command.

Note

Files that were already migrated during remove-brick operation will not be migrated back to the same brick when the operation is stopped.
To stop remove brick operation, use the following command:
# gluster volume remove-brick VOLNAME BRICK stop
For example:
# gluster volume remove-brick di rhs1:/brick1/di21 rhs2:/brick1/di21 stop

Node   Rebalanced-files   size     scanned  failures   skipped   status  run-time in secs
----      -------         ----       ----     ------    -----     -----    ------   
localhost     23          376Bytes    34        0        0      stopped      2.00
rhs1          0           0Bytes      88        0        0      stopped      2.00  
rhs2          0           0Bytes       0        0        0      not started  0.00
'remove-brick' process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check remove-brick process for completion before doing any further brick related tasks on the volume.

10.4. Migrating Volumes

Data can be redistributed across bricks while the trusted storage pool is online and available. Before replacing bricks on the new servers, ensure that the new servers are successfully added to the trusted storage pool.

Note

Before performing a replace-brick operation, review the known issues related to replace-brick operation in the Red Hat Storage 3.0 Release Notes.

10.4.1. Replacing a Subvolume on a Distribute or Distribute-replicate Volume

This procedure applies only when at least one brick from the subvolume to be replaced is online. In case of a Distribute volume, the brick that must be replaced must be online. In case of a Distribute-replicate volume, at least one brick of the replica set that is being replaced must be online.
To replace the entire subvolume with new bricks on a Distribute-replicate volume, follow these steps:
  1. Add the new bricks to the volume.
    # gluster volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] NEW-BRICK

    Example 10.1. Adding a Brick to a Distribute Volume

    # gluster volume add-brick test-volume server5:/exp5
    Add Brick successful
  2. Verify the volume information using the command:
    # gluster volume info
     Volume Name: test-volume
        Type: Distribute
        Status: Started
        Number of Bricks: 5
        Bricks:
        Brick1: server1:/exp1
        Brick2: server2:/exp2
        Brick3: server3:/exp3
        Brick4: server4:/exp4
        Brick5: server5:/exp5

    Note

    In case of a Distribute-replicate or stripe volume, you must specify the replica or stripe count in the add-brick command and provide the same number of bricks as the replica or stripe count to the add-brick command.
  3. Remove the bricks to be replaced from the subvolume.
    1. Start the remove-brick operation using the command:
      # gluster volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> start

      Example 10.2. Start a remove-brick operation on a distribute volume

      #gluster volume remove-brick test-volume server2:/exp2 start
      Remove Brick start successful
    2. View the status of the remove-brick operation using the command:
      # gluster volume remove-brick <VOLNAME> [replica <COUNT>] BRICK status

      Example 10.3. View the Status of remove-brick Operation

      # gluster volume remove-brick test-volume server2:/exp2 status
      Node     Rebalanced-files size        scanned failures status
      ------------------------------------------------------------------
      server2  16               16777216    52      0        in progress
      Keep monitoring the remove-brick operation status by executing the above command. When the value of the status field is set to complete in the output of the remove-brick status command, proceed further.
    3. Commit the remove-brick operation using the command:
      # gluster volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> commit

      Example 10.4. Commit the remove-brick Operation on a Distribute Volume

      # gluster volume remove-brick test-volume server2:/exp2 commit
    4. Verify the volume information using the command:
      # gluster volume info
      Volume Name: test-volume
      Type: Distribute
      Status: Started
      Number of Bricks: 4
      Bricks:
      Brick1: server1:/exp1
      Brick3: server3:/exp3
      Brick4: server4:/exp4
      Brick5: server5:/exp5
    5. Verify the content on the brick after committing the remove-brick operation on the volume. If any files are left over, copy them through a FUSE or NFS mount.
      1. Verify if there are any pending files on the bricks of the subvolume.
        Along with the files, all the application-specific extended attributes must be copied. glusterFS also uses extended attributes to store its internal data. The extended attributes used by glusterFS are of the form trusted.glusterfs.*, trusted.afr.*, and trusted.gfid. Any extended attributes other than the ones listed above must also be copied.
        To copy the application-specific extended attributes and to achieve an effect similar to the one described above, use the following shell script:
        Syntax:
        # copy.sh <glusterfs-mount-point> <brick>

        Example 10.5. Code Snippet Usage

        If the mount point is /mnt/glusterfs and the brick path is /export/brick1, then the script must be run as:
        # copy.sh /mnt/glusterfs /export/brick1
        #!/bin/bash
        # Copies files and application-specific extended attributes from a removed
        # brick back onto the volume through a glusterFS mount point.
        # Usage: copy.sh <glusterfs-mount-point> <brick>
        
        MOUNT=$1
        BRICK=$2
        
        # Walk every non-directory entry left behind on the brick.
        for file in `find $BRICK ! -type d`; do
            # Path of the entry relative to the brick root.
            rpath=`echo $file | sed -e "s#$BRICK\(.*\)#\1#g"`
            rdir=`dirname $rpath`
        
            # Copy the file contents through the glusterFS mount.
            cp -fv $file $MOUNT/$rdir;
        
            # Copy every extended attribute except the glusterFS-internal ones
            # (trusted.glusterfs.*, trusted.afr.*, trusted.gfid).
            for xattr in `getfattr -e hex -m. -d $file 2>/dev/null | sed -e '/^#/d' | grep -v -E "trusted.glusterfs.*" | grep -v -E "trusted.afr.*" | grep -v "trusted.gfid"`;
            do
                key=`echo $xattr | cut -d"=" -f 1`
                value=`echo $xattr | cut -d"=" -f 2`
        
                setfattr $MOUNT/$rpath -n $key -v $value
            done
        done
      2. To identify a list of files that are in a split-brain state, execute the command:
        # gluster volume heal test-volume info split-brain
      3. If any files are listed in the output of the above command, compare the copies of those files across the bricks in the replica set, manually retain the correct copy, and delete the other copies from the mount point. Selecting the correct copy of a file requires manual intervention by the system administrator.

10.4.2. Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume

A single brick can be replaced during a hardware failure situation, such as a disk failure or a server failure. The brick that must be replaced can be either online or offline. This procedure is applicable for volumes with replication. For Replicate and Distribute-replicate volume types, after replacing the brick, self-heal is triggered to heal the data onto the new brick.
Procedure to replace an old brick with a new brick on a Replicate or Distribute-replicate volume:
  1. Ensure that the new brick (sys5:/home/gfs/r2_5) that replaces the old brick (sys0:/home/gfs/r2_0) is empty. Ensure that all the bricks are online. The brick that must be replaced can be in an offline state.
  2. Bring the brick that must be replaced to an offline state, if it is not already offline.
    1. Identify the PID of the brick to be replaced, by executing the command:
      # gluster volume status
      Status of volume: r2
      Gluster process                  Port    Online    Pid
      -------------------------------------------------------
      Brick sys0:/home/gfs/r2_0        49152   Y         5342
      Brick sys1:/home/gfs/r2_1        49153   Y         5354
      Brick sys2:/home/gfs/r2_2        49154   Y         5365
      Brick sys3:/home/gfs/r2_3        49155   Y         5376
    2. Log in to the host on which the brick to be replaced has its process running, and kill the brick process.
      # kill -9 <PID>
    3. Ensure that the brick to be replaced is offline and the other bricks are online by executing the command:
      # gluster volume status
      Status of volume: r2
      Gluster process                  Port    Online  Pid
      ------------------------------------------------------
      Brick sys0:/home/gfs/r2_0        N/A     N       5342
      Brick sys1:/home/gfs/r2_1        49153   Y       5354
      Brick sys2:/home/gfs/r2_2        49154   Y       5365
      Brick sys3:/home/gfs/r2_3        49155   Y       5376
  3. Create a FUSE mount point from any server to edit the extended attributes. Extended attributes cannot be edited using NFS or CIFS mount points.
  4. Perform the following operations to change the Automatic File Replication extended attributes so that the heal process happens from the other brick (sys1:/home/gfs/r2_1) in the replica pair to the new brick (sys5:/home/gfs/r2_5).
    Note that /mnt/r2 is the FUSE mount path.
    1. Create a new directory on the mount point and ensure that a directory with such a name is not already present.
      # mkdir /mnt/r2/<name-of-nonexistent-dir>
    2. Delete the directory and set the extended attributes.
      # rmdir /mnt/r2/<name-of-nonexistent-dir>
      # setfattr -n trusted.non-existent-key -v abc /mnt/r2
      # setfattr -x trusted.non-existent-key /mnt/r2
    3. Ensure that the extended attributes on the other bricks in the replica (in this example, trusted.afr.r2-client-0) are not set to zero.
      # getfattr -d -m. -e hex /home/gfs/r2_1
      # file: home/gfs/r2_1
      security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 
      trusted.afr.r2-client-0=0x000000000000000300000002  
      trusted.afr.r2-client-1=0x000000000000000000000000 
      trusted.gfid=0x00000000000000000000000000000001 
      trusted.glusterfs.dht=0x0000000100000000000000007ffffffe 
      trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
  5. Execute the replace-brick command with the force option:
    # gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force 
    volume replace-brick: success: replace-brick commit successful
  6. Check if the new brick is online.
    # gluster volume status
    Status of volume: r2
    Gluster process                    Port    Online    Pid
    ---------------------------------------------------------
    Brick sys5:/home/gfs/r2_5          49156   Y         5731
    Brick sys1:/home/gfs/r2_1          49153   Y         5354
    Brick sys2:/home/gfs/r2_2          49154   Y         5365
    Brick sys3:/home/gfs/r2_3          49155   Y         5376
  7. Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica.
    # getfattr -d -m. -e hex /home/gfs/r2_1 
    getfattr: Removing leading '/' from absolute path names 
    # file: home/gfs/r2_1 
    security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 
    trusted.afr.r2-client-0=0x000000000000000000000000
    trusted.afr.r2-client-1=0x000000000000000000000000 
    trusted.gfid=0x00000000000000000000000000000001 
    trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
    trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
    Note that in this example, the extended attributes trusted.afr.r2-client-0 and trusted.afr.r2-client-1 are set to zero.

10.4.3. Replacing an Old Brick with a New Brick on a Distribute Volume

Important

In case of a Distribute volume type, replacing a brick using this procedure will result in data loss.
  1. Replace a brick with a commit force option:
    # gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> commit force

    Example 10.6. Replace a brick on a Distribute Volume

    # gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force 
    volume replace-brick: success: replace-brick commit successful
  2. Verify if the new brick is online.
    # gluster volume status
    Status of volume: r2
    Gluster process                    Port    Online    Pid
    ---------------------------------------------------------
    Brick sys5:/home/gfs/r2_5          49156   Y         5731
    Brick sys1:/home/gfs/r2_1          49153   Y         5354
    Brick sys2:/home/gfs/r2_2          49154   Y         5365
    Brick sys3:/home/gfs/r2_3          49155   Y         5376

Note

All the replace-brick command options except the commit force option are deprecated.

10.5. Replacing Hosts

10.5.1. Replacing a Host Machine with a Different IP Address

You can replace a failed host machine with another host having a different IP address.

Important

Ensure that the new peer has the same disk capacity as the one it is replacing. For example, if the failed peer in the cluster had two 100 GB drives, the new peer should have the same number of drives with the same capacity.
In the following example, the original machine that has had an irrecoverable failure is sys0 and the replacement machine is sys5. The brick with the unrecoverable failure is sys0:/home/gfs/r2_0 and the replacement brick is sys5:/home/gfs/r2_5.
  1. First probe the new peer from one of the existing peers to bring it into the cluster.
    # gluster peer probe sys5
  2. Ensure that the new brick (sys5:/home/gfs/r2_5) that replaces the old brick (sys0:/home/gfs/r2_0) is empty. Ensure that all the bricks are online. The brick that must be replaced can be in an offline state.
  3. Bring the brick that must be replaced to an offline state, if it is not already offline.
    1. Identify the PID of the brick to be replaced, by executing the command:
      # gluster volume status
      Status of volume: r2
      Gluster process                  Port    Online    Pid
      -------------------------------------------------------
      Brick sys0:/home/gfs/r2_0        49152   Y         5342
      Brick sys1:/home/gfs/r2_1        49153   Y         5354
      Brick sys2:/home/gfs/r2_2        49154   Y         5365
      Brick sys3:/home/gfs/r2_3        49155   Y         5376
    2. Log in to the host on which the brick to be replaced has its process running, and kill the brick process.
      # kill -9 <PID>
    3. Ensure that the brick to be replaced is offline and the other bricks are online by executing the command:
      # gluster volume status
      Status of volume: r2
      Gluster process                  Port    Online  Pid
      ------------------------------------------------------
      Brick sys0:/home/gfs/r2_0        N/A     N       5342
      Brick sys1:/home/gfs/r2_1        49153   Y       5354
      Brick sys2:/home/gfs/r2_2        49154   Y       5365
      Brick sys3:/home/gfs/r2_3        49155   Y       5376
  4. Create a FUSE mount point from any server to edit the extended attributes. Extended attributes cannot be edited using NFS or CIFS mount points.
    # mount -t glusterfs <server-ip>:/<volname> <mount-point>
  5. Perform the following operations to change the Automatic File Replication extended attributes so that the heal process happens from the other brick (sys1:/home/gfs/r2_1) in the replica pair to the new brick (sys5:/home/gfs/r2_5). Note that /mnt/r2 is the FUSE mount path.
    1. Create a new directory on the mount point and ensure that a directory with such a name is not already present.
      # mkdir /mnt/r2/<name-of-nonexistent-dir>
    2. Delete the directory and set the extended attributes.
      # rmdir /mnt/r2/<name-of-nonexistent-dir>
      # setfattr -n trusted.non-existent-key -v abc /mnt/r2
      # setfattr -x trusted.non-existent-key /mnt/r2
    3. Ensure that the extended attributes on the other bricks in the replica (in this example, trusted.afr.r2-client-0) are not set to zero.
      # getfattr -d -m. -e hex /home/gfs/r2_1
      # file: home/gfs/r2_1
      security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
      trusted.afr.r2-client-0=0x000000000000000300000002
      trusted.afr.r2-client-1=0x000000000000000000000000
      trusted.gfid=0x00000000000000000000000000000001
      trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
      trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
  6. Execute the replace-brick command with the force option:
        # gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force
       volume replace-brick: success: replace-brick commit successful
    
  7. Verify that the new brick is online.
    # gluster volume status
    Status of volume: r2
    Gluster process                    Port    Online    Pid
    ---------------------------------------------------------
    Brick sys5:/home/gfs/r2_5          49156   Y         5731
    Brick sys1:/home/gfs/r2_1          49153   Y         5354
    Brick sys2:/home/gfs/r2_2          49154   Y         5365
    Brick sys3:/home/gfs/r2_3          49155   Y         5376
    At this point, self-heal is triggered automatically by the self-heal daemon. The status of the heal process can be seen by executing the command:
    # gluster volume heal VOLNAME info
  8. Detach the original machine from the trusted pool.
    # gluster peer detach sys0
  9. Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica.
    # getfattr -d -m. -e hex /home/gfs/r2_1
    getfattr: Removing leading '/' from absolute path names
    # file: home/gfs/r2_1
        security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 
        trusted.afr.r2-client-0=0x000000000000000000000000
        trusted.afr.r2-client-1=0x000000000000000000000000 
        trusted.gfid=0x00000000000000000000000000000001 
        trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
        trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
    

    Note

    In this example, the extended attributes trusted.afr.r2-client-0 and trusted.afr.r2-client-1 are set to zero.

10.5.2. Replacing a Host Machine with the Same IP Address

You can replace a failed host machine with another host having the same IP address.
In the following example, the original host that is irrecoverable has the hostname Example1 and the IP address 192.168.1.44.
  1. Stop the glusterd service on the Example1 node.
    # service glusterd stop
  2. Retrieve the UUID of the failed node from another node of the cluster by executing the following command:
    # gluster peer status
    Number of Peers: 2
    
    Hostname: 192.168.1.44
    Uuid: 1d9677dc-6159-405e-9319-ad85ec030880
    State: Peer in Cluster (Connected)
    
    Hostname: 192.168.1.45
    Uuid: b5ab2ec3-5411-45fa-a30f-43bd04caf96b
    State: Peer Rejected (Connected)
    
    Note that the UUID of the failed host is b5ab2ec3-5411-45fa-a30f-43bd04caf96b (the peer shown in the Peer Rejected state).
  3. Edit the glusterd.info file in the new host and include the UUID of the host you retrieved in the previous step.
    # cat /var/lib/glusterd/glusterd.info
    UUID=b5ab2ec3-5411-45fa-a30f-43bd04caf96b
    operating-version=30000
  4. Select any node (say Example2) in the cluster and retrieve its UUID from the glusterd.info file.
    # grep -i uuid /var/lib/glusterd/glusterd.info
    UUID=8cc6377d-0153-4540-b965-a4015494461c
    
  5. Gather the peer information files from the node (Example2) selected in the previous step by executing the following command on that node. A consolidated sketch of steps 5 through 10 is shown after this procedure.
    # cp -a /var/lib/glusterd/peers /tmp/
  6. Remove the peer file corresponding to the failed peer from the /tmp/peers directory.
    # rm /tmp/peers/b5ab2ec3-5411-45fa-a30f-43bd04caf96b
    Note that the UUID corresponds to the UUID of the failed node retrieved in Step 2.
  7. Archive all the peer files so that they can be copied to the failed node (Example1).
    # cd /tmp; tar -cvf peers.tar peers
  8. Copy the archive created above to the new peer (Example1).
    # scp /tmp/peers.tar root@NEWNODE:/tmp
  9. Extract the archive and copy its content to the /var/lib/glusterd/peers directory. Execute the following commands on the newly added node (Example1):
    # cd /tmp
    # tar -xvf peers.tar
    # cp peers/* /var/lib/glusterd/peers/
  10. Select any node in the cluster other than the node (Example2) selected in step 4. From that node, copy the peer file corresponding to the UUID retrieved in step 4 to the new node (Example1) by executing the following command:
    # scp /var/lib/glusterd/peers/<UUID-retrieved-from-step4> root@NEWNODE:/var/lib/glusterd/peers/
  11. Retrieve the brick directory information by executing the following command on any node in the cluster.
    # gluster volume info
  12. Create a brick directory. If the brick directory already exists, delete it and create a new one.
    # mkdir brick1
  13. Use a FUSE mount point to mount the GlusterFS volume
    # mount -t glusterfs <server-name>:/<vol-name> <mount>
  14. Create a new directory on the mount, delete that directory, and then set and clear the extended attributes. This is done to ensure that the self-heal process is triggered in the right direction.
    # mkdir <mount>/dir1
    # rmdir <mount>/dir1
    # setfattr -n trusted.non-existent-key -v abc <mount>
    # setfattr -x trusted.non-existent-key <mount>
    
  15. Retrieve the volume-id from an existing brick by executing the following command on any node that contains bricks for the volume.
    # getfattr -d -m. -e hex <brick-path>
    Copy the volume ID.
    # getfattr -d -m. -e hex /rhs/brick1/drv1
    getfattr: Removing leading '/' from absolute path names
    # file: rhs/brick1/drv1
    trusted.afr.drv-client-0=0x000000000000000000000000
    trusted.afr.drv-client-1=0x000000000000000000000000
    trusted.gfid=0x00000000000000000000000000000001
    trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
    trusted.glusterfs.volume-id=0x8f16258c88a0498fbd53368706af7496
    In the above example, the volume id is 0x8f16258c88a0498fbd53368706af7496
  16. Set this volume ID on the brick created on the newly added node by executing the following command on the newly added host (Example1):
    # setfattr -n trusted.glusterfs.volume-id -v <volume-id> <brick-path>
    For example:
    # setfattr -n trusted.glusterfs.volume-id -v 0x8f16258c88a0498fbd53368706af7496 /rhs/brick2/drv2
  17. Restart the glusterd service.

    Note

    If there are only 2 nodes in the cluster, perform the first 3 steps and continue with the following steps:
    1. Regenerate peer file in the newly created node.
    2. Edit /var/lib/glusterd/peers/<uuid-of-other-peer> to contain the following:
      UUID=<uuid-of-other-node>
      state=3
      hostname=<hostname>
    Continue from step 11 to 17 as documented above.
  18. Perform a self heal on the restored volume.
    # gluster volume heal <VOLNAME>
  19. You can view the gluster volume self heal status by executing the following command:
    # gluster volume heal <VOLNAME> info
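The peer-file handling in steps 5 through 10 can be condensed into the following sketch. It assumes the hostnames and UUIDs used in the example above (Example1 is the replacement node, Example2 is the healthy node selected in step 4, b5ab2ec3-5411-45fa-a30f-43bd04caf96b is the UUID of the failed node, and 8cc6377d-0153-4540-b965-a4015494461c is the UUID of Example2); substitute the values from your own cluster before running it.
## Run on Example2 (the healthy node selected in step 4):
cp -a /var/lib/glusterd/peers /tmp/
rm /tmp/peers/b5ab2ec3-5411-45fa-a30f-43bd04caf96b      # peer file of the failed node (step 2)
cd /tmp && tar -cvf peers.tar peers
scp /tmp/peers.tar root@Example1:/tmp

## Run on Example1 (the replacement node):
cd /tmp && tar -xvf peers.tar
cp peers/* /var/lib/glusterd/peers/

## Run on any node other than Example2:
scp /var/lib/glusterd/peers/8cc6377d-0153-4540-b965-a4015494461c root@Example1:/var/lib/glusterd/peers/    # Example2's own peer file (step 4)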

10.6. Rebalancing Volumes

If a volume has been expanded or shrunk using the add-brick or remove-brick commands, the data on the volume needs to be rebalanced among the servers.

Note

In a non-replicated volume, all bricks should be online to perform the rebalance operation using the start option. In a replicated volume, at least one of the bricks in the replica should be online.
To rebalance a volume, use the following command on any of the servers:
# gluster volume rebalance VOLNAME start
For example:
# gluster volume rebalance test-volume start
Starting rebalancing on volume test-volume has been successful
A rebalance operation without the force option attempts to balance the space utilized across nodes. Files whose migration would leave the target node with less available space than the source node are skipped. This leaves link files behind in the system, which may cause performance issues during access when a large number of such link files are present.
Enhancements made to the file rename and rebalance operations in Red Hat Storage 2.1 update 5 require that all the clients connected to a cluster operate with the same or later version. If the clients operate on older versions and a rebalance operation is performed, the following warning message is displayed and the rebalance operation is not executed.
volume rebalance: <volname>: failed: Volume <volname> has one or more connected clients of a version lower than RHS-2.1 update 5. Starting rebalance in this state could lead to data loss.
Please disconnect those clients before attempting this command again.
Red Hat strongly recommends that you disconnect all older clients before executing the rebalance command to avoid potential data loss.

Warning

The rebalance command can be executed with the force option even when older clients are connected to the cluster. However, this could lead to data loss.
A rebalance operation with the force option balances the data based on the layout, and hence optimizes away or removes the link files, but may lead to imbalanced storage space usage across bricks. Use this option only when there are a large number of link files in the system.
To rebalance a volume forcefully, use the following command on any of the servers:
# gluster volume rebalance VOLNAME start force
For example:
# gluster volume rebalance test-volume start force
Starting rebalancing on volume test-volume has been successful

10.6.1. Displaying Status of a Rebalance Operation

To display the status of a volume rebalance operation, use the following command:
# gluster volume rebalance VOLNAME status
For example:
# gluster volume rebalance test-volume status
     Node    Rebalanced-files          size       scanned      failures         status
---------         -----------   -----------   -----------   -----------   ------------
localhost                 112         14567           150            0    in progress
10.16.156.72              140          2134           201            2    in progress
The time taken to complete the rebalance operation depends on the number of files on the volume and their size. Continue to check the rebalancing status, and verify that the number of rebalanced or scanned files keeps increasing.
For example, running the status command again might display a result similar to the following:
# gluster volume rebalance test-volume status
     Node    Rebalanced-files          size       scanned      failures         status
---------         -----------   -----------   -----------   -----------   ------------
localhost                 112         14567           150            0    in progress
10.16.156.72              140          2134           201            2    in progress
The rebalance status is displayed as completed when the rebalance is complete, for example:
# gluster volume rebalance test-volume status
     Node    Rebalanced-files          size       scanned      failures         status
---------         -----------   -----------   -----------   -----------   ------------
localhost                 112         15674           170            0       completed
10.16.156.72              140          3423           321            2       completed

10.6.2. Stopping a Rebalance Operation

To stop a rebalance operation, use the following command:
# gluster volume rebalance VOLNAME stop
For example:
# gluster volume rebalance test-volume stop
     Node    Rebalanced-files          size       scanned      failures         status
---------         -----------   -----------   -----------   -----------   ------------
localhost                 102         12134           130            0         stopped
10.16.156.72              110          2123           121            2         stopped
Stopped rebalance process on volume test-volume

10.7. Stopping Volumes

To stop a volume, use the following command:
# gluster volume stop VOLNAME
For example, to stop test-volume:
# gluster volume stop test-volume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume test-volume has been successful

10.8. Deleting Volumes

To delete a volume, use the following command:
# gluster volume delete VOLNAME
For example, to delete test-volume:
# gluster volume delete test-volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Deleting volume test-volume has been successful

10.9. Managing Split-brain

Split-brain is a state of data or availability inconsistency that originates from the maintenance of two separate data sets with overlap in scope, either because of servers in a network design, or because of a failure condition in which servers do not communicate and synchronize their data with each other.
In Red Hat Storage, split-brain is a term applicable to Red Hat Storage volumes in a replicate configuration. A file is said to be in split-brain when the copies of the same file on the different bricks that constitute the replica-pair have mismatching data and/or metadata contents that conflict with each other, and automatic healing is not possible. In this scenario, you can decide which is the correct file (source) and which is the one that requires healing (sink) by inspecting the mismatching files on the backend bricks.
The AFR translator in glusterFS makes use of extended attributes to keep track of the operations on a file. These attributes determine which brick is the source and which brick is the sink for a file that requires healing. If the files are clean, the extended attributes are all zeroes, indicating that no heal is necessary. When a heal is required, they are marked in such a way that there is a distinguishable source and sink, and the heal can happen automatically. But when a split-brain occurs, these extended attributes are marked in such a way that both bricks mark themselves as sources, making automatic healing impossible.
When a split-brain occurs, applications cannot perform certain operations like read and write on the file. Accessing the files results in the application receiving an Input/Output Error.
The three types of split-brains that occur in Red Hat Storage are:
  • Data split-brain: The contents of the file under split-brain are different on the different replica pairs, and automatic healing is not possible.
  • Metadata split-brain: The metadata of the files (for example, user-defined extended attributes) are different, and automatic healing is not possible.
  • Entry split-brain: This happens when a file has different gfids on each of the replica pair.
The only way to resolve split-brain is by manually inspecting the file contents from the backend and deciding which is the true copy (source), and then modifying the appropriate extended attributes so that healing can happen automatically.

10.9.1. Preventing Split-brain

To prevent split-brain in the trusted storage pool, you must configure server-side and client-side quorum.

10.9.1.1. Configuring Server-Side Quorum

The quorum configuration in a trusted storage pool determines the number of server failures that the trusted storage pool can sustain. If an additional failure occurs, the trusted storage pool will become unavailable. If too many server failures occur, or if there is a problem with communication between the trusted storage pool nodes, it is essential that the trusted storage pool be taken offline to prevent data loss.
After configuring the quorum ratio at the trusted storage pool level, you must enable the quorum on a particular volume by setting cluster.server-quorum-type volume option as server. For more information on this volume option, see Section 10.1, “Configuring Volume Options”.
Configuration of the quorum is necessary to prevent network partitions in the trusted storage pool. A network partition is a scenario where a small set of nodes might be able to communicate together across a functioning part of a network, but not be able to communicate with a different set of nodes in another part of the network. This can cause undesirable situations, such as split-brain in a distributed system. To prevent a split-brain situation, all the nodes in at least one of the partitions must stop running to avoid inconsistencies.
This quorum is on the server side, that is, the glusterd service. Whenever the glusterd service on a machine observes that the quorum is not met, it brings down the bricks to prevent data split-brain. When the network connections are brought back up and the quorum is restored, the bricks in the volume are brought back up. When the quorum is not met for a volume, any commands that update the volume configuration or peer addition or detach are not allowed. Note that the glusterd service not running and the network connection between two machines being down are treated equally.
You can configure the quorum percentage ratio for a trusted storage pool. If the percentage ratio of the quorum is not met due to network outages, the bricks of the volume participating in the quorum in those nodes are taken offline. By default, the quorum is met if the percentage of active nodes is more than 50% of the total storage nodes. However, if the quorum ratio is manually configured, then the quorum is met only if the percentage of active storage nodes of the total storage nodes is greater than or equal to the set value.
To configure the quorum ratio, use the following command:
# gluster volume set all cluster.server-quorum-ratio PERCENTAGE
For example, to set the quorum to 51% of the trusted storage pool:
# gluster volume set all cluster.server-quorum-ratio 51%
In this example, the quorum ratio setting of 51% means that more than half of the nodes in the trusted storage pool must be online and have network connectivity between them at any given time. If a network disconnect happens to the storage pool, then the bricks running on those nodes are stopped to prevent further writes.
To make a particular volume participate in the server-side quorum, enable the quorum on that volume by running the following command:
# gluster volume set VOLNAME cluster.server-quorum-type server

Important

For a two-node trusted storage pool, it is important to set the quorum ratio to be greater than 50% so that two nodes separated from each other do not both believe they have a quorum.
For a replicated volume with two nodes and one brick on each machine, if the server-side quorum is enabled and one of the nodes goes offline, the other node will also be taken offline because of the quorum configuration. As a result, the high availability provided by the replication is ineffective. To prevent this situation, a dummy node that does not contain any bricks can be added to the trusted storage pool. This ensures that even if one of the nodes that contains data goes offline, the other node remains online. Note that if the dummy node and one of the data nodes go offline, the brick on the other node will also be taken offline, resulting in data unavailability.
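Putting these settings together, the following is a minimal sketch for a two-node replicated volume; the volume name r2 and the host name dummy-node (a third peer that holds no bricks) are assumptions used only for illustration.
# gluster peer probe dummy-node
# gluster volume set all cluster.server-quorum-ratio 51%
# gluster volume set r2 cluster.server-quorum-type server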

10.9.1.2. Configuring Client-Side Quorum

Replication in Red Hat Storage Server allows modifications as long as at least one of the bricks in a replica group is online. In a network-partition scenario, different clients may connect to different bricks in the replicated environment and modify the same file on different bricks. For example, in a 1 x 2 replicate volume, it can happen that client C1 can connect only to brick B1 and client C2 can connect only to brick B2, and each modifies the same file while the other brick is offline from its point of view. These situations lead to split-brain, the file becomes unusable, and manual intervention is required to fix the issue.
Client-side quorum is implemented to minimize split-brains. The client-side quorum configuration determines the number of bricks that must be up to allow data modification. If client-side quorum is not met, files in that replica group become read-only. This client-side quorum configuration applies to all the replica groups in the volume; if client-side quorum is not met for m of n replica groups, only those m replica groups become read-only and the rest of the replica groups continue to allow data modifications.

Example 10.7. Client-Side Quorum

Consider a Distribute-replicate volume with three replica groups A, B, and C. When the client-side quorum is not met for replica group A, only replica group A becomes read-only. Replica groups B and C continue to allow data modifications.

Important

If there are only two bricks in the replica group, then the first brick must be up and running to allow modifications.
Configure the client-side quorum using cluster.quorum-type and cluster.quorum-count options. For more information on these options, see Section 10.1, “Configuring Volume Options”.
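As a minimal sketch of this configuration (the volume name and the quorum count are assumptions), either let the quorum be computed automatically, or fix it to an explicit number of bricks per replica group:
# gluster volume set VOLNAME cluster.quorum-type auto
or
# gluster volume set VOLNAME cluster.quorum-type fixed
# gluster volume set VOLNAME cluster.quorum-count 2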

Important

When you integrate Red Hat Storage with Red Hat Enterprise Virtualization or Red Hat OpenStack, the client-side quorum is enabled when you run the gluster volume set VOLNAME group virt command. On a two-replica setup, if the first brick in the replica pair is offline, virtual machines are paused because quorum is not met and writes are disallowed.
Consistency is achieved at the cost of fault tolerance. If fault-tolerance is preferred over consistency, disable client-side quorum with the following command:
# gluster volume reset VOLNAME quorum-type

10.9.2. Recovering from File Split-brain

Procedure 10.3. Steps to recover from a file split-brain

  1. Run the following command to obtain the path of the file that is in split-brain:
    # gluster volume heal VOLNAME info split-brain
    From the command output, identify the files for which file operations performed from the client keep failing with Input/Output error.
  2. Close the applications that opened split-brain file from the mount point. If you are using a virtual machine, you must power off the machine.
  3. Obtain and verify the AFR changelog extended attributes of the file using the getfattr command. Then identify the type of split-brain to determine which of the bricks contains the 'good copy' of the file.
    # getfattr -d -m . -e hex <file-path-on-brick>
    For example,
    # getfattr -d -e hex -m. brick-a/file.txt
    # file: brick-a/file.txt
    security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
    trusted.afr.vol-client-2=0x000000000000000000000000
    trusted.afr.vol-client-3=0x000000000200000000000000
    trusted.gfid=0x307a5c9efddd4e7c96e94fd4bcdcbd1b
    The extended attributes with trusted.afr.<volname>-client-<subvolume-index> are used by AFR to maintain changelog of the file. The values of the trusted.afr.<volname>-client-<subvolume-index> are calculated by the glusterFS client (FUSE or NFS-server) processes. When the glusterFS client modifies a file or directory, the client contacts each brick and updates the changelog extended attribute according to the response of the brick.
    subvolume-index is the brick number (as listed in the gluster volume info VOLNAME output) minus 1.
    For example,
    # gluster volume info vol
    Volume Name: vol 
    Type: Distributed-Replicate  
    Volume ID: 4f2d7849-fbd6-40a2-b346-d13420978a01  
    Status: Created  
    Number of Bricks: 4 x 2 = 8 
    Transport-type: tcp  
    Bricks:  
    brick-a: server1:/gfs/brick-a  
    brick-b: server1:/gfs/brick-b  
    brick-c: server1:/gfs/brick-c  
    brick-d: server1:/gfs/brick-d  
    brick-e: server1:/gfs/brick-e  
    brick-f: server1:/gfs/brick-f  
    brick-g: server1:/gfs/brick-g  
    brick-h: server1:/gfs/brick-h
    In the example above:
    Brick            |  Replica set  |  Brick subvolume index
    ------------------------------------------------------------
    /gfs/brick-a     |       0       |       0
    /gfs/brick-b     |       0       |       1
    /gfs/brick-c     |       1       |       2
    /gfs/brick-d     |       1       |       3
    /gfs/brick-e     |       2       |       4
    /gfs/brick-f     |       2       |       5
    /gfs/brick-g     |       3       |       6
    /gfs/brick-h     |       3       |       7
    Each file in a brick maintains the changelog of itself and of the files present on all the other bricks in its replica set, as seen by that brick.
    In the example volume given above, all files in brick-a will have two entries, one for itself and the other for the file present in its replica pair, brick-b. The following are the changelog entries on brick-a:
    • trusted.afr.vol-client-0=0x000000000000000000000000 - is the changelog for itself (brick-a)
    • trusted.afr.vol-client-1=0x000000000000000000000000 - changelog for brick-b as seen by brick-a
    Likewise, all files in brick-b will have the following:
    • trusted.afr.vol-client-0=0x000000000000000000000000 - changelog for brick-a as seen by brick-b
    • trusted.afr.vol-client-1=0x000000000000000000000000 - changelog for itself (brick-b)
    The same can be extended for other replica pairs.
    Interpreting changelog (approximate pending operation count) value
    Each extended attribute has a value which is 24 hexadecimal digits. The first 8 digits represent the changelog of data, the second 8 digits represent the changelog of metadata, and the last 8 digits represent the changelog of directory entries. (A small helper sketch that splits these values into their three parts appears after this procedure.)
    Pictorially representing the same is as follows:
    0x 000003d7 00000001 00000000
            |        |        |
            |        |         \_ changelog of directory entries
            |         \_ changelog of metadata
             \_ changelog of data
    For directories, metadata and entry changelogs are valid. For regular files, data and metadata changelogs are valid. For special files like device files and so on, the metadata changelog is valid. When a file split-brain happens, it could be either a data split-brain, a metadata split-brain, or both.
    The following is an example of both data, metadata split-brain on the same file:
    # getfattr -d -m . -e hex /gfs/brick-?/a 
    getfattr: Removing leading '/' from absolute path names
    # file: gfs/brick-a/a
    trusted.afr.vol-client-0=0x000000000000000000000000  
    trusted.afr.vol-client-1=0x000003d70000000100000000  
    trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57  
    # file: gfs/brick-b/a
    trusted.afr.vol-client-0=0x000003b00000000100000000  
    trusted.afr.vol-client-1=0x000000000000000000000000 
    trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57
    Scrutinize the changelogs
    The changelog extended attributes on file /gfs/brick-a/a are as follows:
    • The first 8 digits of trusted.afr.vol-client-0 are all zeros (0x00000000................),
      The first 8 digits of trusted.afr.vol-client-1 are not all zeros (0x000003d7................).
      So the changelog on /gfs/brick-a/a implies that some data operations succeeded on itself but failed on /gfs/brick-b/a.
    • The second 8 digits of trusted.afr.vol-client-0 are all zeros (0x........00000000........), and the second 8 digits of trusted.afr.vol-client-1 are not all zeros (0x........00000001........).
      So the changelog on /gfs/brick-a/a implies that some metadata operations succeeded on itself but failed on /gfs/brick-b/a.
    The changelog extended attributes on file /gfs/brick-b/a are as follows:
    • The first 8 digits of trusted.afr.vol-client-0 are not all zeros (0x000003b0................).
      The first 8 digits of trusted.afr.vol-client-1 are all zeros (0x00000000................).
      So the changelog on /gfs/brick-b/a implies that some data operations succeeded on itself but failed on /gfs/brick-a/a.
    • The second 8 digits of trusted.afr.vol-client-0 are not all zeros (0x........00000001........)
      The second 8 digits of trusted.afr.vol-client-1 are all zeros (0x........00000000........).
      So the changelog on /gfs/brick-b/a implies that some metadata operations succeeded on itself but failed on /gfs/brick-a/a.
    Here, both the copies have data, metadata changes that are not on the other file. Hence, it is both data and metadata split-brain.
    Deciding on the correct copy
    You must inspect stat and getfattr output of the files to decide which metadata to retain and contents of the file to decide which data to retain. To continue with the example above, here, we are retaining the data of /gfs/brick-a/a and metadata of /gfs/brick-b/a.
    Resetting the relevant changelogs to resolve the split-brain
    Resolving data split-brain
    You must change the changelog extended attributes on the files as if some data operations succeeded on /gfs/brick-a/a but failed on /gfs/brick-b/a. But /gfs/brick-b/a should not have any changelog showing data operations succeeded on /gfs/brick-b/a but failed on /gfs/brick-a/a. You must reset the data part of the changelog on trusted.afr.vol-client-0 of /gfs/brick-b/a.
    Resolving metadata split-brain
    You must change the changelog extended attributes on the files as if some metadata operations succeeded on /gfs/brick-b/a but failed on /gfs/brick-a/a. But /gfs/brick-a/a should not have any changelog which says some metadata operations succeeded on /gfs/brick-a/a but failed on /gfs/brick-b/a. You must reset metadata part of the changelog on trusted.afr.vol-client-1 of /gfs/brick-a/a
    Run the following commands to reset the extended attributes.
    1. On /gfs/brick-b/a, to change trusted.afr.vol-client-0 from 0x000003b00000000100000000 to 0x000000000000000100000000, execute the following command:
      # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
    2. On /gfs/brick-a/a, to change trusted.afr.vol-client-1 from 0x000003d70000000100000000 to 0x000003d70000000000000000, execute the following command:
      # setfattr -n trusted.afr.vol-client-1 -v 0x000003d70000000000000000 /gfs/brick-a/a
    After you reset the extended attributes, the changelogs would look similar to the following:
    # getfattr -d -m . -e hex /gfs/brick-?/a  
    getfattr: Removing leading '/' from absolute path names  
    # file: gfs/brick-a/a
    trusted.afr.vol-client-0=0x000000000000000000000000  
    trusted.afr.vol-client-1=0x000003d70000000000000000  
    trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57  
    
    # file: gfs/brick-b/a
    trusted.afr.vol-client-0=0x000000000000000100000000  
    trusted.afr.vol-client-1=0x000000000000000000000000  
    trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57
    
    Resolving Directory entry split-brain
    AFR has the ability to conservatively merge different entries in a directory when there is a split-brain on the directory. If one brick has the directory entries 1 and 2 and the other brick has the entries 3 and 4, AFR merges all of the entries so that the directory contains 1, 2, 3, and 4. However, this may result in deleted files re-appearing if the split-brain happened because of the deletion of files from the directory. Split-brain resolution needs human intervention when there is at least one entry that has the same file name but a different gfid in that directory.
    For example:
    On brick-a, the directory has two entries: file1 with gfid_x, and file2. On brick-b, the directory has two entries: file1 with gfid_y, and file3. Here, the gfids of file1 on the two bricks are different. This kind of directory split-brain needs human intervention to resolve. You must remove either file1 on brick-a or file1 on brick-b to resolve the split-brain.
    In addition, the corresponding gfid-link file must be removed. The gfid-link files are present in the .glusterfs directory in the top-level directory of the brick. If the gfid of the file is 0x307a5c9efddd4e7c96e94fd4bcdcbd1b (the trusted.gfid extended attribute received from the getfattr command earlier), the gfid-link file can be found at /gfs/brick-a/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b.

    Warning

    Before deleting the gfid-link, you must ensure that there are no hard links to the file present on that brick. If hard-links exist, you must delete them.
  4. Trigger self-heal by running the following command:
    # ls -l <file-path-on-gluster-mount>
    or
    # gluster volume heal VOLNAME
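The following is a small helper sketch (not part of the product) for reading the changelog values discussed in step 3: it prints each trusted.afr.* extended attribute of a file on a brick with the value split into its data, metadata, and entry parts. The script name and the file path are placeholders.
#!/bin/bash
# Usage: decode-afr.sh <file-path-on-brick>
FILE=$1

# List only the AFR changelog attributes of the file, in hex.
getfattr -d -m trusted.afr -e hex "$FILE" 2>/dev/null | sed -e '/^#/d' | \
while IFS='=' read -r key value; do
    [ -z "$key" ] && continue
    hex=${value#0x}                  # strip the leading 0x
    # First 8 digits: data, next 8: metadata, last 8: directory entries.
    echo "$key: data=0x${hex:0:8} metadata=0x${hex:8:8} entry=0x${hex:16:8}"
done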

10.9.3. Triggering Self-Healing on Replicated Volumes

For replicated volumes, when a brick goes offline and comes back online, self-healing is required to resync all the replicas. There is a self-heal daemon which runs in the background, and automatically initiates self-healing every 10 minutes on any files which require healing.
There are various commands that can be used to check the healing status of volumes and files, or to manually initiate healing:
  • To view the list of files that need healing:
    # gluster volume heal VOLNAME info
    For example, to view the list of files on test-volume that need healing:
    # gluster volume heal test-volume info
    Brick server1:/gfs/test-volume_0
    Number of entries: 0
     
    Brick server2:/gfs/test-volume_1
    /95.txt
    /32.txt
    /66.txt
    /35.txt
    /18.txt
    /26.txt - Possibly undergoing heal
    /47.txt 
    /55.txt
    /85.txt - Possibly undergoing heal
    ...
    Number of entries: 101
  • To trigger self-healing only on the files which require healing:
    # gluster volume heal VOLNAME
    For example, to trigger self-healing on files which require healing on test-volume:
    # gluster volume heal test-volume
    Heal operation on volume test-volume has been successful
  • To trigger self-healing on all the files on a volume:
    # gluster volume heal VOLNAME full
    For example, to trigger self-heal on all the files on test-volume:
    # gluster volume heal test-volume full
    Heal operation on volume test-volume has been successful
  • To view the list of files on a volume that are in a split-brain state:
    # gluster volume heal VOLNAME info split-brain
    For example, to view the list of files on test-volume that are in a split-brain state:
    # gluster volume heal test-volume info split-brain
    Brick server1:/gfs/test-volume_2 
    Number of entries: 12
    at                   path on brick
    ----------------------------------
    2012-06-13 04:02:05  /dir/file.83
    2012-06-13 04:02:05  /dir/file.28
    2012-06-13 04:02:05  /dir/file.69
    Brick server2:/gfs/test-volume_2
    Number of entries: 12
    at                   path on brick
    ----------------------------------
    2012-06-13 04:02:05  /dir/file.83
    2012-06-13 04:02:05  /dir/file.28
    2012-06-13 04:02:05  /dir/file.69
    ...

10.10. Non Uniform File Allocation (NUFA)

Important

Non Uniform File Allocation (NUFA) is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
When a client on a server creates files, the files are allocated to a brick in the volume based on the file name. This allocation may not be ideal, as there is higher latency and unnecessary network traffic for read/write operations to a non-local brick or export directory. NUFA ensures that the files are created in the local export directory of the server, and as a result, reduces latency and conserves bandwidth for that server accessing that file. This can also be useful for applications running on mount points on the storage server.
If the local brick runs out of space or reaches the minimum disk free limit, instead of allocating files to the local brick, NUFA distributes files to other bricks in the same volume if there is space available on those bricks.
NUFA should be enabled before creating any data in the volume. To enable NUFA, execute the gluster volume set VOLNAME cluster.nufa enable command (see the example after the following note).

Important

NUFA is supported under the following conditions:
  • Volumes with only one brick per server.
  • For use with a FUSE client. NUFA is not supported with NFS or SMB.
  • A client that is mounting a NUFA-enabled volume must be present within the trusted storage pool.
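The following is a minimal sketch of that workflow; the volume name dist-vol and the brick paths are assumptions. NUFA is enabled immediately after the volume is created and before any data is written to it:
# gluster volume create dist-vol server1:/rhs/brick1 server2:/rhs/brick1
# gluster volume set dist-vol cluster.nufa enable
# gluster volume start dist-vol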

Chapter 11. Configuring Red Hat Storage for Enhancing Performance

This chapter provides information about configuring Red Hat Storage and explains clear and simple activities that can improve system performance.

11.1. Brick Configuration

Format bricks using the following configurations to enhance performance. A consolidated example that combines these settings follows the procedure.

Procedure 11.1. Brick Configuration

  1. LVM layer

    Align the I/O at the Logical Volume Manager (LVM) layer by using the --dataalignment option while creating the physical volume. The alignment value is calculated by multiplying the stripe element size of the hardware RAID by the number of data disks in that RAID level. This configuration at the LVM layer makes a significant impact on the overall performance, and has to be set regardless of the RAID level. If 12 disks are used to configure RAID 10 or RAID 6, the number of data disks would be 6 or 10, respectively.
    • Run the following command for the RAID 10 volume configured using 12 disks (6 data disks) and 256 K stripe element size:
      # pvcreate --dataalignment 1536K <disk>
        Physical volume <disk> successfully created
    • Run the following command for the RAID 6 volume, which is configured using 12 disks and 256 K stripe element size:
      # pvcreate --dataalignment 2560K <disk>
        Physical volume <disk> successfully created
    • To view the previously configured physical volume settings for --dataalignment, run the following command :
      # pvs -o +pe_start <disk>
        PV         VG   Fmt  Attr PSize PFree 1st PE 
        /dev/sdb        lvm2 a--  9.09t 9.09t   2.50m
  2. XFS Inode Size

    Due to the extensive use of extended attributes, Red Hat Storage recommends increasing the XFS inode size from the default 256 bytes to 512 bytes while formatting the Red Hat Storage bricks. To set the inode size, run the following command:
    # mkfs.xfs -i size=512 <disk>
  3. XFS RAID Alignment

    To align the I/O at the file system layer it is important to set the correct stripe unit (stripe element size) and stripe width (number of data disks) while formatting the file system. These options are sometimes auto-detected but manual configuration is required for many of the hardware RAID volumes.
    • Run the following command for the RAID 10 volume, which is configured using 12 disks and 256K stripe element size:
      #  mkfs.xfs  -d su=256K,sw=6 <disk>
      where, su is the RAID controller stripe unit size and sw is the number of data disks.
    • Run the following command for the RAID 6 volume, which is configured using 12 disks and 256K stripe element size:
      #  mkfs.xfs  -d su=256K,sw=10 <disk>
  4. Logical Block Size for the Directory

    In an XFS file system, you can select a logical block size for the file system directory that is greater than the logical block size of the file system. Increasing the logical block size for the directories from the default 4 K decreases the directory I/O, which in turn improves the performance of directory operations. For example:
    #  mkfs.xfs  -n size=8192 <disk>
    The RAID 6 configuration output is as follows:
    # mkfs.xfs -f -i size=512 -n size=8192 -d su=256k,sw=10  <logical volume>
    meta-data=/dev/mapper/gluster-brick1 isize=512    agcount=32, agsize=37748736 blks
             =                           sectsz=512   attr=2, projid32bit=0
    data     =                           bsize=4096   blocks=1207959552, imaxpct=5
             =                           sunit=64     swidth=640 blks
    naming   =version 2                  bsize=8192   ascii-ci=0
    log      =internal log               bsize=4096   blocks=521728, version=2
             =                           sectsz=512   sunit=64 blks, lazy-count=1
    realtime =none                       extsz=4096   blocks=0, rtextents=0
  5. Allocation Strategy

    inode32 and inode64 are the two most common allocation strategies for XFS. With the inode32 allocation strategy, XFS places all the inodes in the first 1 TB of the disk. With a larger disk, all the inodes would be stuck in the first 1 TB.
    With the inode64 allocation strategy, inodes are placed near the data, which minimizes disk seeks.
    With the current release, the inode32 allocation strategy is used by default.
    To use the inode64 allocation strategy with the current release, the file system needs to be mounted with the inode64 mount option. For example:
    # mount -t xfs -o inode64 <logical volume> <mount point>
  6. Access Time

    If the application does not require to update the access time on files, then the file system should always be mounted with the noatime mount option. For example:
    # mount -t xfs -o inode64,noatime <logical volume> <mount point>
    This optimization improves the performance of small-file reads by avoiding updates to the XFS inodes when files are read.
    The corresponding /etc/fstab entry for the inode64 and noatime options is:
    <logical volume> <mount point> xfs inode64,noatime 0 0
  7. Performance tuning option in Red Hat Storage

    Run the following command after creating the volume:
    # tuned-adm profile default ; tuned-adm profile rhs-high-throughput
    
    Switching to profile 'default' 
    Applying ktune sysctl settings: 
    /etc/ktune.d/tunedadm.conf:                           [  OK  ] 
    Applying sysctl settings from /etc/sysctl.conf  
    Starting tuned:                                       [  OK  ] 
    Stopping tuned:                                       [  OK  ] 
    Switching to profile 'rhs-high-throughput'
    This profile performs the following:
    • Increases read ahead to 64 MB
    • Changes I/O scheduler to deadline
    • Disables power-saving mode
  8. Writeback caching

    For small-file and random write performance, we strongly recommend a writeback cache, that is, non-volatile random-access memory (NVRAM) in your storage controller. For example, typical Dell and HP storage controllers have it. Make sure that NVRAM is enabled, that is, the battery is working. Refer to your hardware documentation for details about enabling NVRAM.
    Do not enable writeback caching in the disk drives. This is a policy where the disk drive considers a write complete before the write has actually made it to the magnetic media (platter). As a result, the disk write cache might lose its data during a power failure, or even lose its metadata, leading to file system corruption.
  9. Allocation groups

    Each XFS file system is partitioned into regions called allocation groups. Allocation groups are similar to the block groups in ext3, but allocation groups are much larger than block groups and are used for scalability and parallelism rather than disk locality. The default allocation for an allocation group is 1 TB.
    Allocation group count must be large enough to sustain the concurrent allocation workload. In most of the cases, the allocation group count chosen by the mkfs.xfs command would give the optimal performance. Do not change the allocation group count chosen by mkfs.xfs, while formatting the file system.
  10. Percentage of space allocation to inodes

    If the workload consists of very small files (average file size less than 10 KB), it is recommended to set the maxpct value to 10 while formatting the file system.
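    For example, a hedged mkfs.xfs invocation that sets maxpct to 10 could look like the following; the logical volume path is a placeholder and any other format options you normally use should be kept:
    # mkfs.xfs -i maxpct=10 <logical volume>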

11.2. Network

The data traffic network can become a bottleneck as the number of storage nodes increases. Adding a 10GbE or faster network for data traffic provides faster per-node performance. Jumbo frames must be enabled at all levels, that is, on the clients, the Red Hat Storage nodes, and the ethernet switches. The switches must support an MTU of N+208, where N=9000. We recommend a separate network for management and data traffic when protocols such as NFS or CIFS are used instead of the native client. The preferred bonding mode for Red Hat Storage clients is mode 6 (balance-alb), which allows a client to transmit writes in parallel on separate NICs most of the time.
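As an illustration only, a bonded 10GbE data interface using mode 6 and jumbo frames could be described by ifcfg files similar to the following on a Red Hat Storage node; the interface names, addresses, and bonding options are assumptions and must be adapted to your environment:
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative values)
DEVICE=bond0
BONDING_OPTS="mode=6 miimon=100"
MTU=9000
BOOTPROTO=none
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (one of the bonded NICs, illustrative)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
MTU=9000
BOOTPROTO=none
ONBOOT=yes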

11.3. RAM on the Nodes

Red Hat Storage does not consume significant compute resources from the storage nodes themselves. However, read intensive workloads can benefit greatly from additional RAM.

11.4. Number of Clients

Red Hat Storage is designed to support environments with large numbers of clients. Because individual clients are often limited in their I/O, overall system throughput is generally higher when there are more clients than servers.
If the number of clients is greater than 100, switch to the rhs-virtualization tuned profile, which increases the ARP (Address Resolution Protocol) table size but uses a less aggressive read-ahead setting of 4 MB. This is still 32 times the Linux default, but small enough to avoid fairness issues when large numbers of files are read concurrently.
To switch to a Red Hat Storage virtualization tuned profile, run the following command:
# tuned-adm profile rhs-virtualization

11.5. Replication

If a system is configured for two-way, active-active replication, write throughput will generally be half of what it would be in a non-replicated configuration. However, replication generally improves read throughput, as reads can be served from either storage node.

11.6. Hardware RAID

If the workload consists strictly of small files, RAID 10 is the optimal configuration. RAID 6 provides better space efficiency, good large-file performance, and good small-file read performance. RAID 6 small-file write performance, while respectable with write-back caching, is not as good as RAID 10 because of the read-modify-write sequence required for each write.
With 12 disks, a RAID 6 volume provides roughly 40% more usable storage space than a RAID 10 volume, which loses 50% of its raw capacity.
A larger hardware RAID stripe element size uses the hardware RAID cache effectively and can provide more parallelism at the RAID level for small files, because fewer disks need to participate in each read operation.
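As a hedged illustration of matching the file system to the RAID geometry, the XFS stripe unit and stripe width can be supplied when formatting the brick; the values below assume a 12-disk RAID 6 set (10 data disks) with a 256 KB stripe element size and are not a recommendation for any particular hardware:
# mkfs.xfs -d su=256k,sw=10 <logical volume or RAID device>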

Chapter 12. Managing Geo-replication

This section introduces geo-replication, illustrates the various deployment scenarios, and explains how to configure geo-replication and mirroring.

12.1. About Geo-replication

Geo-replication provides a distributed, continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
Geo-replication uses a master–slave model, where replication and mirroring occur between the following partners:
  • Master – a Red Hat Storage volume.
  • Slave – a Red Hat Storage volume. A slave volume can be either a local volume, such as localhost::volname, or a volume on a remote host, such as remote-host::volname.

12.2. Replicated Volumes vs Geo-replication

The following table lists the differences between replicated volumes and geo-replication:
Replicated Volumes                                        Geo-replication
Mirrors data across bricks within one trusted storage     Mirrors data across geographically distributed
pool.                                                     trusted storage pools.
Provides high-availability.                               Provides back-ups of data for disaster recovery.
Synchronous replication: each and every file operation    Asynchronous replication: checks for changes in
is applied to all the bricks.                             files periodically, and syncs them on detecting
                                                          differences.

12.3. Preparing to Deploy Geo-replication

This section provides an overview of geo-replication deployment scenarios and describes how to check the minimum system requirements.

12.3.1. Exploring Geo-replication Deployment Scenarios

Geo-replication provides an incremental replication service over Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet. This section illustrates the most common deployment scenarios for geo-replication, including the following:
  • Geo-replication over LAN
  • Geo-replication over WAN
  • Geo-replication over the Internet
  • Multi-site cascading geo-replication

12.3.2. Geo-replication Deployment Overview

Deploying geo-replication involves the following steps:
  1. Verify that your environment matches the minimum system requirements. See Section 12.3.3, “Prerequisites”.
  2. Determine the appropriate deployment scenario. See Section 12.3.1, “Exploring Geo-replication Deployment Scenarios”.
  3. Start geo-replication on the master and slave systems. See Section 12.4, “Starting Geo-replication”.

12.3.3. Prerequisites

The following are prerequisites for deploying geo-replication:
  • The master and slave volumes must be Red Hat Storage instances.
  • Password-less SSH access is required between one node of the master volume (the node from which the geo-replication create command will be executed), and one node of the slave volume (the node whose IP/hostname will be mentioned in the slave name when running the geo-replication create command).
    Create the public and private keys using ssh-keygen (without passphrase) on the master node:
    # ssh-keygen
    Copy the public key to the slave node:
    # ssh-copy-id root@slave_node_IPaddress/Hostname

    Note

    Password-less SSH access is required from the master node to the slave node; it is not required from the slave node to the master node.
    A password-less SSH connection is also required for gsyncd between every node in the master volume and every node in the slave volume. The gluster system:: execute gsec_create command creates secret-pem files on all the nodes in the master, and is used to implement the password-less SSH connection. The push-pem option of the geo-replication create command pushes these keys to all the nodes in the slave.
    For more information on the gluster system:: execute gsec_create and push-pem commands, see Section 12.3.4, “Configuring the Environment and Creating a Geo-replication Session”.

12.3.4. Configuring the Environment and Creating a Geo-replication Session

Time Synchronization
Before configuring the geo-replication environment, ensure that the time on all the servers is synchronized.
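For example, on Red Hat Enterprise Linux based nodes the NTP service can be used to keep the clocks aligned; this is only an illustration and your environment may use a different time source:
# chkconfig ntpd on
# service ntpd start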
You can then create a geo-replication session using the commands in the following procedure.

Procedure 12.1. Creating Geo-replication Sessions

  1. To create a common pem pub file, run the following command on the master node where the password-less SSH connection is configured:
    # gluster system:: execute gsec_create
  2. Create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem [force]
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol create push-pem

    Note

    There must be password-less SSH access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave. If the verification fails, you can use the force option which will ignore the failed verification and create a geo-replication session.
  3. Verify the status of the created session by running the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status

12.4. Starting Geo-replication

This section describes how to start geo-replication in your storage environment, and how to verify that it is functioning correctly.

12.4.1. Starting a Geo-replication Session

Important

You must create the geo-replication session before starting geo-replication. For more information, see Section 12.3.4, “Configuring the Environment and Creating a Geo-replication Session”.
To start geo-replication, use one of the following commands:
  • To start the geo-replication session between the hosts:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol start
    Starting geo-replication session between master-vol & example.com::slave-vol has been successful
    This command will start distributed geo-replication on all the nodes that are part of the master volume. If a node that is part of the master volume is down, the command will still be successful. In a replica pair, the geo-replication session will be active on any of the replica nodes, but remain passive on the others.
    After executing the command, it may take a few minutes for the session to initialize and become stable.

    Note

    If you attempt to create a geo-replication session and the slave already has data, the following error message will be displayed:
    slave-node::slave is not empty. Please delete existing files in slave-node::slave and retry, or use force to continue without deleting the existing files. geo-replication command failed
  • To start the geo-replication session forcefully between the hosts:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol start force
    Starting geo-replication session between master-vol & example.com::slave-vol has been successful
    This command will force start geo-replication sessions on the nodes that are part of the master volume. If it is unable to successfully start the geo-replication session on any node which is online and part of the master volume, the command will still start the geo-replication sessions on as many nodes as it can. This command can also be used to re-start geo-replication sessions on the nodes where the session has died, or has not started.

12.4.2. Verifying a Successful Geo-replication Deployment

You can use the status command to verify the status of geo-replication in your environment:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
For example:
# gluster volume geo-replication master-vol example.com::slave-vol status

12.4.3. Displaying Geo-replication Status Information

The status command can be used to display information about a specific geo-replication master session, master-slave session, or all geo-replication sessions.
In the Red Hat Storage 2.1 release, the status output displayed only node-level information. With the Red Hat Storage 2.1 Update 1 release, the status output is enhanced to provide both node-level and brick-level information.
  • To display information on all geo-replication sessions from a particular master volume, use the following command:
    # gluster volume geo-replication MASTER_VOL status
  • To display information of a particular master-slave session, use the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
  • To display the details of a master-slave session, use the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail

    Important

    There will be a mismatch between the output of the df command (including -h and -k) and the inode counts of the master and slave volumes even when the data is fully in sync. This is due to the extra inode and space consumed by the changelog journaling data, which keeps track of the changes made on the file system of the master volume. Instead of running the df command to verify the status of synchronization, use # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail.
    The status of a session can be one of the following:
    • Initializing: This is the initial phase of the Geo-replication session; it remains in this state for a minute in order to make sure no abnormalities are present.
    • Not Started: The geo-replication session is created, but not started.
    • Active: The gsync daemon in this node is active and syncing the data.
    • Passive: A replica pair of the active node. The data synchronization is handled by the active node; hence, this node does not sync any data.
    • Faulty: The geo-replication session has experienced a problem, and the issue needs to be investigated further. For more information, see Section 12.9, “Troubleshooting Geo-replication”.
    • Stopped: The geo-replication session has stopped, but has not been deleted.
    • Crawl Status
      • Changelog Crawl: The changelog translator has produced the changelog and that is being consumed by gsyncd daemon to sync data.
      • Hybrid Crawl: The gsyncd daemon is crawling the glusterFS file system and generating pseudo changelog to sync data.
    • Checkpoint Status: Displays the status of the checkpoint, if set. Otherwise, it displays as N/A.

12.4.4. Configuring a Geo-replication Session

To configure a geo-replication session, use the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config [options]
For example, to view the list of all option/value pairs:
# gluster volume geo-replication Volume1 example.com::slave-vol config
To delete a setting for a geo-replication config option, prefix the option with ! (exclamation mark). For example, to reset log-level to the default value:
# gluster volume geo-replication Volume1 example.com::slave-vol config '!log-level'
Configurable Options
The following table provides an overview of the configurable options for a geo-replication setting:
Option                            Description
gluster-log-file LOGFILE          The path to the geo-replication glusterfs log file.
gluster-log-level LOGFILELEVEL    The log level for glusterfs processes.
log-file LOGFILE                  The path to the geo-replication log file.
log-level LOGFILELEVEL            The log level for geo-replication.
ssh-command COMMAND               The SSH command to connect to the remote machine (the default is SSH).
rsync-command COMMAND             The rsync command to use for synchronizing the files (the default is rsync).
use-tarssh true                   Allows tar over the Secure Shell protocol. Use this option to handle workloads of files that have not undergone edits.
volume_id=UID                     The command to delete the existing master UID for the intermediate/slave node.
timeout SECONDS                   The timeout period in seconds.
sync-jobs N                       The number of simultaneous files/directories that can be synchronized.
ignore-deletes                    If this option is set to 1, a file deleted on the master will not trigger a delete operation on the slave. As a result, the slave remains a superset of the master and can be used to recover the master in the event of a crash and/or accidental delete.
checkpoint [LABEL|now]            Sets a checkpoint with the given LABEL. If the option is set to now, the current time is used as the label.
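For example, any of the options in the table can be changed with the same config command shown at the beginning of this section; the value of 3 used here for sync-jobs is purely illustrative:
# gluster volume geo-replication Volume1 example.com::slave-vol config sync-jobs 3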

12.4.4.1. Geo-replication Checkpoints

12.4.4.1.1. About Geo-replication Checkpoints
Geo-replication data synchronization is an asynchronous process, so changes made on the master may take time to be replicated to the slaves. Data replication to a slave may also be interrupted by various issues, such as network outages.
Red Hat Storage provides the ability to set geo-replication checkpoints. By setting a checkpoint, synchronization information is available on whether the data that was on the master at that point in time has been replicated to the slaves.
12.4.4.1.2. Configuring and Viewing Geo-replication Checkpoint Information
  • To set a checkpoint on a geo-replication session, use the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint [now|LABEL]
    For example, to set checkpoint between Volume1 and example.com:/data/remote_dir:
    # gluster volume geo-replication Volume1 example.com::slave-vol config checkpoint now
    geo-replication config updated successfully
    The label for a checkpoint can be set as the current time using now, or a particular label can be specified, as shown below:
    # gluster volume geo-replication Volume1 example.com::slave-vol config checkpoint NEW_ACCOUNTS_CREATED
    geo-replication config updated successfully.
  • To display the status of a checkpoint for a geo-replication session, use the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
  • To delete checkpoints for a geo-replication session, use the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config '!checkpoint'
    For example, to delete the checkpoint set between Volume1 and example.com::slave-vol:
    # gluster volume geo-replication Volume1 example.com::slave-vol config '!checkpoint' 
    geo-replication config updated successfully
  • To view the history of checkpoints for a geo-replication session (including set, delete, and completion events), use the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config log-file | xargs grep checkpoint
    For example, to display the checkpoint history between Volume1 and example.com::slave-vol:
    # gluster volume geo-replication Volume1 example.com::slave-vol config  log-file | xargs grep checkpoint
    [2013-11-12 12:40:03.436563] I [gsyncd(conf):359:main_i] <top>: checkpoint as of 2012-06-04 12:40:02 set
    [2013-11-15 12:41:03.617508] I master:448:checkpt_service] _GMaster: checkpoint as of 2013-11-12 12:40:02 completed
    [2013-11-12 03:01:17.488917] I [gsyncd(conf):359:main_i] <top>: checkpoint as of 2013-06-22 03:01:12 set 
    [2013-11-15 03:02:29.10240] I master:448:checkpt_service] _GMaster: checkpoint as of 2013-06-22 03:01:12 completed

12.4.5. Stopping a Geo-replication Session

To stop a geo-replication session, use one of the following commands:
  • To stop a geo-replication session between the hosts:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol stop
    Stopping geo-replication session between master-vol & example.com::slave-vol has been successful

    Note

    The stop command will fail if:
    • any node that is a part of the volume is offline.
    • it is unable to stop the geo-replication session on any particular node.
    • the geo-replication session between the master and slave is not active.
  • To stop a geo-replication session forcefully between the hosts:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop force
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol stop force
    Stopping geo-replication session between master-vol & example.com::slave-vol has been successful
    Using force will stop the geo-replication session between the master and slave even if any node that is a part of the volume is offline. If it is unable to stop the geo-replication session on any particular node, the command will still stop the geo-replication sessions on as many nodes as it can. Using force will also stop inactive geo-replication sessions.

12.4.6. Deleting a Geo-replication Session

Important

You must first stop a geo-replication session before it can be deleted. For more information, see Section 12.4.5, “Stopping a Geo-replication Session”.
To delete a geo-replication session, use the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL delete
For example:
# gluster volume geo-replication master-vol example.com::slave-vol delete
geo-replication command executed successfully

Note

The delete command will fail if:
  • any node that is a part of the volume is offline.
  • it is unable to delete the geo-replication session on any particular node.
  • the geo-replication session between the master and slave is still active.

Important

The SSH keys are not removed from the master and slave nodes when the geo-replication session is deleted. You can manually remove the pem files, which contain the SSH keys, from the /var/lib/glusterd/geo-replication/ directory.
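A hedged sketch of this cleanup is shown below; the file names under /var/lib/glusterd/geo-replication/ can vary between releases, so list the directory first and remove only the pem files that belong to the deleted session:
# ls /var/lib/glusterd/geo-replication/
# rm -i /var/lib/glusterd/geo-replication/*.pem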

12.5. Starting Geo-replication on a Newly Added Brick

12.5.1. Starting Geo-replication for a New Brick on a New Node

If a geo-replication session is running, and a brick is added to the volume on a newly added node in the trusted storage pool, then you must perform the following steps to start the geo-replication daemon on the new node:

Procedure 12.2. Starting Geo-replication for a New Brick on a New Node

  1. Run the following command on the master node where password-less SSH connection is configured, in order to create a common pem pub file.
    # gluster system:: execute gsec_create
  2. Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem force
    For example:
    # gluster volume geo-replication master-vol example.com::slave-vol create push-pem force

    Note

    There must be password-less SSH access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave.
  3. Start the geo-replication session between the slave and master forcefully, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
  4. Verify the status of the created session, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status

12.5.2. Starting Geo-replication for a New Brick on an Existing Node

When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required.

12.6. Disaster Recovery

When the master volume goes offline, you can perform the following disaster recovery procedures to minimize the interruption of service:

12.6.1. Promoting a Slave to Master

If the master volume goes offline, you can promote a slave volume to be the master, and start using that volume for data access.
Run the following commands on the slave machine to promote it to be the master:
# gluster volume set VOLNAME geo-replication.indexing on
# gluster volume set VOLNAME changelog on
You can now configure applications to use the slave volume for I/O operations.

12.6.2. Failover and Failback

Red Hat Storage provides geo-replication failover and failback capabilities for disaster recovery. If the master goes offline, you can perform a failover procedure so that a slave can replace the master. When this happens, all the I/O operations, including reads and writes, are done on the slave which is now acting as the master. When the original master is back online, you can perform a failback procedure on the original slave so that it synchronizes the differences back to the original master.

Procedure 12.3. Performing a Failover and Failback

  1. Create a new geo-replication session with the original slave as the new master, and the original master as the new slave. For more information on setting and creating geo-replication session, see Section 12.3.4, “Configuring the Environment and Creating a Geo-replication Session”.
  2. Start the special synchronization mode to speed up the recovery of data from slave.
    # gluster volume geo-replication ORIGINAL_SLAVE ORIGINAL_MASTER config special-sync-mode recover
  3. Set a checkpoint to help verify the status of the data synchronization.
    # gluster volume geo-replication ORIGINAL_SLAVE ORIGINAL_MASTER config checkpoint now
  4. Start the new geo-replication session using the following command:
    # gluster volume geo-replication ORIGINAL_SLAVE ORIGINAL_MASTER start
  5. Monitor the checkpoint output using the following command, until the status displays: checkpoint as of <time of checkpoint creation> is completed at <time of completion>.
    # gluster volume geo-replication ORIGINAL_SLAVE ORIGINAL_MASTER status 
  6. To restore the original master and the original slave to their previous roles, stop the I/O operations on the original slave and, using steps 3 and 5, ensure that all the data from the original slave is synced back to the original master. Once the data is synced back, stop the current geo-replication session (the failover session) between the original slave and the original master, and resume the previous roles.

12.7. Example - Setting up Cascading Geo-replication

This section provides step-by-step instructions to set up a cascading geo-replication session. The example configuration has three volumes, named master-vol, interimmaster-vol, and slave-vol.
  1. Verify that your environment matches the minimum system requirements listed in Section 12.3.3, “Prerequisites”.
  2. Determine the appropriate deployment scenario. For more information on deployment scenarios, see Section 12.3.1, “Exploring Geo-replication Deployment Scenarios”.
  3. Configure the environment and create a geo-replication session between master-vol and interimmaster-vol.
    1. Create a common pem pub file, run the following command on the master node where the password-less SSH connection is configured:
      # gluster system:: execute gsec_create
    2. Create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes.
      # gluster volume geo-replication master-vol interimhost.com::interimmaster-vol create push-pem
    3. Verify the status of the created session by running the following command:
      # gluster volume geo-replication master-vol interim_HOST::interimmaster-vol status
  4. Start a Geo-replication session between the hosts:
    # gluster volume geo-replication master-vol interimhost.com::interimmaster-vol start
    This command will start distributed geo-replication on all the nodes that are part of the master volume. If a node that is part of the master volume is down, the command will still be successful. In a replica pair, the geo-replication session will be active on any of the replica nodes, but remain passive on the others. After executing the command, it may take a few minutes for the session to initialize and become stable.
  5. Verify the status of the geo-replication session by running the following command:
    # gluster volume geo-replication master-vol interimhost.com::interimmaster-vol status
  6. To create a geo-replication session between the interimmaster volume and the slave volume, repeat steps 3 to 5, replacing master-vol with interimmaster-vol and interimmaster-vol with slave-vol. You must run these commands on the interimmaster nodes.
    Here, the interimmaster volume will act as the master volume for the geo-replication session between interimmaster and slave.

12.8. Recommended Practices

Manually Setting the Time
If you have to change the time on the bricks manually, the geo-replication session and indexing must be disabled while the time is being set on all the bricks. All bricks in a geo-replication environment must be set to the same time, as this avoids the time synchronization issue described in Section 12.3.4, “Configuring the Environment and Creating a Geo-replication Session”. Bricks that do not operate on the same time setting, or changing the time while geo-replication is running, will corrupt the geo-replication index. The recommended way to set the time manually is the following procedure.

Procedure 12.4. Manually Setting the Time on Bricks in a Geo-replication Environment

  1. Stop geo-replication between the master and slave, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
  2. Stop geo-replication indexing, using the following command:
    # gluster volume set MASTER_VOL geo-replication.indexing off
  3. Set a uniform time on all the bricks.
  4. Restart the geo-replication sessions, using the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
Performance Tuning
Setting the following option has been observed to increase geo-replication performance. On the slave volume, run the following command:
# gluster volume set SLAVE_VOL batch-fsync-delay-usec 0
Initially Replicating Large Volumes to a Remote Slave Locally using a LAN
For replicating large volumes to a slave in a remote location, it may be useful to do the initial replication to disks locally on a local area network (LAN), and then physically transport the disks to the remote location. This eliminates the need to do the initial replication of the whole volume over a slower and more expensive wide area network (WAN) connection. The following procedure provides instructions for setting up a local geo-replication session, physically transporting the disks to the remote location, and then setting up geo-replication over a WAN.

Procedure 12.5. Initially Replicating to a Remote Slave Locally using a LAN

  1. Create a geo-replication session locally within the LAN. For information on creating a geo-replication session, see Section 12.3.4, “Configuring the Environment and Creating a Geo-replication Session”.

    Important

    You must remember the order in which the bricks/disks are specified when creating the slave volume. This information is required later for configuring the remote geo-replication session over the WAN.
  2. Ensure that the initial data on the master is synced to the slave volume. You can verify the status of the synchronization by using the status command, as shown in Section 12.4.3, “Displaying Geo-replication Status Information”.
  3. Stop and delete the geo-replication session.
    For information on stopping and deleting the geo-replication session, see Section 12.4.5, “Stopping a Geo-replication Session” and Section 12.4.6, “Deleting a Geo-replication Session”.

    Important

    You must ensure that there are no stale files in /var/lib/glusterd/geo-replication/.
  4. Stop and delete the slave volume.
    For information on stopping and deleting the volume, see Section 10.7, “Stopping Volumes” and Section 10.8, “Deleting Volumes”.
  5. Remove the disks from the slave nodes, and physically transport them to the remote location. Make sure to remember the order in which the disks were specified in the volume.
  6. At the remote location, attach the disks and mount them on the slave nodes. Make sure that the file system or logical volume manager is recognized, and that the data is accessible after mounting it.
  7. Configure a trusted storage pool for the slave using the peer probe command.
    For information on configuring a trusted storage pool, see Chapter 7, Trusted Storage Pools.
  8. Delete the glusterFS-related attributes on the bricks. This should be done before creating the volume. You can remove the glusterFS-related attributes by running the following command:
    # for i in `getfattr -d -m . ABSOLUTE_PATH_TO_BRICK 2>/dev/null | grep trusted | awk -F = '{print $1}'`; do setfattr -x $i ABSOLUTE_PATH_TO_BRICK; done
    Run the following command to ensure that there are no xattrs still set on the brick:
    # getfattr -d -m . ABSOLUTE_PATH_TO_BRICK
  9. After creating the trusted storage pool, create the Red Hat Storage volume with the same configuration that it had when it was on the LAN. For information on creating volumes, see Chapter 8, Red Hat Storage Volumes.

    Important

    Make sure to specify the bricks in same order as they were previously when on the LAN. A mismatch in the specification of the brick order may lead to data loss or corruption.
  10. Start and mount the volume, and check if the data is intact and accessible.
    For information on starting and mounting volumes, see Section 8.10, “Starting Volumes” and Section 9.2.3, “Mounting Red Hat Storage Volumes”.
  11. Configure the environment and create a geo-replication session from the master to this remote slave.
    For information on configuring the environment and creating a geo-replication session, see Section 12.3.4, “Configuring the Environment and Creating a Geo-replication Session”.
  12. Start the geo-replication session between the master and the remote slave.
    For information on starting the geo-replication session, see Section 12.4, “Starting Geo-replication”.
  13. Use the status command to verify the status of the session, and check if all the nodes in the session are stable.

12.9. Troubleshooting Geo-replication

This section describes the most common troubleshooting scenarios related to geo-replication.

12.9.1. Locating Log Files

The following log files are used for a geo-replication session:
  • Master-log-file - log file for the process that monitors the master volume.
  • Slave-log-file - log file for process that initiates changes on a slave.
  • Master-gluster-log-file - log file for the maintenance mount point that the geo-replication module uses to monitor the master volume.
  • Slave-gluster-log-file - If the slave is a Red Hat Storage Volume, this log file is the slave's counterpart of Master-gluster-log-file.

12.9.1.1. Master Log Files

To get the Master-log-file for geo-replication, use the following command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config log-file
For example:
# gluster volume geo-replication Volume1 example.com::slave-vol config log-file

12.9.1.2. Slave Log Files

To get the log file for geo-replication on a slave, use the following procedure. glusterd must be running on the slave machine.

Procedure 12.6. 

  1. On the master, run the following command to display the session-owner details:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config session-owner
    For example:
    # gluster volume geo-replication Volume1 example.com::slave-vol config session-owner
    5f6e5200-756f-11e0-a1f0-0800200c9a66
  2. On the slave, run the following command with the session-owner value from the previous step:
    # gluster volume geo-replication SLAVE_VOL config log-file /var/log/gluster/SESSION_OWNER:remote-mirror.log 
    For example:
    # gluster volume geo-replication slave-vol config log-file /var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log

12.9.2. Synchronization Is Not Complete

Situation
The geo-replication status is displayed as Stable, but the data has not been completely synchronized.
Solution
A full synchronization of the data can be performed by erasing the index and restarting geo-replication. After restarting geo-replication, it will begin a synchronization of the data using checksums. This may be a long and resource-intensive process on large data sets. If the issue persists, contact Red Hat Support.
For more information about erasing the index, see Section 10.1, “Configuring Volume Options”.
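The following is only a hedged sketch of such a full resync, assuming the session is stopped first and that resetting the geo-replication.indexing volume option is what is meant by erasing the index; confirm the exact steps in Section 10.1 before running them:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
# gluster volume set MASTER_VOL geo-replication.indexing off
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start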

12.9.3. Issues with File Synchronization

Situation
The geo-replication status is displayed as Stable, but only directories and symlinks are synchronized. Error messages similar to the following are in the logs:
[2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to sync ./some_file
Solution
Geo-replication requires rsync v3.0.0 or higher on the host and the remote machines. Verify if you have installed the required version of rsync.
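For example, the installed rsync version can be checked on each machine as follows; the first line of the output reports the version number:
# rsync --version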

12.9.4. Geo-replication Status is Often Faulty

Situation
The geo-replication status is often displayed as Faulty, with a backtrace similar to the following:
[2012-09-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twraptf(*aa) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in listen rid, exc, res = recv(self.inf) File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in recv return pickle.load(inf) EOFError
Solution
This usually indicates that RPC communication between the master gsyncd module and the slave gsyncd module is broken. Make sure that the following prerequisites are met:
  • Password-less SSH is set up properly between the host and remote machines.
  • FUSE is installed on the machines. The geo-replication module mounts Red Hat Storage volumes using FUSE to sync data.
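For example, you can verify on each machine that the FUSE package is installed and that the fuse kernel module is loaded; these checks are illustrative and package names can differ between releases:
# rpm -q fuse
# lsmod | grep fuse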

12.9.5. Intermediate Master is in a Faulty State

Situation
In a cascading environment, the intermediate master is in a faulty state, and messages similar to the following are in the log:
raise RuntimeError ("aborting on uuid change from %s to %s" % \ RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f- 4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154
Solution
In a cascading configuration, an intermediate master is loyal to its original primary master. The above log message indicates that the geo-replication module has detected that the primary master has changed. If this change was deliberate, delete the volume-id configuration option in the session that was initiated from the intermediate master.
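If the change was indeed deliberate, a hedged sketch of deleting the stored master UID re-uses the ! prefix shown in Section 12.4.4 for removing a configuration option; the option name volume_id is taken from the configurable options table, and the volume names below are placeholders for the session initiated from the intermediate master:
# gluster volume geo-replication INTERIMMASTER_VOL SLAVE_HOST::SLAVE_VOL config '!volume_id'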

12.9.6. Remote gsyncd Not Found

Situation
The master is in a faulty state, and messages similar to the following are in the log:
[2012-04-04 03:41:40.324496] E [resource:169:errfail] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory
Solution
The steps to configure a SSH connection for geo-replication have been updated. Use the steps as described in Section 12.3.4, “Configuring the Environment and Creating a Geo-replication Session”

Chapter 13. Managing Directory Quotas

Directory quotas allow you to set limits on disk space used by directories or the volume. Storage administrators can control the disk space utilization at the directory or the volume level, or both. This is particularly useful in cloud deployments to facilitate the use of utility billing models.

13.1. Enabling Quotas

You must enable directory quotas to set disk limits.
Enable quotas on a volume using the following command:
# gluster volume quota VOLNAME enable
For example, to enable quota on test-volume:
# gluster volume quota test-volume enable 
volume quota : success

Important

  • Do not enable quota using the volume-set command. This option is no longer supported.
  • Do not enable quota while quota-remove-xattr.sh is still running.

13.2. Setting Limits

Note

Before setting quota limits on any directory, ensure that there is at least one brick available per replica set.
To check the current status of the bricks of a volume, run the following command:
# gluster volume status VOLNAME
A Hard Limit is the maximum disk space you can utilize on a volume or directory.
Set the hard limit for a directory in the volume with the following command, specifying the hard limit size in MB, GB, TB or PB:
# gluster volume quota VOLNAME limit-usage path hard_limit
For example:
  • To set a hard limit of 100GB on /dir:
    # gluster volume quota VOLNAME limit-usage /dir 100GB
  • To set a hard limit of 1TB for the volume:
    # gluster volume quota VOLNAME limit-usage / 1TB
A Soft Limit is an attribute of a directory that is specified as a percentage of the hard limit. When disk usage reaches the soft limit of a given directory, the system begins to log this information in the logs of the brick on which data is written. The brick logs can be found at:
/var/log/glusterfs/bricks/<path-to-brick.log>
By default, the soft limit is 80% of the hard limit.
Set the soft limit for a volume with the following command, specifying the soft limit size as a percentage of the hard limit:
# gluster volume quota VOLNAME limit-usage path hard_limit soft_limit
For example:
  • To set the soft limit to 76% of the hard limit on /dir:
    # gluster volume quota VOLNAME limit-usage /dir 100GB 76%
  • To set the soft limit to 68% of the hard limit on the volume:
    # gluster volume quota VOLNAME limit-usage / 1TB 68%

Note

When setting the soft limit, ensure you retain the hard limit value previously created.

13.3. Setting the Default Soft Limit

The default soft limit is an attribute of the volume that is specified as a percentage. The default soft limit for any volume is 80%.
When you do not specify the soft limit along with the hard limit, the default soft limit is applied to the directory or volume.
Configure the default soft limit value using the following command:
# gluster volume quota VOLNAME default-soft-limit soft_limit
For example, to set the default soft limit to 90% on test-volume run the following command:
# gluster volume quota test-volume default-soft-limit 90%
volume quota : success
Ensure that the value is set using the following command:
# gluster volume quota test-volume list

Note

If you change the soft limit at the directory level and then change the volume's default soft limit, the directory-level soft limit previously configured will remain the same.

13.4. Displaying Quota Limit Information

You can display quota limit information on all of the directories on which a limit is set.
To display quota limit information on all of the directories on which a limit is set, use the following command:
# gluster volume quota VOLNAME list
For example, to view the quota limits set on test-volume:
# gluster volume quota test-volume list
Path       Hard-limit  Soft-limit   Used    Available
------------------------------------------------------
/           50GB        75%       0Bytes    50.0GB
/dir        10GB        75%       0Bytes    10.0GB
/dir/dir2   20GB        90%       0Bytes    20.0GB
To display disk limit information on a particular directory on which limit is set, use the following command:
# gluster volume quota VOLNAME list /<directory_name>
For example, to view the limits set on the /dir directory of the volume test-volume:
# gluster volume quota test-volume list /dir
Path  Hard-limit   Soft-limit   Used   Available
-------------------------------------------------
/dir   10.0GB          75%       0Bytes  10.0GB
To display disk limit information on multiple directories on which a limit is set, use the following command:
# gluster volume quota VOLNAME list /<directory_name1> /<directory_name2>
For example, to view the quota limits set on directories /dir and /dir/dir2 of volume test-volume:
# gluster volume quota test-volume list /dir /dir/dir2
Path    Hard-limit   Soft-limit   Used     Available
------------------------------------------------------
/dir       10.0GB        75%        0Bytes     10.0GB
/dir/dir2  20.0GB        90%        0Bytes     20.0GB

13.4.1. Displaying Quota Limit Information Using the df Utility

To report the disk usage using the df utility, taking quota limits into consideration, run the following command:
# gluster volume set VOLNAME quota-deem-statfs on
In this case, the total disk space of the directory is taken as the quota hard limit set on the directory of the volume.

Note

The default value for quota-deem-statfs is off. However, it is recommended to set quota-deem-statfs to on.
The following example displays the disk usage when quota-deem-statfs is off:
# gluster volume set test-volume features.quota-deem-statfs off
volume set: success
# gluster volume quota test-volume list
Path        Hard-limit    Soft-limit     Used     Available
-----------------------------------------------------------
/              300.0GB        90%        11.5GB     288.5GB
/John/Downloads 77.0GB        75%        11.5GB     65.5GB
Disk usage for volume test-volume as seen on client1:
# df -hT /home
Filesystem           Type            Size  Used Avail Use% Mounted on
server1:/test-volume fuse.glusterfs  400G   12G  389G   3% /home
The following example displays the disk usage when quota-deem-statfs is on:
# gluster volume set test-volume features.quota-deem-statfs on
volume set: success
# gluster vol quota test-volume list
Path        Hard-limit    Soft-limit     Used     Available
-----------------------------------------------------------
/              300.0GB        90%        11.5GB     288.5GB
/John/Downloads 77.0GB        75%        11.5GB     65.5GB
Disk usage for volume test-volume as seen on client1:
# df -hT /home
Filesystem            Type            Size  Used Avail Use% Mounted on
server1:/test-volume  fuse.glusterfs  300G   12G  289G   4% /home
When the quota-deem-statfs option is set to on, users see the hard limit set on a directory as the total disk space available on it.

13.5. Setting Timeout

There are two types of timeouts that you can configure for a volume quota:
  • Soft timeout is the frequency at which the quota server-side translator checks the volume usage when the usage is below the soft limit. The soft timeout is in effect when the disk usage is less than the soft limit.
    To set the soft timeout, use the following command:
    # gluster volume quota VOLNAME soft-timeout time

    Note

    The default soft timeout is 60 seconds.
    For example, to set the soft timeout on test-volume to 1 minute:
    # gluster volume quota test-volume soft-timeout 1min
    volume quota : success
  • Hard timeout is the frequency at which the quota server-side translator checks the volume usage when the usage is above the soft limit. The hard timeout is in effect when the disk usage is between the soft limit and the hard limit.
    To set the hard timeout, use the following command:
    # gluster volume quota VOLNAME hard-timeout time

    Note

    The default hard timeout is 5 seconds.
    For example, to set the hard timeout for 30 seconds:
    # gluster volume quota test-volume hard-timeout 30s
    volume quota : success

    Note

    As the margin of error for disk usage is proportional to the workload of the applications running on the volume, ensure that you set the hard-timeout and soft-timeout taking the workload into account.

13.6. Setting Alert Time

Alert time is the frequency at which you want your usage information to be logged after you reach the soft limit.
To set the alert time, use the following command:
# gluster volume quota VOLNAME alert-time time

Note

The default alert-time is 1 week.
For example, to set the alert time to 1 day:
# gluster volume quota test-volume alert-time 1d
volume quota : success

13.7. Removing Disk Limits

You can remove the disk limit set on a given directory if it is no longer required.
Remove the disk limit set on a particular directory using the following command:
# gluster volume quota VOLNAME remove /<directory-name>
For example, to remove the disk limit usage on /data directory of test-volume:
# gluster volume quota test-volume remove /data
volume quota : success
For example, to remove the quota limit set on the volume:
# gluster vol quota test-volume remove /
volume quota : success

Note

Removing the quota limit from the volume ("/" in the above example) does not impact the quota limits set on its directories.

13.8. Disabling Quotas

You can disable directory quotas using the following command:
# gluster volume quota VOLNAME disable
For example, to disable the quota translator on test-volume:
# gluster volume quota test-volume disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success

Note

  • When you disable quotas, all previously configured limits are removed from the volume.
  • Disabling quotas may take some time. To find out if the operation is in progress, run the following command:
    # ps ax | grep "quota-remove-xattr.sh"
    The output shows whether the quota-remove-xattr.sh script is still running. If it is still running, the disabling of quotas is in progress.
  • Do not re-enable quota until the quota-remove-xattr.sh process is finished.

Chapter 14. Monitoring Your Red Hat Storage Workload

Monitoring storage volumes is helpful when conducting a capacity planning or performance tuning activity on a Red Hat Storage volume. You can monitor the Red Hat Storage volumes with different parameters and use those system outputs to identify and troubleshoot issues.
You can use the volume top and volume profile commands to view vital performance information and identify bottlenecks on each brick of a volume.
You can also perform a statedump of the brick processes and NFS server process of a volume, and also view volume status and volume information.

Note

If you restart the server process, the existing profile and top information will be reset.

14.1. Running the Volume Profile Command

The volume profile command provides an interface to get the per-brick or NFS server I/O information for each File Operation (FOP) of a volume. This information helps in identifying the bottlenecks in the storage system.
This section describes how to use the volume profile command.

14.1.1. Start Profiling

To view the file operation information of each brick, start the profiling command:
# gluster volume profile VOLNAME start
For example, to start profiling on test-volume:
# gluster volume profile test-volume start
Profiling started on test-volume

Important

Running the profile command can affect system performance while the profile information is being collected. Red Hat recommends that profiling be used only for debugging.
When profiling is started on the volume, the following additional options are displayed when using the volume info command:
diagnostics.count-fop-hits: on

diagnostics.latency-measurement: on

14.1.2. Displaying the I/O Information

To view the I/O information of the bricks on a volume, use the following command:
# gluster volume profile VOLNAME info
For example, to view the I/O information of test-volume:
# gluster volume profile test-volume info
Brick: Test:/export/2
Cumulative Stats:

Block                     1b+           32b+           64b+
Size:
       Read:                0              0              0
       Write:             908             28              8

Block                   128b+           256b+         512b+
Size:
       Read:                0               6             4
       Write:               5              23            16

Block                  1024b+          2048b+        4096b+
Size:
       Read:                 0              52           17
       Write:               15             120          846

Block                   8192b+         16384b+      32768b+
Size:
       Read:                52               8           34
       Write:              234             134          286

Block                                  65536b+     131072b+
Size:
       Read:                               118          622
       Write:                             1341          594


%-latency  Avg-      Min-       Max-       calls     Fop
          latency   Latency    Latency  
___________________________________________________________
4.82      1132.28   21.00      800970.00   4575    WRITE
5.70       156.47    9.00      665085.00   39163   READDIRP
11.35      315.02    9.00     1433947.00   38698   LOOKUP
11.88     1729.34   21.00     2569638.00    7382   FXATTROP
47.35   104235.02 2485.00     7789367.00     488   FSYNC

------------------

------------------

Duration     : 335

BytesRead    : 94505058

BytesWritten : 195571980
To view the I/O information of the NFS server on a specified volume, use the following command:
# gluster volume profile VOLNAME info nfs
For example, to view the I/O information of the NFS server on test-volume:
# gluster volume profile test-volume info nfs
NFS Server : localhost
----------------------
Cumulative Stats:
   Block Size:              32768b+               65536b+ 
 No. of Reads:                    0                     0 
No. of Writes:                 1000                  1000 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.01     410.33 us     194.00 us     641.00 us              3      STATFS
      0.60     465.44 us     346.00 us     867.00 us            147       FSTAT
      1.63     187.21 us      67.00 us    6081.00 us           1000     SETATTR
      1.94     221.40 us      58.00 us   55399.00 us           1002      ACCESS
      2.55     301.39 us      52.00 us   75922.00 us            968        STAT
      2.85     326.18 us      88.00 us   66184.00 us           1000    TRUNCATE
      4.47     511.89 us      60.00 us  101282.00 us           1000       FLUSH
      5.02    3907.40 us    1723.00 us   19508.00 us            147    READDIRP
     25.42    2876.37 us     101.00 us  843209.00 us           1012      LOOKUP
     55.52    3179.16 us     124.00 us  121158.00 us           2000       WRITE
 
    Duration: 7074 seconds
   Data Read: 0 bytes
Data Written: 102400000 bytes
 
Interval 1 Stats:
   Block Size:              32768b+               65536b+ 
 No. of Reads:                    0                     0 
No. of Writes:                 1000                  1000 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.01     410.33 us     194.00 us     641.00 us              3      STATFS
      0.60     465.44 us     346.00 us     867.00 us            147       FSTAT
      1.63     187.21 us      67.00 us    6081.00 us           1000     SETATTR
      1.94     221.40 us      58.00 us   55399.00 us           1002      ACCESS
      2.55     301.39 us      52.00 us   75922.00 us            968        STAT
      2.85     326.18 us      88.00 us   66184.00 us           1000    TRUNCATE
      4.47     511.89 us      60.00 us  101282.00 us           1000       FLUSH
      5.02    3907.40 us    1723.00 us   19508.00 us            147    READDIRP
     25.41    2878.07 us     101.00 us  843209.00 us           1011      LOOKUP
     55.53    3179.16 us     124.00 us  121158.00 us           2000       WRITE
 
    Duration: 330 seconds
   Data Read: 0 bytes
Data Written: 102400000 bytes

14.1.3. Stop Profiling

To stop profiling on a volume, use the following command:
# gluster volume profile VOLNAME stop
For example, to stop profiling on test-volume:
# gluster volume profile test-volume stop 
Profiling stopped on test-volume

14.2. Running the Volume Top Command

The volume top command allows you to view the glusterFS bricks’ performance metrics, including read, write, file open calls, file read calls, file write calls, directory open calls, and directory read calls. The volume top command displays up to 100 results.
This section describes how to use the volume top command.

14.2.1. Viewing Open File Descriptor Count and Maximum File Descriptor Count

You can view the current open file descriptor count and the list of files that are currently being accessed on a brick with the volume top command. The volume top command also displays the maximum open file descriptor count of files that are currently open, and the maximum number of files opened at any point since the server started. If the brick name is not specified, the open file descriptor metrics of all the bricks belonging to the volume are displayed.
To view the open file descriptor count and the maximum file descriptor count, use the following command:
# gluster volume top VOLNAME open [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the open file descriptor count and the maximum file descriptor count on brick server:/export on test-volume, and list the top 10 open calls:
# gluster volume top test-volume open brick server:/export  list-cnt 10 
Brick: server:/export/dir1
Current open fd's: 34 Max open fd's: 209
             ==========Open file stats========

open            file name
call count     

2               /clients/client0/~dmtmp/PARADOX/
                COURSES.DB

11              /clients/client0/~dmtmp/PARADOX/
                ENROLL.DB

11              /clients/client0/~dmtmp/PARADOX/
                STUDENTS.DB

10              /clients/client0/~dmtmp/PWRPNT/
                TIPS.PPT

10              /clients/client0/~dmtmp/PWRPNT/
                PCBENCHM.PPT

9               /clients/client7/~dmtmp/PARADOX/
                STUDENTS.DB

9               /clients/client1/~dmtmp/PARADOX/
                STUDENTS.DB

9               /clients/client2/~dmtmp/PARADOX/
                STUDENTS.DB

9               /clients/client0/~dmtmp/PARADOX/
                STUDENTS.DB

9               /clients/client8/~dmtmp/PARADOX/
                STUDENTS.DB

14.2.2. Viewing Highest File Read Calls

You can view a list of files with the highest file read calls on each brick with the volume top command. If the brick name is not specified, a list of 100 files is displayed by default.
To view the highest read() calls, use the following command:
# gluster volume top VOLNAME read [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest read calls on brick server:/export of test-volume:
# gluster volume top test-volume read brick server:/export list-cnt 10
Brick: server:/export/dir1
          ==========Read file stats========

read              filename
call count

116              /clients/client0/~dmtmp/SEED/LARGE.FIL

64               /clients/client0/~dmtmp/SEED/MEDIUM.FIL

54               /clients/client2/~dmtmp/SEED/LARGE.FIL

54               /clients/client6/~dmtmp/SEED/LARGE.FIL

54               /clients/client5/~dmtmp/SEED/LARGE.FIL

54               /clients/client0/~dmtmp/SEED/LARGE.FIL

54               /clients/client3/~dmtmp/SEED/LARGE.FIL

54               /clients/client4/~dmtmp/SEED/LARGE.FIL

54               /clients/client9/~dmtmp/SEED/LARGE.FIL

54               /clients/client8/~dmtmp/SEED/LARGE.FIL

14.2.3. Viewing Highest File Write Calls

You can view a list of files with the highest file write calls on each brick with the volume top command. If the brick name is not specified, a list of 100 files is displayed by default.
To view the highest write() calls, use the following command:
# gluster volume top VOLNAME write [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest write calls on brick server:/export of test-volume:
# gluster volume top test-volume write brick server:/export/ list-cnt 10
Brick: server:/export/dir1

               ==========Write file stats========
write call count   filename

83                /clients/client0/~dmtmp/SEED/LARGE.FIL

59                /clients/client7/~dmtmp/SEED/LARGE.FIL

59                /clients/client1/~dmtmp/SEED/LARGE.FIL

59                /clients/client2/~dmtmp/SEED/LARGE.FIL

59                /clients/client0/~dmtmp/SEED/LARGE.FIL

59                /clients/client8/~dmtmp/SEED/LARGE.FIL

59                /clients/client5/~dmtmp/SEED/LARGE.FIL

59                /clients/client4/~dmtmp/SEED/LARGE.FIL

59                /clients/client6/~dmtmp/SEED/LARGE.FIL

59                /clients/client3/~dmtmp/SEED/LARGE.FIL

14.2.4. Viewing Highest Open Calls on a Directory

You can view a list of the directories with the highest directory open calls on each brick with the volume top command. If the brick name is not specified, the metrics of all bricks belonging to that volume are displayed.
To view the highest open() calls on each directory, use the following command:
# gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest open calls on brick server:/export/ of test-volume:
# gluster volume top test-volume opendir brick server:/export/ list-cnt 10
Brick: server:/export/dir1 
         ==========Directory open stats========

Opendir count     directory name

1001              /clients/client0/~dmtmp

454               /clients/client8/~dmtmp

454               /clients/client2/~dmtmp
 
454               /clients/client6/~dmtmp

454               /clients/client5/~dmtmp

454               /clients/client9/~dmtmp

443               /clients/client0/~dmtmp/PARADOX

408               /clients/client1/~dmtmp

408               /clients/client7/~dmtmp

402               /clients/client4/~dmtmp

14.2.5. Viewing Highest Read Calls on a Directory

You can view a list of the directories with the highest directory read calls on each brick with the volume top command. If the brick name is not specified, the metrics of all bricks belonging to that volume are displayed.
To view the highest directory read() calls on each brick, use the following command:
# gluster volume top VOLNAME readdir [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest directory read calls on brick server:/export/ of test-volume:
# gluster volume top test-volume readdir brick server:/export/ list-cnt 10
Brick: server:/export/dir1
==========Directory readdirp stats========

readdirp count           directory name

1996                    /clients/client0/~dmtmp

1083                    /clients/client0/~dmtmp/PARADOX

904                     /clients/client8/~dmtmp

904                     /clients/client2/~dmtmp

904                     /clients/client6/~dmtmp

904                     /clients/client5/~dmtmp

904                     /clients/client9/~dmtmp

812                     /clients/client1/~dmtmp

812                     /clients/client7/~dmtmp

800                     /clients/client4/~dmtmp

14.2.6. Viewing Read Performance

You can view the read throughput of files on each brick with the volume top command. If the brick name is not specified, the metrics of all the bricks belonging to that volume are displayed. The output is the read throughput.
This command initiates a read() call for the specified count and block size and measures the corresponding throughput directly on the back-end export, bypassing glusterFS processes.
To view the read performance on each brick, use the command, specifying options as needed:
# gluster volume top VOLNAME read-perf [bs blk-size count count] [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the read performance on brick server:/export/ of test-volume, specifying a 256 block size, and list the top 10 results:
# gluster volume top test-volume read-perf bs 256 count 1 brick server:/export/ list-cnt 10
Brick: server:/export/dir1 256 bytes (256 B) copied, Throughput: 4.1 MB/s 
       ==========Read throughput file stats========

read throughput(MBps)   filename                           Time

2912.00   /clients/client0/~dmtmp/PWRPNT/    -2012-05-09
           TRIDOTS.POT                   15:38:36.896486
                                           
2570.00   /clients/client0/~dmtmp/PWRPNT/    -2012-05-09
           PCBENCHM.PPT                  15:38:39.815310
                                           
2383.00   /clients/client2/~dmtmp/SEED/      -2012-05-09
           MEDIUM.FIL                    15:52:53.631499
                                           
2340.00   /clients/client0/~dmtmp/SEED/      -2012-05-09
           MEDIUM.FIL                    15:38:36.926198

2299.00   /clients/client0/~dmtmp/SEED/      -2012-05-09
           LARGE.FIL                     15:38:36.930445
                                                      
2259.00  /clients/client0/~dmtmp/PARADOX/    -2012-05-09
          COURSES.X04                    15:38:40.549919
                                           
2221.00  /clients/client9/~dmtmp/PARADOX/    -2012-05-09
          STUDENTS.VAL                   15:52:53.298766
                                           
2221.00  /clients/client8/~dmtmp/PARADOX/    -2012-05-09
         COURSES.DB                      15:39:11.776780
                                           
2184.00  /clients/client3/~dmtmp/SEED/       -2012-05-09
          MEDIUM.FIL                     15:39:10.251764
                                           
2184.00  /clients/client5/~dmtmp/WORD/       -2012-05-09
         BASEMACH.DOC                    15:39:09.336572

14.2.7. Viewing Write Performance

You can view the write throughput of files on each brick or NFS server with the volume top command. If the brick name is not specified, the metrics of all the bricks belonging to that volume are displayed. The output is the write throughput.
This command initiates a write operation for the specified count and block size and measures the corresponding throughput directly on back-end export, bypassing glusterFS processes.
To view the write performance on each brick, use the following command, specifying options as needed:
# gluster volume top VOLNAME write-perf [bs blk-size count count] [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the write performance on brick server:/export/ of test-volume, specifying a 256 block size, and list the top 10 results:
# gluster volume top test-volume write-perf bs 256 count 1 brick server:/export/ list-cnt 10
Brick: server:/export/dir1 256 bytes (256 B) copied, Throughput: 2.8 MB/s
       ==========Write throughput file stats========

write throughput(MBps)   filename                  Time
 
1170.00    /clients/client0/~dmtmp/SEED/     -2012-05-09
           SMALL.FIL                     15:39:09.171494

1008.00    /clients/client6/~dmtmp/SEED/     -2012-05-09
           LARGE.FIL                      15:39:09.73189

949.00    /clients/client0/~dmtmp/SEED/      -2012-05-09
          MEDIUM.FIL                     15:38:36.927426

936.00   /clients/client0/~dmtmp/SEED/       -2012-05-09
         LARGE.FIL                        15:38:36.933177

897.00   /clients/client5/~dmtmp/SEED/       -2012-05-09
         MEDIUM.FIL                       15:39:09.33628

897.00   /clients/client6/~dmtmp/SEED/       -2012-05-09
         MEDIUM.FIL                       15:39:09.27713

885.00   /clients/client0/~dmtmp/SEED/       -2012-05-09
          SMALL.FIL                      15:38:36.924271

528.00   /clients/client5/~dmtmp/SEED/       -2012-05-09
         LARGE.FIL                        15:39:09.81893

516.00   /clients/client6/~dmtmp/ACCESS/    -2012-05-09
         FASTENER.MDB                    15:39:01.797317

14.3. Listing Volumes

You can list all volumes in the trusted storage pool using the following command:
# gluster volume list
For example, to list all volumes in the trusted storage pool:
# gluster volume list
test-volume
volume1
volume2
volume3

14.4. Displaying Volume Information

You can display information about a specific volume, or all volumes, as needed, using the following command:
# gluster volume info VOLNAME
For example, to display information about test-volume:
# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Status: Created
Number of Bricks: 4
Bricks:
Brick1: server1:/exp1
Brick2: server2:/exp2
Brick3: server3:/exp3
Brick4: server4:/exp4

14.5. Performing Statedump on a Volume

Statedump is a mechanism through which you can get details of all internal variables and state of the glusterFS process at the time of issuing the command. You can perform statedumps of the brick processes and NFS server process of a volume using the statedump command. You can use the following options to determine what information is to be dumped:
  • mem - Dumps the memory usage and memory pool details of the bricks.
  • iobuf - Dumps iobuf details of the bricks.
  • priv - Dumps private information of loaded translators.
  • callpool - Dumps the pending calls of the volume.
  • fd - Dumps the open file descriptor tables of the volume.
  • inode - Dumps the inode tables of the volume.
  • history - Dumps the event history of the volume
To perform a statedump of a volume or NFS server, use the following command, specifying options as needed:
# gluster volume statedump VOLNAME [nfs] [all|mem|iobuf|callpool|priv|fd|inode|history]
For example, to perform a statedump of test-volume:
# gluster volume statedump test-volume
Volume statedump successful
The statedump files are created on the brick servers in the /var/run/gluster/ directory or in the directory set using the server.statedump-path volume option. The naming convention of the dump file is brick-path.brick-pid.dump.
You can change the directory of the statedump file using the following command:
# gluster volume set VOLNAME server.statedump-path path
For example, to change the location of the statedump file of test-volume:
# gluster volume set test-volume server.statedump-path /usr/local/var/log/glusterfs/dumps/
Set volume successful
You can view the changed path of the statedump file using the following command:
# gluster volume info VOLNAME
To retrieve the statedump information for client processes:
# kill -USR1 process_ID
For example, to retrieve the statedump information for the client process ID 4120:
# kill -USR1 4120
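One way to find the process ID of a glusterFS client (FUSE mount) process on the client machine is to search the process list; this is a sketch, and the mount point shown is illustrative:
# pgrep -f "glusterfs.*/mnt/glusterfs"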

14.6. Displaying Volume Status

You can display status information about a specific volume, brick, or all volumes, as needed. Status information can be used to understand the current status of the brick, the NFS processes, the self-heal daemon, and the overall file system. Status information can also be used to monitor and debug volumes. You can view the status of a volume along with the following details:
  • detail - Displays additional information about the bricks.
  • clients - Displays the list of clients connected to the volume.
  • mem - Displays the memory usage and memory pool details of the bricks.
  • inode - Displays the inode tables of the volume.
  • fd - Displays the open file descriptor tables of the volume.
  • callpool - Displays the pending calls of the volume.
Display information about a specific volume using the following command:
# gluster volume status [all|VOLNAME [nfs | shd | BRICKNAME]] [detail |clients | mem | inode | fd |callpool]
For example, to display information about test-volume:
# gluster volume status test-volume
Status of volume: test-volume
Gluster process                        Port    Online   Pid
------------------------------------------------------------
Brick arch:/export/rep1                24010   Y       18474
Brick arch:/export/rep2                24011   Y       18479
NFS Server on localhost                38467   Y       18486
Self-heal Daemon on localhost          N/A     Y       18491
The self-heal daemon status will be displayed only for replicated volumes.
Display information about all volumes using the command:
# gluster volume status all
# gluster volume status all
Status of volume: test
Gluster process                       Port    Online   Pid
-----------------------------------------------------------
Brick 192.168.56.1:/export/test       24009   Y       29197
NFS Server on localhost               38467   Y       18486

Status of volume: test-volume
Gluster process                       Port    Online   Pid
------------------------------------------------------------
Brick arch:/export/rep1               24010   Y       18474
Brick arch:/export/rep2               24011   Y       18479
NFS Server on localhost               38467   Y       18486
Self-heal Daemon on localhost         N/A     Y       18491
Display additional information about the bricks using the command:
# gluster volume status VOLNAME detail
For example, to display additional information about the bricks of test-volume:
# gluster volume status test-volume detail
Status of volume: test-vol
------------------------------------------------------------------------------
Brick                : Brick arch:/exp     
Port                 : 24012               
Online               : Y                   
Pid                  : 18649               
File System          : ext4                
Device               : /dev/sda1           
Mount Options        : rw,relatime,user_xattr,acl,commit=600,barrier=1,data=ordered
Inode Size           : 256                 
Disk Space Free      : 22.1GB              
Total Disk Space     : 46.5GB              
Inode Count          : 3055616             
Free Inodes          : 2577164
Detailed information is not available for NFS and the self-heal daemon.
Display the list of clients accessing the volumes using the command:
# gluster volume status VOLNAME clients
For example, to display the list of clients connected to test-volume:
# gluster volume status test-volume clients
Brick : arch:/export/1
Clients connected : 2
Hostname          Bytes Read   BytesWritten
--------          ---------    ------------
127.0.0.1:1013    776          676
127.0.0.1:1012    50440        51200
Client information is not available for the self-heal daemon.
Display the memory usage and memory pool details of the bricks on a volume using the command:
# gluster volume status VOLNAME mem
For example, to display the memory usage and memory pool details for the bricks on test-volume:
# gluster volume status test-volume mem
Memory status for volume : test-volume
----------------------------------------------
Brick : arch:/export/1
Mallinfo
--------
Arena    : 434176
Ordblks  : 2
Smblks   : 0
Hblks    : 12
Hblkhd   : 40861696
Usmblks  : 0
Fsmblks  : 0
Uordblks : 332416
Fordblks : 101760
Keepcost : 100400

Mempool Stats
-------------
Name                               HotCount ColdCount PaddedSizeof AllocCount MaxAlloc
----                               -------- --------- ------------ ---------- --------
test-volume-server:fd_t                0     16384           92         57        5
test-volume-server:dentry_t           59       965           84         59       59
test-volume-server:inode_t            60       964          148         60       60
test-volume-server:rpcsvc_request_t    0       525         6372        351        2
glusterfs:struct saved_frame           0      4096          124          2        2
glusterfs:struct rpc_req               0      4096         2236          2        2
glusterfs:rpcsvc_request_t             1       524         6372          2        1
glusterfs:call_stub_t                  0      1024         1220        288        1
glusterfs:call_stack_t                 0      8192         2084        290        2
glusterfs:call_frame_t                 0     16384          172       1728        6
Display the inode tables of the volume using the command:
# gluster volume status VOLNAME inode
For example, to display the inode tables of test-volume:
# gluster volume status test-volume inode
inode tables for volume test-volume
----------------------------------------------
Brick : arch:/export/1
Active inodes:
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
6f3fe173-e07a-4209-abb6-484091d75499                  1              9         2
370d35d7-657e-44dc-bac4-d6dd800ec3d3                  1              1         2

LRU inodes: 
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
80f98abe-cdcf-4c1d-b917-ae564cf55763                  1              0         1
3a58973d-d549-4ea6-9977-9aa218f233de                  1              0         1
2ce0197d-87a9-451b-9094-9baa38121155                  1              0         2
Display the open file descriptor tables of the volume using the command:
# gluster volume status VOLNAME fd
For example, to display the open file descriptor tables of test-volume:
# gluster volume status test-volume fd

FD tables for volume test-volume
----------------------------------------------
Brick : arch:/export/1
Connection 1:
RefCount = 0  MaxFDs = 128  FirstFree = 4
FD Entry            PID                 RefCount            Flags              
--------            ---                 --------            -----              
0                   26311               1                   2                  
1                   26310               3                   2                  
2                   26310               1                   2                  
3                   26311               3                   2                  
 
Connection 2:
RefCount = 0  MaxFDs = 128  FirstFree = 0
No open fds
 
Connection 3:
RefCount = 0  MaxFDs = 128  FirstFree = 0
No open fds
FD information is not available for NFS and the self-heal daemon.
Display the pending calls of the volume using the command:
# gluster volume status VOLNAME callpool
Note that each call has a call stack containing call frames.
For example, to display the pending calls of test-volume:
# gluster volume status test-volume callpool

Pending calls for volume test-volume
----------------------------------------------
Brick : arch:/export/1
Pending calls: 2
Call Stack1
 UID    : 0
 GID    : 0
 PID    : 26338
 Unique : 192138
 Frames : 7
 Frame 1
  Ref Count   = 1
  Translator  = test-volume-server
  Completed   = No
 Frame 2
  Ref Count   = 0
  Translator  = test-volume-posix
  Completed   = No
  Parent      = test-volume-access-control
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 3
  Ref Count   = 1
  Translator  = test-volume-access-control
  Completed   = No
  Parent      = repl-locks
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 4
  Ref Count   = 1
  Translator  = test-volume-locks
  Completed   = No
  Parent      = test-volume-io-threads
  Wind From   = iot_fsync_wrapper
  Wind To     = FIRST_CHILD (this)->fops->fsync
 Frame 5
  Ref Count   = 1
  Translator  = test-volume-io-threads
  Completed   = No
  Parent      = test-volume-marker
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 6
  Ref Count   = 1
  Translator  = test-volume-marker
  Completed   = No
  Parent      = /export/1
  Wind From   = io_stats_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 7
  Ref Count   = 1
  Translator  = /export/1
  Completed   = No
  Parent      = test-volume-server
  Wind From   = server_fsync_resume
  Wind To     = bound_xl->fops->fsync

Chapter 15. Managing Red Hat Storage Volume Life-Cycle Extensions

Red Hat Storage allows automation of operations by user-written scripts. For every operation, you can execute a pre and a post script.
Pre Scripts: These scripts are run before the occurrence of the event. You can write a script to automate activities like managing system-wide services. For example, you can write a script to stop exporting the SMB share corresponding to the volume before you stop the volume.
Post Scripts: These scripts are run after execution of the event. For example, you can write a script to export the SMB share corresponding to the volume after you start the volume.
You can run scripts for the following events:
  • Creating a volume
  • Starting a volume
  • Adding a brick
  • Removing a brick
  • Tuning volume options
  • Stopping a volume
  • Deleting a volume
Naming Convention
When naming your scripts, you must follow the file-name conventions of the underlying file system, such as XFS.

Note

To enable a script, its name must start with an S. Scripts are run in lexicographic order of their names.

15.1. Location of Scripts

This section provides information on the folders where the scripts must be placed. When you create a trusted storage pool, the following directories are created:
  • /var/lib/glusterd/hooks/1/create/
  • /var/lib/glusterd/hooks/1/delete/
  • /var/lib/glusterd/hooks/1/start/
  • /var/lib/glusterd/hooks/1/stop/
  • /var/lib/glusterd/hooks/1/set/
  • /var/lib/glusterd/hooks/1/add-brick/
  • /var/lib/glusterd/hooks/1/remove-brick/
After creating a script, you must save it in its respective directory on all the nodes of the trusted storage pool. The location of the script dictates whether the script is executed before or after an event. Scripts are provided with the command line argument --volname=VOLNAME to specify the volume. Command-specific additional arguments are provided for the following volume operations (a sketch of an example hook script that parses these arguments follows this list):
  • Start volume
    • --first=yes, if the volume is the first to be started
    • --first=no, otherwise
  • Stop volume
    • --last=yes, if the volume is to be stopped last.
    • --last=no, otherwise
  • Set volume
    • -o key=value
      A key=value argument is passed for every option specified in the volume set command.
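For illustration, the following is a minimal sketch of a start/post hook script; the file name S40log-start.sh, the log file path, and the logging action are hypothetical, but the argument handling follows the conventions described above. Such a script would be placed in /var/lib/glusterd/hooks/1/start/post/ on all nodes:
#!/bin/bash
# Hypothetical example hook (S40log-start.sh): record which volume was
# started. glusterd passes --volname=VOLNAME and, for the start event,
# --first=yes|no.
VOLNAME=""
FIRST=""
for ARG in "$@"; do
    case "$ARG" in
        --volname=*) VOLNAME="${ARG#--volname=}" ;;
        --first=*)   FIRST="${ARG#--first=}" ;;
    esac
done
echo "$(date): volume ${VOLNAME} started (first=${FIRST})" >> /var/log/glusterfs/hook-example.log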

15.2. Prepackaged Scripts

Red Hat provides scripts to export a Samba (SMB) share when you start a volume and to remove the share when you stop the volume. These scripts are available at /var/lib/glusterd/hooks/1/start/post and /var/lib/glusterd/hooks/1/stop/pre. By default, the scripts are enabled.
When you start a volume using the following command:
# gluster volume start VOLNAME
The S30samba-start.sh script performs the following:
  1. Adds Samba share configuration details of the volume to the smb.conf file
  2. Restarts Samba to run with updated configuration
When you stop the volume using the following command:
# gluster volume stop VOLNAME
The S30samba-stop.sh script performs the following:
  1. Removes the Samba share details of the volume from the smb.conf file
  2. Restarts Samba to run with updated configuration

Part III. Red Hat Storage Administration on Public Cloud

Chapter 16. Launching Red Hat Storage Server for Public Cloud

Red Hat Storage Server for Public Cloud is a pre-integrated, pre-verified and ready to run Amazon Machine Image (AMI) that provides a fully POSIX compatible highly available scale-out NAS and object storage solution for the Amazon Web Services (AWS) public cloud infrastructure.

Note

For information on obtaining access to AMI, see https://access.redhat.com/knowledge/articles/145693.
This chapter describes how to launch Red Hat Storage instances on Amazon Web Services.

16.1. Launching Red Hat Storage Instances

This section describes how to launch Red Hat Storage instances on Amazon Web Services.
To launch the Red Hat Storage Instance
  1. Navigate to the Amazon Web Services home page at http://aws.amazon.com. The Amazon Web Services home page appears.
    Home Page

    Figure 16.1. Home Page

  2. Login to Amazon Web Services. The Amazon Web Services main screen is displayed.
    Main Screen

    Figure 16.2. Main Screen

  3. Click the Amazon EC2 tab. The Amazon EC2 Console Dashboard is displayed.
    Dashboard

    Figure 16.3. Dashboard

  4. Click Launch Instance.
    The Choose an AMI screen is displayed.
    Choose AMI

    Figure 16.4. Choose AMI

    1. Choose an Amazon Machine Image (AMI) and click Select.
      AMI

      Figure 16.5. AMI

  5. In the General Purpose tab, choose an instance type in the corresponding screen and click Next: Configure Instance Details.
    General Purpose

    Figure 16.6. General Purpose

  6. Configure the instance details based on your requirements and click Next: Add Storage.
    Configure Instance Details

    Figure 16.7. Configure Instance Details

  7. In the Add Storage screen, specify your storage requirements and click Next: Tag Instance.
    Add Storage

    Figure 16.8. Add Storage

  8. In the Tag Instance screen, create a key-value pair and click Next: Configure Security Group.
  9. In the Configure Security Group screen, add rules to allow specific traffic to reach your instance and click Review and Launch.
    You must open the following TCP ports in the selected security group:
    • 22
    • 6000, 6001, 6002, 443, and 8080, if Red Hat Storage for OpenStack Swift is enabled
    Configure Security Group

    Figure 16.9. Configure Security Group

  10. In the Review Instance Launch screen, review your instance launch details. You can edit the settings for each section. Click Launch to complete the launch process.
    Review Instance

    Figure 16.10. Review Instance

  11. In the Select an existing key pair or create a new key pair screen, select the key pair, check the acknowledgement check-box and click Launch Instances.
    Select Key Pair

    Figure 16.11. Select Key Pair

  12. The Launch Status screen is displayed. To monitor your instance's status, click View Instances.
    Launch Status

    Figure 16.12. Launch Status

16.2. Verifying that Red Hat Storage Instance is Running

You can verify that the Red Hat Storage instance is running by performing a remote login to the Red Hat Storage instance and issuing a command.
To verify that the Red Hat Storage instance is running
  1. In the Amazon EC2 Console Dashboard, click on Running Instances to check for the instances that are running.
    Running Instances

    Figure 16.13. Running Instances

  2. Select an instance from the list and click Connect.
    Check the Status column and verify that the instance is running. A yellow circle indicates a status of pending while a green circle indicates that the instance is running.
    Note the domain name in the Public DNS field. You can use this domain to perform a remote login to the instance.
    Connect

    Figure 16.14. Connect

  3. The Connect To Your Instance screen is displayed. Click Close.
    Connection Complete

    Figure 16.15. Connection Complete

  4. Using SSH and the domain from the previous step, log in to the Red Hat Amazon Machine Image instance. You must use the key pair that was selected or created when launching the instance.
    Example:
    Enter the following at the command line:
    # ssh -i rhs-aws.pem root@ec2-23-20-52-123.compute-1.amazonaws.com
  5. At the command line, enter the following command:
    # service glusterd status
    Verify that the command indicates that the glusterd daemon is running on the instance.

Chapter 17. Provisioning Storage

Amazon Elastic Block Storage (EBS) is designed specifically for use with Amazon EC2 instances. Amazon EBS provides storage that behaves like a raw, unformatted, external block device. Red Hat supports eight EBS volumes assembled into a software RAID 0 (stripe) array for use as a single brick. You can create a brick ranging from 8 GB to 8 TB. For example, if you create a brick of 128 GB, you must create 8 Amazon EBS volumes of size 16 GB each and then assemble them into a RAID 0 array.

Important

Single EBS volumes exhibit inconsistent I/O performance. The Red Hat supported configuration is eight Amazon EBS volumes of equal size on software RAID 0 (stripe), attached as a brick, which enables consistent I/O performance. Other configurations are not supported by Red Hat.

Procedure 17.1. To Add Amazon Elastic Block Storage Volumes

  1. Login to Amazon Web Services at http://aws.amazon.com and select the Amazon EC2 tab.
  2. In the Amazon EC2 Dashboard, select the Elastic Block Store > Volumes option to add the Amazon Elastic Block Storage volumes.
  3. In order to support configuration as a brick, assemble the eight Amazon EBS volumes into a RAID 0 (stripe) array using the following command:
    # mdadm --create ARRAYNAME --level=0 --raid-devices=8 list of all devices
    For example, to create a software RAID 0 of eight volumes:
    # mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/xvdf1 /dev/xvdf2 /dev/xvdf3 /dev/xvdf4 /dev/xvdf5 /dev/xvdf6 /dev/xvdf7 /dev/xvdf8
    # mdadm --examine --scan > /etc/mdadm.conf
  4. Create a Logical Volume (LV) using the following commands:
    # pvcreate /dev/md0
    # vgcreate glustervg /dev/md0
    # vgchange -a y glustervg
    # lvcreate -a y -l 100%VG -n glusterlv glustervg
    In these commands, glustervg is the name of the volume group and glusterlv is the name of the logical volume. Red Hat Storage uses the logical volume created over EBS RAID as a brick. For more information about logical volumes, see the Red Hat Enterprise Linux Logical Volume Manager Administration Guide.
  5. Format the logical volume using the following command:
    # mkfs.xfs -i size=512 DEVICE
    For example, to format /dev/glustervg/glusterlv:
    # mkfs.xfs -i size=512 /dev/glustervg/glusterlv
  6. Mount the device using the following commands:
    # mkdir -p /export/glusterlv
    # mount /dev/glustervg/glusterlv /export/glusterlv
  7. Using the following command, add the device to /etc/fstab so that it mounts automatically when the system reboots:
    # echo "/dev/glustervg/glusterlv /export/glusterlv xfs defaults 0 2" >> /etc/fstab
After adding the EBS volumes, you can use the mount point as a brick with existing and new volumes. For more information on creating volumes, see Chapter 8, Red Hat Storage Volumes.

Chapter 18. Stopping and Restarting Red Hat Storage Instance

When you stop and restart a Red Hat Storage instance, Amazon Web Services assigns the instance a new IP address and hostname. This results in the instance losing its association with the virtual hardware, causing disruptions to the trusted storage pool. To prevent errors, add the restarted Red Hat Storage instance to the trusted storage pool. See Section 7.1, “Adding Servers to the Trusted Storage Pool”.
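For example, from a server that is already part of the trusted storage pool, probe the restarted instance by its new host name (the host name below is illustrative):
# gluster peer probe ec2-54-12-34-56.compute-1.amazonaws.com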
Rebooting the Red Hat Storage instance, by contrast, preserves the IP address and hostname, so the instance does not lose its association with the virtual hardware and the trusted storage pool is not disrupted.

Part IV. Data Access with Other Interfaces

Chapter 19. Managing Object Store

Object Store provides a system for data storage that enables users to access the same data, both as an object and as a file, thus simplifying management and controlling storage costs.
Red Hat Storage is based on glusterFS, an open source distributed file system. Object Store technology is built upon OpenStack Swift. OpenStack Swift allows users to store and retrieve files and content through a simple Web Service REST (Representational State Transfer) interface as objects. Red Hat Storage uses glusterFS as a back-end file system for OpenStack Swift. It also leverages OpenStack Swift's REST interface for storing and retrieving files over the web, combined with glusterFS features such as scalability, high availability, replication, and elastic volume management for data management at the disk level.
Object Store technology enables enterprises to adopt and deploy cloud storage solutions. It allows users to access and modify data as objects from a REST interface along with the ability to access and modify files from NAS interfaces. In addition to decreasing cost and making it faster and easier to access object data, it also delivers massive scalability, high availability and replication of object storage. Infrastructure as a Service (IaaS) providers can utilize Object Store technology to enable their own cloud storage service. Enterprises can use this technology to accelerate the process of preparing file-based applications for the cloud and simplify new application development for cloud computing environments.
OpenStack Swift is an open source software for creating redundant, scalable object storage using clusters of standardized servers to store petabytes of accessible data. It is not a file system or real-time data storage system, but rather a long-term storage system for a more permanent type of static data that can be retrieved, leveraged, and updated.

19.1. Architecture Overview

The OpenStack Swift and Red Hat Storage integration consists of the components shown in the following diagram, which illustrates OpenStack Object Storage integration with Red Hat Storage 2.1:
Object Store Architecture

Figure 19.1. Object Store Architecture

19.2. Components of Object Storage

The major components of Object Storage are:
Proxy Server
The Proxy Server is responsible for connecting to the rest of the OpenStack Object Storage architecture. For each request, it looks up the location of the account, container, or object in the ring and routes the request accordingly. The public API is also exposed through the proxy server. When objects are streamed to or from an object server, they are streamed directly through the proxy server to or from the user – the proxy server does not spool them.
The Ring
The Ring maps swift accounts to the appropriate Red Hat Storage volume. When other components need to perform any operation on an object, container, or account, they need to interact with the Ring to determine the correct Red Hat Storage volume.
Object and Object Server
An object is the basic storage entity; it comprises the data you store along with any optional metadata. When you upload data, the data is stored as-is (with no compression or encryption).
The Object Server is a very simple storage server that can store, retrieve, and delete objects stored on local devices.
Container and Container Server
A container is a storage compartment for your data and provides a way for you to organize your data. Containers can be visualized as directories in a Linux system. However, unlike directories, containers cannot be nested. Data must be stored in a container and hence the objects are created within a container.
The Container Server’s primary job is to handle listings of objects. The listing is done by querying the glusterFS mount point with a path. This query returns a list of all files and directories present under that container.
Accounts and Account Servers
The OpenStack Swift system is designed to be used by many different storage consumers.
The Account Server is very similar to the Container Server, except that it is responsible for listing containers rather than objects. In Object Store, each Red Hat Storage volume is an account.
Authentication and Access Permissions
Object Store provides an option of using an authentication service to authenticate and authorize user access. Once the authentication service correctly identifies the user, it will provide a token which must be passed to Object Store for all subsequent container and object operations.
In addition to using your own authentication services, the following authentication services are supported by Object Store:
  • Authenticate Object Store against an external OpenStack Keystone server.
    Each Red Hat Storage volume is mapped to a single account. Each account can have multiple users with different privileges based on the group and role they are assigned to. After authenticating using accountname:username and password, the user is issued a token, which is used for all subsequent REST requests.
    Integration with Keystone
    When you integrate Red Hat Storage Object Store with Keystone authentication, you must ensure that the Swift account name and Red Hat Storage volume name are the same. It is common that Red Hat Storage volumes are created before exposing them through the Red Hat Storage Object Store.
    When working with Keystone, account names are defined by Keystone as the tenant id. You must create the Red Hat Storage volume using the Keystone tenant id as the name of the volume. This means you must create the Keystone tenant before creating the Red Hat Storage volume.

    Important

    Red Hat Storage does not contain any Keystone server components. It only acts as a Keystone client. After you create a volume for Keystone, you must export it so that it can be accessed using the object storage interface. For more information on exporting volumes, see Section 19.6.7, “Exporting the Red Hat Storage Volumes”.
    Integration with GSwauth
    GSwauth is Web Server Gateway Interface (WSGI) middleware that uses a Red Hat Storage volume itself as its backing store to maintain its metadata. The benefit of this authentication service is that the metadata is available to all proxy servers and is saved to a Red Hat Storage volume.
    To protect the metadata, the Red Hat Storage volume should only be able to be mounted by the systems running the proxy servers. For more information on mounting volumes, see Section 9.2.3, “Mounting Red Hat Storage Volumes”.
    Integration with TempAuth
    You can also use the TempAuth authentication service to test Red Hat Storage Object Store in the data center.

19.3. Advantages of using Object Store

The advantages of using Object Store include:
  • Default object size limit of 1 TiB
  • Unified view of data across NAS and Object Storage technologies
  • High availability
  • Scalability
  • Replication
  • Elastic Volume Management

19.4. Limitations

This section lists the limitations of using Red Hat Storage Object Store:
  • Object Name
    Object Store imposes the following constraints on the object name to maintain the compatibility with network file access:
    • Object names must not be prefixed or suffixed by a '/' character. For example, a/b/
    • Object names must not have contiguous multiple '/' characters. For example, a//b
  • Account Management
    • Object Store does not allow account management even though OpenStack Swift allows the management of accounts. This limitation exists because Object Store treats accounts as equivalent to Red Hat Storage volumes.
    • Object Store does not support account names (that is, Red Hat Storage volume names) that contain an underscore.
    • In Object Store, every account must map to a Red Hat Storage volume.
  • Support for X-Delete-After or X-Delete-At API
    Object Store does not support X-Delete-After or X-Delete-At headers API listed at: http://docs.openstack.org/api/openstack-object-storage/1.0/content/Expiring_Objects-e1e3228.html.
  • Subdirectory Listing
    The headers Content-Type: application/directory and Content-Length: 0 can be used to create subdirectory objects under a container, but a GET request on a subdirectory does not list all the objects under it.

19.5. Prerequisites

You must start the memcached service using the following command:
# service memcached start
Ports
Port numbers of the Object, Container, and Account servers are configurable and must be open for Object Store to work.

19.6. Configuring the Object Store

This section provides instructions on how to configure Object Store in your storage environment.

19.6.1. Configuring a Proxy Server

Create a new configuration file /etc/swift/proxy-server.conf by referencing the template file available at /etc/swift/proxy-server.conf-gluster.
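For example, one way to create the file from the supplied template:
# cp /etc/swift/proxy-server.conf-gluster /etc/swift/proxy-server.conf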

19.6.1.1. Configuring a Proxy Server for HTTPS

By default, the proxy server handles only HTTP requests. To configure the proxy server to process HTTPS requests, perform the following steps:
  1. Create self-signed cert for SSL using the following commands:
    # cd /etc/swift
    # openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
  2. Add the following lines to /etc/swift/proxy-server.conf under [DEFAULT]:
    bind_port = 443
    cert_file = /etc/swift/cert.crt
    key_file = /etc/swift/cert.key

Enabling Distributed Caching with Memcached

When Object Storage is deployed on two or more machines, not all nodes in your trusted storage pool are used. Installing a load balancer enables you to utilize all the nodes in your trusted storage pool by distributing the proxy server requests equally to all storage nodes.
Memcached allows nodes' states to be shared across multiple proxy servers. Edit the memcache_servers configuration option in the proxy-server.conf and list all memcached servers.
Following is an example listing the memcached servers in the proxy-server.conf file.
[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211,192.168.1.22:11211
The memcached server listens on port 11211. You must list the memcached servers in the same order in the configuration files on all proxy servers.

19.6.2. Configuring the Authentication Service

This section provides information on configuring Keystone, GSwauth, and TempAuth authentication services.

19.6.2.1. Integrating with the Keystone Authentication Service

  • To configure Keystone, add authtoken and keystone to /etc/swift/proxy-server.conf pipeline as shown below:
    [pipeline:main]
    pipeline = catch_errors healthcheck proxy-logging cache authtoken keystoneauth proxy-logging proxy-server
  • Add the following sections to /etc/swift/proxy-server.conf file by referencing the example below as a guideline. You must substitute the values according to your setup:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    signing_dir = /etc/swift
    auth_host = keystone.server.com
    auth_port = 35357
    auth_protocol = http
    auth_uri = http://keystone.server.com:5000
    # if it is defined
    admin_tenant_name = services
    admin_user = swift
    admin_password = adminpassword
    delay_auth_decision = 1
    
    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    operator_roles = admin, SwiftOperator
    is_admin = true
    cache = swift.cache
Verify the Integrated Setup
Verify that the Red Hat Storage Object Store has been configured successfully by running the following command:
$ swift -V 2 -A http://keystone.server.com:5000/v2.0 -U tenant_name:user -K password stat

19.6.2.2. Integrating with the GSwauth Authentication Service

Integrating GSwauth
Perform the following steps to integrate GSwauth:
  1. Create and start a Red Hat Storage volume to store metadata.
    # gluster volume create NEW-VOLNAME NEW-BRICK
    # gluster volume start NEW-VOLNAME
    For example:
    # gluster volume create gsmetadata server1:/exp1 
    # gluster volume start gsmetadata
  2. Run the gluster-swift-gen-builders tool with all the volumes to be accessed using the Swift client, including the gsmetadata volume:
    # gluster-swift-gen-builders gsmetadata other volumes
  3. Edit the /etc/swift/proxy-server.conf pipeline as shown below:
    [pipeline:main]
    pipeline = catch_errors cache gswauth proxy-server
  4. Add the following section to /etc/swift/proxy-server.conf file by referencing the example below as a guideline. You must substitute the values according to your setup.
    [filter:gswauth]
    use = egg:gluster_swift#gswauth
    set log_name = gswauth
    super_admin_key = gswauthkey
    metadata_volume = gsmetadata
    auth_type = sha1
    auth_type_salt = swauthsalt

    Important

    You must secure the proxy-server.conf file and the super_admin_key option to prevent unprivileged access.
  5. Restart the proxy server by running the following command:
    # swift-init proxy restart
Advanced Options:
You can set the following advanced options for GSwauth WSGI filter:
  • default-swift-cluster: The default storage-URL for the newly created accounts. When you attempt to authenticate for the first time, the access token and the storage-URL where data for the given account is stored will be returned.
  • token_life: The default token life. The default value is 86400 (24 hours).
  • max_token_life: The maximum token life. You can set a token lifetime when requesting a new token with the header x-auth-token-lifetime. If the value passed in is greater than max_token_life, the max_token_life value is used.
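These options are set in the [filter:gswauth] section of /etc/swift/proxy-server.conf. The following is a minimal sketch with illustrative values, building on the section shown earlier:
[filter:gswauth]
use = egg:gluster_swift#gswauth
super_admin_key = gswauthkey
metadata_volume = gsmetadata
token_life = 86400
max_token_life = 172800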
GSwauth Common Options of CLI Tools
GSwauth provides CLI tools to facilitate managing accounts and users. All tools have some options in common:
  • -A, --admin-url: The URL to the auth. The default URL is http://127.0.0.1:8080/auth/.
  • -U, --admin-user: The user with administrator rights to perform action. The default user role is .super_admin.
  • -K, --admin-key: The key for the user with administrator rights to perform the action. There is no default value.
Preparing Red Hat Storage Volumes to Save Metadata
Prepare the Red Hat Storage volume for gswauth to save its metadata by running the following command:
# gswauth-prep [option]
For example:
# gswauth-prep -A http://10.20.30.40:8080/auth/ -K gswauthkey
19.6.2.2.1. Managing Account Services in GSwauth
Creating Accounts
Create an account for GSwauth. This account is mapped to a Red Hat Storage volume.
# gswauth-add-account [option] <account_name>
For example:
# gswauth-add-account -K gswauthkey <account_name>
Deleting an Account
You must ensure that all users pertaining to the account are deleted before deleting the account. To delete an account:
# gswauth-delete-account [option] <account_name>
For example:
# gswauth-delete-account -K gswauthkey test
Setting the Account Service
Sets a service URL for an account. Only a user with the reseller admin role can set the service URL. This command can be used to change the default storage URL for a given account. All accounts have the same storage URL as the default value, which is set using the default-swift-cluster option.
# gswauth-set-account-service [options] <account> <service> <name> <value>
For example:
# gswauth-set-account-service -K gswauthkey test storage local http://newhost:8080/v1/AUTH_test
19.6.2.2.2. Managing User Services in GSwauth
User Roles
The following user roles are supported in GSwauth:
  • A regular user has no rights. Users must be given both read and write privileges using Swift ACLs.
  • The admin user is a super-user at the account level. This user can create and delete users for that account. These members will have both write and read privileges to all stored objects in that account.
  • The reseller admin user is a super-user at the cluster level. This user can create and delete accounts and users and has read and write privileges to all accounts under that cluster.
  • GSwauth maintains its own Swift account to store all of its metadata on accounts and users. The .super_admin role provides access to GSwauth's own Swift account and has all privileges to act on any other account or user.
User Access Matrix
The following table provides user access right information.

Table 19.1. User Access Matrix

Role/Group                Permitted operations
.super_admin (username)   Get list of accounts, get account details, create account, delete account, get user details, create admin user, create reseller_admin user, create regular user, delete admin user
.reseller_admin (group)   Get list of accounts, get account details, create account, delete account, get user details, create admin user, create regular user, delete admin user
.admin (group)            Get account details, get user details, create admin user, create regular user, delete admin user
regular user (type)       No administrative operations
Creating Users
You can create a user for an account that does not yet exist; the account is created before the user is created.
You must add the -r flag to create a reseller admin user and the -a flag to create an admin user. To change the password or role of an existing user, run the same command with the new values.
# gswauth-add-user [option] <account_name> <user> <password>
For example
# gswauth-add-user -K gswauthkey -a test ana anapwd
Deleting a User
Delete a user by running the following command:
# gswauth-delete-user [option] <account_name> <user>
For example:
# gswauth-delete-user -K gswauthkey test ana
Authenticating a User with the Swift Client
There are two methods to access data using the Swift client. The first and simpler method is to provide the user name and password every time; the Swift client acquires the token from GSwauth.
For example:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:ana -K anapwd upload container1 README.md
The second method is a two-step process, first you must authenticate with a username and password to obtain a token and the storage URL. Then, you can make the object requests to the storage URL with the given token.
It is important to remember that tokens expire, so the authentication process must be repeated periodically.
Authenticate a user with the cURL command:
curl -v -H 'X-Storage-User: test:ana' -H 'X-Storage-Pass: anapwd' -k http://localhost:8080/auth/v1.0
...
< X-Auth-Token: AUTH_tk7e68ef4698f14c7f95af07ab7b298610
< X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
...
Now, you use the given token and storage URL to access the object-storage using the Swift client:
$ swift --os-auth-token=AUTH_tk7e68ef4698f14c7f95af07ab7b298610 --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test upload container1 README.md
README.md
$ swift --os-auth-token=AUTH_tk7e68ef4698f14c7f95af07ab7b298610 --os-storage-url=http://127.0.0.1:8080/v1/AUTH_test list container1
README.md

Important

Reseller admins must always use the second method to acquire a token when accessing accounts other than their own. The first method, using the username and password, gives them access only to their own account.
19.6.2.2.3. Managing Accounts and Users Information
Obtaining Accounts and User Information
You can obtain information about accounts and users, including the stored password.
# gswauth-list [options] [account] [user]
For example:
# gswauth-list -K gswauthkey test ana
+----------+
|  Groups  |
+----------+
| test:ana |
|   test   |
|  .admin  |
+----------+
  • If [account] and [user] are omitted, all the accounts will be listed.
  • If [account] is included but not [user], a list of users within that account will be listed.
  • If [account] and [user] are included, a list of groups that the user belongs to will be listed.
  • If the [user] is .groups, the active groups for that account will be listed.
The default output format is tabular. Adding the -p option provides the output in plain text format; the -j option provides the output in JSON format.
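For example, to list the groups of user ana in JSON format, reusing the account and key from the earlier examples:
# gswauth-list -K gswauthkey -j test ana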
Changing User Password
You can change the password of the user, account administrator, and reseller_admin roles.
  • Change the password of a regular user by running the following command:
    # gswauth-add-user -U account1:user1 -K old_passwd account1 user1 new_passwd
  • Change the password of an account administrator by running the following command:
    # gswauth-add-user -U account1:admin -K old_passwd -a account1 admin new_passwd
  • Change the password of the reseller_admin by running the following command:
    # gswauth-add-user -U account1:radmin -K old_passwd -r account1 radmin new_passwd
Cleaning Up Expired Tokens
Users with the .super_admin role can delete expired tokens.
You can also specify the expected life of tokens, delete all tokens, or delete all tokens for a given account.
# gswauth-cleanup-tokens [options]
For example
# gswauth-cleanup-tokens -K gswauthkey --purge test
The tokens are deleted on disk, but they still persist in memcached.
You can add the following options while cleaning up the tokens:
  • -t, --token-life: The expected life of tokens. Token objects modified before the given number of seconds will be checked for expiration (default: 86400).
  • --purge: Purges all the tokens for a given account whether the tokens have expired or not.
  • --purge-all: Purges all the tokens for all the accounts and users whether the tokens have expired or not.
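For example, to check tokens modified in the last two hours for expiration (the 7200-second value is illustrative):
# gswauth-cleanup-tokens -K gswauthkey -t 7200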

19.6.2.3. Integrating with the TempAuth Authentication Service

Warning

TempAuth authentication service must only be used in test deployments and not for production.
TempAuth is automatically installed when you install Red Hat Storage. TempAuth stores user and password information as cleartext in a single proxy-server.conf file. In your /etc/swift/proxy-server.conf file, enable TempAuth in the pipeline and add user information in the TempAuth section by referencing the example below.
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache tempauth proxy-logging proxy-server

[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin.admin.reseller_admin
user_test_tester = testing .admin
user_test_tester2 = testing2
You can add users to the account in the following format:
user_accountname_username = password [.admin]
Here the accountname is the Red Hat Storage volume used to store objects.
You must restart the Object Store services for the configuration changes to take effect. For information on restarting the services, see Section 19.6.8, “Starting and Stopping Server”.

19.6.3. Configuring an Object Server

Create a new configuration file /etc/swift/object-server.conf by referencing the template file available at /etc/swift/object-server.conf-gluster.

19.6.4. Configuring a Container Server

Create a new configuration file /etc/swift/container-server.conf by referencing the template file available at /etc/swift/container-server.conf-gluster.

19.6.5. Configuring an Account Server

Create a new configuration file /etc/swift/account-server.conf by referencing the template file available at /etc/swift/account-server.conf-gluster.

19.6.6. Configuring Swift Object and Container Constraints

Create a new configuration file /etc/swift/swift.conf by referencing the template file available at /etc/swift/swift.conf-gluster.

19.6.7. Exporting the Red Hat Storage Volumes

After creating the configuration files, you must add configuration details for the system to identify the Red Hat Storage volumes to be accessed as Object Store. These configuration details are added to the ring files. The ring files provide the Gluster for Swift component with the list of Red Hat Storage volumes that are accessible using the object storage interface.
Create the ring files for the current configurations by running the following command:
# cd /etc/swift
# gluster-swift-gen-builders VOLUME [VOLUME...]
For example,
# cd /etc/swift
# gluster-swift-gen-builders testvol1 testvol2 testvol3
Here testvol1, testvol2, and testvol3 are the Red Hat Storage volumes which will be mounted locally under the directory mentioned in the object, container, and account configuration files (default value is /mnt/gluster-object).
Note that all the volumes required to be accessed using the Swift interface must be passed to the gluster-swift-gen-builders tool, even if they were previously added. The gluster-swift-gen-builders tool creates new ring files every time it runs successfully.

Important

The name of the mount point and the name of the Red Hat Storage volume must be the same.
To remove a volume, run gluster-swift-gen-builders again with only the volumes that are still required to be accessible using the Swift interface.
For example, to remove the testvol2 volume, run the following command:
# gluster-swift-gen-builders testvol1 testvol3
You must restart the Object Store services after creating the new ring files.
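Before restarting, you can verify that the ring files were regenerated in /etc/swift. The following check is a sketch and assumes the standard Swift account, container, and object builder and ring files are produced:
# ls -l /etc/swift/*.builder /etc/swift/*.ring.gz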

19.6.8. Starting and Stopping Server

You must start the server manually whenever you update or modify the configuration files.
  • To start the server, enter the following command:
    # swift-init main start
  • To stop the server, enter the following command:
    # swift-init main stop
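Upstream Swift's swift-init also accepts a restart action which, assuming it applies to this deployment, stops and starts the main servers in a single step after a configuration change:
# swift-init main restart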

19.7. Starting the Services Automatically

To automatically start the gluster-swift services every time the system boots, run the following commands:
# chkconfig memcached on
# chkconfig openstack-swift-proxy on
# chkconfig openstack-swift-account on
# chkconfig openstack-swift-container on
# chkconfig openstack-swift-object on
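To confirm that the services are enabled for the relevant runlevels, you can list their chkconfig settings; the following check is a simple sketch:
# chkconfig --list | grep -E 'memcached|openstack-swift'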

Important

You must restart all Object Store services whenever you change the configuration or ring files.

19.8. Working with the Object Store

For more information on Swift operations, see OpenStack Object Storage API Reference Guide available at http://docs.openstack.org/api/openstack-object-storage/1.0/content/ .

19.8.1. Creating Containers and Objects

Creating containers and objects in the Red Hat Storage Object Store is very similar to creating them in OpenStack Swift. For more information on Swift operations, see OpenStack Object Storage API Reference Guide available at http://docs.openstack.org/api/openstack-object-storage/1.0/content/.
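As an illustration only, the following curl commands show one way to create a container and upload an object using the Swift REST API. The host name, authentication token, account (volume) name, container, and file are all hypothetical:
# curl -X PUT -H 'X-Auth-Token: AUTH_tk_example' \
       https://rhs-server.example.com:443/v1/AUTH_testvol1/mycontainer
# curl -X PUT -H 'X-Auth-Token: AUTH_tk_example' -T /tmp/example.txt \
       https://rhs-server.example.com:443/v1/AUTH_testvol1/mycontainer/example.txt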

19.8.2. Creating Subdirectory under Containers

You can create a subdirectory object under a container using the headers Content-Type: application/directory and Content-Length: 0. However, with the current behavior of the Object Store, a GET request on the subdirectory returns 200 OK but does not list all the objects under that subdirectory.
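A minimal sketch of such a request is shown below; the endpoint, token, account, and container names are hypothetical, and the two headers mark the object as a directory:
# curl -X PUT -H 'X-Auth-Token: AUTH_tk_example' \
       -H 'Content-Type: application/directory' -H 'Content-Length: 0' \
       https://rhs-server.example.com:443/v1/AUTH_testvol1/mycontainer/mysubdir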

19.8.3. Working with Swift ACLs

Swift ACLs work with users and accounts. ACLs are set at the container level and support lists for read and write access. For more information on Swift ACLs, see http://docs.openstack.org/user-guide/content/managing-openstack-object-storage-with-swift-cli.html.
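For example, the following hypothetical command grants read access to user tester2 of account test and write access to all users of that account on the container mycontainer; the endpoint, token, and names are illustrative only:
# curl -X POST -H 'X-Auth-Token: AUTH_tk_example' \
       -H 'X-Container-Read: test:tester2' \
       -H 'X-Container-Write: test' \
       https://rhs-server.example.com:443/v1/AUTH_test/mycontainer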

Part V. Appendices

Chapter 20. Troubleshooting

This section describes how to manage logs and the most common troubleshooting scenarios related to Red Hat Storage.

20.1. Managing Red Hat Storage Logs

This section describes how to manage Red Hat Storage logs by performing the following operation:
  • Rotating Logs

20.1.1. Rotating Logs

Administrators can rotate the log file in a volume, as needed.
To rotate a log file
  • Rotate the log file using the following command:
    # gluster volume log rotate VOLNAME
    For example, to rotate the log file on test-volume:
    # gluster volume log rotate test-volume
    log rotate successful

    Note

    When a log file is rotated, the contents of the current log file are moved to log-file-name.epoch-time-stamp.
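    For example, assuming hypothetical brick log names under /var/log/glusterfs/bricks/, the rotated file appears alongside the active log with the epoch time stamp appended:
    # ls /var/log/glusterfs/bricks/
    rhs-brick1.log
    rhs-brick1.log.1330373741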

20.2. Troubleshooting File Locks

You can use the statedump command to list the locks held on files. The statedump output also provides information on each lock, such as its range, basename, and the PID of the application holding the lock. You can analyze the output to find locks whose owner or application is no longer running or is no longer interested in the lock. After ensuring that no application is using the file, you can clear the lock using the following clear-locks command:
# gluster volume clear-locks VOLNAME path kind {blocked | granted | all}{inode range | entry basename | posix range}
For more information on performing statedump, see Section 14.5, “Performing Statedump on a Volume”.
To identify locked file and clear locks
  1. Perform statedump on the volume to view the files that are locked using the following command:
    # gluster volume statedump VOLNAME
    For example, to display statedump of test-volume:
    # gluster volume statedump test-volume
    Volume statedump successful
    The statedump files are created on the brick servers in the /tmp directory or in the directory set using the server.statedump-path volume option. The naming convention of the dump file is brick-path.brick-pid.dump.
  2. Clear the entry lock using the following command:
    # gluster volume clear-locks VOLNAME path kind granted entry basename
    The following are the sample contents of the statedump file indicating entry lock (entrylk). Ensure that those are stale locks and no resources own them.
    [xlator.features.locks.vol-locks.inode]
    path=/
    mandatory=0
    entrylk-count=1
    lock-dump.domain.domain=vol-replicate-0
    xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=file1, pid = 714782904, owner=ffffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
    
    conn.2.bound_xl./gfs/brick1.hashsize=14057
    conn.2.bound_xl./gfs/brick1.name=/gfs/brick1/inode
    conn.2.bound_xl./gfs/brick1.lru_limit=16384
    conn.2.bound_xl./gfs/brick1.active_size=2
    conn.2.bound_xl./gfs/brick1.lru_size=0
    conn.2.bound_xl./gfs/brick1.purge_size=0
    For example, to clear the entry lock on file1 of test-volume:
    # gluster volume clear-locks test-volume / kind granted entry file1
    Volume clear-locks successful
    test-volume-locks: entry blocked locks=0 granted locks=1
  3. Clear the inode lock using the following command:
    # gluster volume clear-locks VOLNAME path kind granted inode range
    The following are the sample contents of the statedump file indicating there is an inode lock (inodelk). Ensure that those are stale locks and no resources own them.
    [conn.2.bound_xl./gfs/brick1.active.1]
    gfid=538a3d4a-01b0-4d03-9dc9-843cd8704d07
    nlookup=1
    ref=2
    ia_type=1
    [xlator.features.locks.vol-locks.inode]
    path=/file1
    mandatory=0
    inodelk-count=1
    lock-dump.domain.domain=vol-replicate-0
    inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012
    For example, to clear the inode lock on file1 of test-volume:
    # gluster volume clear-locks test-volume /file1 kind granted inode 0,0-0
    Volume clear-locks successful
    test-volume-locks: inode blocked locks=0 granted locks=1
  4. Clear the granted POSIX lock using the following command:
    # gluster volume clear-locks VOLNAME path kind granted posix range
    The following are the sample contents of the statedump file indicating there is a granted POSIX lock. Ensure that those are stale locks and no resources own them.
    [xlator.features.locks.vol1-locks.inode]
    path=/file1 
    mandatory=0 
    posixlk-count=15 
    posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=8, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , blocked at Mon Feb 27 16:01:01 2012 
    , granted at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[1](ACTIVE)=type=WRITE, whence=0, start=7, len=1, pid = 1, owner=30404152462d436c-69656e7431, transport=0x11eb4f0, , granted at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[2](BLOCKED)=type=WRITE, whence=0, start=8, len=1, pid = 1, owner=30404152462d436c-69656e7431, transport=0x11eb4f0, , blocked at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[3](ACTIVE)=type=WRITE, whence=0, start=6, len=1, pid = 12776, owner=a36bb0aea0258969, transport=0x120a4e0, , granted at Mon Feb 27 16:01:01 2012 
    ...
    For example, to clear the granted POSIX lock on file1 of test-volume:
    # gluster volume clear-locks test-volume /file1 kind granted posix 0,8-1
    Volume clear-locks successful
    test-volume-locks: posix blocked locks=0 granted locks=1
    test-volume-locks: posix blocked locks=0 granted locks=1
    test-volume-locks: posix blocked locks=0 granted locks=1
  5. Clear the blocked POSIX lock using the following command:
    # gluster volume clear-locks VOLNAME path kind blocked posix range
    The following are the sample contents of the statedump file indicating there is a blocked POSIX lock. Ensure that those are stale locks and no resources own them.
    [xlator.features.locks.vol1-locks.inode] 
    path=/file1 
    mandatory=0 
    posixlk-count=30 
    posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , blocked at Mon Feb 27 16:01:01 2012 
    , granted at Mon Feb 27 16:01:01 
    
    posixlk.posixlk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[2](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[3](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[4](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012 
    
    ...
    For example, to clear the blocked POSIX lock on file1 of test-volume:
    # gluster volume clear-locks test-volume /file1 kind blocked posix 0,0-1
    Volume clear-locks successful
    test-volume-locks: posix blocked locks=28 granted locks=0
    test-volume-locks: posix blocked locks=1 granted locks=0
    No locks cleared.
  6. Clear all POSIX locks using the following command:
    # gluster volume clear-locks VOLNAME path kind all posix range
    The following are the sample contents of the statedump file indicating that there are POSIX locks. Ensure that those are stale locks and no resources own them.
    [xlator.features.locks.vol1-locks.inode] 
    path=/file1 
    mandatory=0 
    posixlk-count=11 
    posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=8, len=1, pid = 12776, owner=a36bb0aea0258969, transport=0x120a4e0, , blocked at Mon Feb 27 16:01:01 2012 
    , granted at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[1](ACTIVE)=type=WRITE, whence=0, start=0, len=1, pid = 12776, owner=a36bb0aea0258969, transport=0x120a4e0, , granted at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[2](ACTIVE)=type=WRITE, whence=0, start=7, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , granted at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[3](ACTIVE)=type=WRITE, whence=0, start=6, len=1, pid = 1, owner=30404152462d436c-69656e7431, transport=0x11eb4f0, , granted at Mon Feb 27 16:01:01 2012 
    
    posixlk.posixlk[4](BLOCKED)=type=WRITE, whence=0, start=8, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , blocked at Mon Feb 27 16:01:01 2012 
    ...
    For example, to clear all POSIX locks on file1 of test-volume:
    # gluster volume clear-locks test-volume /file1 kind all posix 0,0-1
    Volume clear-locks successful
    test-volume-locks: posix blocked locks=1 granted locks=0
    No locks cleared.
    test-volume-locks: posix blocked locks=4 granted locks=1
You can perform statedump on test-volume again to verify that all the above locks are cleared.

Appendix A. Revision History

Revision History
Revision 2.1-109    Fri Oct 9 2015    Divya Muntimadugu
Updated chapter Accessing Data - Setting Up Clients to resolve BZ# 1268720.
Revision 2.1-108    Wed May 13 2015    Divya Muntimadugu
Updated chapter Managing Red Hat Storage Logs to resolve BZ# 1205887, 1205881, 1205992, and 1205995.
Revision 2.1-106    Mon Mar 09 2015    Divya Muntimadugu
Updated Section Prerequisites and Section Disaster Recovery of Managing Geo-replication chapter to resolve BZ# 1198080 and BZ# 1195379.
Revision 2.1-105    Mon Dec 22 2014    Divya Muntimadugu
Bug Fix.
Revision 2.1-104    Tue Dec 9 2014    Bhavana Mohan
Bug Fix.
Revision 2.1-95    Thu Sep 18 2014    Bhavana Mohan
Bug Fix.
Revision 2.1-77    Tue Aug 26 2014    Divya Muntimadugu
Bug Fix.
Revision 2.1-75    Fri Aug 22 2014    Pavithra Srinivasan
Bug Fix.
Revision 2.1-71    Tue Aug 12 2014    Divya Muntimadugu
Bug Fixes.
Revision 2.1-70    Wed Jun 25 2014    Divya Muntimadugu
Added Cross Protocol Data Access Matrix.
Revision 2.1-69    Mon Jun 12 2014    Divya Muntimadugu
Updated Managing Geo-replication chapter.
Revision 2.1-67    Mon Mar 17 2014    Divya Muntimadugu
Bug fixes.
Revision 2.1-64    Thu Mar 06 2014    Divya Muntimadugu
Bug fixes.
Revision 2.1-63    Fri Feb 28 2014    Divya Muntimadugu
Added Recovering from File Split-brain section.
Revision 2.1-63    Tue Feb 24 2014    Divya Muntimadugu
Publishing for Red Hat Storage 2.1 Update 2 release.
Revision 2.1-40    Wed Nov 27 2013    Divya Muntimadugu
Publishing for Red Hat Storage 2.1 Update 1 release.
Revision 2.1-37    Tue Nov 26 2013    Divya Muntimadugu
Bug fixes for Red Hat Storage 2.1 Update 1 release.
Revision 2.1-8    Wed Sep 25 2013    Bhavana Mohan
Bug fixes.
Revision 2.1-7    Tue Sep 17 2013    Bhavana Mohanraj
Updated SMB section.
Revision 2.1-4    Mon Sep 16 2013    Divya Muntimadugu
Added a section on upgrading Native Client.
Revision 2.1-3    Mon Sep 16 2013    Bhavana Mohan
Updated chapter 10.
Revision 2.1-2    Fri Sep 13 2013    Divya Muntimadugu
Publishing for Red Hat Storage 2.1 GA release.