Console Administration Guide
System Administration of Red Hat Storage Environments using the Administration Portal
Chapter 1. Introduction
- Support for quickly creating and managing Red Hat Storage trusted storage pools and volumes.
- Multilevel administration to enable administration of physical infrastructure and virtual objects.
1.1. System Components
1.1.1. Components
1.1.2. The Console
1.1.3. Hosts
1.2. Red Hat Storage Console Resources
- Hosts - A host is a physical machine running Red Hat Storage 3.0. Hosts are grouped into storage clusters, and Red Hat Storage volumes are created on these clusters. The system and all its components are managed through a centralized management system.
- Clusters - A cluster is a group of linked computers that work together closely, thus in many respects forming a single computer. Hosts in a cluster share the same network infrastructure and the same storage.
- Users - Red Hat Storage supports multiple levels of administrators and users with distinct levels of permissions. System administrators can manage and administer objects of the physical infrastructure, such as clusters, hosts, and volumes.
- Events and Monitors - Alerts, warnings, and other notices about activities within the system help the administrator to monitor the performance and operation of various resources.
1.3. Administration of the Red Hat Storage Console
- Configuring a new logical cluster is the most important task of the system administrator. Designing a new cluster requires an understanding of capacity planning and definition of requirements. This is typically determined by the solution architect, who provides the requirements to the system administrator. Preparing to set up the storage environment is a significant part of the setup, and is usually part of the system administrator's role.
- Maintaining the cluster, including performing updates and monitoring usage and performance to keep the cluster responsive to changing needs and loads.
1.3.1. Maintaining the Red Hat Storage Console
- Managing hosts and other physical resources.
- Managing the storage environment. This includes creating, deleting, expanding and shrinking volumes and clusters.
- Monitoring overall system resources for potential problems such as an extreme load on one of the hosts, insufficient memory or disk space, and taking any necessary actions.
- Managing user setup and access, and setting user and administrator permission levels. This includes assigning or customizing roles to suit the needs of the enterprise.
- Troubleshooting for specific users or hosts or for overall system functionality.
Part I. The Red Hat Storage Console Interface
Chapter 2. Getting Started
2.1. Graphical User Interface
Figure 2.1. Graphical User Interface Elements of the Administration Portal
Graphical User Interface Elements
- Header
The Header bar contains the name of the currently logged-in user, the Sign Out button, the About button, and the Configure button. The About button provides access to version information. The Configure button allows you to configure user roles.
- Search Bar
The Search bar allows you to quickly search for resources such as hosts and volumes. You can build queries to find the resources that you need. Queries can be as simple as a list of all the hosts in the system, or much more complex. As you type each part of the search query, you are offered choices to assist you in building the search. The star icon can be used to save the search as a bookmark.
- Resource Tabs
All resources, such as hosts and clusters, can be managed using the appropriate tab. Additionally, the Events tab allows you to manage and view events across the entire system. Clicking a tab displays the results of the most recent search query on the selected object. For example, if you recently searched for all hosts starting with "M", clicking the Hosts tab displays a list of all hosts starting with "M". The Administration Portal provides the following tabs: Clusters, Hosts, Volumes, Users, and Events.
- Results List
Perform a task on an individual item, multiple items, or all the items in the results list by selecting the items and then clicking the relevant action button. If multiple selection is not possible, the button is disabled. Details of a selected item display in the details pane.
- Details Pane
The Details pane displays detailed information about a selected item in the Results Grid. If multiple items are selected, the Details pane displays information on the first selected item only.
- Bookmarks Pane
Bookmarks are used to save frequently used or complicated searches for repeated use. Bookmarks can be added, edited, or removed.
- Alerts/Events Pane
The Alerts pane lists all events with a severity of Error or Warning. The system records all events, which are listed as audits in the Alerts section. Like events, alerts can also be viewed in the lowermost panel of the Events tab by resizing the panel and clicking the Alerts tab. This tabbed panel also appears in other tabs, such as the Hosts tab.
2.1.1. Tree Mode and Flat Mode
Figure 2.2. Tree Mode
Figure 2.3. Flat Mode
2.2. Search
2.2.1. Search Syntax
result-type: {criteria} [sortby sort_spec]
The following examples describe how search queries are used, and help you to understand how Red Hat Storage Console assists with building search queries.
Table 2.1. Example Search Queries
Example | Result |
---|---|
Volumes: status = up | Displays a list of all volumes that are up. |
Volumes: cluster = data | Displays a list of all volumes of the cluster data. |
Events: severity > normal sortby time | Displays the list of all events whose severity is higher than Normal, sorted by time. |
2.2.1.1. Auto-Completion
As you type a search query, auto-completion offers choices to help you construct valid queries. The following table shows how auto-completion assists in building the query Volumes: status = down.
Table 2.2. Example Search Queries using Auto-Completion
Input | List Items Displayed | Action |
---|---|---|
v | Volumes (1 option only) | Select Volumes, or type Volumes |
Volumes: | All volume properties | Type s |
Volumes: s | Volume properties starting with s | Select status, or type status |
Volumes: status | = or != | Select or type = |
Volumes: status = | All status values | Select or type down |
2.2.1.2. Result-Type Options
- Host for a list of hosts
- Event for a list of events
- Users for a list of users
- Cluster for a list of clusters
- Volumes for a list of volumes
2.2.1.3. Search Criteria
The syntax of the search criteria ({criteria}) is as follows:
<prop> <operator> <value>
or
<obj-type>.<prop> <operator> <value>
The following table describes the parts of the syntax:
Table 2.3. Example Search Criteria
Part | Description | Values | Example | Note |
---|---|---|---|---|
prop | The property of the searched-for resource. Can also be the property of a resource type (see obj-type), or a tag (custom tag). | See the information for each of the search types in Section 2.2.1.3.1, “Wildcards and Multiple Criteria”. | Status | -- |
obj-type | A resource type that can be associated with the searched-for resource. | See the explanation of each of the search types in Section 2.2.1.3.1, “Wildcards and Multiple Criteria”. | Users | -- |
operator | Comparison operators. | =, != (not equal), >, <, >=, <= | -- | Value options depend on obj-type. |
value | What the expression is being compared to. | String, Integer, Ranking, Date (formatted according to regional settings) | Jones, 256, normal | -- |
2.2.1.3.1. Wildcards and Multiple Criteria
Wildcards can be used in the <value> part of the syntax for strings. For example, to find all users beginning with m, enter m*.
A search can be performed with multiple criteria by using the Boolean operators AND and OR. For example:
Volumes: name = m* AND status = Up
When criteria are specified without AND or OR, AND is implied. AND precedes OR, and OR precedes implied AND.
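Applying these precedence rules, a hedged example (the volume name pattern and cluster name are hypothetical):

```
Volumes: name = m* AND status = Up OR cluster = data
```

Because AND precedes OR, this query is interpreted as (name = m* AND status = Up) OR (cluster = data): it matches volumes whose name begins with m and are up, plus all volumes in the data cluster.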
2.2.1.4. Determining Sort Order
The sort order of the returned information can be determined by using sortby. Sort direction (asc for ascending, desc for descending) can be included. For example:
events: severity > normal sortby time desc
2.2.2. Saving and Accessing Queries as Bookmarks
2.2.2.1. Creating Bookmarks
Procedure 2.1. Saving a query string as a bookmark
- Enter the search query in the Search bar (see Section 2.2.1, “Search Syntax”).
- Click the star-shaped Bookmark button to the right of the Search bar. The New Bookmark dialog box displays. The query displays in the Search String field. You can edit it if required.
- Specify a descriptive name for the search query in Name.
- Click OK to save the query as a bookmark.
- The search query is saved and displays in the Bookmarks pane.
2.2.2.2. Editing Bookmarks
Procedure 2.2. Editing a bookmark
- Select the Bookmarks pane by clicking the Bookmarks tab on the far left side of the screen.
- Select a bookmark from the Bookmarks pane.
- The results list displays the items according to the criteria. Click the Edit button on the Bookmarks pane. The Edit Bookmark dialog box displays. The query displays in the Search String field. Edit the search string to your requirements.
- Change Name and Search String as necessary.
- Click OK to save the edited bookmark.
2.2.2.3. Deleting Bookmarks
Procedure 2.3. Deleting a bookmark
- Select one or more bookmarks from the Bookmarks pane.
- The results list displays the items according to the criteria. Click the Remove button at the top of the Bookmarks pane. The Remove Bookmark dialog box displays, prompting you to confirm your decision to remove the bookmark.
- Click OK to remove the selected bookmarks.
2.3. Tags
Procedure 2.4. Creating a tag
- In tree mode or flat mode, click the resource tab for which you wish to create a tag. For example, Hosts.
- Click the Tags tab. Select the node under which you wish to create the tag. For example, click the root node to create it at the highest level. The New button is enabled.
- Click New at the top of the Tags pane. The New Tag dialog box displays.
- Enter the Name and Description of the new tag.
- Click OK. The new tag is created and displays on the Tags tab.
Procedure 2.5. Modifying a tag
- Click the Tags tab. Select the tag that you wish to modify. The buttons on the Tags tab are enabled.
- Click Edit on the Tags pane. The Edit Tag dialog box displays.
- You can change the Name and Description of the tag.
- Click OK. The changes in the tag display on the Tags tab.
Procedure 2.6. Deleting a tag
- Click the Tags tab. The list of tags will display.
- Select the tags to be deleted and click Remove. The Remove Tag(s) dialog box displays.
- The tags are displayed in the dialog box. Confirm that you want to remove them. The message warns you that removing the tags will also remove all descendants of the tags.
- Click OK. The tags are removed and no longer display on the Tags tab. The tags are also removed from all the objects to which they were attached.
Procedure 2.7. Adding or removing a tag to or from one or more object instances
- Search for the objects that you wish to tag or untag so that they are among the objects displayed in the results list.
- Select one or more objects on the results list.
- Click the Assign Tags button on the tool bar or right-click menu option.
- A dialog box provides a list of tags. Select the check box to assign a tag to the object, or deselect the check box to detach the tag from the object.
- Click OK. The specified tag is now added or removed as a custom property of the selected objects.
- Follow the search instructions in Section 2.2, “Search”, and enter a search query using “tag” as the property and the desired value or set of values as criteria for the search. The objects tagged with the tag criteria that you specified are listed in the results list.
Part II. Managing System Components
Chapter 3. Managing Clusters
3.1. Cluster Properties
Figure 3.1. Clusters Tab
Table 3.1. Cluster Properties
Field | Description |
---|---|
Name | The name of the cluster. This must be a unique name and may use any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Maximum length is 40 characters. The name can start with a number. This field is mandatory. |
Description | The description of the cluster. This field is optional, but recommended. |
Compatibility Version | The version of Red Hat Storage Console with which the cluster is compatible. All hosts in the cluster must support the indicated version. Note: the default compatibility version is 3.4. |
Table 3.2. Compatibility Matrix
Feature | Compatibility Version 3.2 | Compatibility Version 3.3 | Compatibility Version 3.4 |
---|---|---|---|
View advanced details of a particular brick of the volume through the Red Hat Storage Console. | Supported | Supported | Supported |
Synchronize brick status with the engine database. | Supported | Supported | Supported |
Manage glusterFS hooks through the Red Hat Storage Console. View the list of hooks available in the hosts, view the contents and status of hooks, enable or disable hooks, and resolve hook conflicts. | Supported | Supported | Supported |
Display Services tab with NFS and SHD service status. | Supported | Supported | Supported |
Manage volume rebalance through the Red Hat Storage Console. Rebalance volume, stop rebalance, and view rebalance status. | Not Supported | Supported | Supported |
Manage remove-brick operations through the Red Hat Storage Console. Remove-brick, stop remove-brick, view remove-brick status, and retain the brick being removed. | Not Supported | Supported | Supported |
Allow using the system's root partition for bricks and re-using bricks by clearing the extended attributes. | Not Supported | Supported | Supported |
Addition of RHS U2 nodes. | Not Supported | Supported | Supported |
Viewing Nagios Monitoring Trends. | Not Supported | Not Supported | Supported |
3.2. Cluster Operations
3.2.1. Creating a New Cluster
Procedure 3.1. To Create a New Cluster
- Open the Clusters view by expanding the System tab and selecting the Cluster tab in the Tree pane. Alternatively, select Clusters from the Details pane.
- Click New to open the New Cluster dialog box.
Figure 3.2. New Cluster Dialog Box
- Enter the cluster Name, Description and Compatibility Version. The name cannot include spaces.
- Click OK to create the cluster. The new cluster displays in the Clusters tab.
- Click Guide Me to configure the cluster. The Guide Me window lists the entities you need to configure for the cluster. Configure these entities or postpone configuration by clicking Configure Later. You can resume the configuration process by selecting the cluster and clicking Guide Me. To import an existing cluster, see Section 3.2.2, “Importing an Existing Cluster”.
3.2.2. Importing an Existing Cluster
To import an existing cluster, you specify the address of one host in the cluster. The gluster peer status command executes on that host through SSH, then displays a list of hosts that are part of the cluster. You must manually verify the fingerprint of each host and provide passwords for them. Hosts that are unreachable are not added to the cluster during the import.
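For reference, the peer list that the Console retrieves corresponds to running the gluster peer status command directly on a host. A minimal sketch, with hypothetical host names and elided UUIDs:

```shell
# Run on a host that is already part of the trusted storage pool.
# Output shape is illustrative; host names and UUIDs are hypothetical.
gluster peer status
# Number of Peers: 2
#
# Hostname: rhs-node2.example.com
# Uuid: ...
# State: Peer in Cluster (Connected)
#
# Hostname: rhs-node3.example.com
# Uuid: ...
# State: Peer in Cluster (Connected)
```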
Procedure 3.2. To Import an Existing Cluster
- In the Tree pane, click the System tab, then click the Clusters tab.
- Click New to open the New Cluster dialog box.
- Enter the cluster Name, Description and Compatibility Version. The name cannot include spaces.
- Select Import existing gluster configuration to import the cluster.
- In the Address field, enter the host name or IP address of a host in the cluster. The host Fingerprint displays to indicate the connection host. If a host is unreachable or if there is a network error, Error in fetching fingerprint displays in the Fingerprint field.
- Enter the Root Password for the host in the Password field and click OK.
- The Add Hosts window opens, and a list of hosts that are part of the cluster displays.
- For each host, enter the Name and Root Password. If you wish to use the same password for all hosts, check Use a common password and enter a password.
- Click Apply to set the password for all hosts then click OK to submit the changes.
3.2.3. Editing a Cluster
Procedure 3.3. To Edit a Cluster
- Click the Clusters tab to display the list of host clusters. Select the cluster that you want to edit.
- Click Edit to open the Edit Cluster dialog box.
- Enter a Name and Description for the cluster and select the compatibility version from the Compatibility Version drop-down list.
- Click OK to confirm the changes and display the host cluster details.
3.2.4. Viewing Hosts in a Cluster
Procedure 3.4. To View Hosts in a Cluster
- Click the Clusters tab to display a list of host clusters. Select the desired cluster to display the Details pane.
- Click the Hosts tab to display a list of hosts.
Figure 3.3. The Hosts tab on the Cluster Details pane
3.2.5. Removing a Cluster
Procedure 3.5. To Remove a Cluster
- Click the Clusters tab to display a list of clusters. If the required cluster is not visible, perform a search.
- Select the cluster to be removed. Ensure that there are no running hosts or volumes.
- Click the Remove button.
- A dialog box lists all the clusters selected for removal. Click OK to confirm the removal.
3.3. Cluster Entities
A cluster is a collection of hosts. The Hosts tab displays all information related to the hosts in a cluster.
Table 3.3. Host Tab Properties
Field | Description |
---|---|
Name | The name of the host. |
Hostname/IP | The host name or IP address of the host. |
Status | The status of the host. |
Logical networks enable hosts to communicate with other hosts, and for the Console to communicate with cluster entities. You must define logical networks for each cluster.
Table 3.4. Cluster Logical Networks Tab Properties
Field | Description |
---|---|
Name | The name of the logical networks in a cluster. |
Status | The status of the logical networks. |
Role | The hierarchical permissions available to the logical network. |
Description | The description of the logical networks. |
Cluster permissions define which users and roles can work in a cluster, and what operations the users and roles can perform.
Table 3.5. Cluster Permissions Tab Properties
Field | Description |
---|---|
User | The user name of an existing user in the directory services. |
Role | The role of the user. The role comprises user, permission level, and object. Roles can be default or customized roles. |
Inherited Permissions | The hierarchical permissions available to the user. |
Gluster Hooks are volume lifecycle extensions. You can manage Gluster Hooks from the Red Hat Storage Console.
Table 3.6. Gluster Hooks Tab Properties
Field | Description |
---|---|
Name | The name of the hook. |
Volume Event | Events are instances in the execution of volume commands such as create, start, stop, add-brick, remove-brick, and set. Each volume command has two instances during its execution, Pre and Post, referring to the time just before and just after the command takes effect on a peer. |
Stage | When the event should be executed. For example, if the event is Start Volume and the Stage is Post, the hook is executed after the start of the volume. |
Status | The status of the gluster hook. |
Content Type | The content type of the gluster hook. |
The services running on a host can be viewed using the Services tab.
Table 3.7. Services Tab Properties
Field | Description |
---|---|
Host | The IP address of the host. |
Service | The name of the service. |
Port | The port used by the service. |
Status | The status of the service. |
Process Id | The process ID of the service. |
3.4. Cluster Permissions
- Creation and removal of specific clusters.
- Addition and removal of hosts.
- Permission to attach users to hosts within a single cluster.
Procedure 3.6. To Add a Cluster Administrator Role
- Click the Clusters tab to display the list of clusters. If the required cluster is not visible, perform a search.
- Select the cluster that you want to edit. Click the Permissions tab in the Details pane to display a list of existing users and their current roles and inherited permissions.
- Click Add to display the Add Permission to User dialog box. Enter all or part of a name or user name in the Search box, then click Go. A list of possible matches displays in the results list.
- Select the user you want to modify. Scroll through the Role to Assign list and select ClusterAdmin.
- Click OK to display the name of the user and their assigned role in the Permissions tab.
Procedure 3.7. To Remove a Cluster Administrator Role
- Click the Clusters tab to display a list of clusters. If the required cluster is not visible, perform a search.
- Select the cluster that you want to edit. Click the Permissions tab in the Details pane to display a list of existing users and their current roles and inherited permissions.
- Select the user you want to modify and click Remove. This removes the user from the Permissions tab and from associated hosts and volumes.
Chapter 4. Managing Red Hat Storage Hosts
A Red Hat Storage host:
- Must belong to only one cluster in the system.
- Can have an assigned system administrator with system permissions.
4.1. Hosts Properties
Figure 4.1. Hosts Details Pane
Table 4.1. Hosts Properties
Field | Description |
---|---|
Cluster | The selected cluster. |
Name | The host name. |
Address | The IP address or resolvable hostname of the host. |
4.2. Hosts Operations
4.2.1. Adding Hosts
Important
Before you can add a host to Red Hat Storage, ensure your environment meets the following criteria:
- The host hardware is Red Hat Enterprise Linux certified. See https://hardware.redhat.com/ to confirm that the host has Red Hat certification.
- The host should have a resolvable hostname or static IP address.
Procedure 4.1. To Add a Host
- Click the Hosts tab to list available hosts.
- Click New to open the New Host window.
Figure 4.2. New Host Window
- Select the Host Cluster for the new host from the drop-down menu.
Table 4.2. Add Hosts Properties
Field | Description |
---|---|
Host Cluster | The cluster to which the host belongs. |
Name | The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. If Nagios is enabled, the host name given in the Name field of the Add Host window must match the host name given while configuring Nagios. |
Address | The IP address or resolvable hostname of the host. |
Root Password | The password of the host's root user. This can only be given when you add the host; it cannot be edited afterwards. |
SSH Public Key | Copy the contents of the text box to the /root/.ssh/authorized_keys file on the host to use the Manager's SSH key instead of a password to authenticate with the host. |
Automatically configure host firewall | When adding a new host, the Manager can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter. |
SSH Fingerprint | You can fetch the host's SSH fingerprint and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter. |
- Enter the Name and Address of the new host.
- Select an authentication method to use with the host:
- Enter the root user's password to use password authentication.
- Copy the key displayed in the SSH Public Key field to /root/.ssh/authorized_keys on the host to use public key authentication.
- The mandatory steps for adding a Red Hat Storage host are complete. Click Advanced Parameters to show the advanced host settings:
- Optionally disable automatic firewall configuration.
- Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
- Click OK to add the host and close the window. The new host displays in the list of hosts with a status of Installing, then moves to the Initializing state before coming up.
Note
The host moves to Up status after the Installing and Initializing states complete. The host shows Non-Operational status when it is not compatible with the cluster compatibility version, and Non-Responsive status when it is down or unreachable. You can view the progress of the host installation in the Details pane.
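The SSH Public Key authentication option described in this section can be sketched as follows; this is a minimal sketch, and console.pub is a hypothetical file containing the key text copied from the SSH Public Key field:

```shell
# On the host being added, as root.
# console.pub is a hypothetical file holding the Manager's public key.
mkdir -p /root/.ssh
chmod 700 /root/.ssh
cat console.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
```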
4.2.2. Activating Hosts
Procedure 4.2. To Activate a Host
- In the Hosts tab, select the host you want to activate.
- Click Activate. The host status changes to Up.
4.2.3. Managing Host Network Interfaces
Note
Only the ovirtmgmt management network is supported.
4.2.3.1. Editing Host Network Interfaces
Procedure 4.3. To Edit a Host Network Interface
- Click the Hosts tab to display a list of hosts. Select the desired host to display the Details pane.
- Click Setup Host Networks to open the Setup Host Networks window.
Figure 4.3. Setup Host Networks Window
- Edit the management networks by hovering over an assigned logical network and clicking the pencil icon to open the Edit Management Network window.
- If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
- Select a Boot Protocol:
- None
- DHCP
- Static - provide the IP and Subnet Mask.
- Click OK.
- Select Verify connectivity between Host and Engine to run a network check.
- Select Save network configuration if you want the network changes to be persistent when you reboot the environment.
- Click OK to implement the changes and close the window.
4.2.3.2. Editing Management Network Interfaces
Procedure 4.4. To Edit a Management Network Interface
- Click the Hosts tab to display a list of hosts. Select the desired host to display the Details pane.
- Edit the logical networks by hovering over an assigned logical network and clicking the pencil icon to open the Edit Management Network window.
Figure 4.4. Edit Management Network Dialog Box
- If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
- Select a Boot Protocol:
- None
- DHCP
- Static - provide the IP and Subnet Mask.
- Make the required changes to the management network interface:
- To attach the ovirtmgmt management network to a different network interface card, select a different interface from the Interface drop-down list.
- Select the network setting from None, DHCP or Static. For the Static setting, provide the IP, Subnet and Default Gateway information for the host.
- Click OK to confirm the changes.
- Select Verify connectivity between host and engine if required.
- Select Save network configuration to make the changes persistent when you reboot the environment.
- Click OK.
- Activate the host. See Section 4.2.2, “Activating Hosts”.
4.2.4. Configuring Network Interfaces
4.2.4.1. Saving Host Network Configurations
Procedure 4.5. To Save a Host Network Configuration
- Click the Hosts tab to display a list of hosts. Select the desired host to display the Details pane.
- Click Maintenance to place the host into maintenance. Click OK to confirm the action. The Status field of the host changes to Preparing for Maintenance, followed by Maintenance. The icon changes to indicate that the host is in maintenance mode.
- Click the Network Interfaces tab in the Details pane to display the list of network interface cards in the host, their addresses, and other specifications. Select the network interface card that you want to edit.
- Click Save Network Configuration to save the host network configuration. The Events pane displays the message: Network Changes were saved on Host HOSTNAME.
4.2.4.2. Deleting Hosts
Procedure 4.6. To Delete a Host
- Click the Hosts tab to display a list of hosts. Select the host you want to remove. If the required host is not visible, perform a search.
- Click Maintenance to place the host into maintenance. Click OK to confirm the action. The Status field of the host changes to Preparing for Maintenance, followed by Maintenance. The icon changes to indicate that the host is in maintenance mode.
- Click Remove.
- Click OK to confirm.
4.2.4.3. Managing Gluster Sync
Procedure 4.7. To Import a Host to a Cluster
- Click the Clusters tab and select a cluster to display the General tab with details of the cluster.
- In Action Items, click Import to display the Add Hosts window.
Figure 4.5. Add Hosts Window
- Enter the Name and Root Password. Select Use a common password if you want to use the same password for all hosts.
- Click Apply.
- Click OK to add the host to the cluster.
Procedure 4.8. To Detach a Host from a Cluster
- Click the Clusters tab and select a cluster to display the General tab with details of the cluster.
- In Action Items, click Detach to display the Detach Hosts window.
- Select the host you want to detach and click OK. Select Force Detach if you want to perform force removal of the host from the cluster.
4.3. Maintaining Hosts
4.3.1. Moving Hosts into Maintenance Mode
Procedure 4.9. To Move a Host into Maintenance Mode
- Click the Hosts tab to display a list of hosts.
- Click Maintenance to place the host into maintenance. Click OK to confirm the action. The Status field of the host changes to Preparing for Maintenance, followed by Maintenance. The icon changes to indicate that the host is in maintenance mode.
- Perform required tasks. When you are ready to reactivate the host, click Activate.
- After the host reactivates, the Status field of the host changes to Up. If the Red Hat Storage Console is unable to contact or control the host, the Status field displays Non-responsive.
4.3.2. Editing Host Details
Procedure 4.10. To Edit Host Details
- Click the Hosts tab to display a list of hosts.
- If you are moving the host to a different cluster, first place it in maintenance mode by clicking Maintenance. Click OK to confirm the action. The Status field of the host changes to Preparing for Maintenance, followed by Maintenance. The icon changes to indicate that the host is in maintenance mode.
- Click Edit to open the Edit Host dialog box.
- To move the host to a different cluster, select the cluster from the Host Cluster drop-down list.
- Make the required edits and click OK. Activate the host to start using it. See Section 4.2.2, “Activating Hosts”.
4.3.3. Customizing Hosts
Procedure 4.11. To Tag a Host
- Click the Hosts tab to display a list of hosts. Select the desired host to display the Details pane.
- Click Assign Tags to open the Assign Tags dialog box.
- Select the required tags and click OK.
4.4. Hosts Entities
4.4.1. Viewing General Host Information
Procedure 4.12. To View General Host Information
- Click the Hosts tab to display a list of hosts. If the required host is not visible, perform a search.
- Select the desired host to display general information, network interface information and host information in the Details pane.
- Click General to display the following information:
- Version information for OS, Kernel, VDSM, and RHS.
- Status of memory page sharing (Active/Inactive) and automatic large pages (Always).
- CPU information: number of CPUs attached, CPU name and type, total physical memory allocated to the selected host, swap size, and shared memory.
- An alert if the host is in Non-Operational or Install-Failed state.
4.4.2. Viewing Network Interfaces on Hosts
Procedure 4.13. To View Network Interfaces on a Host
- Click the Hosts tab to display a list of hosts. If the required host is not visible, perform a search.
- Select the desired host to display the Details pane.
- Click the Network Interfaces tab.
4.4.3. Viewing Permissions on Hosts
Procedure 4.14. To View Permissions on a Host
- Click the Hosts tab to display a list of hosts. If the required host is not visible, perform a search.
- Select the desired host to display the Details pane.
- Click the Permissions tab.
4.4.4. Viewing Events from a Host
Procedure 4.15. To View Events from a Host
- Click the Hosts tab to display a list of hosts. If the required host is not visible, perform a search.
- Select the desired host to display the Details pane.
- Click the Events tab.
4.4.5. Viewing Bricks
Procedure 4.16. To View Bricks on a Host
- Click the Hosts tab to display a list of hosts. If the required host is not visible, perform a search.
- Select the desired host to display the Details pane.
- Click the Bricks tab.
4.5. Hosts Permissions
Procedure 4.17. To Add a Host Administrator Role
- Click the Hosts tab to display a list of hosts. If the required host is not visible, perform a search.
- Select the desired host to display the Details pane.
- Click the Permissions tab to display a list of users and their current roles.
Figure 4.6. Host Permissions Window
- Click Add to display the Add Permission to User dialog box. Enter all or part of a name or user name in the Search box, then click Go. A list of possible matches displays in the results list.
- Select the user you want to modify. Scroll through the Role to Assign list and select HostAdmin.
- Click OK to display the name of the user and their assigned role in the Permissions tab.
Procedure 4.18. To Remove a Host Administrator Role
- Click the Hosts tab to display a list of hosts. If the required host is not visible, perform a search.
- Select the desired host to display the Details pane.
- Click the Permissions tab to display a list of users and their current roles.
- Select the desired user and click Remove.
Chapter 5. Managing Volumes
5.1. Creating a Volume
Procedure 5.1. Creating a Volume
- Click the Volumes tab. The Volumes tab lists all volumes in the system.
- Click New. The New Volume window is displayed.
Figure 5.1. New Volume
- Select the cluster from the Volume Cluster drop-down list.
- In the Name field, enter the name of the volume.
Note
You cannot create a volume named volume.
- Select the type of the volume from the Type drop-down list. You can set the volume type to Distribute, Replicate, Stripe, Distributed Replicate, Distributed Stripe, Striped Replicate, or Distributed Striped Replicate.
Note
- The Stripe, Distributed Stripe, Striped Replicate, and Distributed Striped Replicate volume types are under technology preview. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.
- Creating replicated volumes with a replica count greater than 2 is under technology preview.
- As necessary, click Add Bricks to add bricks to your volume.
Note
At least one brick is required to create a volume. The number of bricks required depends on the type of the volume. For more information on adding bricks to a volume, see Section 5.6.1, “Adding Bricks”.
- Configure the Access Protocol for the new volume by selecting the NFS check box, the CIFS check box, or both.
- In the Allow Access From field, specify the volume access control as a comma-separated list of IP addresses or hostnames. You can use wildcards to specify ranges of addresses; for example, an asterisk (*) specifies all IP addresses or hostnames. You need to use IP-based authentication for Gluster Filesystem and NFS exports. You can optimize volumes for virt-store by selecting Optimize for Virt Store.
- Click OK to create the volume. The new volume is added and displays on the Volumes tab. The volume is configured, and the group and storage.owner-gid options are set.
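Behind the scenes, the Console drives the gluster CLI on one of the hosts. The following is a minimal sketch of the equivalent commands, assuming a hypothetical replicated volume named repvol and hosts server1 and server2 with brick directories under /rhgs (all names are illustrative, not from this guide):

```shell
# Create a two-way replicated volume from one brick on each host.
gluster volume create repvol replica 2 \
    server1:/rhgs/brick1/repvol server2:/rhgs/brick1/repvol

# Restrict access to a range of clients (IP-based authentication).
gluster volume set repvol auth.allow "192.168.1.*"

# Optimize for virt-store: apply the virt option group and set the
# storage owner, comparable to selecting Optimize for Virt Store.
gluster volume set repvol group virt
gluster volume set repvol storage.owner-uid 36
gluster volume set repvol storage.owner-gid 36
```

The owner UID/GID of 36 matches the vdsm/kvm user and group conventionally used for virt stores; adjust for your deployment.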
5.2. Starting Volumes
Procedure 5.2. Starting a Volume
- In the Volumes tab, select the volume to be started. You can select multiple volumes to start by using the Shift or Ctrl key.
- Click the Start button.
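Clicking Start corresponds to the following gluster CLI command, sketched here for a hypothetical volume named repvol:

```shell
# Start the volume and confirm its status.
gluster volume start repvol
gluster volume info repvol
```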
5.3. Configuring Volume Options
Procedure 5.3. Configuring Volume Options
- Click the Volumes tab. A list of volumes displays.
- Select the volume to tune, and click the Volume Options tab from the Details pane. The Volume Options tab lists the options set for the volume.
- Click Add to set an option. The Add Option window is displayed. Select the option key from the drop-down list and enter the option value.
Figure 5.2. Add Option
- Click OK. The option is set and displays in the Volume Options tab.
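Setting an option in the Add Option window is equivalent to the gluster CLI volume set command. A sketch, assuming a hypothetical volume repvol and the performance.cache-size option as an example key:

```shell
# Set a volume option (key and value as entered in the Add Option window).
gluster volume set repvol performance.cache-size 256MB

# The option appears under "Options Reconfigured" in the volume info output.
gluster volume info repvol
```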
5.3.1. Edit Volume Options
Procedure 5.4. Editing Volume Options
- Click the Volumes tab. A list of volumes displays.
- Select the volume to edit, and click the Volume Options tab from the Details pane. The Volume Options tab lists the options set for the volume.
- Select the option to edit. Click Edit. The Edit Option window is displayed. Enter a new value for the option in the Option Value field.
- Click OK. The edited option displays in the Volume Options tab.
5.3.2. Resetting Volume Options
Procedure 5.5. Resetting Volume Options
- Click the Volumes tab. A list of volumes is displayed.
- Select the volume and click the Volume Options tab from the Details pane. The Volume Options tab lists the options set for the volume.
- Select the option to reset. Click Reset. The Reset Option window is displayed, prompting you to confirm the reset.
- Click OK. The selected option is reset. The name of the reset volume option is displayed in the Events tab.
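Resetting an option restores its default value, equivalent to the gluster CLI volume reset command. A sketch for a hypothetical volume repvol and example option:

```shell
# Reset a single option to its default value.
gluster volume reset repvol performance.cache-size
```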
5.4. Stopping Volumes
Procedure 5.6. Stopping a Volume
- In the Volumes tab, select the volume to be stopped. You can select multiple volumes to stop by using the Shift or Ctrl key.
- Click Stop. A window is displayed, prompting you to confirm the stop.
Note
Stopping a volume makes its data inaccessible.
- Click OK.
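The equivalent gluster CLI command, sketched for a hypothetical volume repvol:

```shell
# Stop the volume; the CLI asks for y/n confirmation.
gluster volume stop repvol

# In scripts, skip the interactive prompt:
gluster --mode=script volume stop repvol
```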
5.5. Deleting Volumes
Procedure 5.7. Deleting a Volume
- In the Volumes tab, select the volume to be deleted.
- Click Stop. The volume stops.
- Click Remove. A window is displayed, prompting you to confirm the deletion. Click OK. The volume is removed from the cluster.
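The stop-then-remove sequence maps to the following gluster CLI commands, sketched for a hypothetical volume repvol (deletion removes the volume definition, not the data on the brick directories):

```shell
# A volume must be stopped before it can be deleted.
gluster volume stop repvol
gluster volume delete repvol
```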
5.6. Managing Bricks
5.6.1. Adding Bricks
Procedure 5.8. Adding a Brick
- Click the Volumes tab. A list of volumes displays.
- Select the volume to which the new bricks are to be added. Click the Bricks tab from the Details pane. The Bricks tab lists the bricks of the selected volume.
- Click Add to add new bricks. The Add Bricks window is displayed.
Figure 5.3. Add Bricks
Table 5.1. Add Bricks Tab Properties
Field/Tab | Description/Action |
---|---|
Volume Type | The type of the volume. |
Replica Count | Number of replicas to keep for each stored item. |
Stripe Count | Number of bricks to stripe each file across. |
Host | The host from which new bricks are to be added. |
Brick Directory | The directory on the host. |
- Use the Host drop-down menu to select the host on which the brick resides.
- Select the Allow bricks in root partition and re-use the bricks by clearing xattrs check box to use the system's root partition for storage and to re-use the existing bricks by clearing the extended attributes.
Note
It is recommended that you do not use the system's root partition as the storage back end.
- Enter the path of the Brick Directory. The directory must already exist.
- Click Add and click OK. The new bricks are added to the volume and are displayed in the Bricks tab.
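Adding bricks corresponds to the gluster CLI add-brick command. A sketch, assuming a hypothetical replica-2 volume repvol and new hosts server3 and server4 (for a replicated volume, bricks must be added in multiples of the replica count):

```shell
# Add one new brick per replica to expand the volume.
gluster volume add-brick repvol \
    server3:/rhgs/brick1/repvol server4:/rhgs/brick1/repvol

# Rebalance afterwards so existing data is spread onto the new bricks.
gluster volume rebalance repvol start
```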
5.6.2. Removing Bricks
Note
- When shrinking distributed replicated or distributed striped volumes, the number of bricks being removed must be a multiple of the replica or stripe count. For example, to shrink a distributed striped volume with a stripe count of 2, you need to remove bricks in multiples of 2 (such as 2, 4, 6, 8). In addition, the bricks you are removing must be from the same replica set or stripe set. In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove brick operation. In a replicated volume, at least one of the bricks in the replica must be available.
- You can monitor the status of Remove Bricks operation from the Tasks pane.
- You can perform Commit, Retain, Status, and Stop operations from the remove-brick icon in the Activities column of the Volumes and Bricks sub-tabs.
Procedure 5.9. Removing Bricks from an Existing Volume
- Click the Volumes tab. A list of volumes is displayed.
- Select the volume from which bricks are to be removed. Click the Bricks tab from the Details pane. The Bricks tab lists the bricks for the volume.
- Select the brick to remove. Click Remove. The Remove Bricks window is displayed, prompting you to confirm the removal of the bricks.
Warning
If the brick is removed without selecting the Migrate Data from the bricks check box, the data on the brick being removed will no longer be accessible from the glusterFS mount point, although it can still be accessed directly from the brick. If the Migrate Data from the bricks check box is selected, the data is migrated to other bricks and, on a successful commit, the information of the removed bricks is deleted from the volume configuration.
- Click OK. The remove brick operation starts.
Note
- Once the remove brick operation starts, a remove-brick icon is displayed in the Activities column of both the Volumes and Bricks sub-tabs.
- The remove-brick icon disappears 10 minutes after the remove brick operation completes.
- In the Activities column, ensure that data migration is completed, then select the drop-down of the remove-brick icon corresponding to the volume from which bricks are to be removed.
- Click Commit to perform the remove brick operation.
Figure 5.4. Remove Bricks Commit
Note
The Commit option is enabled only after data migration is complete.
The remove brick operation is completed and the status is displayed in the Activities column. You can check the status of the remove brick operation by selecting Status from the Activities column.
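The start/status/commit workflow maps onto the gluster CLI remove-brick subcommands. A sketch, assuming a hypothetical volume repvol and the bricks being removed on server3 and server4:

```shell
# Start removal with data migration.
gluster volume remove-brick repvol \
    server3:/rhgs/brick1/repvol server4:/rhgs/brick1/repvol start

# Poll migration progress until it reports "completed".
gluster volume remove-brick repvol \
    server3:/rhgs/brick1/repvol server4:/rhgs/brick1/repvol status

# Commit only after migration has completed.
gluster volume remove-brick repvol \
    server3:/rhgs/brick1/repvol server4:/rhgs/brick1/repvol commit
```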
5.6.2.1. Stopping a Remove Brick Operation
Note
- Stop remove-brick operation is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
- Files which were migrated during the Remove Brick operation are not migrated back to the same brick when the operation is stopped.
Procedure 5.10. Stopping a Remove Brick Operation
- Click the Volumes tab. A list of volumes displays.
- In the Activities column, select the drop-down of the remove-brick icon corresponding to the volume on which to stop the remove brick operation.
- Click Stop to stop the remove brick operation. The operation is stopped and the remove-brick icon in the Activities column is updated. The remove brick status is displayed after stopping. You can also view the status of the Remove Brick operation by selecting Status from the drop-down of the remove-brick icon in the Activities column of the Volumes and Bricks sub-tabs.
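The equivalent gluster CLI command, sketched for the same hypothetical volume and bricks as above:

```shell
# Abort an in-progress brick removal; already-migrated files stay where they are.
gluster volume remove-brick repvol \
    server3:/rhgs/brick1/repvol server4:/rhgs/brick1/repvol stop
```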
5.6.2.2. Viewing Remove Brick Status
Procedure 5.11. Viewing Remove Brick Status
- Click the Volumes tab. A list of volumes displays.
- In the Activities column, click the arrow corresponding to the volume.
- Click Status to view the status of the remove brick operation. The Remove Bricks Status window displays.
Figure 5.5. Remove Brick Status
- Click one of the following options:
- Stop to stop the remove brick operation.
- Commit to commit the remove brick operation.
- Retain to retain the brick selected for removal.
- Close to close the Remove Bricks Status window.
5.6.2.3. Retaining a Brick Selected for Removal
Procedure 5.12. Retaining a Brick selected for Removal
- Click the Volumes tab. A list of volumes displays.
- In the Activities column, click the arrow corresponding to the volume.
- Click Retain to retain the brick selected for removal. The brick is not removed and the status of the operation is displayed in the remove-brick icon in the Activities column. You can also check the status by selecting the Status option from the drop-down of the remove-brick icon in the Activities column.
5.6.3. Viewing Advanced Details
Procedure 5.13. Viewing Advanced Details
- Click the Volumes tab. A list of volumes displays.
- Select the required volume and click the Bricks tab from the Details pane.
- Select the brick and click Advanced Details. The Brick Advanced Details window displays.
Figure 5.6. Brick Advanced Details
Table 5.2. Brick Details
Field/Tab | Description/Action |
---|---|
General | Displays additional information about the bricks. |
Clients | Displays a list of clients accessing the volumes. |
Memory Statistics/Memory Pool | Displays the details of memory usage and memory pool for the bricks. |
5.7. Volumes Permissions
Procedure 5.14. Assigning a System Administrator Role for a Volume
- Click the Volumes tab. A list of volumes displays.
- Select the volume to edit, and click the Permissions tab from the Details pane. The Permissions tab lists users and their current roles and permissions, if any.
Figure 5.7. Volume Permissions
- Click Add to add an existing user. The Add Permission to User window is displayed. Enter a name, a user name, or part thereof in the Search text box, and click Go. A list of possible matches displays in the results list.
- Select the check box of the user to be assigned the permissions. Scroll through the Role to Assign list and select GlusterAdmin.
Figure 5.8. Assign GlusterAdmin Permission
- Click OK. The name of the user displays in the Permissions tab, with an icon and the assigned role.
Procedure 5.15. Removing a System Administrator Role
- Click the Volumes tab. A list of volumes displays.
- Select the required volume and click the Permissions tab from the Details pane. The Permissions tab lists users and their current roles and permissions, if any. The Super User and Cluster Administrator, if any, display in the Inherited Permissions tab. However, none of these higher-level roles can be removed.
- Select the appropriate user.
- Click Remove. A window is displayed, prompting to confirm removing the user. Click OK. The user is removed from the Permissions tab.
5.8. Rebalancing Volumes
- Start Rebalance
- Stop Rebalance
- View Rebalance Status
5.8.1. Start Rebalance
- Click the Volumes tab. The Volumes tab is displayed with the list of all volumes in the system.
- Select the volume that you want to rebalance.
- Click Rebalance. The rebalance process starts and the rebalance icon is displayed in the Activities column of the volume. A tooltip is displayed indicating that rebalance is in progress. You can view the rebalance status by selecting Status from the rebalance drop-down list.
Note
The rebalance icon disappears 10 minutes after the rebalance operation completes.
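Starting a rebalance from the Console corresponds to the gluster CLI rebalance command, sketched here for a hypothetical volume repvol:

```shell
# Redistribute data across all bricks after the volume layout changes.
gluster volume rebalance repvol start
```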
5.8.2. Stop Rebalance
- Click the Volumes tab. The Volumes tab is displayed with the list of all volumes in the system.
- Select the volume on which rebalance needs to be stopped.
Note
- You cannot stop rebalance for multiple volumes at once.
- Rebalance can be stopped only if it is in progress.
- In the Activities column, select the drop-down of the Rebalance icon corresponding to the volume.
- Click Stop. The Stop Rebalance window is displayed.
- Click OK to stop rebalance. The rebalance is stopped and the status window is displayed. You can also check the status of the rebalance operation by selecting the Status option from the drop-down of the Rebalance icon in the Activities column.
5.8.3. View Rebalance Status
- Click the Volumes tab. The Volumes tab is displayed with the list of all volumes in the system.
- Select the volume on which rebalance is in progress, stopped, or completed.
- Click the Status option from the Rebalance icon drop-down list. The Rebalance Status page is displayed.
Figure 5.9. Rebalance Status
Note
If the Rebalance Status window is open while rebalance is stopped using the CLI, the status is displayed as Stopped. If the Rebalance Status window is not open, the task status is displayed as Unknown, as the status update depends on the gluster CLI. You can also stop the rebalance operation by clicking Stop in the Rebalance Status window.
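The CLI counterparts of the status and stop actions, sketched for a hypothetical volume repvol:

```shell
# Show per-node progress (files scanned, rebalanced, failures).
gluster volume rebalance repvol status

# Abort an in-progress rebalance.
gluster volume rebalance repvol stop
```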
Chapter 6. Managing Gluster Hooks
- View a list of hooks available in the hosts.
- View the content and status of hooks.
- Enable or disable hooks.
- Resolve hook conflicts.
6.1. Viewing the list of Hooks
Figure 6.1. Gluster Hooks
6.2. Viewing the Content of Hooks
Procedure 6.1. Viewing the Content of a Hook
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Select a hook with content type Text and click View Content. The Hook Content window displays with the content of the hook.
Figure 6.2. Hook Content
6.3. Enabling or Disabling Hooks
Procedure 6.2. Enabling or Disabling a Hook
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Select the hook and click Enable or Disable. If Disable is selected, the Disable Gluster Hooks dialog box displays, prompting you to confirm disabling the hook. Click OK to confirm. The hook is enabled or disabled on all nodes of the cluster, and the status update displays in the Gluster Hooks sub-tab.
6.4. Refreshing Hooks
Procedure 6.3. Refreshing a Hook
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Click Sync. The hooks are synchronized and displayed.
6.5. Resolving Conflicts
- Content Conflict - the content of the hook is different across servers.
- Status Conflict - the status of the hook is different across servers.
- Missing Conflict - one or more servers of the cluster do not have the hook.
- Content + Status Conflict - both the content and status of the hook are different across servers.
- Content + Status + Missing Conflict - both the content and status of the hook are different across servers, or one or more servers of the cluster do not have the hook.
6.5.1. Resolving Missing Hook Conflicts
Procedure 6.4. Resolving a Missing Hook Conflict
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Select a hook causing a conflict and click Resolve Conflicts. The Resolve Conflicts window displays.
Figure 6.3. Missing Hook Conflict
- Select one of the options given below:
- Copy the hook to all the servers to copy the hook to all servers.
- Remove the missing hook to remove the hook from all servers and the engine.
- Click OK. The conflict is resolved.
6.5.2. Resolving Content Conflicts
Procedure 6.5. Resolving a Content Conflict
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Select the conflicted hook and click Resolve Conflicts. The Resolve Conflicts window displays.
Figure 6.4. Content Conflict
- Select an option from the Use Content from drop-down list:
- Select a server to copy the content of the hook from the selected server, or
- Select Engine (Master) to copy the content of the hook from the engine copy.
Note
The content of the hook will be overwritten in all servers and in the engine.
- Click OK. The conflict is resolved.
6.5.3. Resolving Status Conflicts
Procedure 6.6. Resolving a Status Conflict
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Select the conflicted hook and click Resolve Conflicts. The Resolve Conflicts window displays.
Figure 6.5. Status Conflict
- Set Hook Status to Enable or Disable.
- Click OK. The conflict is resolved.
6.5.4. Resolving Content and Status Conflicts
Procedure 6.7. Resolving a Content and Status Conflict
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Select a hook causing a conflict and click Resolve Conflicts. The Resolve Conflicts window displays.
- Select an option from the Use Content from drop-down list to resolve the content conflict:
- Select a server to copy the content of the hook from the selected server, or
- Select Engine (Master) to copy the content of the hook from the engine copy.
Note
The content of the hook will be overwritten in all the servers and in the engine.
- Set Hook Status to Enable or Disable to resolve the status conflict.
- Click OK. The conflict is resolved.
6.5.5. Resolving Content, Status, and Missing Conflicts
Procedure 6.8. Resolving a Content, Status and Missing Conflict
- Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
- Select the conflicted hook and click Resolve Conflicts. The Resolve Conflicts window displays.
- Select one of the options given below to resolve the missing conflict:
- Copy the hook to all the servers.
- Remove the missing hook.
- Select an option from the Use Content from drop-down list to resolve the content conflict:
- Select a server to copy the content of the hook from the selected server, or
- Select Engine (Master) to copy the content of the hook from the engine copy.
Note
The content of the hook will be overwritten in all the servers and in the engine.
- Set Hook Status to Enable or Disable to resolve the status conflict.
- Click OK. The conflict is resolved.
Chapter 7. Users
7.1. Directory Services Support in Red Hat Storage Console
Red Hat Storage Console provides a default administrative account named admin. This account is intended for use when initially configuring the environment, and for troubleshooting. To add other users to Red Hat Storage Console you will need to attach a directory server to the Console using the Domain Management Tool, rhsc-manage-domains.
Users log in as user@domain. Attachment of more than one directory server to the Console is also supported.
- Active Directory
- Identity Management (IdM)
- Red Hat Directory Server (RHDS)
- A valid pointer record (PTR) for the directory server's reverse look-up address.
- A valid service record (SRV) for LDAP over TCP port 389.
- A valid service record (SRV) for Kerberos over TCP port 88.
- A valid service record (SRV) for Kerberos over UDP port 88.
rhsc-manage-domains.
- Active Directory - http://technet.microsoft.com/en-us/windowsserver/dd448614.
- Red Hat Directory Server (RHDS) Documentation - https://access.redhat.com/site/documentation/en-US/Red_Hat_Directory_Server/
Note
- Configure the memberOf plug-in for RHDS to allow group membership. In particular, ensure that the value of the memberofgroupattr attribute of the memberOf plug-in is set to uniqueMember. Consult the Red Hat Directory Server Plug-in Guide for more information on configuring the memberOf plug-in.
- Define the directory server as a service of the form ldap/hostname@REALMNAME in the Kerberos realm. Replace hostname with the fully qualified domain name associated with the directory server and REALMNAME with the fully qualified Kerberos realm name. The Kerberos realm name must be specified in capital letters.
- Generate a keytab file for the directory server in the Kerberos realm. The keytab file contains pairs of Kerberos principals and their associated encrypted keys. These keys allow the directory server to authenticate itself with the Kerberos realm. Consult the documentation for your Kerberos principal for more information on generating a keytab file.
- Install the keytab file on the directory server. Then configure RHDS to recognize the keytab file and accept Kerberos authentication using GSSAPI. Consult the Red Hat Directory Server Administration Guide for more information on configuring RHDS to use an external keytab file.
- Test the configuration on the directory server by using the kinit command to authenticate as a user defined in the Kerberos realm. Once authenticated, run the ldapsearch command against the directory server. Use the -Y GSSAPI parameter to ensure the use of Kerberos for authentication.
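The verification step above can be sketched as follows; the user, realm, server, and base DN are hypothetical placeholders for your environment:

```shell
# Obtain a Kerberos ticket for a realm user.
kinit jsmith@EXAMPLE.COM

# Query the directory server, authenticating via Kerberos (GSSAPI).
ldapsearch -Y GSSAPI -H ldap://rhds.example.com \
    -b "dc=example,dc=com" "(uid=jsmith)"
```

A successful search confirms that GSSAPI authentication against the Kerberos realm is working.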
7.2. Authorization Model
- The user performing the action
- The type of action being performed
- The object on which the action is being performed
For an action to be successfully performed, the user must have the appropriate permission for the object being acted upon. Each type of action corresponds to a permission. There are many different permissions in the system, so for simplicity they are grouped together in roles.
Figure 7.1. Actions
Permissions enable users to perform actions on objects, where objects are either individual objects or container objects.
Figure 7.2. Permissions & Roles
7.3. User Properties
7.3.1. Roles
Users can be assigned an administrator role. The privileges provided by this role are shown in this section.
Administrator Role
- Allows access to the Administration Portal for managing servers and volumes. For example, if a user has an administrator role on a cluster, they can manage all servers in the cluster using the Administration Portal.
Table 7.1. Red Hat Storage Console System Administrator Roles
Role | Privileges | Notes |
---|---|---|
SuperUser | Full permissions across all objects and levels | Can manage all objects across all clusters. |
ClusterAdmin | Cluster Administrator | Can use, create, delete, and manage all resources in a specific cluster, including servers and volumes. |
GlusterAdmin | Gluster Administrator | Can create, delete, configure and manage a specific volume. Can also add or remove hosts. |
HostAdmin | Host Administrator | Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host. |
NetworkAdmin | Network Administrator | Can configure and manage networks attached to servers. |
7.3.2. Permissions
Table 7.2. Permissions Actions on Objects
Object | Action |
---|---|
System - Configure RHS-C | Manipulate Users, Manipulate Permissions, Manipulate Roles, Generic Configuration |
Cluster - Configure Cluster | Create, Delete, Edit Cluster Properties, Edit Network |
Server - Configure Server | Create, Delete, Edit Host Properties, Manipulate Status, Edit Network |
Gluster Storage - Configure Gluster Storage | Create, Delete, Edit Volumes, Volume Options, Manipulate Status |
7.4. Users Operations
7.4.1. Adding Users and Groups
Adding Users
- Click the Users tab. The list of authorized users for Red Hat Storage Console displays.
- Click Add. The Add Users and Groups dialog box displays.
Figure 7.3. Add Users and Groups Dialog Box
- The default Search domain displays. If there are multiple search domains, select the appropriate search domain. Enter a name or part of a name in the search text field, and click Go. Alternatively, click Go to view a list of all users and groups.
- Select the check boxes of the group, user, or users to add. The added user displays on the Users tab.
Users are not created from within Red Hat Storage Console; the Console accesses user information from the organization's Directory Service. This means that you can only assign roles to users who already exist in your Directory Services domain. To assign permissions to users, use the Permissions tab on the Details pane of the relevant resource.
Example 7.1. Assigning a user permissions to use a particular server
To view general user information:
- Click the Users tab. The list of authorized users for Red Hat Storage Console displays.
- Select the user, or perform a search if the user is not visible on the results list.
- The Details pane displays for the selected user, usually with the General tab displaying general information, such as the domain name, email, and status of the user.
- The other tabs allow you to view groups, permissions, and events for the user.For example, to view the groups to which the user belongs, click the Directory Groups tab.
7.4.2. Removing Users
To remove a user:
- Click the Users tab. The list of authorized users for Red Hat Storage Console displays.
Figure 7.4. Users Tab
- Select the user to be removed.
- Click the Remove button. A message displays prompting you to confirm the removal.
- Click OK.
- The user is removed from Red Hat Storage Console.
7.5. Event Notifications
7.5.1. Managing Event Notifiers
To set up event notifications:
- Click the Users tab. The list of authorized users for Red Hat Storage Console displays.
- Select the user who requires notification, or perform a search if the user is not visible on the results list.
- Click the Event Notifier tab. The Event Notifier tab displays a list of events for which the user will be notified, if any.
- Click the Manage Events button. The Add Event Notification dialog box displays a list of events for Services, Hosts, Volumes, Hooks, and General Management events. You can select all, or pick individual events from the list. Click the Expand All button to see complete lists of events.
Figure 7.5. The Add Events Dialog Box
- Enter an email address in the Mail Recipient: field.
- Click OK to save changes and close the window. The selected events display on the Event Notifier tab for the user.
- Configure the ovirt-engine-notifier service on the Red Hat Storage Console.
Important
The MAIL_SERVER parameter is mandatory. The event notifier configuration file can be found in /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf. The parameters for event notifications in ovirt-engine-notifier.conf are listed in Table 7.3, “ovirt-engine-notifier.conf variables”.
Table 7.3. ovirt-engine-notifier.conf variables
Variable name | Default | Remarks |
---|---|---|
INTERVAL_IN_SECONDS | 120 | The interval in seconds between instances of dispatching messages to subscribers. |
MAIL_SERVER | none | The SMTP mail server address. Required. |
MAIL_PORT | 25 | The default port of a non-secured SMTP server is 25. The default port of a secured SMTP server (one with SSL enabled) is 465. |
MAIL_USER | none | If SSL is enabled to authenticate the user, then this variable must be set. This variable is also used to specify the "from" user address when the MAIL_FROM variable is not set. Some mail servers do not support this functionality. The address is in RFC822 format. |
MAIL_PASSWORD | none | This variable is required to authenticate the user if the mail server requires authentication or if SSL is enabled. |
MAIL_ENABLE_SSL | false | This indicates whether SSL should be used to communicate with the mail server. |
HTML_MESSAGE_FORMAT | false | The mail server sends messages in HTML format if this variable is set to "true". |
MAIL_FROM | none | This variable specifies a "from" address in RFC822 format, if supported by the mail server. |
MAIL_REPLY_TO | none | This variable specifies "reply-to" addresses in RFC822 format on sent mail, if supported by the mail server. |
DAYS_TO_KEEP_HISTORY | none | This variable sets the number of days dispatched events will be preserved in the history table. If this variable is not set, events remain on the history table indefinitely. |
DAYS_TO_SEND_ON_STARTUP | 0 | This variable specifies the number of days of old events that are processed and sent when the notifier starts. If set to 2, for example, the notifier will process and send the events of the last two days. Older events are marked as processed and are not sent. The default is 0, so no old messages are sent at startup. |
- Start the ovirt-engine-notifier service on the Red Hat Storage Console. This activates the changes you have made:
# /etc/init.d/ovirt-engine-notifier start
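A minimal ovirt-engine-notifier.conf sketch using the variables from Table 7.3; the server addresses and values are illustrative, not defaults from this guide:

```shell
# /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf
MAIL_SERVER=smtp.example.com        # mandatory: SMTP server address
MAIL_PORT=25                        # 465 if MAIL_ENABLE_SSL=true
MAIL_FROM=rhsc-notifier@example.com # "from" address, RFC822 format
INTERVAL_IN_SECONDS=120             # dispatch interval
DAYS_TO_KEEP_HISTORY=30             # prune dispatched events after 30 days
```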
To cancel event notification:
- In the Users tab, select the user or the user group.
- Select the Event Notifier tab. The Details pane displays the events for which the user will receive notifications.
- Click the Manage Events button. The Add Event Notification dialog box displays a list of events for Servers, Gluster Volume, and General Management events. To remove an event notification, deselect events from the list. Click the Expand All button to see the complete lists of events.
- Click OK. The deselected events are removed from the display on the Event Notifier tab for the user.
Part III. Monitoring
Chapter 8. Monitoring Red Hat Storage Console
8.1. Viewing the Event List
Figure 8.1. Event List - Advanced View
The following columns describe each entry in the Event list:
Column | Description |
---|---|
Event | The type of event. The possible event types are: Audit notification (e.g. log on), Warning notification, Error notification. |
Time | The time that the event occurred. |
Message | The message describing that an event occurred. |
User | The user that received the event. |
Host | The host on which the event occurred. |
Cluster | The cluster on which the event occurred. |
8.2. Viewing Alert Information
Chapter 9. Monitoring Red Hat Storage using Nagios
Figure 9.1. Nagios on Red Hat Storage Console Server
9.1. Configuring Nagios
Note
Before running the configure-gluster-nagios command, ensure that all the Red Hat Storage nodes are configured.
- Execute the configure-gluster-nagios command manually, only the first time, with the cluster name and host address on the Nagios server:
# configure-gluster-nagios -c cluster-name -H HostName-or-IP-address
For -c, provide a cluster name (a logical name for the cluster); for -H, provide the host name or IP address of a node in the Red Hat Storage trusted storage pool.
- Perform the steps given below when the configure-gluster-nagios command runs:
- Confirm the configuration when prompted.
- Enter the current Nagios server host name or IP address to be configured on all the nodes.
- Confirm restarting the Nagios server when prompted.
# configure-gluster-nagios -c demo-cluster -H HostName-or-IP-address Cluster configurations changed Changes : Hostgroup demo-cluster - ADD Host demo-cluster - ADD Service - Volume Utilization - vol-1 -ADD Service - Volume Self-Heal - vol-1 -ADD Service - Volume Status - vol-1 -ADD Service - Volume Utilization - vol-2 -ADD Service - Volume Status - vol-2 -ADD Service - Cluster Utilization -ADD Service - Cluster - Quorum -ADD Service - Cluster Auto Config -ADD Host Host_Name - ADD Service - Brick Utilization - /bricks/vol-1-5 -ADD Service - Brick - /bricks/vol-1-5 -ADD Service - Brick Utilization - /bricks/vol-1-6 -ADD Service - Brick - /bricks/vol-1-6 -ADD Service - Brick Utilization - /bricks/vol-2-3 -ADD Service - Brick - /bricks/vol-2-3 -ADD Are you sure, you want to commit the changes? (Yes, No) [Yes]: Enter Nagios server address [Nagios_Server_Address]: Cluster configurations synced successfully from host ip-address Do you want to restart Nagios to start monitoring newly discovered entities? (Yes, No) [Yes]: Nagios re-started successfully
All the hosts, volumes and bricks are added and displayed.
- Log in to the Nagios server GUI using the following URL:
https://NagiosServer-HostName-or-IPaddress/nagios
Note
- The default Nagios user name and password is nagiosadmin / nagiosadmin.
- You can manually update or discover the services by executing the configure-gluster-nagios command or by running the Cluster Auto Config service through the Nagios Server GUI.
- If the node with which auto-discovery was performed is down or removed from the cluster, run the configure-gluster-nagios command with a different node address to continue discovering or monitoring the nodes and services.
- If new nodes or services are added or removed, or if a snapshot restore was performed on a Red Hat Storage node, run the configure-gluster-nagios command again.
9.2. Configuring Nagios Server to Send Mail Notifications
- In the /etc/nagios/gluster/gluster-contacts.cfg file, add contacts to which mail must be sent, in the format shown below. Modify contact_name, alias, and email:
define contact {
        contact_name                  Contact1
        alias                         ContactNameAlias
        email                         email-address
        service_notification_period   24x7
        service_notification_options  w,u,c,r,f,s
        service_notification_commands notify-service-by-email
        host_notification_period      24x7
        host_notification_options     d,u,r,f,s
        host_notification_commands    notify-host-by-email
}
define contact {
        contact_name                  Contact2
        alias                         ContactNameAlias2
        email                         email-address
        service_notification_period   24x7
        service_notification_options  w,u,c,r,f,s
        service_notification_commands notify-service-by-email
        host_notification_period      24x7
        host_notification_options     d,u,r,f,s
        host_notification_commands    notify-host-by-email
}
The service_notification_options directive defines the service states for which notifications can be sent to this contact. Valid options are a combination of one or more of the following:
- w: Notify on WARNING service states
- u: Notify on UNKNOWN service states
- c: Notify on CRITICAL service states
- r: Notify on service RECOVERY (OK states)
- f: Notify when the service starts and stops FLAPPING
- n (none): Do not notify the contact of any service notifications
The host_notification_options directive defines the host states for which notifications can be sent to this contact. Valid options are a combination of one or more of the following:
- d: Notify on DOWN host states
- u: Notify on UNREACHABLE host states
- r: Notify on host RECOVERY (UP states)
- f: Notify when the host starts and stops FLAPPING
- s: Send notifications when host or service scheduled downtime starts and ends
- n (none): Do not notify the contact of any host notifications
Note
By default, a contact and a contact group are defined for administrators in contacts.cfg, and all the services and hosts notify the administrators. Add a suitable email address for the administrator in the contacts.cfg file.
- To add a group to which mail must be sent, add the details as shown below:
define contactgroup {
        contactgroup_name Group1
        alias             GroupAlias
        members           Contact1,Contact2
}
- In the /etc/nagios/gluster/gluster-templates.cfg file, specify the contact name and contact group name for the services for which notifications must be sent, as shown below. Add the contact_groups name and contacts name:
define host {
        name                  gluster-generic-host
        use                   linux-server
        notifications_enabled 1
        notification_period   24x7
        notification_interval 120
        notification_options  d,u,r,f,s
        register              0
        contact_groups        Group1
        contacts              Contact1,Contact2
}
define service {
        name                  gluster-service
        use                   generic-service
        notifications_enabled 1
        notification_period   24x7
        notification_options  w,u,c,r,f,s
        notification_interval 120
        register              0
        _gluster_entity       Service
        contact_groups        Group1
        contacts              Contact1,Contact2
}
You can configure notifications for individual services by editing the corresponding node configuration file. For example, to configure notifications for the brick service, edit the corresponding node configuration file as shown below:
define service {
        use                       brick-service
        _VOL_NAME                 VolumeName
        __GENERATED_BY_AUTOCONFIG 1
        notes                     Volume : VolumeName
        host_name                 RedHatStorageNodeName
        _BRICK_DIR                brickpath
        service_description       Brick Utilization - brickpath
        contact_groups            Group1
        contacts                  Contact1,Contact2
}
- To receive detailed information on every update when Cluster Auto-Config is run, edit the /etc/nagios/objects/commands.cfg file and add $NOTIFICATIONCOMMENT$\n after the $SERVICEOUTPUT$\n option in the notify-service-by-email and notify-host-by-email command definitions, as shown below:
# 'notify-service-by-email' command definition
define command{
        command_name    notify-service-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n $NOTIFICATIONCOMMENT$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
        }
- Restart the Nagios server using the following command:
# service nagios restart
Note
- By default, the system checks for three occurrences of the event before sending a mail notification.
- By default, Nagios mail notifications are sent using the /bin/mail command. To change this, modify the definitions of the notify-host-by-email and notify-service-by-email commands in the /etc/nagios/objects/commands.cfg file and configure the mail server accordingly.
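For example, a sketch of a notify-service-by-email definition that uses /usr/bin/mailx in place of /bin/mail. This is illustrative only; it assumes mailx is installed at that path and accepts the same -s subject recipient syntax:

```
# 'notify-service-by-email' command definition using mailx (illustrative)
define command{
        command_name    notify-service-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mailx -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
        }
```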
9.3. Verifying the Configuration
- Verify the updated configuration using the following command:
# nagios -v /etc/nagios/nagios.cfg
If an error occurs, verify the parameters set in /etc/nagios/nagios.cfg and update the configuration files.
- Restart the Nagios server using the following command:
# service nagios restart
- Log in to the Nagios server GUI using the following URL with the Nagios Administrator user name and password:
https://NagiosServer-HostName-or-IPaddress/nagios
Note
To change the default password, see the Changing Nagios Password section in the Red Hat Storage Administration Guide.
- Click Services in the left pane of the Nagios server GUI and verify the list of hosts and services displayed.
Figure 9.2. Nagios Services
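The validate-then-restart steps above can be combined into a small guard so that Nagios is restarted only when its configuration passes the check. This is a sketch: restart_if_valid is a hypothetical helper, and the real commands are the nagios -v and service invocations shown above.

```shell
# Restart a service only when its configuration check succeeds.
# check_cmd and restart_cmd are passed as strings and word-split on purpose.
restart_if_valid() {
    check_cmd=$1
    restart_cmd=$2
    if $check_cmd; then
        $restart_cmd
    else
        echo "configuration check failed; not restarting" >&2
        return 1
    fi
}

# On the Nagios server this would be invoked as:
# restart_if_valid "nagios -v /etc/nagios/nagios.cfg" "service nagios restart"
```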
9.4. Using Nagios Server GUI
Log in to the Nagios server GUI using the following URL:
https://NagiosServer-HostName-or-IPaddress/nagios
Figure 9.3. Nagios Login
To view the overview of the hosts and services being monitored, click Tactical Overview in the left pane. The overview of Network Outages, Hosts, Services, and Monitoring Features are displayed.
Figure 9.4. Tactical Overview
To view the status summary of all the hosts, click Summary under Host Groups in the left pane.
Figure 9.5. Host Groups Summary
Figure 9.6. Host Status
To view the list of all hosts and their service status, click Services in the left pane.
Figure 9.7. Service Status
Note
- Click Hosts in the left pane. The list of hosts is displayed.
- Click the icon corresponding to the host name to view the host details.
- Select the service name to view the Service State Information. You can view the utilization of the following services:
- Memory
- Swap
- CPU
- Network
- Brick
- Disk
The Brick/Disk Utilization performance data has four sets of information for every mount point: brick/disk space detail, inode detail of the brick/disk, thin pool utilization, and thin pool metadata utilization if the brick/disk is made up of a thin LV.
The performance data for services is displayed in the following format: value[UnitOfMeasurement];warningthreshold;criticalthreshold;min;max
For example:
Performance Data: /bricks/brick2=31.596%;80;90;0;0.990 /bricks/brick2.inode=0.003%;80;90;0;1048064 /bricks/brick2.thinpool=19.500%;80;90;0;1.500 /bricks/brick2.thinpool-metadata=4.100%;80;90;0;0.004
As part of the disk utilization service, the following mount points are monitored, if available: /, /boot, /home, /var, and /usr.
- To view the utilization graph, click the icon corresponding to the service name. The utilization graph is displayed.
Figure 9.8. CPU Utilization
- To monitor status, click on the service name. You can monitor the status for the following resources:
- Disk
- Network
- To monitor process, click on the process name. You can monitor the following processes:
- NFS (Network File System)
- Self-Heal (Self Heal)
- Gluster Management (glusterd)
- Quota (Quota daemon)
- CTDB
- SMB
Note
Monitoring Openstack Swift operations is not supported.
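The performance data format described above (value[UnitOfMeasurement];warningthreshold;criticalthreshold;min;max) can be pulled apart with standard shell parameter expansion. A minimal sketch using the example token from above:

```shell
# Split one performance-data token of the form
# label=value[UoM];warn;crit;min;max into its fields.
perf='/bricks/brick2=31.596%;80;90;0;0.990'
label=${perf%%=*}        # text before '='  -> /bricks/brick2
rest=${perf#*=}          # value;warn;crit;min;max
IFS=';' read -r value warn crit min max <<< "$rest"
echo "$label: value=$value warn=$warn% crit=$crit%"
# -> /bricks/brick2: value=31.596% warn=80% crit=90%
```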
- Click Hosts in the left pane. The list of hosts and clusters is displayed.
- Click the icon corresponding to the cluster name to view the cluster details.
- To view the utilization graph, click the icon corresponding to the service name. You can monitor the following utilizations:
- Cluster
- Volume
Figure 9.9. Cluster Utilization
- To monitor status, click on the service name. You can monitor the status for the following resources:
- Host
- Volume
- Brick
- To monitor cluster services, click on the service name. You can monitor the following:
- Volume Quota
- Volume Geo-replication
- Volume Self Heal
- Cluster Quorum (A cluster quorum service would be present only when there are volumes in the cluster.)
If new nodes or services are added or removed, or if a snapshot restore is performed on a Red Hat Storage node, reschedule the Cluster Auto Config service using the Nagios Server GUI or execute the configure-gluster-nagios command. To synchronize the configurations using the Nagios Server GUI, perform the steps given below:
- Log in to the Nagios Server GUI using the following URL in your browser with the nagiosadmin user name and password.
https://NagiosServer-HostName-or-IPaddress/nagios
- Click Services in the left pane of the Nagios server GUI and click Cluster Auto Config.
Figure 9.10. Nagios Services
- In Service Commands, click Re-schedule the next check of this service. The Command Options window is displayed.
Figure 9.11. Service Commands
- In Command Options window, click Commit.
Figure 9.12. Command Options
You can enable or disable Host and Service notifications through Nagios GUI.
- To enable or disable Host Notifications:
- Log in to the Nagios Server GUI using the following URL in your browser with the nagiosadmin user name and password.
https://NagiosServer-HostName-or-IPaddress/nagios
- Click Hosts in the left pane of the Nagios server GUI and select the host.
- Click Enable notifications for this host or Disable notifications for this host in the Host Commands section.
- Click Commit to enable or disable notifications for the selected host.
- To enable or disable Service Notifications:
- Log in to the Nagios Server GUI.
- Click Services in the left pane of the Nagios server GUI and select the service to enable or disable.
- Click Enable notifications for this service or Disable notifications for this service from the Service Commands section.
- Click Commit to enable or disable the selected service notification.
- To enable or disable all Service Notifications for a host:
- Log in to the Nagios Server GUI.
- Click Hosts in the left pane of the Nagios server GUI and select the host to enable or disable all service notifications.
- Click Enable notifications for all services on this host or Disable notifications for all services on this host from the Service Commands section.
- Click Commit to enable or disable all service notifications for the selected host.
- To enable or disable all Notifications:
- Log in to the Nagios Server GUI.
- Click Process Info under the Systems section in the left pane of the Nagios server GUI.
- Click Enable notifications or Disable notifications in the Process Commands section.
- Click Commit.
Using the Nagios GUI, you can enable monitoring for a service or disable monitoring for a service that is currently monitored.
- To enable Service Monitoring:
- Log in to the Nagios Server GUI using the following URL in your browser with the nagiosadmin user name and password.
https://NagiosServer-HostName-or-IPaddress/nagios
- Click Services in the left pane of the Nagios server GUI and select the service for which to enable monitoring.
- Click Enable active checks of this service from the Service Commands and click Commit.
- Click Start accepting passive checks for this service from the Service Commands and click Commit. Monitoring is enabled for the selected service.
- To disable Service Monitoring:
- Log in to the Nagios Server GUI using the following URL in your browser with the nagiosadmin user name and password.
https://NagiosServer-HostName-or-IPaddress/nagios
- Click Services in the left pane of the Nagios server GUI and select the service for which to disable monitoring.
- Click Disable active checks of this service from the Service Commands and click Commit.
- Click Stop accepting passive checks for this service from the Service Commands and click Commit. Monitoring is disabled for the selected service.
Note
Table 9.1.
Service Name | Status | Message | Description |
---|---|---|---|
SMB | OK | OK: No gluster volume uses smb | When no volumes are exported through smb. |
OK | Process smb is running | When SMB service is running and when volumes are exported using SMB. | |
CRITICAL | CRITICAL: Process smb is not running | When SMB service is down and one or more volumes are exported through SMB. | |
CTDB | UNKNOWN | CTDB not configured | When CTDB service is not running, and smb or nfs service is running. |
CRITICAL | Node status: BANNED/STOPPED | When CTDB service is running but Node status is BANNED/STOPPED. | |
WARNING | Node status: UNHEALTHY/DISABLED/PARTIALLY_ONLINE | When CTDB service is running but Node status is UNHEALTHY/DISABLED/PARTIALLY_ONLINE. | |
OK | Node status: OK | When CTDB service is running and healthy. | |
Gluster Management | OK | Process glusterd is running | When a single glusterd process is running. |
WARNING | PROCS WARNING: 3 processes | When more than one glusterd process is running. | |
CRITICAL | CRITICAL: Process glusterd is not running | When there is no glusterd process running. | |
UNKNOWN | NRPE: Unable to read output | When unable to communicate with or read the output. | |
NFS | OK | OK: No gluster volume uses nfs | When no volumes are configured to be exported through NFS. |
OK | Process glusterfs-nfs is running | When glusterfs-nfs process is running. | |
CRITICAL | CRITICAL: Process glusterfs-nfs is not running | When the glusterfs-nfs process is down and there are volumes which require NFS export. | |
Self-Heal | OK | Gluster Self Heal Daemon is running | When self-heal process is running. |
OK | OK: Process Gluster Self Heal Daemon | ||
CRITICAL | CRITICAL: Gluster Self Heal Daemon not running | When gluster self heal process is not running. | |
Auto-Config | OK | Cluster configurations are in sync | When auto-config has not detected any change in Gluster configuration. This shows that Nagios configuration is already in synchronization with the Gluster configuration and auto-config service has not made any change in Nagios configuration. |
OK | Cluster configurations synchronized successfully from host host-address | When auto-config has detected a change in the Gluster configuration and has successfully updated the Nagios configuration to reflect the changed Gluster configuration. | |
CRITICAL | Can't remove all hosts except sync host in 'auto' mode. Run auto discovery manually. | When the host used for auto-config itself is removed from the Gluster peer list. Auto-config detects this as all hosts except the synchronized host being removed from the cluster. The Nagios configuration is not changed, and the user needs to run auto-config manually. | |
QUOTA | OK | OK: Quota not enabled | When quota is not enabled in any volumes. |
OK | Process quotad is running | When glusterfs-quota service is running. | |
CRITICAL | CRITICAL: Process quotad is not running | When glusterfs-quota service is down and quota is enabled for one or more volumes. | |
CPU Utilization | OK | CPU Status OK: Total CPU:4.6% Idle CPU:95.40% | When CPU usage is less than 80%. |
WARNING | CPU Status WARNING: Total CPU:82.40% Idle CPU:17.60% | When CPU usage is more than 80%. | |
CRITICAL | CPU Status CRITICAL: Total CPU:97.40% Idle CPU:2.6% | When CPU usage is more than 90%. | |
Memory Utilization | OK | OK- 65.49% used(1.28GB out of 1.96GB) | When used memory is below warning threshold. (Default warning threshold is 80%) |
WARNING | WARNING- 85% used(1.78GB out of 2.10GB) | When used memory is below critical threshold (Default critical threshold is 90%) and greater than or equal to warning threshold (Default warning threshold is 80%). | |
CRITICAL | CRITICAL- 92% used(1.93GB out of 2.10GB) | When used memory is greater than or equal to critical threshold (Default critical threshold is 90% ) | |
Brick Utilization | OK | OK | When utilization for all four parameters (space detail, inode detail, thin pool, and thin pool metadata) is below the warning threshold of 80%. |
WARNING | WARNING:mount point /brick/brk1 Space used (0.857 / 1.000) GB | When any of the four parameters (space detail, inode detail, thin pool utilization, and thin pool metadata utilization) crosses the warning threshold of 80% (default is 80%). | |
CRITICAL | CRITICAL : mount point /brick/brk1 (inode used 9980/1000) | When any of the four parameters (space detail, inode detail, thin pool utilization, and thin pool metadata utilization) crosses the critical threshold of 90% (default is 90%). | |
Disk Utilization | OK | OK | When utilization for all four parameters (space detail, inode detail, thin pool, and thin pool metadata) is below the warning threshold of 80%. |
WARNING | WARNING:mount point /boot Space used (0.857 / 1.000) GB | When any of the four parameters crosses the warning threshold of 80%. | |
CRITICAL | CRITICAL : mount point /home (inode used 9980/1000) | When any of the four parameters crosses the critical threshold of 90% (default is 90%). | |
Network Utilization | OK | OK: tun0:UP,wlp3s0:UP,virbr0:UP | When all the interfaces are UP. |
WARNING | WARNING: tun0:UP,wlp3s0:UP,virbr0:DOWN | When any of the interfaces is down. | |
UNKNOWN | UNKNOWN | When network utilization/status is unknown. | |
Swap Utilization | OK | OK- 0.00% used(0.00GB out of 1.00GB) | When used memory is below warning threshold (Default warning threshold is 80%). |
WARNING | WARNING- 83% used(1.24GB out of 1.50GB) | When used memory is below critical threshold (Default critical threshold is 90%) and greater than or equal to warning threshold (Default warning threshold is 80%). | |
CRITICAL | CRITICAL- 83% used(1.42GB out of 1.50GB) | When used memory is greater than or equal to critical threshold (Default critical threshold is 90%). | |
Cluster-Quorum | PENDING | When cluster.quorum-type is not set to server, or when no problems have been identified in the cluster. | |
OK | Quorum regained for volume | When quorum is regained for volume. | |
CRITICAL | Quorum lost for volume | When quorum is lost for volume. | |
Volume Geo-replication | OK | Session Status: slave_vol1-OK ... slave_voln-OK | When all sessions are active. |
Session status :No active sessions found | When Geo-replication sessions are deleted. | ||
CRITICAL | Session Status: slave_vol1-FAULTY slave_vol2-OK | If one or more nodes are Faulty and there's no replica pair that's active. | |
WARNING | Session Status: slave_vol1-NOT_STARTED slave_vol2-STOPPED slave_vol3-PARTIAL_FAULTY | When one or more sessions are in the NOT_STARTED, STOPPED, or PARTIAL_FAULTY state. | |
WARNING | Geo replication status could not be determined. | When there's an error in getting Geo replication status. This error occurs when volfile is locked as another transaction is in progress. | |
UNKNOWN | Geo replication status could not be determined. | When glusterd is down. | |
Volume Quota | OK | QUOTA: not enabled or configured | When quota is not set |
OK | QUOTA:OK | When quota is set and usage is below quota limits. | |
WARNING | QUOTA:Soft limit exceeded on path of directory | When quota exceeds soft limit. | |
CRITICAL | QUOTA:hard limit reached on path of directory | When quota reaches hard limit. | |
UNKNOWN | QUOTA: Quota status could not be determined as command execution failed | When there's an error in getting Quota status. This occurs when
| |
Volume Status | OK | Volume : volume type - All bricks are Up | When all bricks in the volume are up. |
WARNING | Volume :volume type Brick(s) - list of bricks is|are down, but replica pair(s) are up | When bricks in the volume are down but replica pairs are up. | |
UNKNOWN | Command execution failed Failure message | When command execution fails. | |
CRITICAL | Volume not found. | When volumes are not found. | |
CRITICAL | Volume: volume-type is stopped. | When volumes are stopped. | |
CRITICAL | Volume : volume type - All bricks are down. | When all bricks are down. | |
CRITICAL | Volume : volume type Bricks - brick list are down, along with one or more replica pairs | When bricks are down along with one or more replica pairs. | |
Volume Self-Heal | OK | When volume is not a replicated volume, there is no self-heal to be done. | |
OK | No unsynced entries present | When there are no unsynced entries in a replicated volume. | |
WARNING | Unsynced entries present : There are unsynced entries present. | If the self-heal process is turned on, these entries may be auto-healed. If not, self-heal must be run manually. If unsynced entries persist over time, this could indicate a split-brain scenario. | |
WARNING | Self heal status could not be determined as the volume was deleted | When self-heal status can not be determined as the volume is deleted. | |
UNKNOWN | When there's an error in getting self heal status. This error occurs when:
| ||
Cluster Utilization | OK | OK : 28.0% used (1.68GB out of 6.0GB) | When used % is below the warning threshold (Default warning threshold is 80%). |
WARNING | WARNING: 82.0% used (4.92GB out of 6.0GB) | Used% is above the warning limit. (Default warning threshold is 80%) | |
CRITICAL | CRITICAL : 92.0% used (5.52GB out of 6.0GB) | Used% is above the critical limit. (Default critical threshold is 90%) | |
UNKNOWN | Volume utilization data could not be read | When volume services are present, but the volume utilization data is not available as it's either not populated yet or there is error in fetching volume utilization data. | |
Volume Utilization | OK | OK: Utilization: 40 % | When used % is below the warning threshold (Default warning threshold is 80%). |
WARNING | WARNING - used 84% of available 200 GB | When used % is above the warning threshold (Default warning threshold is 80%). | |
CRITICAL | CRITICAL - used 96% of available 200 GB | When used % is above the critical threshold (Default critical threshold is 90%). | |
UNKNOWN | UNKNOWN - Volume utilization data could not be read | When all the bricks in the volume are killed or if glusterd is stopped in all the nodes in a cluster. |
9.5. Monitoring Host and Cluster Utilization
9.5.1. Monitoring Host and Cluster Utilization
Note
Procedure 9.1. To Monitor Cluster Utilization
- Click System and select Clusters in the Tree pane.
- Click Trends tab.
Figure 9.13. Trends
- Select the date and time duration to view the cluster utilization report.
- Click Submit. The Cluster Utilization graph of all clusters for the selected period is displayed. You can refresh the status by clicking the refresh button, and print the report or save it as a PDF file by clicking the print button. Click Glusterfs Monitoring Home to view the Nagios Home page.
Procedure 9.2. To Monitor Utilization for Hosts
- Click System and select Clusters in the Tree pane.
- Click Hosts in the tree pane and click the Trends tab to view the CPU Utilization for all the hosts. To view CPU Utilization, Network Interface Utilization, Disk Utilization, Memory Utilization, and Swap Utilization for each host, select the host name from the tree pane and click the Trends tab.
Figure 9.14. Utilization for selected Host
- Select the date and time to view the Host Utilization report.
- Click Submit. The CPU Utilization graph for all the hosts for the selected period is displayed. You can refresh the status by clicking the refresh button, and print the report or save it as a PDF file by clicking the print button. To view the Nagios Home page, click Glusterfs Monitoring Home.
Procedure 9.3. To Monitor Volume and Brick Utilization
- Open the Volumes view in the tree pane and select Volumes.
- Click Trends tab.
- Select the date and time duration to view the volume and brick utilization report.
- Click Submit. The Volume Utilization graph and Brick Utilization graph for the selected period is displayed.
Figure 9.15. Volume and Brick Utilization
You can refresh the status by clicking the refresh button, and print the report or save it as a PDF file by clicking the print button. To view the Nagios Home page, click Glusterfs Monitoring Home.
9.5.2. Enabling and Disabling Monitoring
Important
- To enable monitoring, run the following command in the Red Hat Storage Console Server :
# rhsc-monitoring enable Setting the monitoring flag... Starting nagios: done. Starting nsca: [ OK ] INFO: Move the nodes of existing cluster (with compatibilty version >= 3.4) to maintenance and re-install them.
The Trends tab is displayed in the Red Hat Storage Console Administrator portal with the host and cluster utilization details. - To disable monitoring, run the following command in the Red Hat Storage Console Server:
#rhsc-monitoring disable Setting the monitoring flag... Stopping nagios: .done. Shutting down nsca: [ OK ]
The Trends tab is not displayed in the Red Hat Storage Console Administrator portal, and the user cannot view host and cluster utilization details. Email and SNMP notifications are disabled. Disabling monitoring also stops the Nagios and NSCA services, but it does not stop the glusterpmd service. Run the following commands on all the Red Hat Storage nodes to stop the glusterpmd service and to remove its chkconfig entry:
# service glusterpmd stop
# chkconfig glusterpmd off
9.6. Troubleshooting Nagios
9.6.1. Troubleshooting NSCA and NRPE Configuration Issues
- Check Firewall and Port Settings on Nagios Server
If port 5667 is not opened on the server host's firewall, a timeout error is displayed. Ensure that port 5667 is open.
- Log in as root and run the following command on the Red Hat Storage node to get the list of current iptables rules:
# iptables -L
- The output is displayed as shown below:
ACCEPT tcp -- anywhere anywhere tcp dpt:5667
- If the port is not open, add an iptables rule by adding the following line to the /etc/sysconfig/iptables file:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT
- Restart the iptables service using the following command:
# service iptables restart
- Restart the NSCA service using the following command:
# service nsca restart
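The firewall edit above can also be made idempotent, so repeated runs do not duplicate the rule. A sketch operating on a temporary copy; on a real node the target file would be /etc/sysconfig/iptables:

```shell
# Append the NSCA port rule only if it is not already present.
rule='-A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT'
cfg=$(mktemp)                      # stand-in for /etc/sysconfig/iptables
grep -qF -- "$rule" "$cfg" || echo "$rule" >> "$cfg"
grep -qF -- "$rule" "$cfg" || echo "$rule" >> "$cfg"   # second run adds nothing
grep -cF -- "$rule" "$cfg"                             # -> 1
rm -f "$cfg"
```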
- Check the Configuration File on Red Hat Storage Node
Messages cannot be sent to the NSCA server if the Nagios server IP or FQDN, the cluster name, and the host name (as configured in the Nagios server) are not configured correctly. Open the Nagios server configuration file /etc/nagios/nagios_server.conf and verify that the correct values are set, as shown below:
# NAGIOS SERVER
# The nagios server IP address or FQDN to which the NSCA command
# needs to be sent
[NAGIOS-SERVER]
nagios_server=NagiosServerIPAddress

# CLUSTER NAME
# The host name of the logical cluster configured in Nagios under which
# the gluster volume services reside
[NAGIOS-DEFINTIONS]
cluster_name=cluster_auto

# LOCAL HOST NAME
# Host name given in the nagios server
[HOST-NAME]
hostname_in_nagios=NagiosServerHostName
If the host name is updated, restart the NSCA service using the following command:
# service nsca restart
- CHECK_NRPE: Error - Could Not Complete SSL Handshake
This error occurs if the IP address of the Nagios server is not defined in the nrpe.cfg file of the Red Hat Storage node. To fix this issue, follow the steps given below:
- Add the Nagios server IP address to the allowed_hosts line in the /etc/nagios/nrpe.cfg file, as shown below:
allowed_hosts=127.0.0.1, NagiosServerIP
The allowed_hosts directive is the list of IP addresses which can execute NRPE commands.
nrpe.cfg
file and restart the NRPE service using the following command:# service nrpe restart
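The allowed_hosts edit above can also be scripted. A sketch that appends the Nagios server IP (a placeholder address here) to the allowed_hosts line, run against a temporary copy rather than the real /etc/nagios/nrpe.cfg:

```shell
# Append the Nagios server IP to the existing allowed_hosts line.
NAGIOS_IP=192.0.2.10                        # placeholder address
cfg=$(mktemp)                               # stand-in for /etc/nagios/nrpe.cfg
echo 'allowed_hosts=127.0.0.1' > "$cfg"
sed -i "s/^allowed_hosts=.*/&, $NAGIOS_IP/" "$cfg"
cat "$cfg"                                  # -> allowed_hosts=127.0.0.1, 192.0.2.10
rm -f "$cfg"
```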
- CHECK_NRPE: Socket Timeout After n Seconds
To resolve this issue, perform the steps given below.
On Nagios Server:
The default timeout value for NRPE calls is 10 seconds; if the server does not respond within 10 seconds, the Nagios GUI displays an error that the NRPE call timed out after 10 seconds. To fix this issue, change the timeout value for NRPE calls by modifying the command definition configuration files.
- Changing the NRPE timeout for services which directly invoke check_nrpe:
For the services which directly invoke check_nrpe (check_disk_and_inode, check_cpu_multicore, and check_memory), modify the command definition configuration file /etc/nagios/gluster/gluster-commands.cfg by adding -t TimeInSeconds, as shown below:
define command {
        command_name check_disk_and_inode
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_disk_and_inode -t TimeInSeconds
}
- Changing the NRPE timeout for the services in the nagios-server-addons package which invoke the NRPE call through code:
The services which invoke /usr/lib64/nagios/plugins/gluster/check_vol_server.py (check_vol_utilization, check_vol_status, check_vol_quota_status, check_vol_heal_status, and check_vol_georep_status) make NRPE calls to the Red Hat Storage nodes for the details through code. To change the timeout for these NRPE calls, modify the command definition configuration file /etc/nagios/gluster/gluster-commands.cfg by adding -t TimeInSeconds, as shown below:
define command {
        command_name check_vol_utilization
        command_line $USER1$/gluster/check_vol_server.py $ARG1$ $ARG2$ -w $ARG3$ -c $ARG4$ -o utilization -t TimeInSeconds
}
The auto configuration service gluster_auto_discovery makes NRPE calls for the configuration details from the Red Hat Storage nodes. To change the NRPE timeout value for the auto configuration service, modify the command definition configuration file /etc/nagios/gluster/gluster-commands.cfg by adding -t TimeInSeconds, as shown below:
define command{
        command_name gluster_auto_discovery
        command_line sudo $USER1$/gluster/configure-gluster-nagios.py -H $ARG1$ -c $HOSTNAME$ -m auto -n $ARG2$ -t TimeInSeconds
}
- Restart the Nagios service using the following command:
# service nagios restart
On the Red Hat Storage node: - Add the Nagios server IP address as described in the CHECK_NRPE: Error - Could Not Complete SSL Handshake section under Troubleshooting NRPE Configuration Issues.
- Edit the
nrpe.cfg
file using the following command:# vi /etc/nagios/nrpe.cfg
- Search for the command_timeout and connection_timeout settings and change their values. The command_timeout value must be greater than or equal to the timeout value set on the Nagios server. For example, the timeouts can be set as connection_timeout=300 and command_timeout=60. - Restart the NRPE service using the following command:
# service nrpe restart
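For reference, the two NRPE timeout settings discussed above would appear in nrpe.cfg as follows. The values shown are the ones suggested in this section and may need tuning for your environment:

```
# /etc/nagios/nrpe.cfg
connection_timeout=300
command_timeout=60
```

The command_timeout value of 60 must remain greater than or equal to the -t timeout configured in the command definitions on the Nagios server.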
- Check the NRPE Service Status. This error occurs if the NRPE service is not running. To resolve this issue, perform the steps given below:
- Verify the status of NRPE service by logging into the Red Hat Storage node as root and running the following command:
# service nrpe status
- If NRPE is not running, start the service using the following command:
# service nrpe start
- Check Firewall and Port Settings. This error is associated with firewalls and ports. The timeout error is displayed if the NRPE traffic is not traversing a firewall, or if port 5666 is not open on the Red Hat Storage node. Ensure that port 5666 is open on the Red Hat Storage node.
- Run the check_nrpe command from the Nagios server to verify that the port is open and that NRPE is running on the Red Hat Storage node. - Log in to the Nagios server as root and run the following command:
# /usr/lib64/nagios/plugins/check_nrpe -H RedHatStorageNodeIP
- The output is displayed as given below:
NRPE v2.14
If not, ensure that port 5666 is open on the Red Hat Storage node. - Run the following command on the Red Hat Storage node as root to get a listing of the current iptables rules:
# iptables -L
- The output is displayed as shown below:
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5666
- If the port is not open, add an iptables rule for it.
- To add an iptables rule, edit the iptables file as shown below:
# vi /etc/sysconfig/iptables
- Add the following line in the file:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5666 -j ACCEPT
- Save the file and restart the iptables service using the following command:
# service iptables restart
- Restart the NRPE service using the following command:
# service nrpe restart
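As an alternative to editing the iptables file, the same rule can be inserted at runtime and then persisted. This is a sketch assuming the iptables service scripts, including service iptables save, are available on the node:

```
# iptables -I INPUT -p tcp --dport 5666 -j ACCEPT
# service iptables save
# service nrpe restart
```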
- Checking Port 5666 From the Nagios Server with Telnet. Use telnet to verify the Red Hat Storage node's ports. To verify the ports of the Red Hat Storage node, perform the steps given below:
- Log in as root on Nagios server.
- Test the connection on port 5666 from the Nagios server to the Red Hat Storage node using the following command:
# telnet RedHatStorageNodeIP 5666
- The output displayed is similar to:
# telnet 10.70.36.49 5666
Trying 10.70.36.49...
Connected to 10.70.36.49.
Escape character is '^]'.
- Connection Refused By Host. This error is due to port/firewall issues or incorrectly configured allowed_hosts directives. See the sections CHECK_NRPE: Error - Could Not Complete SSL Handshake and CHECK_NRPE: Socket Timeout After n Seconds for troubleshooting steps.
9.6.2. Troubleshooting General Issues
Set SELinux to permissive and restart the Nagios server.
Ensure that the host name given in the Name field of the Add Host window matches the host name given while configuring Nagios. The host name of the node is used while configuring the Nagios server using auto-discovery.
Part IV. Managing Advanced Functionality
Chapter 10. Managing Multilevel Administration
Note
10.1. Configuring Roles
10.1.1. Roles
administrator
role. This role allows access to the Administration Portal for managing server resources. For example, if a user has an administrator
role on a cluster, they can manage all servers in the cluster using the Administration Portal.
10.1.2. Creating Custom Roles
Procedure 10.1. Creating a New Role
- On the header bar of the Red Hat Storage Console menu, click Configure. The Configure dialog box displays. The dialog box includes a list of Administrator roles, and any custom roles.
- Click New. The New Role dialog box displays.
- Enter the Name and Description of the new role. This name will display in the list of roles.
- Select Admin as the Account Type. If Admin is selected, this role displays with the administrator icon in the list.
- Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects in the Check Boxes to Allow Action list. You can also expand or collapse the options for each object.
- For each of the objects, select or deselect the actions you wish to permit/deny for the role you are setting up.
- Click OK to apply the changes you have made. The new role displays on the list of roles.
10.1.3. Editing Roles
Procedure 10.2. Editing a Role
- On the header bar of the Red Hat Storage Console menu, click Configure. The Configure dialog box displays. The dialog box below shows the list of administrator roles.
- Click Edit. The Edit Role dialog box displays.
Figure 10.1. The Edit Role Dialog Box
- If necessary, edit the Name and Description of the role. This name will display in the list of roles.
- Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects. You can also expand or collapse the options for each object.
- For each of the objects, select or deselect the actions you wish to permit/deny for the role you are editing.
- Click OK to apply the changes you have made.
10.1.4. Copying Roles
Procedure 10.3. Copying a Role
- On the header bar of the Red Hat Storage Console, click Configure. The Configure dialog box displays. The dialog box includes a list of default roles, and any custom roles that exist on the Red Hat Storage Console.
Figure 10.2. The Configure Dialog Box
- Click Copy. The Copy Role dialog box displays.
- Change the Name and Description of the new role. This name will display in the list of roles.
- Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects. You can also expand or collapse the options for each object.
- For each of the objects, select or deselect the actions you wish to permit/deny for the role you are editing.
- Click Close to apply the changes you have made.
Chapter 11. Backing Up and Restoring the Red Hat Storage Console
11.1. Backing Up and Restoring the Red Hat Storage Console
11.1.1. Backing up Red Hat Storage Console - Overview
engine-backup
command can be used to rapidly back up the engine database and configuration files into a single file that can be easily stored.
11.1.2. Syntax for the engine-backup Command
engine-backup
command works in one of two basic modes:
# engine-backup --mode=backup
# engine-backup --mode=restore
Basic Options
-
--mode
- Specifies whether the command will perform a backup operation or a restore operation. Two options are available -
backup
, andrestore
. This is a required parameter. -
--file
- Specifies the path and name of a file into which backups are to be taken in backup mode, and the path and name of a file from which to read backup data in restore mode. This is a required parameter in both backup mode and restore mode.
-
--log
- Specifies the path and name of a file into which logs of the backup or restore operation are to be written. This parameter is required in both backup mode and restore mode.
-
--scope
- Specifies the scope of the backup or restore operation. There are two options -
all
, which backs up both the engine database and configuration data, anddb
, which backs up only the engine database.
Database Options
-
--change-db-credentials
- Allows you to specify alternate credentials for restoring the engine database using credentials other than those stored in the backup itself. Specifying this parameter allows you to add the following parameters.
-
--db-host
- Specifies the IP address or fully qualified domain name of the host on which the database resides. This is a required parameter.
-
--db-port
- Specifies the port by which a connection to the database will be made.
-
--db-user
- Specifies the name of the user by which a connection to the database will be made. This is a required parameter.
-
--db-passfile
- Specifies a file containing the password by which a connection to the database will be made. Either this parameter or the
--db-password
parameter must be specified. -
--db-password
- Specifies the plain text password by which a connection to the database will be made. Either this parameter or the
--db-passfile
parameter must be specified. -
--db-name
- Specifies the name of the database to which the database will be restored. This is a required parameter.
-
--db-secured
- Specifies that the connection with the database is to be secured.
-
--db-secured-validation
- Specifies that the connection with the host is to be validated.
Help
-
--help
- Provides an overview of the available modes, parameters, and sample usage, as well as how to create a new database and configure the firewall in conjunction with backing up and restoring the Red Hat Storage Console.
11.1.3. Creating a Backup with the engine-backup Command
The process for creating a backup of the engine database and the configuration data for the Red Hat Storage Console using the engine-backup
command is straightforward and can be performed while the Manager is active.
Procedure 11.1. Backing up the Red Hat Storage Console
- Log on to the machine running the Red Hat Storage Console.
- Run one of the following commands to create a full backup (Example 11.1) or to back up only the engine database (Example 11.2):
Example 11.1. Creating a Full Backup
# engine-backup --scope=all --mode=backup --log=[file name] --file=[file name]
Example 11.2. Creating an engine database Backup
# engine-backup --scope=db --mode=backup --log=[file name] --file=[file name]
A tar
file containing a backup of the engine database, or the engine database and the configuration data for the Red Hat Storage Console, is created using the path and file name provided.
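For example, a full backup that stamps both the backup file and its log with the current date might be taken as follows. The /backup directory and file names are illustrative, not mandated by the command:

```
# engine-backup --mode=backup --scope=all \
  --file=/backup/engine-backup-$(date +%Y%m%d).tar.bz2 \
  --log=/backup/engine-backup-$(date +%Y%m%d).log
```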
11.1.4. Restoring a Backup with the engine-backup Command
engine-backup
command is straightforward, but it involves several additional steps compared to creating a backup, depending on the destination to which the backup is to be restored. For example, the engine-backup
command can be used to restore backups to fresh installations of Red Hat Storage Console, on top of existing installations of Red Hat Storage Console, and using local or remote databases.
Important
version
file located in the root directory of the unpacked files.
11.1.5. Restoring a Backup to a Fresh Installation
The engine-backup
command can be used to restore a backup to a fresh installation of the Red Hat Storage Console. The following procedure must be performed on a machine on which the base operating system has been installed and the required packages for the Red Hat Storage Console have been installed, but the engine-setup
command has not yet been run. This procedure assumes that the backup file can be accessed from the machine on which the backup is to be restored.
Note
engine-cleanup
command used to prepare a machine prior to restoring a backup only cleans the engine database; it does not drop the database, delete the user that owns that database, recreate the engine database, or perform the initial configuration of the postgresql
service. Therefore, these tasks must be performed manually as outlined below when restoring a backup to a fresh installation.
Procedure 11.2. Restoring a Backup to a Fresh Installation
- Log on to the machine on which the Red Hat Storage Console is installed.
- Manually create an empty database to which the database in the backup can be restored and configure the
postgresql
service:- Run the following commands to initialize the
postgresql
database, start the postgresql service, and ensure this service starts on boot:
# service postgresql initdb
# service postgresql start
# chkconfig postgresql on
- Run the following commands to enter the postgresql command line:
# su postgres
$ psql
- Run the following command to create a new user:
postgres=# CREATE USER [user name] PASSWORD '[password]';
The password used while creating the database should be the same as the one used while taking the backup. If the password is different, follow step 3 in Section 11.1.7, “Restoring a Backup with Different Credentials”. - Run the following command to create the new database:
postgres=# create database [database name] owner [user name] template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
- Edit the
/var/lib/pgsql/data/pg_hba.conf
file and add the following lines under the'local'
section near the end of the file:- For local databses:
host [database name] [user name] 0.0.0.0/0 md5 host [database name] [user name] ::0/0 md5
- For remote databases:
host [database name] [user name] X.X.X.X/32 md5
Replace X.X.X.X with the IP address of the Manager.
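Filled in with hypothetical values (a database named engine owned by a user named engine, and the Manager at 192.0.2.10), the pg_hba.conf additions would look like:

```
# local database
host engine engine 0.0.0.0/0     md5
host engine engine ::0/0         md5
# remote database, Manager at 192.0.2.10
host engine engine 192.0.2.10/32 md5
```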
- Run the following command to restart the
postgresql
service:# service postgresql restart
- Restore the backup using the
engine-backup
command:# engine-backup --mode=restore --file=[file name] --log=[file name]
If successful, the following output displays:
Restoring...
Note: you might need to manually fix:
- iptables/firewalld configuration
- autostart of ovirt-engine service
You can now start the engine service and then restart httpd
Done.
- Run the following command and follow the prompts to set up the Manager as per a fresh installation, selecting to manually configure the database when prompted:
# engine-setup
The engine database and configuration files for the Red Hat Storage Console have been restored to the version in the backup.
11.1.6. Restoring a Backup to an Existing Installation
The engine-backup
command can restore a backup to a machine on which the Red Hat Storage Console has already been installed and set up.
Note
engine-cleanup
command used to prepare a machine prior to restoring a backup only cleans the engine database; it does not drop the database, delete the user that owns that database, recreate the engine database, or perform the initial configuration of the postgresql
service. Therefore, these tasks must be performed manually as outlined below when restoring a backup to an existing installation.
Procedure 11.3. Restoring a Backup to an Existing Installation
- Log on to the machine on which the Red Hat Storage Console is installed.
- Run the following command and follow the prompts to remove the configuration files for and clean the database associated with the Manager:
# engine-cleanup
Manually drop the database, create an empty database to which the database in the backup can be restored, and configure the postgresql service:
- Run the following commands to enter the postgresql command line:
# su postgres
$ psql
- Run the following command to drop the database:
postgres=# DROP DATABASE [database name];
- Run the following command to create the new database:
postgres=# create database [database name] owner [user name] template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
- Restore the backup using the
engine-backup
command:# engine-backup --mode=restore --file=[file name] --log=[file name]
If successful, the following output displays:
Restoring...
Note: you might need to manually fix:
- iptables/firewalld configuration
- autostart of ovirt-engine service
You can now start the engine service and then restart httpd
Done.
- Run the following command and follow the prompts to re-configure the firewall and ensure the
ovirt-engine
service is correctly configured:# engine-setup
The engine database and configuration files for the Red Hat Storage Console have been restored to the version in the backup.
11.1.7. Restoring a Backup with Different Credentials
The engine-backup
command can restore a backup to a machine on which the Red Hat Storage Console has already been installed and set up, but where the credentials of the database in the backup are different from those of the database on the machine on which the backup is to be restored.
Note
engine-cleanup
command used to prepare a machine prior to restoring a backup only cleans the engine database; it does not drop the database, delete the user that owns that database, recreate the engine database, or perform the initial configuration of the postgresql
service. Therefore, these tasks must be performed manually as outlined below when restoring a backup with different credentials.
Procedure 11.4. Restoring a Backup with Different Credentials
- Log on to the machine on which the Red Hat Storage Console is installed.
- Run the following command and follow the prompts to remove the configuration files for and clean the database associated with the Manager:
# engine-cleanup
Manually drop the database, create an empty database to which the database in the backup can be restored, and configure the postgresql service:
- Run the following commands to enter the postgresql command line:
# su postgres
$ psql
- Run the following command to drop the database:
postgres=# DROP DATABASE [database name];
- Run the following command to create the new database:
postgres=# create database [database name] owner [user name] template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
- Restore the backup using the
engine-backup
command with the--change-db-credentials
parameter:# engine-backup --mode=restore --file=[file name] --log=[file name] --change-db-credentials --db-host=[database location] --db-name=[database name] --db-user=[user name] --db-password=[password]
If successful, the following output displays:
Restoring...
Note: you might need to manually fix:
- iptables/firewalld configuration
- autostart of ovirt-engine service
You can now start the engine service and then restart httpd
Done.
- Run the following command and follow the prompts to re-configure the firewall and ensure the
ovirt-engine
service is correctly configured:# engine-setup
The engine database and configuration files for the Red Hat Storage Console have been restored to the version in the backup using the supplied credentials.
Appendix A. Utilities
A.1. Domain Management Tool
internal
that is only used to store the admin
user. To add and remove other users from the system, you must first add the directory services in which they are found.
rhsc-manage-domains
, to add and remove domains provided by this service. In this way, you can grant access to the Red Hat Storage environment to users stored across multiple domains.
rhsc-manage-domains
command on the machine on which Red Hat Storage Console was installed. The rhsc-manage-domains
command must be run as the root user.
A.1.1. Syntax
# rhsc-manage-domains -action=ACTION [options]
-
add
- Add a domain to the console directory services configuration.
-
edit
- Edit a domain in the console directory services configuration.
-
delete
- Delete a domain from the console directory services configuration.
-
validate
- Validate the console directory services configuration. The command attempts to authenticate to each domain in the configuration using the configured user name and password.
-
list
- List the current directory services configuration of the console.
- -
domain
=DOMAIN - Specifies the domain on which the action must be performed. The
-domain
parameter is mandatory foradd
,edit
, anddelete
. - -
user
=USER - Specifies the domain user to use. The
-user
parameter is mandatory foradd
, and optional foredit
. - -
interactive
- Specifies that the domain user's password is to be provided interactively. This option or the -passwordFile option must be used to provide the password for use with the add action.
-
-passwordFile=FILE
- Specifies that the domain user's password is on the first line of the provided file. This option or the
-interactive
option must be used to provide the password for use with theadd
action. -
-configFile=FILE
- Specifies an alternative configuration file that the command must load. The
-configFile
parameter is always optional. -
-report
- Specifies that all validation errors encountered while performing the validate action will be reported in full.
For full usage information, see the rhsc-manage-domains command help output:
# rhsc-manage-domains --help
A.1.2. Examples
The following examples demonstrate the use of rhsc-manage-domains to perform basic manipulation of the Red Hat Storage Console domain configuration.
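For instance, the actions and parameters described above can be combined as follows; the domain and user names are illustrative:

```
# rhsc-manage-domains -action=add -domain=directory.example.com -user=admin -interactive
# rhsc-manage-domains -action=validate -report
# rhsc-manage-domains -action=list
# rhsc-manage-domains -action=delete -domain=directory.example.com
```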
Appendix B. Changing Passwords in Red Hat Storage Console
B.1. Changing the Password for the Administrator User
admin@internal
user account is automatically created on installing and configuring Red Hat Storage Console. This account is stored locally in the Red Hat Storage Console PostgreSQL database and exists separately from other directory services. Unlike IPA domains, users cannot be added to or deleted from the internal domain. The admin@internal
user is the SuperUser for Red Hat Storage Console, and has administrator privileges over the environment via the Administration Portal.
admin@internal
user. However, if you have forgotten the password or choose to reset the password, you can use the rhsc-config utility.
Procedure B.1. Resetting the Password for the admin@internal User
- Log in to the Red Hat Storage Console server as the
root
user. - Use the rhsc-config utility to set a new password for the
admin@internal
user. Run the following command:# rhsc-config -s AdminPassword=interactive
After typing the above command, a password prompt displays for you to enter the new password. You do not need to use quotes; however, if you include shell special characters in the password, escape them. - Restart the
ovirt-engine
service to apply the changes. Run the following command:# service ovirt-engine restart
Appendix C. Search Parameters
C.1. Search Query Syntax
Example | Result |
---|---|
Hosts: cluster = cluster name | Displays a list of all servers in the cluster. |
Volumes: status = up | Displays a list of all volumes with status up. |
Events: severity > normal sortby time | Displays the list of all events whose severity is higher than Normal, sorted by time. |
Hosts: cluster = down
Input | List Items Displayed | Action |
---|---|---|
h | Hosts (1 option only) | Select Hosts or type Hosts |
Hosts: | All host properties | Type c |
Hosts: c | host properties starting with c | Select cluster or type cluster |
Hosts: cluster | = or != | Select or type = |
Hosts: cluster = | | Select or type cluster name |
C.2. Searching for Resources
C.2.1. Searching for Clusters
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
name | String | The unique name that identifies the clusters on the network. |
description | String | The description of the cluster. |
initialized | String | A Boolean True or False indicating the status of the cluster. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Clusters: initialized = true or name = Default
- Initialized; or
- Named Default
C.2.2. Searching for Hosts
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Events.events-prop | See property types in Section C.2.5, “Searching for Events” | The property of the events associated with the host. |
Users.users-prop | See property types in Section C.2.4, “Searching for Users” | The property of the users associated with the host. |
name | String | The name of the host. |
status | List | The availability of the host. |
cluster | String | The cluster to which the host belongs. |
address | String | The unique name that identifies the host on the network. |
cpu_usage | Integer | The percent of processing power usage. |
mem_usage | Integer | The percentage of memory usage. |
network_usage | Integer | The percentage of network usage. |
load | Integer | Jobs waiting to be executed in the run-queue per processor, in a given time slice. |
version | Integer | The version number of the operating system. |
cpus | Integer | The number of CPUs on the host. |
memory | Integer | The amount of memory available. |
cpu_speed | Integer | The processing speed of the CPU. |
cpu_model | String | The type of CPU. |
committed_mem | Integer | The percentage of committed memory. |
tag | String | The tag assigned to the host. |
type | String | The type of host. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Host: cluster = Default
- Are part of the Default cluster.
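Several of the host properties listed above can be combined in one query. For example, the following illustrative search lists hosts in the Default cluster whose CPU usage exceeds 80 percent, sorted by load:

```
Hosts: cluster = Default and cpu_usage > 80 sortby load
```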
C.2.3. Searching for Volumes
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Clusters.clusters prop | See property types in Section C.2.1, “Searching for Clusters” | The property of the clusters associated with the volume. |
name | String | The name of the volume. |
status | List | The availability of the volume. |
type | List | The type of the volume. |
transport_type | List | The transport type of the volume. |
replica_count | Integer | The replicate count of the volume. |
stripe_count | Integer | The stripe count of the volume. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Volumes: Cluster.name = Default and Status = Up
- Belong to the Default cluster and the status of the volume is Up.
C.2.4. Searching for Users
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Hosts.hosts- prop | See property types in Section C.2.2, “Searching for Hosts” | The property of the hosts associated with the user. |
Events.events-prop | See property types in Section C.2.5, “Searching for Events” | The property of the events associated with the user. |
name | String | The name of the user. |
lastname | String | The last name of the user. |
usrname | String | The unique name of the user. |
department | String | The department to which the user belongs. |
group | String | The group to which the user belongs. |
title | String | The title of the user. |
status | String | The status of the user. |
role | String | The role of the user. |
tag | String | The tag to which the user belongs. |
pool | String | The pool to which the user belongs. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Users: Events.severity > normal and Hosts.name = Server name
- Events of a severity greater than Normal have occurred on their hosts.
C.2.5. Searching for Events
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Hosts.hosts-prop | See property types in Section C.2.2, “Searching for Hosts” | The property of the hosts associated with the event. |
Users.users-prop | See property types in Section C.2.4, “Searching for Users” | The property of the users associated with the event. |
type | List | Type of the event. |
severity | List | The severity of the Event: Warning/Error/Normal |
message | String | Description of the event type. |
time | Integer | Time at which the event occurred. |
usrname | String | The user name associated with the event. |
event_host | String | The host associated with the event. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Events: event_host = gonzo.example.com
- The event occurred on the server named
gonzo.example.com
.
C.3. Saving and Accessing Queries as Bookmarks
C.3.1. Creating Bookmarks
Procedure C.1. Saving a Query String as a Bookmark
- Enter the search query in the Search bar (see Section C.1, “Search Query Syntax”).
- Click the Bookmark button to the right of the Search bar. The New Bookmark dialog box displays. The query displays in the Search String field. You can edit the query if required.
- In Name, specify a descriptive name for the search query.
- Click OK to save the query as a bookmark.
- The search query is saved and displays in the Bookmarks pane.
C.3.2. Editing Bookmarks
Procedure C.2. Editing a Bookmark
- Select a bookmark from the Bookmarks pane.
- The results list displays the items according to the criteria. Click the Edit button on the Bookmarks pane. The Edit Bookmark dialog box displays. The query displays in the Search String field. Edit the search string as required.
- Change the Name and Search String as necessary.
- Click OK to save the edited bookmark.
C.3.3. Deleting Bookmarks
Procedure C.3. Deleting a Bookmark
- Select one or more bookmarks from the Bookmarks pane.
- The results list displays the items according to the criteria. Click the Remove button on the Bookmarks pane. The Remove Bookmark dialog box displays.
- Click OK to remove the selected bookmarks.
Appendix D. Red Hat Access Plug-in
Note
D.1. Using Red Hat Access Plug-in
Procedure D.1. Using Red Hat Access Plug-in
- In the Red Hat Storage Console, open the Clusters view by expanding the System tab and selecting the Cluster tab in the Tree pane. Alternatively, select the Clusters tab.
- Click Red Hat Access:Support to open the Red Hat Access: Support dialog box. You can also select Red Hat Access:Support from the Hosts tab.
Note
Red Hat Support Plug-in is available in the details pane as well as in several context menus in the Red Hat Storage Administration Portal.
Figure D.1. Selecting the Red Hat Access:Support Option
- Log in to Red Hat Access:Support with your Red Hat credentials.
- In the Search field you can search for solutions in the knowledge base.
Note
You can also search the Red Hat Access database using the Red Hat Search: field. Search results display in the left-hand navigation list in the details pane. - To open a new support case, select the Open New Support Case radio button in the Red Hat Access:Support dialog box.
Figure D.2. Red Hat Access: Support Window
- Enter the Summary and Description, select the Product and Version, add any Attachments, and click the Submit button.
- To modify an existing case, select the Modify Existing Case radio button.
- Update the details and click Submit.
- To view the documentation relevant to the part of the Administration Portal currently on the screen, click Red Hat Documentation.
Figure D.3. Red Hat Documentation
Appendix E. Nagios Configuration Files
- In the /etc/nagios/gluster/ directory, a new directory Cluster-Name is created, named after the Cluster-Name provided while executing the configure-gluster-nagios command for auto-discovery. All configurations created by auto-discovery for the cluster are added in this directory. - In
the /etc/nagios/gluster/Cluster-Name directory, a configuration file Cluster-Name.cfg is generated. This file has the host and hostgroup configurations for the cluster, and also contains the service configurations for all the cluster and volume level services. The following Nagios object definitions are generated in the Cluster-Name.cfg file:
- A hostgroup configuration with
hostgroup_name
as cluster name. - A host configuration with
host_name
as cluster name. - The following service configurations are generated for cluster monitoring:
- A Cluster - Quorum service to monitor the cluster quorum.
- A Cluster Utilization service to monitor overall utilization of volumes in the cluster. This is created only if there is any volume present in the cluster.
- A Cluster Auto Config service to periodically synchronize the configurations in Nagios with Red Hat Storage trusted storage pool.
- The following service configurations are generated for each volume in the trusted storage pool:
- A Volume Status - Volume-Name service to monitor the status of the volume.
- A Volume Utilization - Volume-Name service to monitor the utilization statistics of the volume.
- A Volume Quota - Volume-Name service to monitor the Quota status of the volume, if Quota is enabled for the volume.
- A Volume Self-Heal - Volume-Name service to monitor the Self-Heal status of the volume, if the volume is of type replicate or distributed-replicate.
- A Volume Geo-Replication - Volume-Name service to monitor the Geo Replication status of the volume, if Geo-replication is configured for the volume.
- In the /etc/nagios/gluster/Cluster-Name directory, a configuration file named Host-Name.cfg is generated for each node in the cluster. This file has the host configuration for the node and the service configurations for the bricks on that node. The following Nagios object definitions are generated in Host-Name.cfg:
  - A host configuration which has Cluster-Name in the hostgroups field.
  - The following services are created for each brick in the node:
- A Brick Utilization - brick-path service to monitor the utilization of the brick.
- A Brick - brick-path service to monitor the brick status.
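The object definitions described above follow standard Nagios configuration syntax. The fragments below are a hypothetical sketch of what the auto-generated files might contain for a cluster named cluster1 with one node; the template names, address, and brick path are illustrative assumptions, not the exact output of configure-gluster-nagios:

```
# Hypothetical excerpt of /etc/nagios/gluster/cluster1/cluster1.cfg
define hostgroup {
    hostgroup_name  cluster1          ; hostgroup named after the cluster
    alias           cluster1
}

define host {
    host_name       cluster1          ; host named after the cluster
    use             gluster-cluster   ; template name is an assumption
    alias           cluster1
}

define service {
    use                  gluster-service   ; template name is an assumption
    host_name            cluster1
    service_description  Cluster Utilization
}

# Hypothetical excerpt of /etc/nagios/gluster/cluster1/node1.cfg
define host {
    host_name   node1
    hostgroups  cluster1              ; places the node in the cluster hostgroup
    address     192.0.2.11            ; example node address
}

define service {
    use                  gluster-service   ; template name is an assumption
    host_name            node1
    service_description  Brick Utilization - /bricks/brick1
}
```

Because these files are regenerated by the Cluster Auto Config service, manual edits to them may be overwritten during the next synchronization.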
Table E.1. Nagios Configuration Files
| File Name | Description |
|---|---|
| /etc/nagios/nagios.cfg | Main Nagios configuration file. |
| /etc/nagios/cgi.cfg | CGI configuration file. |
| /etc/httpd/conf.d/nagios.conf | Nagios configuration for httpd. |
| /etc/nagios/passwd | Password file for Nagios users. |
| /etc/nagios/nrpe.cfg | NRPE configuration file. |
| /etc/nagios/gluster/gluster-contacts.cfg | Email notification configuration file. |
| /etc/nagios/gluster/gluster-host-services.cfg | Services configuration file that is applied to every Red Hat Storage node. |
| /etc/nagios/gluster/gluster-host-groups.cfg | Host group templates for a Red Hat Storage trusted storage pool. |
| /etc/nagios/gluster/gluster-commands.cfg | Command definitions file for Red Hat Storage monitoring related commands. |
| /etc/nagios/gluster/gluster-templates.cfg | Template definitions for Red Hat Storage hosts and services. |
| /etc/nagios/gluster/snmpmanagers.conf | SNMP notification configuration file with the IP address and community name of the SNMP managers to which traps are sent. |
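Per the table above, snmpmanagers.conf pairs each SNMP manager's IP address with its community name. A hypothetical example is shown below; the exact line format is an assumption based on that description, and the address and community are placeholders:

```
# Hypothetical /etc/nagios/gluster/snmpmanagers.conf
# One SNMP manager per line: <IP address> <community name>
192.0.2.50 public
```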
Appendix F. Revision History
| Revision | Date | Author |
|---|---|---|
| 3-41 | Thu Mar 26 2015 | Divya Muntimadugu |
| 3-40 | Thu Mar 19 2015 | Shalaka Harne |
| 3-39 | Thu Mar 19 2015 | Shalaka Harne |
| 3-38 | Thu Mar 19 2015 | Shalaka Harne |
| 3-37 | Thu Mar 19 2015 | Shalaka Harne |
| 3-36 | Tue Feb 24 2015 | Shalaka Harne |
| 3-35 | Thu Feb 19 2015 | Shalaka Harne |
| 3-32 | Tue Dec 23 2014 | Shalaka Harne |
| 3-30 | Mon Nov 17 2014 | Shalaka Harne |
| 3-29 | Wed Nov 12 2014 | Shalaka Harne |