Installing and Configuring OpenStack environments manually
Edition 1
Legal Notice
1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
Abstract
- Preface
- I. Introduction
- II. Installing OpenStack
- 3. Installing the Database Server
- 4. Installing the Message Broker
- 5. Installing the OpenStack Identity Service
- 5.1. Identity Service Requirements
- 5.2. Installing the Packages
- 5.3. Creating the Identity Database
- 5.4. Configuring the Service
- 5.5. Starting the Identity Service
- 5.6. Creating the Identity Service Endpoint
- 5.7. Creating an Administrator Account
- 5.8. Creating a Regular User Account
- 5.9. Creating the Services Tenant
- 5.10. Validating the Identity Service Installation
- 6. Installing the OpenStack Object Storage Service
- 7. Installing the OpenStack Image Service
- 8. Installing OpenStack Block Storage
- 9. Installing the OpenStack Networking Service
- 9.1. OpenStack Networking Installation Overview
- 9.2. Networking Prerequisite Configuration
- 9.3. Common Networking Configuration
- 9.4. Configuring the Networking Service
- 9.5. Configuring the DHCP Agent
- 9.6. Configuring a Provider Network
- 9.7. Configuring the Plug-in Agent
- 9.8. Configuring the L3 Agent
- 9.9. Validating the OpenStack Networking Installation
- 10. Installing the OpenStack Compute Service
- 11. Installing the Dashboard
- III. Validating the Installation
- IV. Monitoring the OpenStack Environment
- V. Managing OpenStack Environment Expansion
- A. Installation Checklist
- B. Troubleshooting the OpenStack Environment
- C. Service Log Files
- D. Example Configuration Files
- E. Revision History
Mono-spaced Bold
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
Press Enter to execute the command. Press Ctrl+Alt+F2 to switch to a virtual terminal.
mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose → → from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand). To insert a special character into a gedit file, choose → → from the main menu bar. Next, choose → from the Character Map menu bar, type the name of the character in the Search field and click. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the button. Now switch back to your document and choose → from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com. The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home. To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Publican is a DocBook publishing system.
mono-spaced roman and presented thus:
books Desktop documentation drafts mss photos stuff svn books_tests Desktop1 downloads images notes scripts svgs
mono-spaced roman but add syntax highlighting as follows:
static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
struct kvm_assigned_pci_dev *assigned_dev)
{
int r = 0;
struct kvm_assigned_dev_kernel *match;
mutex_lock(&kvm->lock);
match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
assigned_dev->assigned_dev_id);
if (!match) {
printk(KERN_INFO "%s: device hasn't been assigned before, "
"so cannot be deassigned\n", __func__);
r = -EINVAL;
goto out;
}
kvm_deassign_device(kvm, match);
kvm_free_assigned_device(kvm, match);
out:
mutex_unlock(&kvm->lock);
return r;
}
Note
Important
Warning
- search or browse through a knowledge base of technical support articles about Red Hat products.
- submit a support case to Red Hat Global Support Services (GSS).
- access other product documentation.
Table of Contents
- Fully distributed object storage
- Persistent block-level storage
- Virtual-machine provisioning engine and image storage
- Authentication and authorization mechanism
- Integrated networking
- Web browser-based GUI for both users and administrators.

Table 1.1. Services
| Service | Codename | Description |
|---|---|---|
| Dashboard | horizon | A web-based dashboard for managing OpenStack services. |
| Identity | keystone | A centralized identity service that provides authentication and authorization for other services, and manages users, tenants, and roles. |
| OpenStack Networking | quantum | A networking service that provides connectivity between the interfaces of other OpenStack services. |
| Block Storage | cinder | A service that manages persistent block storage volumes for virtual machines. |
| Compute | nova | A service that launches and schedules networks of machines running on nodes. |
| Image | glance | A registry service for virtual machine images. |
| Object Storage | swift | A service providing object storage which allows users to store and retrieve files (arbitrary data). |
| Metering (Technical Preview) | ceilometer | A service providing measurements of cloud resources. |
| Orchestration (Technical Preview) | heat | A service providing a template-based orchestration engine, which supports the automatic creation of resource stacks. |
The Service Details section provides more detailed information about the OpenStack service components. Each OpenStack service is comprised of a collection of Linux services, MySQL databases, or other components, which together provide a functional group. For example, the glance-api and glance-registry Linux services, together with a MySQL database, implement the Image service.
Important

- adminURL, the URL for the administrative endpoint for the service. Only the Identity service might use a value here that is different from publicURL; all other services will use the same value.
- internalURL, the URL of an internal-facing endpoint for the service (typically the same as the publicURL).
- publicURL, the URL of the public-facing endpoint for the service.
- region, in which the service is located. By default, if a region is not specified, the 'RegionOne' location is used.
- Users, which have associated information (such as a name and password). In addition to custom users, a user must be defined for each cataloged service (for example, the 'glance' user for the Image service).
- Tenants, which are generally the user's group, project, or organization.
- Roles, which determine a user's permissions.
- Users can create networks, control traffic, and connect servers and devices to one or more networks.
- OpenStack offers flexible networking models, so that administrators can change the networking model to adapt to their volume and tenancy.
- IPs can be dedicated or floating; floating IPs allow dynamic traffic rerouting.
Table 1.4. Networking Service components
| Component | Description |
|---|---|
| quantum-server | A Python daemon, which manages user requests (and exposes the API). It is configured with a plugin that implements the OpenStack Networking API operations using a specific set of networking mechanisms. A wide choice of plugins is also available. For example, the openvswitch and linuxbridge plugins utilize native Linux networking mechanisms, while other plugins interface with external devices or SDN controllers. |
| quantum-l3-agent | An agent providing L3/NAT forwarding. |
| quantum-*-agent | A plug-in agent that runs on each node to perform local networking configuration for the node's VMs and networking services. |
| quantum-dhcp-agent | An agent providing DHCP services to tenant networks. |
| Database | Provides persistent storage. |
- Create, list, and delete volumes.
- Create, list, and delete snapshots.
- Attach and detach volumes to running virtual machines.
Table 1.5. Block Storage Service components
| Component | Description |
|---|---|
| openstack-cinder-volume | Carves out storage for virtual machines on demand. A number of drivers are included for interaction with storage providers. |
| openstack-cinder-api | Responds to and handles requests, and places them in the message queue. |
| openstack-cinder-scheduler | Assigns tasks to the queue and determines the provisioning volume server. |
| Database | Provides state information. |
See Also:
Table 1.6. Ways to Segregate the Cloud
| Concept | Description |
|---|---|
| Regions | Each service cataloged in the Identity service is identified by its region, which typically represents a geographical location, and its endpoint. In a cloud with multiple Compute deployments, regions allow for the discrete separation of services, and are a robust way to share some infrastructure between Compute installations, while allowing for a high degree of failure tolerance. |
| Cells (Technology Preview) | A cloud's Compute hosts can be partitioned into groups called cells (to handle large deployments or geographically separate installations). Cells are configured in a tree. The top-level cell ('API cell') runs the nova-api service, but no nova-compute services. In contrast, each child cell runs all of the other typical nova-* services found in a regular installation, except for the nova-api service. Each cell has its own message queue and database service, and also runs nova-cells, which manages the communication between the API cell and its child cells. This means that: |
| Host Aggregates and Availability Zones | A single Compute deployment can be partitioned into logical groups (for example, into multiple groups of hosts that share common resources like storage and network, or which have a special property such as trusted computing hardware). If the user is: Aggregates, or zones, can be used to: |
Important
Table 1.7. Compute Service components
| Component | Description |
|---|---|
| openstack-nova-api | Handles requests and provides access to the Compute services (such as booting an instance). |
| openstack-nova-cert | Provides the certificate manager. |
| openstack-nova-compute | Creates and terminates virtual instances. Interacts with the hypervisor to bring up new instances, and ensures that the state is maintained in the Compute database. |
| openstack-nova-conductor | Provides database-access support for Compute nodes (thereby reducing security risks). |
| openstack-nova-consoleauth | Handles console authentication. |
| openstack-nova-network | Handles Compute network traffic (both private and public access). Handles such tasks as assigning an IP address to a new virtual instance, and implementing security group rules. |
| openstack-nova-novncproxy | Provides a VNC proxy for browsers (enabling VNC consoles to access virtual machines). |
| openstack-nova-scheduler | Dispatches requests for new virtual machines to the correct node. |
| Apache Qpid server (qpidd) | Provides the AMQP message queue. This server (also used by Block Storage) handles the OpenStack transaction management, including queuing, distribution, security, management, clustering, and federation. Messaging becomes especially important when an OpenStack deployment is scaled and its services are running on multiple machines. |
| libvirtd | The driver for the hypervisor. Enables the creation of virtual machines. |
| KVM Linux hypervisor | Creates virtual machines and enables their live migration from node to node. |
| Database | Provides build-time and run-time infrastructure state. |
- raw (unstructured format)
- aki/ami/ari (Amazon kernel, ramdisk, or machine image)
- iso (archive format for optical discs; for example, CD-ROM)
- qcow2 (Qemu/KVM, supports Copy on Write)
- vhd (Hyper-V, common for virtual machine monitors from VMware, Xen, Microsoft, VirtualBox, and others)
- vdi (Qemu/VirtualBox)
- vmdk (VMware)
- bare (no metadata is included)
- ovf (OVF format)
- aki/ami/ari (Amazon kernel, ramdisk, or machine image)
Table 1.8. Image Service components
| Component | Description |
|---|---|
| openstack-glance-api | Handles requests and image delivery (interacts with storage back-ends for retrieval and storage). Uses the registry to retrieve image information (the registry service is never, and should never be, accessed directly). |
| openstack-glance-registry | Manages all metadata associated with each image. Requires a database. |
| Database | Stores image metadata. |
- Storage replicas, which are used to maintain the state of objects in the case of outage. A minimum of three replicas is recommended.
- Storage zones, which are used to host replicas. Zones ensure that each replica of a given object can be stored separately. A zone might represent an individual disk drive or array, a server, all the servers in a rack, or even an entire data center.
- Storage regions, which are essentially a group of zones sharing a location. Regions can be, for example, groups of servers or server farms, usually located in the same geographical area. Regions have a separate API endpoint per Object Storage service installation, which allows for a discrete separation of services.
Table 1.9. Object Storage Service components
| Component | Description |
|---|---|
| openstack-swift-proxy | Exposes the public API, and is responsible for handling requests and routing them accordingly. Objects are streamed through the proxy server to the user (not spooled). Objects can also be served out via HTTP. |
| openstack-swift-object | Stores, retrieves, and deletes objects. |
| openstack-swift-account | Responsible for listings of containers, using the account database. |
| openstack-swift-container | Handles listings of objects (what objects are in a specific container), using the container database. |
| Ring files | Contain details of all the storage devices, and are used to deduce where a particular piece of data is stored (maps the names of stored entities to their physical location). One file is created for each object, account, and container server. |
| Account database | |
| Container database | |
| ext4 (recommended) or XFS file system | Used for object storage. |
| Housekeeping processes | Replication and auditors. |
Table 1.10. Metering Service components
| Component | Description |
|---|---|
| ceilometer-agent-compute | An agent that runs on each Compute node to poll for resource utilization statistics. |
| ceilometer-agent-central | An agent that runs on a central management server to poll for utilization statistics about resources not tied to instances or Compute nodes. |
| ceilometer-collector | An agent that runs on one or more central management servers to monitor the message queues. Notification messages are processed and turned into metering messages, and sent back out on to the message bus using the appropriate topic. Metering messages are written to the data store without modification. |
| Mongo database | For collected usage sample data. |
| API Server | Runs on one or more central management servers to provide access to the data store's data. Only the Collector and the API server have access to the data store. |
- A single template provides access to all underlying service APIs.
- Templates are modular (resource oriented).
- Templates can be recursively defined, and therefore reusable (nested stacks). This means that the cloud infrastructure can be defined and reused in a modular way.
- Resource implementation is pluggable, which allows for custom resources.
- Autoscaling functionality (automatically adding or removing resources depending upon usage).
- Basic high availability functionality.
Table 1.11. Orchestration Service components
| Component | Description |
|---|---|
| heat | A CLI tool that communicates with the heat-api to execute AWS CloudFormation APIs. |
| heat-api | An OpenStack-native REST API that processes API requests by sending them to the heat-engine over RPC. |
| heat-api-cfn | Provides an AWS-Query API that is compatible with AWS CloudFormation and processes API requests by sending them to the heat-engine over RPC. |
| heat-engine | Orchestrates the launching of templates and provides events back to the API consumer. |
| heat-api-cloudwatch | Provides monitoring (metrics collection) for the Orchestration service. |
| heat-cfntools | A package of helper scripts (for example, cfn-hup, which handles updates to metadata and executes custom hooks). |
Note
The heat-cfntools package is only installed on images that are launched by heat into Compute servers.
See Also:
root user on the system being registered.
Important
- Run the subscription-manager register command to register the system to Red Hat Network.
  # subscription-manager register
- Enter your Red Hat Network user name when prompted.
  Username: admin@example.com
  Important
  Your Red Hat Network account must have Red Hat Enterprise Linux OpenStack Platform entitlements. If your Red Hat Network account does not have Red Hat Enterprise Linux OpenStack entitlements then you may register for access to the evaluation program at http://www.redhat.com/openstack/.
- Enter your Red Hat Network password when prompted.
  Password:
- When registration completes successfully, the system is assigned a unique identifier.
  The system has been registered with id: IDENTIFIER
root user. Repeat these steps on each system in the OpenStack environment.
- Use the subscription-manager list command to locate the pool identifier of the Red Hat Enterprise Linux subscription.
  # subscription-manager list --available
  +-------------------------------------------+
      Available Subscriptions
  +-------------------------------------------+
  Product Name:       Red Hat Enterprise Linux Server
  Product Id:         69
  Pool Id:            POOLID
  Quantity:           1
  Service Level:      None
  Service Type:       None
  Multi-Entitlement:  No
  Expires:            01/01/2022
  Machine Type:       physical
  ...
  The pool identifier is indicated in the Pool Id field associated with the Red Hat Enterprise Linux Server product. The identifier will be unique to your subscription. Take note of this identifier as it will be required to perform the next step.
  Note
  The output displayed in this step has been truncated to conserve space. All other available subscriptions will also be listed in the output of the command.
- Use the subscription-manager attach command to attach the subscription identified in the previous step.
  # subscription-manager attach --pool=POOLID
  Successfully attached a subscription for Red Hat Enterprise Linux Server.
  Replace POOLID with the unique identifier associated with your Red Hat Enterprise Linux Server subscription. This is the identifier that was located in the previous step.
- Run the yum repolist command. This command ensures that the repository configuration file /etc/yum.repos.d/redhat.repo exists and is up to date.
  # yum repolist
  Once repository metadata has been downloaded and examined, the list of repositories enabled will be displayed, along with the number of available packages.
  repo id              repo name                                   status
  rhel-6-server-rpms   Red Hat Enterprise Linux 6 Server (RPMs)    8,816
  repolist: 8,816
Note
The output displayed in this step may differ from that which appears when you run the yum repolist command on your system. In particular the number of packages listed will vary if or when additional packages are added to the rhel-6-server-rpms repository.
- Red Hat Cloud Infrastructure
- Red Hat Cloud Infrastructure (without Guest OS)
- Red Hat Enterprise Linux OpenStack Platform
- Red Hat Enterprise Linux OpenStack Platform Preview
- Red Hat Enterprise Linux OpenStack Platform (without Guest OS)
root user. Repeat these steps on each system in the environment.
- Use the subscription-manager list command to locate the pool identifier of the relevant Red Hat Cloud Infrastructure or Red Hat Enterprise Linux OpenStack Platform entitlement.
  # subscription-manager list --available
  +-------------------------------------------+
      Available Subscriptions
  +-------------------------------------------+
  ...
  Product Name:       ENTITLEMENT
  Product Id:         ID_1
  Pool Id:            POOLID_1
  Quantity:           3
  Service Level:      None
  Service Type:       None
  Multi-Entitlement:  No
  Expires:            02/14/2013
  Machine Type:       physical

  Product Name:       ENTITLEMENT
  Product Id:         ID_2
  Pool Id:            POOLID_2
  Quantity:           unlimited
  Service Level:      None
  Service Type:       None
  Multi-Entitlement:  No
  Expires:            02/14/2013
  Machine Type:       virtual
  ...
  Locate the entry in the list where the Product Name matches the name of the entitlement that will be used to access Red Hat Enterprise Linux OpenStack Platform packages. Take note of the pool identifier associated with the entitlement; this value is indicated in the Pool Id field. The pool identifier is unique to your subscription and will be required to complete the next step.
  Note
  The output displayed in this step has been truncated to conserve space. All other available subscriptions will also be listed in the output of the command.
- Use the subscription-manager attach command to attach the subscription identified in the previous step.
  # subscription-manager attach --pool=POOLID
  Successfully attached a subscription for ENTITLEMENT.
  Replace POOLID with the unique identifier associated with your Red Hat Cloud Infrastructure or Red Hat Enterprise Linux OpenStack Platform entitlement. This is the identifier that was located in the previous step.
- Install the yum-utils package. The yum-utils package is provided by the Red Hat Enterprise Linux subscription but provides the yum-config-manager utility required to complete configuration of the Red Hat Enterprise Linux OpenStack Platform software repositories.
  # yum install -y yum-utils
  Note that depending on the options selected during Red Hat Enterprise Linux installation the yum-utils package may already be installed.
- Use the yum-config-manager command to ensure that the correct software repositories are enabled. Each successful invocation of the command will display the updated repository configuration.
  - Ensure that the repository for Red Hat OpenStack 1.0 (Essex) has been disabled.
    # yum-config-manager --disable rhel-server-ost-6-preview-rpms
    Loaded plugins: product-id
    ==== repo: rhel-server-ost-6-preview-rpms ====
    [rhel-server-ost-6-preview-rpms]
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/6Server
    baseurl = https://cdn.redhat.com/content/beta/rhel/server/6/6Server/x86_64/openstack/essex/os
    cache = 0
    cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-preview-rpms
    cost = 1000
    enabled = False
    ...
    Note
    Yum treats the values False and 0 as equivalent. As a result the output on your system may instead contain this string: enabled = 0
    Note
    If you encounter this message in the output from yum-config-manager then the system has been registered to Red Hat Network using either RHN Classic or RHN Satellite.
    This system is receiving updates from RHN Classic or RHN Satellite.
    Consult the Red Hat Subscription Management Guide for more information on managing subscriptions using RHN Classic or RHN Satellite.
  - Ensure that the repository for Red Hat OpenStack 2.1 (Folsom) is disabled.
    # yum-config-manager --disable rhel-server-ost-6-folsom-rpms
    Loaded plugins: product-id
    ==== repo: rhel-server-ost-6-folsom-rpms ====
    [rhel-server-ost-6-folsom-rpms]
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/6Server
    baseurl = https://cdn.redhat.com/content/beta/rhel/server/6/6Server/x86_64/openstack/folsom/os
    cache = 0
    cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-folsom-rpms
    cost = 1000
    enabled = False
    ...
  - Ensure that the repository for Red Hat Enterprise Linux OpenStack Platform 3 (Grizzly) has been enabled.
    # yum-config-manager --enable rhel-server-ost-6-3-rpms
    Loaded plugins: product-id
    ==== repo: rhel-server-ost-6-3-rpms ====
    [rhel-server-ost-6-3-rpms]
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/6Server
    baseurl = https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/openstack/3/os
    cache = 0
    cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-3-rpms
    cost = 1000
    enabled = True
    ...
    Note
    Yum treats the values True and 1 as equivalent. As a result the output on your system may instead contain this string: enabled = 1
- Run the yum repolist command. This command ensures that the repository configuration file /etc/yum.repos.d/redhat.repo exists and is up to date.
  # yum repolist
  Once repository metadata has been downloaded and examined, the list of repositories enabled will be displayed, along with the number of available packages.
  repo id                    repo name                                   status
  rhel-6-server-rpms         Red Hat Enterprise Linux 6 Server (RPMs)    8,816
  rhel-server-ost-6-3-rpms   Red Hat OpenStack 3 (RPMs)                    138
  repolist: 10,058
  Note
  The output displayed in this step may differ from that which appears when you run the yum repolist command on your system. In particular the number of packages listed will vary if or when additional packages are added to the repositories.
- Install the yum-plugin-priorities package. The yum-plugin-priorities package provides a yum plug-in allowing configuration of per-repository priorities.
  # yum install -y yum-plugin-priorities
- Use the yum-config-manager command to set the priority of the Red Hat Enterprise Linux OpenStack Platform software repository to 1. This is the highest priority value supported by the yum-plugin-priorities plug-in.
  # yum-config-manager --enable rhel-server-ost-6-3-rpms \
        --setopt="rhel-server-ost-6-3-rpms.priority=1"
  Loaded plugins: product-id
  ==== repo: rhel-server-ost-6-3-rpms ====
  [rhel-server-ost-6-3-rpms]
  bandwidth = 0
  base_persistdir = /var/lib/yum/repos/x86_64/6Server
  baseurl = https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/openstack/3/os
  cache = 0
  cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-3-rpms
  cost = 1000
  enabled = True
  ...
  priority = 1
  ...
- Run the yum update command and reboot to ensure that the most up to date packages, including the kernel, are installed and running.
  # yum update -y
  # reboot
Run the yum repolist command to confirm the repository configuration again at any time.
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled.
- Memory
- A minimum of 2 GB of RAM is recommended. Add additional RAM to this requirement based on the amount of memory that you intend to make available to virtual machine instances.
- Disk Space
- A minimum of 50 GB of available disk space is recommended. Add additional disk space to this requirement based on the amount of space that you intend to make available to virtual machine instances. This figure varies based on both the size of each disk image you intend to create and whether you intend to share one or more disk images between multiple instances. 1 TB of disk space is recommended for a realistic environment capable of hosting multiple instances of varying sizes.
- Network Interface Cards
- 2 x 1 Gbps Network Interface Cards.
- Processor
- No specific CPU requirements are imposed by the networking services.
- Memory
- A minimum of 2 GB of RAM is recommended.
- Disk Space
- A minimum of 10 GB of available disk space is recommended. No additional disk space is required by the networking services other than that required to install the packages themselves. Some disk space, however, must be available for log and temporary files.
- Network Interface Cards
- 2 x 1 Gbps Network Interface Cards.
openstack-cinder-volume) and provide volumes for use by virtual machine instances or other cloud users. The block storage API (openstack-cinder-api) and scheduling services (openstack-cinder-scheduler) may run on the same nodes as the volume service or separately. In either case the primary hardware requirement of the block storage nodes is that there is enough block storage available to serve the needs of the OpenStack environment.
- The number of volumes that will be created in the environment.
- The average size of the volumes that will be created in the environment.
- Whether or not the storage backend will be configured to support redundancy.
- Whether or not the storage backend will be configured to create sparse volumes by default.
VOLUMES * SIZE * REDUNDANCY * UTILIZATION = TOTAL
- Replace
VOLUMESwith the number of volumes that it is expected will exist in the environment at any one time. - Replace
SIZEwith the expected average size of the volumes that will exist in the environment at any one time. - Replace
REDUNDANCYwith the expected number of redundant copies of each volume the backend storage will be configured to keep. Use1or skip this multiplication operation if no redundancy will be used. - Replace
UTILIZATIONwith the expected percentage of each volume that will actually be used. Use1, indicating 100%, if the use of sparse volumes will not be enabled.
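As a quick worked example of this formula, the following shell sketch computes the total for a set of purely illustrative figures (100 volumes, 20 GB average size, two redundant copies, 50% expected utilization). These values are assumptions chosen for demonstration only, not sizing recommendations.
VOLUMES=100      # expected number of volumes
SIZE=20          # average volume size in GB
REDUNDANCY=2     # redundant copies kept by the storage backend
UTILIZATION=0.5  # expected fraction of each volume actually used; use 1 if sparse volumes are not enabled
# bc handles the fractional utilization factor (bash arithmetic is integer-only)
TOTAL=$(echo "$VOLUMES * $SIZE * $REDUNDANCY * $UTILIZATION" | bc)
echo "Estimated block storage required: ${TOTAL} GB"   # prints 2000.0 GB for the values above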
Table of Contents
- 3. Installing the Database Server
- 4. Installing the Message Broker
- 5. Installing the OpenStack Identity Service
- 5.1. Identity Service Requirements
- 5.2. Installing the Packages
- 5.3. Creating the Identity Database
- 5.4. Configuring the Service
- 5.5. Starting the Identity Service
- 5.6. Creating the Identity Service Endpoint
- 5.7. Creating an Administrator Account
- 5.8. Creating a Regular User Account
- 5.9. Creating the Services Tenant
- 5.10. Validating the Identity Service Installation
- 6. Installing the OpenStack Object Storage Service
- 7. Installing the OpenStack Image Service
- 8. Installing OpenStack Block Storage
- 9. Installing the OpenStack Networking Service
- 9.1. OpenStack Networking Installation Overview
- 9.2. Networking Prerequisite Configuration
- 9.3. Common Networking Configuration
- 9.4. Configuring the Networking Service
- 9.5. Configuring the DHCP Agent
- 9.6. Configuring a Provider Network
- 9.7. Configuring the Plug-in Agent
- 9.8. Configuring the L3 Agent
- 9.9. Validating the OpenStack Networking Installation
- 10. Installing the OpenStack Compute Service
- 11. Installing the Dashboard
- mysql-server
- Provides the MySQL database server.
- mysql
- Provides the MySQL client tools and libraries. Installed as a dependency of the mysql-server package.
root user.
- Install the required packages using the
yum command:
  # yum install -y mysql-server
root user.
- Open the
/etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on port 3306 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
  -A INPUT -p tcp -m multiport --dports 3306 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect.
  # service iptables restart
iptables firewall is now configured to allow incoming connections to the MySQL database service on port 3306.
root user.
- Use the
service command to start the mysqld service.
  # service mysqld start
- Use the chkconfig command to ensure that the mysqld service will be started automatically in the future.
  # chkconfig mysqld on
mysqld service has been started.
root user account. This account acts as the database administrator account.
root database user once the database service has been started for the first time.
root user.
- Use the
mysqladmin command to set the password for the root database user.
  # /usr/bin/mysqladmin -u root password "PASSWORD"
  Replace PASSWORD with the intended password.
- The mysqladmin command can also be used to change the password of the root database user if required.
  # /usr/bin/mysqladmin -u root -pOLDPASS password NEWPASS
  Replace OLDPASS with the existing password and NEWPASS with the password that is intended to replace it.
root, password has been set. This password will be required when logging in to create databases and database users.
- qpid-cpp-server
- Provides the Qpid message broker.
- qpid-cpp-server-ssl
- Provides the Qpid plug-in enabling support for SSL as a transport layer for AMQP traffic. This package is optional but recommended to support secure configuration of Qpid.
root user.
- Install the required packages using the
yum command:
  # yum install -y qpid-cpp-server qpid-cpp-server-ssl
/etc/sasl2/qpidd.conf on the broker. To narrow the allowed mechanisms to a smaller subset, edit this file and remove mechanisms.
Important
SASL Mechanisms
- ANONYMOUS
- Clients are able to connect anonymously. Note that when the broker is started with auth=no, authentication is disabled. The PLAIN and ANONYMOUS authentication mechanisms are available as identification mechanisms, but they have no authentication value.
- PLAIN
- Passwords are passed in plain text between the client and the broker. This is not a secure mechanism, and should be used in development environments only. If PLAIN is used in production, it should only be used over SSL connections, where the SSL encryption of the transport protects the password. Note that when the broker is started with auth=no, authentication is disabled. The PLAIN and ANONYMOUS authentication mechanisms are available as identification mechanisms, but they have no authentication value.
- DIGEST-MD5
- MD5 hashed passwords are exchanged using HTTP headers. This is a medium strength security protocol.
The following table lists the cyrus-sasl-* package(s) that need to be installed on the server for each authentication mechanism to be available.
Table 4.1.
| Method | Package | /etc/sasl2/qpidd.conf entry |
|---|---|---|
| ANONYMOUS | - | - |
| PLAIN | cyrus-sasl-plain | mech_list: PLAIN |
| DIGEST-MD5 | cyrus-sasl-md5 | mech_list: DIGEST-MD5 |
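For example, to restrict the broker to a single mechanism, /etc/sasl2/qpidd.conf could be left with only the corresponding mech_list entry from the table above (a minimal sketch; pick the mechanism appropriate for your environment):
mech_list: DIGEST-MD5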
Procedure 4.1. Configure SASL using a Local Password File
guest, which are included in the database at /var/lib/qpidd/qpidd.sasldb on installation, or add your own accounts.
- Add new users to the database by using the saslpasswd2 command. The User ID for authentication and ACL authorization uses the form user-id@domain.
  Ensure that the correct realm has been set for the broker. This can be done by editing the configuration file or using the -u option. The default realm for the broker is QPID.
  # saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID new_user_name
- Existing user accounts can be listed by using the -f option:
  # sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
Note
The user database at /var/lib/qpidd/qpidd.sasldb is readable only by the qpidd user. If you start the broker from a user other than the qpidd user, you will need to either modify the configuration file, or turn authentication off. Note also that this file must be readable by the qpidd user. If you delete and recreate this file, make sure the qpidd user has read permissions, or authentication attempts will fail.
- To switch authentication on or off, add the appropriate line to the /etc/qpidd.conf configuration file:
  auth=no
  auth=yes
The SASL configuration file is in /etc/sasl2/qpidd.conf for Red Hat Enterprise Linux.
qpidd is provided by Mozilla's Network Security Services Library (NSS).
- You will need a certificate that has been signed by a Certification Authority (CA). This certificate will also need to be trusted by your client. If you require client authentication in addition to server authentication, the client certificate will also need to be signed by a CA and trusted by the broker.
  In the broker, SSL is provided through the ssl.so module. This module is installed and loaded by default in MRG Messaging. To enable the module, you need to specify the location of the database containing the certificate and key to use. This is done using the ssl-cert-db option.
  The certificate database is created and managed by the Mozilla Network Security Services (NSS) certutil tool. Information on this utility can be found on the Mozilla website, including tutorials on setting up and testing SSL connections. The certificate database will generally be password protected. The safest way to specify the password is to place it in a protected file, use the password file when creating the database, and specify the password file with the ssl-cert-password-file option when starting the broker.
  The following script shows how to create a certificate database using certutil:
  mkdir ${CERT_DIR}
  certutil -N -d ${CERT_DIR} -f ${CERT_PW_FILE}
  certutil -S -d ${CERT_DIR} -n ${NICKNAME} -s "CN=${NICKNAME}" -t "CT,," -x -f ${CERT_PW_FILE} -z /usr/bin/certutil
  When starting the broker, set ssl-cert-password-file to the value of ${CERT_PW_FILE}, set ssl-cert-db to the value of ${CERT_DIR}, and set ssl-cert-name to the value of ${NICKNAME}.
- The following SSL options can be used when starting the broker (a combined start-up example appears at the end of this section):
  --ssl-use-export-policy
  - Use NSS export policy.
  --ssl-cert-password-file PATH
  - Required. Plain-text file containing the password to use for accessing the certificate database.
  --ssl-cert-db PATH
  - Required. Path to the directory containing the certificate database.
  --ssl-cert-name NAME
  - Name of the certificate to use. Default is localhost.localdomain.
  --ssl-port NUMBER
  - Port on which to listen for SSL connections. If no port is specified, port 5671 is used. If the SSL port chosen is the same as the port for non-SSL connections (i.e. if the --ssl-port and --port options are the same), both SSL encrypted and unencrypted connections can be established to the same port. However, in this configuration there is no support for IPv6.
  --ssl-require-client-authentication
  - Require SSL client authentication (i.e. verification of a client certificate) during the SSL handshake. This occurs before SASL authentication, and is independent of SASL. This option enables the EXTERNAL SASL mechanism for SSL connections. If the client chooses the EXTERNAL mechanism, the client's identity is taken from the validated SSL certificate, using the CN, and appending any DCs to create the domain. For instance, if the certificate contains the properties CN=bob,DC=acme,DC=com, the client's identity is bob@acme.com. If the client chooses a different SASL mechanism, the identity taken from the client certificate will be replaced by that negotiated during the SASL handshake.
  --ssl-sasl-no-dict
  - Do not accept SASL mechanisms that can be compromised by dictionary attacks. This prevents a weaker mechanism being selected instead of EXTERNAL, which is not vulnerable to dictionary attacks.
  --require-encryption
  - This will cause qpidd to accept only encrypted connections, that is, clients using EXTERNAL SASL on the SSL port, or GSSAPI on the TCP port.
pk12util -o <p12exportfile> -n <certname> -d <certdir> -w <p12filepwfile>
openssl pkcs12 -in <p12exportfile> -out <clcertname> -nodes -clcerts -passin pass:<p12pw>
man openssl.
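Drawing the SSL options above together, a broker start-up line might resemble the following sketch. The database path, password file, certificate nickname, and port shown here are placeholder assumptions, not defaults shipped with the package:
# qpidd --ssl-cert-db /path/to/certdb \
        --ssl-cert-password-file /path/to/cert_pw_file \
        --ssl-cert-name broker.example.com \
        --ssl-port 5671 \
        --require-encryption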
5672.
iptables. You can configure the firewall by editing the iptables configuration file, namely /etc/sysconfig/iptables. To do so:
- Open the
/etc/sysconfig/iptablesfile in a text editor. - Add an
INPUT rule allowing incoming connections on port 5672 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
  -A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect.
  # service iptables restart
#service iptables status
qpidd service must be started before the broker can commence sending and receiving messages.
- Use the
service command to start the service.
  # service qpidd start
- Use the chkconfig command to enable the service permanently.
  # chkconfig qpidd on
qpidd service has been started.
- 5.1. Identity Service Requirements
- 5.2. Installing the Packages
- 5.3. Creating the Identity Database
- 5.4. Configuring the Service
- 5.5. Starting the Identity Service
- 5.6. Creating the Identity Service Endpoint
- 5.7. Creating an Administrator Account
- 5.8. Creating a Regular User Account
- 5.9. Creating the Services Tenant
- 5.10. Validating the Identity Service Installation
- Access to Red Hat Network or equivalent service provided by a tool such as Satellite.
- A network interface that is addressable by all other systems that will host OpenStack services.
- Network access to the database server.
- Network access to the directory server if using an LDAP backend.
- openstack-keystone
- Provides the OpenStack Identity service.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
root user.
- Install the required packages using the
yum command:
  # yum install -y openstack-keystone \
       openstack-utils \
       openstack-selinux
root user (or at least as a user with the correct permissions: create db, create user, grant permissions).
- Connect to the database service using the
mysql command.
  # mysql -u root -p
- Create the keystone database.
  mysql> CREATE DATABASE keystone;
- Create a keystone database user and grant it access to the keystone database.
  mysql> GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'PASSWORD';
  mysql> GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'PASSWORD';
  Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
  mysql> FLUSH PRIVILEGES;
- Exit the mysql client command.
  mysql> quit
root user.
- Use OpenSSL to generate an initial service token and save it in the
SERVICE_TOKEN environment variable.
  # export SERVICE_TOKEN=$(openssl rand -hex 10)
- Store the value of the administration token in a file for future use.
  # echo $SERVICE_TOKEN > ~/ks_admin_token
- Use the openstack-config tool to set the value of the admin_token configuration key to that of the newly created token.
  # openstack-config --set /etc/keystone/keystone.conf \
       DEFAULT admin_token $SERVICE_TOKEN
/etc/keystone/keystone.conf file. It must be updated to point to a valid database server before starting the service.
root user on the server hosting the identity service.
- Use the
openstack-config command to set the value of the connection configuration key.
  # openstack-config --set /etc/keystone/keystone.conf \
       sql connection mysql://USER:PASS@IP/DB
  Replace:
  - USER with the database user name the identity service is to use, usually keystone.
  - PASS with the password of the chosen database user.
  - IP with the IP address or host name of the database server.
  - DB with the name of the database that has been created for use by the identity service, usually keystone.
keystone-manage pki_setup command. It is however possible to manually create and sign the required certificates using a third party certificate authority. If using third party certificates the identity service configuration must be manually updated to point to the certificates and supporting files.
[signing] section of the /etc/keystone/keystone.conf configuration file that are relevant to the PKI setup are:
- ca_certs
- Specifies the location of the certificate for the authority that issued the certificate denoted by the certfile configuration key. The default value is /etc/keystone/ssl/certs/ca.pem.
- ca_key
- Specifies the key of the certificate authority that issued the certificate denoted by the certfile configuration key. The default value is /etc/keystone/ssl/certs/cakey.pem.
- ca_password
- Specifies the password, if applicable, required to open the certificate authority file. The default action if no value is specified is not to use a password.
- certfile
- Specifies the location of the certificate that must be used to verify tokens. The default value of /etc/keystone/ssl/certs/signing_cert.pem is used if no value is specified.
- keyfile
- Specifies the location of the private key that must be used when signing tokens. The default value of /etc/keystone/ssl/private/signing_key.pem is used if no value is specified.
- token_format
- Specifies the algorithm to use when generating tokens. Possible values are UUID and PKI. The default value is PKI.
root user.
- Run the
keystone-manage pki_setup command.
  # keystone-manage pki_setup \
       --keystone-user keystone \
       --keystone-group keystone
- Ensure that the keystone user owns the /var/log/keystone/ and /etc/keystone/ssl/ directories.
  # chown -R keystone:keystone /var/log/keystone \
       /etc/keystone/ssl/
Important
authlogin_nsswitch_use_ldap Boolean enabled on any client machine accessing the LDAP backend. Run the following command on each client machine as the root user to enable the Boolean and make it persistent across reboots:
# setsebool -P authlogin_nsswitch_use_ldap on
dn: cn=example,cn=org
dc: openstack
objectClass: dcObject
objectClass: organizationalUnit
ou: openstack

dn: ou=Groups,cn=example,cn=org
objectClass: top
objectClass: organizationalUnit
ou: groups

dn: ou=Users,cn=example,cn=org
objectClass: top
objectClass: organizationalUnit
ou: users

dn: ou=Roles,cn=example,cn=org
objectClass: top
objectClass: organizationalUnit
ou: roles
/etc/keystone/keystone.conf are:
[ldap]
url = ldap://localhost
user = dc=Manager,dc=openstack,dc=org
password = badpassword
suffix = dc=openstack,dc=org
use_dumb_member = False
allow_subtree_delete = False
user_tree_dn = ou=Users,dc=openstack,dc=com
user_objectclass = inetOrgPerson
tenant_tree_dn = ou=Groups,dc=openstack,dc=com
tenant_objectclass = groupOfNames
role_tree_dn = ou=Roles,dc=example,dc=com
role_objectclass = organizationalRole
The objectClass posixAccount described in RFC 2307 is commonly found in directory server implementations.
objectclass, then the uid field is likely to be named uidNumber and the username field is likely to be named either uid or cn. To change these two fields, the corresponding entries in the identity service configuration file are:
[ldap]
user_id_attribute = uidNumber
user_name_attribute = cn
[ldap]
user_allow_create = False
user_allow_update = False
user_allow_delete = False
tenant_allow_create = True
tenant_allow_update = True
tenant_allow_delete = True
role_allow_create = True
role_allow_update = True
role_allow_delete = True
[ldap]
user_filter = (memberof=CN=openstack-users,OU=workgroups,DC=openstack,DC=com)
tenant_filter =
role_filter =
[ldap]
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_default = 512
userAccountControl is an integer and the enabled flag is stored in bit 1 (mask value 2). The value of the user_enabled_attribute is combined with the user_enabled_mask using a bitwise AND; if the resultant value matches the mask then the account is disabled.
enabled_nomask. This is required to allow the restoration of the value when enabling or disabling a user. This needs to be done because the value contains more than just the status of the user. Setting the value of the user_enabled_mask configuration key is required in order to create a default value on the integer attribute (512 = NORMAL ACCOUNT on Active Directory).
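To illustrate how the mask is evaluated with the Active Directory values above (a mask of 2 and a default of 512), the bitwise check can be reproduced with shell arithmetic; the attribute values used here are hypothetical examples:
# echo $(( 512 & 2 ))    # 0 -> bit not set, account enabled (NORMAL ACCOUNT)
# echo $(( 514 & 2 ))    # 2 -> result matches the mask, account disabled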
[ldap]
user_objectclass = person
user_id_attribute = cn
user_name_attribute = cn
user_mail_attribute = mail
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_default = 512
user_attribute_ignore = tenant_id,tenants
tenant_objectclass = groupOfNames
tenant_id_attribute = cn
tenant_member_attribute = member
tenant_name_attribute = ou
tenant_desc_attribute = description
tenant_enabled_attribute = extensionName
tenant_attribute_ignore =
role_objectclass = organizationalRole
role_id_attribute = cn
role_name_attribute = ou
role_member_attribute = roleOccupant
role_attribute_ignore =
[ldap]
use_tls = True
tls_cacertfile = /etc/keystone/ssl/certs/cacert.pem
tls_cacertdir = /etc/keystone/ssl/certs/
tls_req_cert = demand
If both tls_cacertfile and tls_cacertdir are set then tls_cacertfile will be used and tls_cacertdir is ignored. Furthermore, valid options for tls_req_cert are demand, never, and allow. These correspond to the standard options permitted by the TLS_REQCERT TLS option.
root user.
- Open the
/etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on ports 5000 and 35357 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
  -A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect.
  # service iptables restart
iptables firewall is now configured to allow incoming connections to the identity service on ports 5000 and 35357.
- Use the
su command to switch to the keystone user and run the keystone-manage db_sync command to initialize and populate the database identified in /etc/keystone/keystone.conf.
  # su keystone -s /bin/sh -c "keystone-manage db_sync"
root user.
- Use the
service command to start the openstack-keystone service.
  # service openstack-keystone start
- Use the chkconfig command to ensure that the openstack-keystone service will be started automatically in the future.
  # chkconfig openstack-keystone on
openstack-keystone service has been started.
root user.
Set the SERVICE_TOKEN Environment Variable
Set the SERVICE_TOKEN environment variable to the administration token. This is done by reading the token file created when setting the administration token.
  # export SERVICE_TOKEN=`cat ~/ks_admin_token`
Set the SERVICE_ENDPOINT Environment Variable
Set the SERVICE_ENDPOINT environment variable to point to the server hosting the identity service.
  # export SERVICE_ENDPOINT="http://IP:35357/v2.0"
  Replace IP with the IP address or host name of your identity server.
Create a Service Entry
Create a service entry for the identity service using the keystone service-create command.
  # keystone service-create --name=keystone --type=identity \
       --description="Keystone Identity Service"
  +-------------+----------------------------------+
  | Property    | Value                            |
  +-------------+----------------------------------+
  | description | Keystone Identity Service        |
  | id          | a8bff1db381f4751bd8ac126464511ae |
  | name        | keystone                         |
  | type        | identity                         |
  +-------------+----------------------------------+
  Take note of the unique identifier assigned to the entry. This value will be required in subsequent steps.
Create an Endpoint for the API
Create an endpoint entry for the v2.0 API identity service using the keystone endpoint-create command.
  # keystone endpoint-create \
       --service_id ID \
       --publicurl 'http://IP:5000/v2.0' \
       --adminurl 'http://IP:35357/v2.0' \
       --internalurl 'http://IP:5000/v2.0'
  +-------------+----------------------------------+
  | Property    | Value                            |
  +-------------+----------------------------------+
  | adminurl    | http://IP:35357/v2.0             |
  | id          | 1295011fdc874a838f702518e95a0e13 |
  | internalurl | http://IP:5000/v2.0              |
  | publicurl   | http://IP:5000/v2.0              |
  | region      | regionOne                        |
  | service_id  | ID                               |
  +-------------+----------------------------------+
  Replace ID with the service identifier returned in the previous step. Replace IP with the IP address or host name of the identity server.
Important
Ensure that the publicurl, adminurl, and internalurl parameters include the correct IP address for your Keystone identity server.
Note
By default, the endpoint is created in the default region, regionOne. If you need to specify a different region when creating an endpoint use the --region argument to provide it.
- Set the
SERVICE_TOKEN environment variable to the value of the administration token. This is done by reading the token file created when setting the administration token:
  # export SERVICE_TOKEN=`cat ~/ks_admin_token`
- Set the SERVICE_ENDPOINT environment variable to point to the server hosting the identity service:
  # export SERVICE_ENDPOINT="http://IP:35357/v2.0"
  Replace IP with the IP address or host name of your identity server.
- Use the keystone user-create command to create an admin user:
  # keystone user-create --name admin --pass PASSWORD
  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | email    |                                  |
  | enabled  | True                             |
  | id       | 94d659c3c9534095aba5f8475c87091a |
  | name     | admin                            |
  | tenantId |                                  |
  +----------+----------------------------------+
  Replace PASSWORD with a secure password for the account. Take note of the created user's ID as it will be required in subsequent steps.
- Use the keystone role-create command to create an admin role:
  # keystone role-create --name admin
  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | id       | 78035c5d3cd94e62812d6d37551ecd6a |
  | name     | admin                            |
  +----------+----------------------------------+
  Take note of the created role's ID as it will be required in subsequent steps.
- Use the keystone tenant-create command to create an admin tenant:
  # keystone tenant-create --name admin
  +-------------+----------------------------------+
  | Property    | Value                            |
  +-------------+----------------------------------+
  | description |                                  |
  | enabled     | True                             |
  | id          | 6f8e3e36c4194b86b9a9b55d4b722af3 |
  | name        | admin                            |
  +-------------+----------------------------------+
  Take note of the admin tenant's ID as it will be required in the next step.
- Now that the user account, role, and tenant have been created, the relationship between them must be explicitly defined using the keystone user-role-add command:
  # keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
  Replace the user, role, and tenant IDs with those obtained in the previous steps.
- The newly created admin account will be used for future management of the identity service. To facilitate authentication, create a keystonerc_admin file in a secure location such as the home directory of the root user.
  Add these lines to the file to set the environment variables that will be used for authentication:
  export OS_USERNAME=admin
  export OS_TENANT_NAME=admin
  export OS_PASSWORD=PASSWORD
  export OS_AUTH_URL=http://IP:35357/v2.0/
  export PS1='[\u@\h \W(keystone_admin)]\$ '
  Replace PASSWORD with the password of the admin user and replace IP with the IP address or host name of the identity server.
- Run the source command on the file to load the environment variables used for authentication:
  # source ~/keystonerc_admin
keystonerc_admin file has also been created for authenticating as the admin user.
- Load identity credentials from the
~/keystonerc_admin file that was generated when the administrative user was created:
  # source ~/keystonerc_admin
- Use the keystone user-create command to create a regular user:
  # keystone user-create --name USER --pass PASSWORD
  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | email    |                                  |
  | enabled  | True                             |
  | id       | b8275d7494dd4c9cb3f69967a11f9765 |
  | name     | USER                             |
  | tenantId |                                  |
  +----------+----------------------------------+
  Replace USER with the user name that you would like to use for the account. Replace PASSWORD with a secure password for the account. Take note of the created user's ID as it will be required in subsequent steps.
- Use the keystone role-create command to create a Member role. The Member role is the default role required for access to the dashboard:
  # keystone role-create --name Member
  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | id       | 78035c5d3cd94e62812d6d37551ecd6a |
  | name     | Member                           |
  +----------+----------------------------------+
  Take note of the created role's ID as it will be required in subsequent steps.
- Use the keystone tenant-create command to create a tenant:
  # keystone tenant-create --name TENANT
  +-------------+----------------------------------+
  | Property    | Value                            |
  +-------------+----------------------------------+
  | description |                                  |
  | enabled     | True                             |
  | id          | 6f8e3e36c4194b86b9a9b55d4b722af3 |
  | name        | TENANT                           |
  +-------------+----------------------------------+
  Replace TENANT with the name that you wish to give to the tenant. Take note of the created tenant's ID as it will be required in the next step.
- Now that the user account, role, and tenant have been created, the relationship between them must be explicitly defined using the keystone user-role-add command:
  # keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
  Replace the user, role, and tenant IDs with those obtained in the previous steps.
- To facilitate authentication create a keystonerc_user file in a secure location such as the home directory of the root user.
  Set these environment variables that will be used for authentication:
  export OS_USERNAME=USER
  export OS_TENANT_NAME=TENANT
  export OS_PASSWORD=PASSWORD
  export OS_AUTH_URL=http://IP:5000/v2.0/
  export PS1='[\u@\h \W(keystone_user)]\$ '
  Replace USER and TENANT with the name of the new user and tenant respectively. Replace PASSWORD with the password of the user and replace IP with the IP address or host name of the identity server.
keystonerc_user file has also been created for authenticating as the created user.
- Distributed, typically one service tenant is created for each endpoint on which services are running (excepting the Identity and Dashboard services).
- Deployed on a single node, only one service tenant is required (but of course this is just one option; more can be created for administrative purposes).
services tenant.
Note
services tenant:
- Run the
source command on the file containing the environment variables used to identify the Identity service administrator.
  # source ~/keystonerc_admin
- Create the services tenant in the Identity service:
  # keystone tenant-create --name services --description "Services Tenant"
  +-------------+----------------------------------+
  | Property    | Value                            |
  +-------------+----------------------------------+
  | description | Services Tenant                  |
  | enabled     | True                             |
  | id          | 7e193e36c4194b86b9a9b55d4b722af3 |
  | name        | services                         |
  +-------------+----------------------------------+
Note
# keystone tenant-list
These validation steps require the keystonerc_admin and keystonerc_user files containing the environment variables required to authenticate as the administrator user and a regular user respectively.
- Run the source command on the file containing the environment variables used to identify the Identity service administrator:
  # source ~/keystonerc_admin
- Run the keystone user-list command to authenticate with the Identity service and list the users defined in the system:
  # keystone user-list
  +----------------------------------+-------+---------+-------+
  | id                               | name  | enabled | email |
  +----------------------------------+-------+---------+-------+
  | 94d659c3c9534095aba5f8475c87091a | admin | True    |       |
  | b8275d7494dd4c9cb3f69967a11f9765 | USER  | True    |       |
  +----------------------------------+-------+---------+-------+
  The list of users defined in the system is displayed. If the list is not displayed then there is an issue with the installation.
  - If the message returned indicates a permissions or authorization issue then check that the administrator user account, tenant, and role were created properly. Also ensure that the three objects are linked correctly.
    Unable to communicate with identity service: {"error": {"message": "You are not authorized to perform the requested action: admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)
  - If the message returned indicates a connectivity issue then verify that the openstack-keystone service is running and that iptables is configured to allow connections on ports 5000 and 35357.
    Authorization Failed: [Errno 111] Connection refused
- Run the source command on the file containing the environment variables used to identify the regular Identity service user:
  # source ~/keystonerc_user
- Run the keystone user-list command to authenticate with the Identity service and list the users defined in the system:
  # keystone user-list
  Unable to communicate with identity service: {"error": {"message": "You are not authorized to perform the requested action: admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)
  An error message is displayed indicating that the user is Not Authorized to run the command. If the error message is not displayed but instead the user list appears then the regular user account was incorrectly attached to the admin role.
- Run the keystone token-get command to verify that the regular user account is able to run commands that it is authorized to access:
  # keystone token-get
  +-----------+----------------------------------+
  | Property  | Value                            |
  +-----------+----------------------------------+
  | expires   | 2013-05-07T13:00:24Z             |
  | id        | 5f6e089b24d94b198c877c58229f2067 |
  | tenant_id | f7e8628768f2437587651ab959fbe239 |
  | user_id   | 8109f0e3deaf46d5990674443dcf7db7 |
  +-----------+----------------------------------+
- Proxy Service
- The proxy service uses the object ring to decide where to direct newly uploaded objects. It updates the relevant container database to reflect the presence of a new object. If a newly uploaded object goes to a new container, the proxy service updates the relevant account database to reflect the new container.The proxy service also directs get requests to one of the nodes where a replica of the requested object is stored, either randomly, or based on response time from the node.
- Object Service
- The object service is responsible for storing data objects in partitions on disk devices. Each partition is a directory. Each object is held in a subdirectory of its partition directory. An MD5 hash of the path to the object is used to identify the object itself (a rough illustration follows this list).
- Container Service
- The container service maintains databases of objects in containers. There is one database file for each container, and the database files are replicated across the cluster. Containers are defined when objects are put in them. Containers make finding objects faster by limiting object listings to specific container namespaces.
- Account Service
- The account service maintains databases of all of the containers accessible by any given account. There is one database file for each account, and the database files are replicated across the cluster. Any account has access to a particular group of containers. An account maps to a tenant in the Identity Service.
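The path-to-hash mapping can be pictured with a plain md5sum call. This is a rough sketch only: the real calculation also mixes in the per-cluster hash prefix and suffix from /etc/swift/swift.conf, and the ring's partition power then maps the hash to a partition. The object path shown is hypothetical.
# Illustrative only; not the exact hash value Swift stores on disk.
$ echo -n "/AUTH_test/photos/cat.jpg" | md5sum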
Common Object Storage Service Deployment Configurations
- All services on all nodes.
- Simplest to set up.
- Dedicated proxy nodes, all other services combined on other nodes.
- The proxy service is CPU and I/O intensive. The other services are disk and I/O intensive. This configuration allows you to optimize your hardware usage.
- Dedicated proxy nodes, dedicated object service nodes, container and account services combined on other nodes.
- The proxy service is CPU and I/O intensive. The container and account services are more disk and I/O intensive than the object service. This configuration allows you to optimize your hardware usage even more.
- Supported Filesystems
- The Object Storage Service stores objects in filesystems. Currently, XFS and ext4 are supported. The ext4 filesystem is recommended. Your filesystem must be mounted with xattr enabled. For example, this is from /etc/fstab:
  /dev/sdb1 /srv/node/d1 ext4 acl,user_xattr 0 0
- Acceptable Mountpoints
- The Object Storage service expects devices to be mounted at
/srv/node/.
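For example, an unused device could be formatted and mounted under /srv/node/ along the following lines. The device name /dev/sdb1 and the mount point name d1 are illustrative values, and the mount options mirror the /etc/fstab example above:
# Example values only: replace /dev/sdb1 and d1 with your own device and mount point.
# mkfs.ext4 /dev/sdb1
# mkdir -p /srv/node/d1
# mount -o acl,user_xattr /dev/sdb1 /srv/node/d1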
Primary OpenStack Object Storage packages
- openstack-swift-proxy
- Proxies requests for objects.
- openstack-swift-object
- Stores data objects of up to 5GB.
- openstack-swift-container
- Maintains a database that tracks all of the objects in each container.
- openstack-swift-account
- Maintains a database that tracks all of the containers in each account.
OpenStack Object Storage dependencies
- openstack-swift
- Contains code common to the specific services.
- openstack-swift-plugin-swift3
- The swift3 plugin for OpenStack Object Storage.
- memcached
- Soft dependency of the proxy server, caches authenticated clients rather than making them reauthorize at every interaction.
- openstack-utils
- Provides utilities for configuring OpenStack.
Procedure 6.1. Installing the Object Storage Service Packages
- Install the required packages using the yum command as the root user:
  # yum install -y openstack-swift-proxy \
     openstack-swift-object \
     openstack-swift-container \
     openstack-swift-account \
     openstack-utils \
     memcached
Prerequisites:
- Create the swift user, who has the admin role in the services tenant.
- Create the swift service entry and assign it an endpoint.
These steps assume access to the keystonerc_admin file (which contains administrator credentials) and that the keystone command-line utility is installed.
Procedure 6.2. Configuring the Identity Service to work with the Object Storage Service
- Set up the shell to access Keystone as the admin user:
  $ source ~/keystonerc_admin
- Create the swift user and set its password by replacing PASSWORD with your chosen password:
  $ keystone user-create --name swift --pass PASSWORD
  Take note of the created user's ID as it will be used in subsequent steps.
- Get the ID of the admin role:
  $ keystone role-list | grep admin
  If no admin role exists, create one:
  $ keystone role-create --name admin
- Get the ID of the services tenant:
  $ keystone tenant-list | grep services
  If no services tenant exists, create one:
  $ keystone tenant-create --name services --description "Services Tenant"
  This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant.
- Add the swift user to the services tenant with the admin role:
  $ keystone user-role-add --role-id ROLEID --tenant-id TENANTID --user-id USERID
  Replace the user, role, and tenant IDs with those obtained in the previous steps.
- Create the swift Object Storage service entry:
  $ keystone service-create --name swift --type object-store \
     --description "Swift Storage Service"
  Take note of the created service's ID as it will be used in the next step.
- Create the swift endpoint entry:
  $ keystone endpoint-create --service_id SERVICEID \
     --publicurl "http://IP:8080/v1/AUTH_\$(tenant_id)s" \
     --adminurl "http://IP:8080/v1" \
     --internalurl "http://IP:8080/v1/AUTH_\$(tenant_id)s"
  Replace SERVICEID with the identifier returned by the keystone service-create command. Replace IP with the IP address or fully qualified domain name of the system hosting the Object Storage Proxy service.
Storage node devices must be formatted with ext4 or XFS, and mounted under the /srv/node/ directory. All of the services that will run on a given node must be enabled, and their ports opened.
Procedure 6.3. Configuring the Object Storage Service Storage Nodes
- Format your devices using the ext4 or XFS filesystem. Make sure that xattrs are enabled.
- Add your devices to the /etc/fstab file to ensure that they are mounted under /srv/node/ at boot time. Use the blkid command to find your device's unique ID, and mount the device using its unique ID.
  Note
  If using ext4, ensure that extended attributes are enabled by mounting the filesystem with the user_xattr option. (In XFS, extended attributes are enabled by default.)
- Open the TCP ports used by each service running on each node. By default, the account service uses port 6002, the container service uses port 6001, and the object service uses port 6000.
# iptables -A INPUT -p tcp -m multiport --dports 6000,6001,6002,873 -j ACCEPT # service iptables save # service iptables restart
  The -A parameter appends the rule to the end of the iptables firewall. Make sure that the rule does not fall after a reject-with icmp-host-prohibited rule.
- Change the owner of the contents of /srv/node/ to swift:swift with the chown command.
  # chown -R swift:swift /srv/node/
- Set the SELinux context correctly for all directories under /srv/node/ with the restorecon command.
  # restorecon -R /srv
- Use the openstack-config command to add a hash prefix and suffix to your /etc/swift/swift.conf.
  # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \
     $(openssl rand -hex 10)
  # openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \
     $(openssl rand -hex 10)
  These details are required for finding and placing data on all of your nodes. Back up /etc/swift/swift.conf.
- Use the openstack-config command to set the IP address that your storage services will listen on. Run these commands for every service on every node in your Object Storage cluster.
  # openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip node_ip_address
  # openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip node_ip_address
  # openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip node_ip_address
  The DEFAULT argument specifies the DEFAULT section of the service configuration file. Replace node_ip_address with the IP address of the node you are configuring.
- Copy /etc/swift/swift.conf from the node you are currently configuring to all of your Object Storage Service nodes.
  Important
  The /etc/swift/swift.conf file must be identical on all of your Object Storage Service nodes.
- Start the services which will run on your node.
  # service openstack-swift-account start
  # service openstack-swift-container start
  # service openstack-swift-object start
- Use the chkconfig command to make sure the services automatically start at boot time.
  # chkconfig openstack-swift-account on
  # chkconfig openstack-swift-container on
  # chkconfig openstack-swift-object on
Devices are now mounted under /srv/node/. Any service running on the node has been enabled, and any ports used by services on the node have been opened.
The proxy service is the service to which all object gets and puts are directed.
Procedure 6.4. Configuring the Object Storage Service Proxy Service
- Update the configuration file for the proxy server with the correct authentication details for the appropriate service user:
  # openstack-config --set /etc/swift/proxy-server.conf \
     filter:authtoken auth_host IP
  # openstack-config --set /etc/swift/proxy-server.conf \
     filter:authtoken admin_tenant_name services
  # openstack-config --set /etc/swift/proxy-server.conf \
     filter:authtoken admin_user swift
  # openstack-config --set /etc/swift/proxy-server.conf \
     filter:authtoken admin_password PASSWORD
  Where:
  IP - The IP address or host name of the Identity server.
  services - The name of the tenant that was created for the use of the Object Storage service (previous examples set this to services).
  swift - The name of the service user that was created for the Object Storage service (previous examples set this to swift).
  PASSWORD - The password associated with the service user.
- Start the memcached and openstack-swift-proxy services using the service command:
  # service memcached start
  # service openstack-swift-proxy start
- Use the chown command to change the ownership of the keystone signing directory:
  # chown swift:swift /tmp/keystone-signing-swift
- Enable the memcached and openstack-swift-proxy services permanently using the chkconfig command:
  # chkconfig memcached on
  # chkconfig openstack-swift-proxy on
- Allow incoming connections to the Swift proxy server by adding this firewall rule to the /etc/sysconfig/iptables configuration file:
  -A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT
Important
  This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information regarding the creation of more restrictive firewall rules, refer to the Red Hat Enterprise Linux 6 Security Guide.
- Use the service command to restart the iptables service for the new rule to take effect:
  # service iptables save
  # service iptables restart
Table 6.1. Parameters used when building ring files
| Ring File Parameter | Description |
|---|---|
| Partition power | 2 ^ partition power = partition count. The partition count is rounded up after calculation (see the worked example after this table). |
| Replica count | The number of times that your data will be replicated in the cluster. |
| min_part_hours | Minimum number of hours before a partition can be moved. This parameter increases availability of data by not moving more than one copy of a given data item within that min_part_hours amount of time. |
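As a worked example of the partition power calculation (the figures are assumptions for illustration, not sizing recommendations): a cluster expected to grow to roughly 100 disks, with a target of about 100 partitions per disk, needs at least 100 x 100 = 10,000 partitions. The smallest power of two above that is 2^14 = 16,384, so a partition power of 14 would be chosen.
# 2 raised to the partition power gives the total number of partitions in the ring.
$ echo $((2**14))
16384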
Procedure 6.5. Building Object Storage Service Ring Files
- Use the swift-ring-builder command to build one ring for each service. Provide a builder file, a partition power, a replica count, and the minimum hours between partition re-assignment:
  # swift-ring-builder /etc/swift/object.builder create part_power replica_count min_part_hours
  # swift-ring-builder /etc/swift/container.builder create part_power replica_count min_part_hours
  # swift-ring-builder /etc/swift/account.builder create part_power replica_count min_part_hours
- Add devices to the accounts ring. Repeat for each device on each node in the cluster that you want added to the ring.
  # swift-ring-builder /etc/swift/account.builder add zX-127.0.0.1:6002/device_mountpoint partition_count
  - Specify a zone with zX, where X is an integer (for example, z1 for zone one).
  - By default, all three services (account, container, and object) listen on the 127.0.0.1 address, and the above command matches this default. However, the service's machine IP address can also be used (for example, to handle distributed services). If you do use a real IP, remember to change the service's bind address to the same IP address or to '0.0.0.0' (configured in the /etc/swift/service-server.conf file).
  - TCP port 6002 is the default port that the account server uses.
  - The device_mountpoint is the directory under /srv/node/ that your device is mounted at.
  - The recommended minimum number for partition_count is 100; use the partition count you used to calculate your partition power. An illustrative command follows this list.
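For instance, on a single-node test deployment with one device mounted at /srv/node/sdb1 in zone one and a partition count of 100, the command might look like the following. All values are illustrative:
# Example values only: zone 1, the default loopback address and account port,
# a device mounted at /srv/node/sdb1, and a partition count of 100.
# swift-ring-builder /etc/swift/account.builder add z1-127.0.0.1:6002/sdb1 100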
- Add devices to the containers ring. Repeat for each device on each node in the cluster that you want added to the ring.
  # swift-ring-builder /etc/swift/container.builder add zX-127.0.0.1:6001/device_mountpoint partition_count
  - TCP port 6001 is the default port that the container server uses.
- Add devices to the objects ring. Repeat for each device on each node in the cluster that you want added to the ring.
  # swift-ring-builder /etc/swift/object.builder add zX-127.0.0.1:6000/device_mountpoint partition_count
  - TCP port 6000 is the default port that the object server uses.
- Distribute the partitions across the devices in the ring using the swift-ring-builder command's rebalance argument.
  # swift-ring-builder /etc/swift/account.builder rebalance
  # swift-ring-builder /etc/swift/container.builder rebalance
  # swift-ring-builder /etc/swift/object.builder rebalance
- Check to see that you now have 3 ring files in the directory /etc/swift. The command:
  # ls /etc/swift/*gz
  should reveal:
  /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz
- Ensure that all files in the /etc/swift/ directory, including those that you have just created, are owned by the root user and swift group.
  Important
  All mount points must be owned by root; all roots of mounted file systems must be owned by swift. Before running the following command, ensure that all devices are already mounted and owned by root.
  # chown -R root:swift /etc/swift
- Copy each ring builder file to each node in the cluster, storing them under /etc/swift/.
  # scp /etc/swift/*.gz node_ip_address:/etc/swift
- On your proxy server node, use the
openstack-configcommand to turn on debug level logging:# openstack-config --set /etc/swift/proxy-server.conf DEFAULT log_level debug
- Set up the shell to access Keystone as a user that has the admin role. The admin user is shown in this example. Use the swift list command to make sure you can connect to your proxy server:
  $ swift list
  Message from syslogd@thildred-swift-01 at Jun 14 02:46:00 ...
  135 proxy-server Server reports support for api versions: v3.0, v2.0
- Use the swift command to upload some files to your Object Storage Service nodes:
  $ head -c 1024 /dev/urandom > data1.file ; swift upload c1 data1.file
  $ head -c 1024 /dev/urandom > data2.file ; swift upload c1 data2.file
  $ head -c 1024 /dev/urandom > data3.file ; swift upload c1 data3.file
- Use the swift command to take a listing of the objects held in your Object Storage Service cluster.
  $ swift list
  $ swift list c1
  data1.file
  data2.file
  data3.file
Each uploaded object should be stored on the storage nodes as multiple .data files, based on your replica count.
$ find /srv/node/ -type f -name "*data"
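To confirm the round trip, one of the uploaded objects can also be fetched back and compared with the original. The container and file names below match the upload example above; the -o option writes the download to an alternative file name:
$ swift download c1 data1.file -o data1.file.check
$ diff data1.file data1.file.check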
- MySQL database server root credentials and IP address
- Identity service administrator credentials and endpoint URL
- openstack-glance
- Provides the OpenStack Image service.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
Install the packages by running the following command as the root user.
#yum install -y openstack-glance openstack-utils openstack-selinux
Create the database as the root user (or as a user with suitable access: create db, create user, grant permissions).
- Connect to the database service using the mysql command.
  # mysql -u root -p
- Create the glance database.
  mysql> CREATE DATABASE glance;
- Create a glance database user and grant it access to the glance database.
  mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'PASSWORD';
  mysql> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'PASSWORD';
  Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
  mysql> FLUSH PRIVILEGES;
- Exit the mysql client.
  mysql> quit
- Configure TLS/SSL.
- Configure the Identity service for Image service authentication (create database entries, set connection strings, and update configuration files).
- Configure the disk-image storage backend (this guide uses the Object Storage service).
- Configure the firewall for Image service access.
- Populate the Image service database.
- Create the glance user, who has the admin role in the services tenant.
- Create the glance service entry and assign it an endpoint.
- Authenticate as the administrator of the identity service by running the
source command on the keystonerc_admin file containing the required credentials:
  # source ~/keystonerc_admin
- Create a user named glance for the Image service to use:
  # keystone user-create --name glance --pass PASSWORD
  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | email    |                                  |
  | enabled  | True                             |
  | id       | 8091eaf121b641bf84ce73c49269d2d1 |
  | name     | glance                           |
  | tenantId |                                  |
  +----------+----------------------------------+
  Replace PASSWORD with a secure password that will be used by the image storage service when authenticating with the identity service. Take note of the returned user ID (used in subsequent steps).
- Get the ID of the admin role:
  # keystone role-get admin
  If no admin role exists, create one:
  $ keystone role-create --name admin
- Get the ID of the services tenant:
  $ keystone tenant-list | grep services
  If no services tenant exists, create one:
  $ keystone tenant-create --name services --description "Services Tenant"
  This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant.
- Use the keystone user-role-add command to link the glance user and the admin role together within the context of the services tenant:
  # keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
- Create the glance service entry:
  # keystone service-create --name glance \
     --type image \
     --description "Glance Image Service"
  +-------------+----------------------------------+
  | Property    | Value                            |
  +-------------+----------------------------------+
  | description | Glance Image Service             |
  | id          | 7461b83f96bd497d852fb1b85d7037be |
  | name        | glance                           |
  | type        | image                            |
  +-------------+----------------------------------+
  Take note of the service's returned ID (used in the next step).
- Create the glance endpoint entry:
  # keystone endpoint-create --service-id SERVICEID \
     --publicurl "http://IP:9292" \
     --adminurl "http://IP:9292" \
     --internalurl "http://IP:9292"
  Replace SERVICEID with the identifier returned by the keystone service-create command. Replace IP with the IP address or host name of the system hosting the Image service.
The database connection string used by the Image service is defined in the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files. It must be updated to point to a valid database server before starting the service.
Perform these steps as the root user on the server hosting the Image service.
- Use the openstack-config command to set the value of the sql_connection configuration key in the /etc/glance/glance-api.conf file.
  # openstack-config --set /etc/glance/glance-api.conf \
     DEFAULT sql_connection mysql://USER:PASS@IP/DB
  Replace:
  USER with the database user name the Image service is to use, usually glance.
  PASS with the password of the chosen database user.
  IP with the IP address or host name of the database server.
  DB with the name of the database that has been created for use by the Image service, usually glance.
- Use the openstack-config command to set the value of the sql_connection configuration key in the /etc/glance/glance-registry.conf file.
  # openstack-config --set /etc/glance/glance-registry.conf \
     DEFAULT sql_connection mysql://USER:PASS@IP/DB
  Replace the placeholder values USER, PASS, IP, and DB with the same values used in the previous step.
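For example, assuming a database user named glance with the password Secr3te on a database server at 192.0.2.10 (all example values), the two commands might look like this:
# Example values only: user glance, password Secr3te, database host 192.0.2.10.
# openstack-config --set /etc/glance/glance-api.conf \
   DEFAULT sql_connection mysql://glance:Secr3te@192.0.2.10/glance
# openstack-config --set /etc/glance/glance-registry.conf \
   DEFAULT sql_connection mysql://glance:Secr3te@192.0.2.10/glance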
Run the following commands as the root user on each node hosting the Image service:
- Configure the glance-api service:
  # openstack-config --set /etc/glance/glance-api.conf \
     paste_deploy flavor keystone
  # openstack-config --set /etc/glance/glance-api.conf \
     keystone_authtoken auth_host IP
  # openstack-config --set /etc/glance/glance-api.conf \
     keystone_authtoken auth_port 35357
  # openstack-config --set /etc/glance/glance-api.conf \
     keystone_authtoken auth_protocol http
  # openstack-config --set /etc/glance/glance-api.conf \
     keystone_authtoken admin_tenant_name services
  # openstack-config --set /etc/glance/glance-api.conf \
     keystone_authtoken admin_user glance
  # openstack-config --set /etc/glance/glance-api.conf \
     keystone_authtoken admin_password PASSWORD
- Configure the glance-registry service:
  # openstack-config --set /etc/glance/glance-registry.conf \
     paste_deploy flavor keystone
  # openstack-config --set /etc/glance/glance-registry.conf \
     keystone_authtoken auth_host IP
  # openstack-config --set /etc/glance/glance-registry.conf \
     keystone_authtoken auth_port 35357
  # openstack-config --set /etc/glance/glance-registry.conf \
     keystone_authtoken auth_protocol http
  # openstack-config --set /etc/glance/glance-registry.conf \
     keystone_authtoken admin_tenant_name services
  # openstack-config --set /etc/glance/glance-registry.conf \
     keystone_authtoken admin_user glance
  # openstack-config --set /etc/glance/glance-registry.conf \
     keystone_authtoken admin_password PASSWORD
Where:
IP - The IP address or host name of the Identity server.
services - The name of the tenant that was created for the use of the Image service (previous examples set this to services).
glance - The name of the service user that was created for the Image service (previous examples set this to glance).
PASSWORD - The password associated with the service user.
By default, the Image service uses the local file system (file) for its storage backend. However, either of the following storage backends can be used to store uploaded disk images:
- file - Local file system of the Image server (the /var/lib/glance/images/ directory)
- swift - OpenStack Object Storage service
Note
The following steps use the openstack-config command. However, the /etc/glance/glance-api.conf file can also be manually updated. If manually updating the file:
- Ensure that the
default_storeparameter is set to the correct backend (for example, 'default_store=rbd'). - Update the parameters in that backend's section (for example, under '
RBD Store Options').
Perform these steps as the root user:
- Set the default_store configuration key to swift:
  # openstack-config --set /etc/glance/glance-api.conf \
     DEFAULT default_store swift
- Set the swift_store_auth_address configuration key to the public endpoint for the Identity service:
  # openstack-config --set /etc/glance/glance-api.conf \
     DEFAULT swift_store_auth_address http://IP:5000/v2.0/
- Add the container for storing images in the Object Storage Service:
  # openstack-config --set /etc/glance/glance-api.conf \
     DEFAULT swift_store_create_container_on_put True
- Set the swift_store_user configuration key to contain the tenant and user to use for authentication in the format TENANT:USER:
  - If you followed the instructions in this guide to deploy Object Storage, these values must be replaced with the services tenant and the swift user respectively.
  - If you did not follow the instructions in this guide to deploy Object Storage, these values must be replaced with the appropriate Object Storage tenant and user for your environment.
  # openstack-config --set /etc/glance/glance-api.conf \
     DEFAULT swift_store_user services:swift
- Set the swift_store_key configuration key to the password of the user to be used for authentication (that is, the password that was set for the swift user when deploying the Object Storage service).
  # openstack-config --set /etc/glance/glance-api.conf \
     DEFAULT swift_store_key PASSWORD
The firewall on the system hosting the Image service must be configured to allow incoming connections on port 9292.
Perform these steps as the root user.
- Open the
/etc/glance/glance-api.conffile in a text editor, and remove any comment characters from in front of the following parameters:bind_host = 0.0.0.0 bind_port = 9292
- Open the
/etc/sysconfig/iptablesfile in a text editor. - Add an INPUT rule allowing TCP traffic on port
9292to the file. The new rule must appear before any INPUT rules that REJECT traffic.-A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptablesfile. - Restart the
iptablesservice to ensure that the change takes effect.#service iptables restart
The iptables firewall is now configured to allow incoming connections to the image storage service on port 9292.
Perform these steps as the root user initially. The database connection string must already be defined in the configuration of the service.
- Use the su command to switch to the glance user.
  # su glance -s /bin/sh
- Run the glance-manage db_sync command to initialize and populate the database identified in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf.
  # glance-manage db_sync
Start and enable the glance-api and glance-registry services as the root user:
# service openstack-glance-registry start
# service openstack-glance-api start
# chkconfig openstack-glance-registry on
# chkconfig openstack-glance-api on
The wget command below uses an example URL.
# mkdir /tmp/images
# cd /tmp/images
# wget -c -O rhel-6-server-x86_64-disc1.iso "https://content-web.rhn.redhat.com/rhn/isos/xxxx/rhel-6-server-x86_64-disc1.isoxxxxxxxx"
- Template Description Language (TDL) Files - Oz accepts input in the form of XML-based TDL files, which describe the operating system being installed, the installation media's source, and any additional packages or customization changes that must be applied to the image.
- virt-sysprep - It is also recommended that the virt-sysprep command is run on Linux-based virtual machine images prior to uploading them to the Image service. The virt-sysprep command re-initializes a disk image in preparation for use in a virtual environment. Default operations include the removal of SSH keys, removal of persistent MAC addresses, and removal of user accounts. The virt-sysprep command is provided by the libguestfs-tools package.
Important
Oz makes use of the default Libvirt network. It is recommended that you do not build images using Oz on a system that is running either the nova-network service or any of the OpenStack Networking components.
Procedure 7.1. Building Images using Oz
- Use the yum command to install the oz and libguestfs-tools packages.
  # yum install -y oz libguestfs-tools
- Download the Red Hat Enterprise Linux 6 Server installation DVD ISO file. Although Oz supports the use of network-based installation media, in this procedure a Red Hat Enterprise Linux 6 DVD ISO will be used.
- Use a text editor to create a TDL file for use with Oz. The following example displays the syntax for a basic TDL file.
Example 7.1. TDL File
The template below can be used to create a Red Hat Enterprise Linux 6 disk image. In particular, note the use of therootpwelement to set the password for therootuser and theisoelement to set the path to the DVD ISO.<template> <name>rhel65_x86_64</name> <description>Red Hat 6.5 x86_64 template</description> <os> <name>RHEL-6</name> <version>4</version> <arch>x86_64</arch> <rootpw>PASSWORD</rootpw> <install type='iso'> <iso>file:///home/user/rhel-server-6.5-x86_64-dvd.iso</iso> </install> </os> <commands> <command name='console'> sed -i 's/ rhgb//g' /boot/grub/grub.conf sed -i 's/ quiet//g' /boot/grub/grub.conf sed -i 's/ console=tty0 / serial=tty0 console=ttyS0,115200n8 /g' /boot/grub/grub.conf </command> </commands> </template>
- Run the oz-install command to build an image:
  # oz-install -u -d3 TDL_FILE
  Syntax:
  -u ensures any required customization changes to the image are applied after guest operating system installation.
  -d3 enables the display of errors, warnings, and informational messages.
  TDL_FILE provides the path to your TDL file.
By default, Oz stores the resultant image in the/var/lib/libvirt/images/directory. This location is configurable by editing the/etc/oz/oz.cfgconfiguration file. - Run the
virt-sysprepcommand on the image to re-initialize it in preparation for upload to the Image service. ReplaceFILEwith the path to the disk image.#virt-sysprep --addFILERefer to thevirt-sysprepmanual page by running theman virt-sysprepcommand for information on enabling and disabling specific operations.
Important
virt-sysprep command be run on all Linux-based virtual machine images prior to uploading them to the Image service. The virt-sysprep command re-initializes a disk image in preparation for use in a virtual environment. Default operations include the removal of SSH keys, removal of persistent MAC addresses, and removal of user accounts.
virt-sysprep command is provided by the RHEL libguestfs-tools package. As the root user, execute:
#yum install -y libguestfs-tools#virt-sysprep --addFILE
#man virt-sysprep
- Set the environment variables used for authenticating with the Identity service by loading them from the
keystonerc file associated with your user (an administrative account is not required):
  # source ~/keystonerc_userName
- Use the
glance image-createcommand to import your disk image:#glance image-create --name "NAME" \--is-publicIS_PUBLIC\--disk-formatDISK_FORMAT\--container-formatCONTAINER_FORMAT\--fileIMAGEWhere:NAME= The name by which users will refer to the disk image.IS_PUBLIC= Eithertrueorfalse:true- All users will be able to view and use the image.false- Only administrators will be able to view and use the image.
DISK_FORMAT= The disk image's format. Valid values include:aki,ami,ari,iso,qcow2,raw,vdi,vhd, andvmdk.If the format of the virtual machine disk image is unknown, use theqemu-img infocommand to try and identify it.Example 7.2. Using
qemu-img info
In the following example, the qemu-img info command is used to determine the format of a disk image stored in the file ./RHEL65.img.
# qemu-img info ./RHEL65.img
image: ./RHEL65.img
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 136K
cluster_size: 65536
CONTAINER_FORMAT = The container format of the image. The container format is bare unless the image is packaged in a file format such as ovf or ami that includes additional metadata related to the image.
IMAGE = The local path to the image file (for uploading).
For more information about theglance image-createsyntax, execute:#glance help image-createNote
If the image being uploaded is not locally accessible but is available using a remote URL, provide the URL using the--locationparameter instead of using the--fileparameter.However, unless you also specify the--copy-fromargument, the Image service will not copy the image into the object store. Instead, the image will be accessed remotely each time it is required.Example 7.3. Uploading an Image to the Image service
In this example the qcow2 format image in the file named rhel-65.qcow2 is uploaded to the Image service. It is created in the service as a publicly accessible image named RHEL 6.5.
# glance image-create --name "RHEL 6.5" --is-public true --disk-format qcow2 \
   --container-format bare \
   --file rhel-65.qcow2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 2f81976cae15c16ef0010c51e3a6c163     |
| container_format | bare                                 |
| created_at       | 2013-01-25T14:45:48                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 0ce782c6-0d3e-41df-8fd5-39cd80b31cd9 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | RHEL 6.5                             |
| owner            | b1414433c021436f97e9e1e4c214a710     |
| protected        | False                                |
| size             | 25165824                             |
| status           | active                               |
| updated_at       | 2013-01-25T14:45:50                  |
+------------------+--------------------------------------+
- To verify that your image was successfully uploaded, use the glance image-list command:
  # glance image-list
  +--------------+----------+-------------+------------------+-----------+--------+
  | ID           | Name     | Disk Format | Container Format | Size      | Status |
  +--------------+----------+-------------+------------------+-----------+--------+
  | 0ce782c6-... | RHEL 6.5 | qcow2       | bare             | 213581824 | active |
  +--------------+----------+-------------+------------------+-----------+--------+
  To view detailed information about an uploaded image, execute the glance image-show command using the image's identifier:
  # glance image-show 0ce782c6-0d3e-41df-8fd5-39cd80b31cd9
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | 2f81976cae15c16ef0010c51e3a6c163     |
  | container_format | bare                                 |
  | created_at       | 2013-01-25T14:45:48                  |
  | deleted          | False                                |
  | disk_format      | qcow2                                |
  | id               | 0ce782c6-0d3e-41df-8fd5-39cd80b31cd9 |
  | is_public        | True                                 |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | RHEL 6.5                             |
  | owner            | b1414433c021436f97e9e1e4c214a710     |
  | protected        | False                                |
  | size             | 25165824                             |
  | status           | active                               |
  | updated_at       | 2013-01-25T14:45:50                  |
  +------------------+--------------------------------------+
Block storage functionality is provided by three separate services, collectively referred to as cinder. The three services are:
- The API service (
openstack-cinder-api) - The API service provides a HTTP endpoint for block storage requests. When an incoming request is received the API verifies identity requirements are met and translates the request into a message denoting the required block storage actions. The message is then sent to the message broker for processing by the other block storage services.
- The scheduler service (
openstack-cinder-scheduler) - The scheduler service reads requests from the message queue and determines on which block storage host the request must be actioned. The scheduler then communicates with the volume service on the selected host to process the request.
- The volume service (
openstack-cinder-volume) - The volume service manages the interaction with the block storage devices. As requests come in from the scheduler, the volume service creates, modifies, and removes volumes as required.
- Preparing for Block Storage Installation
- Steps that must be performed before installing any of the block storage services. These procedures include the creation of identity records, the database, and a database user.
- Common Block Storage Configuration
- Steps that are common to all of the block storage services and as such must be performed on all block storage nodes in the environment. These procedures include configuring the services to refer to the correct database and message broker. Additionally they include the initialization and population of the database which must only be performed once but can be performed from any of the block storage systems.
- Volume Service Specific Configuration
- Steps that are specific to systems that will be hosting the volume service and as such require direct access to block storage devices.
Perform these steps as the database root user.
- Connect to the database service using the mysql command.
  # mysql -u root -p
- Create the cinder database.
  mysql> CREATE DATABASE cinder;
- Create a cinder database user and grant it access to the cinder database.
  mysql> GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'PASSWORD';
  mysql> GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'PASSWORD';
  Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
  mysql> FLUSH PRIVILEGES;
- Exit the mysql client.
  mysql> quit
- Create the cinder user, who has the admin role in the services tenant.
- Create the cinder service entry and assign it an endpoint.
- Authenticate as the administrator of the identity service by running the
source command on the keystonerc_admin file containing the required credentials.
  # source ~/keystonerc_admin
- Create a user named cinder for the block storage service to use.
  # keystone user-create --name cinder --pass PASSWORD
  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | email    |                                  |
  | enabled  | True                             |
  | id       | e1765f70da1b4432b54ced060139b46a |
  | name     | cinder                           |
  | tenantId |                                  |
  +----------+----------------------------------+
  Replace PASSWORD with a secure password that will be used by the block storage service when authenticating with the identity service. Take note of the created user's returned ID as it will be used in subsequent steps.
- Get the ID of the admin role:
  # keystone role-get admin
  If no admin role exists, create one:
  $ keystone role-create --name admin
- Get the ID of the services tenant:
  $ keystone tenant-list | grep services
  If no services tenant exists, create one:
  $ keystone tenant-create --name services --description "Services Tenant"
  This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant.
- Use the keystone user-role-add command to link the cinder user, admin role, and services tenant together:
  # keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
  Replace the user, role, and tenant IDs with those obtained in the previous steps.
- Create the cinder service entry:
  # keystone service-create --name cinder \
     --type volume \
     --description "Cinder Volume Service"
  +-------------+----------------------------------+
  | Property    | Value                            |
  +-------------+----------------------------------+
  | description | Cinder Volume Service            |
  | id          | dfde7878671e484c9e581a3eb9b63e66 |
  | name        | cinder                           |
  | type        | volume                           |
  +-------------+----------------------------------+
  Take note of the created service's returned ID as it will be used in the next step.
- Create the cinder endpoint entry.
  # keystone endpoint-create --service-id SERVICEID \
     --publicurl "http://IP:8776/v1/\$(tenant_id)s" \
     --adminurl "http://IP:8776/v1/\$(tenant_id)s" \
     --internalurl "http://IP:8776/v1/\$(tenant_id)s"
  Replace SERVICEID with the identifier returned by the keystone service-create command. Replace IP with the IP address or host name of the system that will be hosting the block storage service API (openstack-cinder-api).
  Important
If you intend to install and run multiple instances of the API service then you must repeat this step for the IP address or host name of each instance.
- openstack-cinder
- Provides the block storage services and associated configuration files.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
Install the packages by running the following command as the root user:
#yum install -y openstack-cinder openstack-utils openstack-selinux
Perform these steps as the root user.
- Set the authentication strategy (
auth_strategy) configuration key tokeystoneusing theopenstack-configcommand.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT auth_strategy keystone - Set the authentication host (
auth_host) configuration key to the IP address or host name of the identity server.#openstack-config --set /etc/cinder/cinder.conf \keystone_authtoken auth_hostIPReplaceIPwith the IP address or host name of the identity server. - Set the administration tenant name (
admin_tenant_name) configuration key to the name of the tenant that was created for the use of the block storage service. In this guide, examples useservices.#openstack-config --set /etc/cinder/cinder.conf \keystone_authtoken admin_tenant_nameservices - Set the administration user name (
admin_user) configuration key to the name of the user that was created for the use of the block storage service. In this guide, examples usecinder.#openstack-config --set /etc/cinder/cinder.conf \keystone_authtoken admin_usercinder - Set the administration password (
admin_password) configuration key to the password that is associated with the user specified in the previous step.#openstack-config --set /etc/cinder/cinder.conf \keystone_authtoken admin_passwordPASSWORD
root user.
General Settings
Use theopenstack-configutility to set the value of therpc_backendconfiguration key to Qpid.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid- Use the
openstack-configutility to set the value of theqpid_hostnameconfiguration key to the host name of the Qpid server.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT qpid_hostnameIPReplaceIPwith the IP address or host name of the message broker. Authentication Settings
If you have configured Qpid to authenticate incoming connections then you must provide the details of a valid Qpid user in the block storage configuration.- Use the
openstack-configutility to set the value of theqpid_usernameconfiguration key to the username of the Qpid user that the block storage services must use when communicating with the message broker.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT qpid_usernameUSERNAMEReplaceUSERNAMEwith the required Qpid user name. - Use the
openstack-configutility to set the value of theqpid_passwordconfiguration key to the password of the Qpid user that the block storage services must use when communicating with the message broker.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT qpid_passwordPASSWORDReplacePASSWORDwith the password of the Qpid user.
Encryption Settings
If you configured Qpid to use SSL then you must inform the block storage services of this choice. Useopenstack-configutility to set the value of theqpid_protocolconfiguration key tossl.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT qpid_protocol sslThe value of theqpid_portconfiguration key must be set to5671as Qpid listens on this different port when SSL is in use.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT qpid_port 5671Important
To communicate with a Qpid message broker that uses SSL the node must also have:- The nss package installed.
- The certificate of the relevant certificate authority installed in the system NSS database (
/etc/pki/nssdb/).
Thecerttoolcommand is able to import certificates into the NSS database. See thecerttoolmanual page for more information (man certtool).
The database connection string (the sql_connection configuration key) is set in the /etc/cinder/cinder.conf file. The string must be updated to point to a valid database server before starting the service.
Run the following command as the root user on each system hosting block storage services:
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace:
USER with the database user name the block storage services are to use, usually cinder.
PASS with the password of the chosen database user.
IP with the IP address or host name of the database server.
DB with the name of the database that has been created for use by the block storage services, usually cinder.
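For example, assuming a database user named cinder with the password Secr3te on a database server at 192.0.2.10 (all example values), the command might look like this:
# Example values only: user cinder, password Secr3te, database host 192.0.2.10.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT sql_connection mysql://cinder:Secr3te@192.0.2.10/cinder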
The firewall on each system hosting block storage services must be configured to allow incoming connections on ports 3260 and 8776.
Perform these steps as the root user.
- Open the
/etc/sysconfig/iptablesfile in a text editor. - Add an INPUT rule allowing TCP traffic on ports
3260and8776to the file. The new rule must appear before any INPUT rules that REJECT traffic.-A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptablesfile. - Restart the
iptablesservice to ensure that the change takes effect.#service iptables restart
The iptables firewall is now configured to allow incoming connections to the block storage service on ports 3260 and 8776.
Important
- Use the su command to switch to the cinder user.
  # su cinder -s /bin/sh
- Run the cinder-manage db sync command to initialize and populate the database identified in /etc/cinder/cinder.conf.
  $ cinder-manage db sync
The volume service (openstack-cinder-volume) requires access to suitable block storage. The service includes volume drivers for a number of block storage providers. Supported drivers for LVM, NFS, and Red Hat Storage are included.
Perform these steps as the root user:
- Use the pvcreate command to create a physical volume.
  # pvcreate DEVICE
  Physical volume "DEVICE" successfully created
  Replace DEVICE with the path to a valid, unused, device. For example:
  # pvcreate /dev/sdX
- Use the vgcreate command to create a volume group.
  # vgcreate cinder-volumes DEVICE
  Volume group "cinder-volumes" successfully created
  Replace DEVICE with the path to the device used when creating the physical volume. Optionally replace cinder-volumes with an alternative name for the new volume group.
- Set the volume_group configuration key to the name of the newly created volume group.
  # openstack-config --set /etc/cinder/cinder.conf \
     DEFAULT volume_group cinder-volumes
  The name provided must match the name of the volume group created in the previous step.
- Ensure that the correct volume driver for accessing LVM storage is in use by setting the volume_driver configuration key to cinder.volume.drivers.lvm.LVMISCSIDriver.
  # openstack-config --set /etc/cinder/cinder.conf \
     DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
Perform these steps as the root user.
Important
Ensure the virt_use_nfs Boolean is enabled on any client machine accessing the instance volumes. This includes all compute nodes. Run the following command on each client machine as the root user to enable the Boolean and make it persistent across reboots:
# setsebool -P virt_use_nfs on
- Create a text file in the
/etc/cinder/ directory containing a list of the NFS shares that the volume service is to use for backing storage.
  nfs1.example.com:/export
  nfs2.example.com:/export
  Each line must contain an NFS share in the format HOST:/SHARE where HOST is replaced by the IP address or host name of the NFS server and SHARE is replaced with the particular NFS share to be used.
- Use the
chowncommand to set the file to be owned by therootuser and thecindergroup.#chown root:cinderFILEReplaceFILEwith the path to the file containing the list of NFS shares. - Use the
chmodcommand to set the file permissions such that it can be read by members of thecindergroup.#chmod 0640FILEReplaceFILEwith the path to the file containing the list of NFS shares. - Set the value of the
nfs_shares_configconfiguration key to the path of the file containing the list of NFS shares.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT nfs_shares_configFILEReplaceFILEwith the path to the file containing the list of NFS shares. - The
nfs_sparsed_volumesconfiguration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value istrue, which ensures volumes are initially created as sparse files.Setting thenfs_sparsed_volumesconfiguration key tofalsewill result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT nfs_sparsed_volumestrue - Optionally, provide any additional NFS mount options required in your environment in the
nfs_mount_optionsconfiguration key. If your NFS shares do not require any additional mount options or you are unsure then skip this step.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT nfs_mount_optionsOPTIONSReplaceOPTIONSwith the mount options to be used when accessing NFS shares. See the manual page for NFS for more information on available mount options (man nfs). - Ensure that the correct volume driver for accessing NFS storage is in use by setting the
volume_driverconfiguration key tocinder.volume.drivers.nfs.NfsDriver.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
Perform these steps as the root user.
Important
Ensure the virt_use_fusefs Boolean is enabled on any client machine accessing the instance volumes. This includes all compute nodes. Run the following command on each client machine as the root user to enable the Boolean and make it persistent across reboots:
# setsebool -P virt_use_fusefs on
- Create a text file in the
/etc/cinder/ directory containing a list of the Red Hat Storage shares that the volume service is to use for backing storage.
  HOST:/VOLUME
  Each line must contain a Red Hat Storage share in the format HOST:/VOLUME where HOST is replaced by the IP address or host name of the Red Hat Storage server and VOLUME is replaced with the name of a particular volume that exists on that host.
  If required, additional mount options must also be added in the same way that they would be provided to the mount command line tool:
  HOST:/VOLUME -o OPTIONS
  Replace OPTIONS with a comma separated list of mount options.
- Use the
chowncommand to set the file to be owned by therootuser and thecindergroup.#chown root:cinderFILEReplaceFILEwith the path to the file containing the list of Red Hat Storage shares. - Use the
chmodcommand to set the file permissions such that it can be read by members of thecindergroup.#chmod 0640FILEReplaceFILEwith the path to the file containing the list of Red Hat Storage shares. - Set the value of the
glusterfs_shares_configconfiguration key to the path of the file containing the list of Red Hat Storage shares.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT glusterfs_shares_configFILEReplaceFILEwith the path to the file containing the list of Red Hat Storage shares. - The
glusterfs_sparsed_volumesconfiguration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value istrue, which ensures volumes are initially created as sparse files.Setting theglusterfs_sparsed_volumesconfiguration key tofalsewill result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.#openstack-config --set /etc/cinder/cinder.conf \DEFAULT glusterfs_sparsed_volumestrue - Ensure that the correct volume driver for accessing Red Hat Storage is in use by setting the
volume_driver configuration key to cinder.volume.drivers.glusterfs.GlusterfsDriver.
  # openstack-config --set /etc/cinder/cinder.conf \
     DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
Perform these steps as the root user. Each system that needs to be able to access multiple storage drivers or volumes must be configured in this way.
- Open the
/etc/cinder/cinder.conf configuration file in a text editor.
- Add a configuration block for each storage driver or volume. This configuration block must have a unique name (avoid spaces or special characters) and contain values for at least these configuration keys:
  - volume_group - A volume group name. This is the name of the volume group that will be accessed by the driver.
  - volume_driver - A volume driver. This is the name of the driver that will be used when accessing the volume group.
  - volume_backend_name - A backend name. This is an administrator-defined name for the backend, which groups the drivers so that user requests for storage served from the given backend can be serviced by any driver in the group. It is not related to the name of the configuration group, which must be unique.
Any additional driver specific configuration must also be included in the configuration block.[
NAME] volume_group=GROUPvolume_driver=DRIVERvolume_backend_name=BACKENDReplaceNAMEwith a unique name for the backend and replaceGROUPwith the unique name of the applicable volume group. ReplaceDRIVERwith the driver to use when accessing this storage backend, valid values include:cinder.volume.drivers.lvm.LVMISCSIDriverfor LVM and iSCSI storage.cinder.volume.drivers.nfs.NfsDriverfor NFS storage.cinder.volume.drivers.glusterfs.GlusterfsDriverfor Red Hat Storage.
Finally replaceBACKENDwith a name for the storage backend. - Update the value of the
enabled_backendsconfiguration key in theDEFAULTconfiguration block. This configuration key must contain a comma separated list containing the names of the configuration blocks for each storage driver.Example 8.1. Multiple Backend Configuration
In this example two logical volume groups, cinder-volumes-1 and cinder-volumes-2, are grouped into the storage backend named LVM. An additional volume, backed by a list of NFS shares, is grouped into a storage backend named NFS.
[DEFAULT]
...
enabled_backends=cinder-volumes-1-driver,cinder-volumes-2-driver,cinder-volumes-3-driver
...
[cinder-volumes-1-driver]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM
[cinder-volumes-2-driver]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM
[cinder-volumes-3-driver]
nfs_shares_config=/etc/cinder/shares.txt
volume_driver=cinder.volume.drivers.nfs.NfsDriver
volume_backend_name=NFS
Important
The default block storage scheduler driver in Red Hat Enterprise Linux OpenStack Platform 3 is the filter scheduler. If you have changed the value of thescheduler_driverconfiguration key on any of your block storage nodes then you must update the value tocinder.scheduler.filter_scheduler.FilterSchedulerfor the multiple storage backends feature to function correctly. - Save the changes to the
/etc/cinder/cinder.conffile.
If the openstack-cinder-volume service has already been started then you must restart it for the changes to take effect.
Note
# source ~/keystonerc_admin
# cinder type-create TYPE
# cinder type-key TYPE set volume_backend_name=BACKEND
Replace TYPE with the name that users must provide to select this specific storage backend and replace BACKEND with the relevant volume_backend_name as set in the /etc/cinder/cinder.conf configuration file.
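For instance, to expose the LVM backend from Example 8.1 as a volume type named lvm_gold (the type name is purely illustrative) and then request a 1 GB volume of that type:
# The type name lvm_gold is an example; LVM matches the volume_backend_name
# used in Example 8.1.
# source ~/keystonerc_admin
# cinder type-create lvm_gold
# cinder type-key lvm_gold set volume_backend_name=LVM
# cinder create --volume-type lvm_gold --display-name test_gold 1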
The volume service uses the SCSI target daemon, tgtd, when mounting storage. To support this, the tgtd service must be configured to read additional configuration files.
Perform these steps as the root user.
- Open the
/etc/tgt/targets.conffile. - Add this line to the file:
include /etc/cinder/volumes/*
- Save the changes to the file.
The next time the tgtd service is started it will be configured to support the volume service.
- The API service (
openstack-cinder-api). - The scheduler service (
openstack-cinder-scheduler). - The volume service (
openstack-cinder-volume).
Starting the API Service
Log in to each server that you intend to run the API on as therootuser and start the API service.- Use the
servicecommand to start the API service (openstack-cinder-api).#service openstack-cinder-api start - Use the
chkconfigcommand to enable the API service permanently (openstack-cinder-api).#chkconfig openstack-cinder-api on
Starting the Scheduler Service
Log in to each server that you intend to run the scheduler on as therootuser and start the scheduler service.- Use the
servicecommand to start the scheduler (openstack-cinder-scheduler).#service openstack-cinder-scheduler start - Use the
chkconfigcommand to enable the scheduler permanently (openstack-cinder-scheduler).#chkconfig openstack-cinder-scheduler on
Starting the Volume Service
Log in to each server that block storage has been attached to as therootuser and start the volume service.- Use the
servicecommand to start the volume service (openstack-cinder-volume).#service openstack-cinder-volume start - Use the
service command to start the SCSI target daemon (tgtd).
  # service tgtd start
- Use the
chkconfigcommand to enable the volume service permanently (openstack-cinder-volume).#chkconfig openstack-cinder-volume on - Use the
chkconfigcommand to enable the SCSI target daemon permanently (tgtd).#chkconfig tgtd on
Testing Locally
The steps outlined in this section of the procedure must be performed while logged in to the server hosting the block storage API service as the root user or a user with access to a keystonerc_admin file containing the credentials of the OpenStack administrator. Transfer the keystonerc_admin file to the system before proceeding.
- Run the source command on the keystonerc_admin file to populate the environment variables used for identifying and authenticating the user.
#source ~/keystonerc_admin
- Run the cinder list command and verify that no errors are returned.
#cinder list
- Run the cinder create command to create a volume.
#cinder create SIZE
Replace SIZE with the size of the volume to create in Gigabytes (GB).
- Run the cinder delete command to remove the volume.
#cinder delete ID
Replace ID with the identifier returned when the volume was created.
Testing Remotely
The steps outlined in this section of the procedure must be performed while logged in to a system other than the server hosting the block storage API service. Transfer the keystonerc_admin file to the system before proceeding.
- Install the python-cinderclient package using the yum command. You will need to authenticate as the root user for this step.
#yum install -y python-cinderclient
- Run the source command on the keystonerc_admin file to populate the environment variables used for identifying and authenticating the user.
$source ~/keystonerc_admin
- Run the cinder list command and verify that no errors are returned.
$cinder list
- Run the cinder create command to create a volume.
$cinder create SIZE
Replace SIZE with the size of the volume to create in Gigabytes (GB).
- Run the cinder delete command to remove the volume.
$cinder delete ID
Replace ID with the identifier returned when the volume was created.
- 9.1. OpenStack Networking Installation Overview
- 9.2. Networking Prerequisite Configuration
- 9.3. Common Networking Configuration
- 9.4. Configuring the Networking Service
- 9.5. Configuring the DHCP Agent
- 9.6. Configuring a Provider Network
- 9.7. Configuring the Plug-in Agent
- 9.8. Configuring the L3 Agent
- 9.9. Validating the OpenStack Networking Installation
- Network
- An isolated L2 segment, analogous to VLAN in the physical networking world.
- Subnet
- A block of v4 or v6 IP addresses and associated configuration state.
- Port
- A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
- Provider Networks
- Provider networks allow the creation of virtual networks that map directly to networks in the physical data center. This allows the administrator to give tenants direct access to a public network such as the Internet or to integrate with existing VLANs in the physical networking environment that have a defined meaning or purpose.When the provider extension is enabled OpenStack networking users with administrative privileges are able to see additional provider attributes on all virtual networks. In addition such users have the ability to specify provider attributes when creating new provider networks.Both the Open vSwitch and Linux Bridge plug-ins support the provider networks extension.
- Layer 3 (L3) Routing and Network Address Translation (NAT)
- The L3 routing API extension provides abstract L3 routers that API users are able to dynamically provision and configure. These routers are able to connect to one or more Layer 2 (L2) OpenStack networking controlled networks. Additionally, the routers are able to provide a gateway that connects one or more private L2 networks to a common public or external network such as the Internet.The L3 router provides basic NAT capabilities on gateway ports that connect the router to external networks. The router supports floating IP addresses, which give a static mapping between a public IP address on the external network and the private IP address on one of the L2 networks attached to the router.This allows the selective exposure of compute instances to systems on an external public network. Floating IP addresses can also be reallocated to different OpenStack networking ports as necessary.
- Security Groups
- Security groups and security group rules allow the specification of the specific type and direction of network traffic that is allowed to pass through a given network port. This provides an additional layer of security over and above any firewall rules that exist within a compute instance. The security group is a container object which can contain one or more security rules. A single security group can be shared by multiple compute instances.When a port is created using OpenStack networking it is associated with a security group. If a specific security group was not specified then the port is associated with the
defaultsecurity group. By default this group will drop all inbound traffic and allow all outbound traffic. Additional security rules can be added to thedefaultsecurity group to modify its behaviour or new security groups can be created as necessary.The Open vSwitch, Linux Bridge, Nicira NVP, NEC, and Ryu networking plug-ins currently support security groups.Note
Unlike Compute security groups, OpenStack networking security groups are applied on a per port basis rather than on a per instance basis.
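To illustrate how these objects relate, the following sketch uses the quantum command line client with purely illustrative names and addresses that are not used elsewhere in this guide; the detailed, supported procedures appear later in this chapter:
$quantum net-create private-net
$quantum subnet-create --name private-subnet private-net 10.0.0.0/24
$quantum port-create --name instance-port private-net
$quantum security-group-rule-create --direction ingress --protocol tcp \
--port-range-min 22 --port-range-max 22 default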
- Open vSwitch (openstack-quantum-openvswitch)
- Linux Bridge (openstack-quantum-linuxbridge)
- Cisco (openstack-quantum-cisco)
- NEC OpenFlow (openstack-quantum-nec)
- Nicira (openstack-quantum-nicira)
- Ryu (openstack-quantum-ryu)
- L3 Agent
- The L3 agent is part of the openstack-quantum package. It acts as an abstract L3 router that can connect to and provide gateway services for multiple L2 networks.The nodes on which the L3 agent is to be hosted must not have a manually configured IP address on a network interface that is connected to an external network. Instead there must be a range of IP addresses from the external network that are available for use by OpenStack Networking. These IP addresses will be assigned to the routers that provide the link between the internal and external networks.The range selected must be large enough to provide a unique IP address for each router in the deployment as well as each desired floating IP.
- DHCP Agent
- The OpenStack Networking DHCP agent is capable of allocating IP addresses to virtual machines running on the network. If the agent is enabled and running when a subnet is created then by default that subnet has DHCP enabled.
- Plug-in Agent
- Many of the OpenStack Networking plug-ins, including Open vSwitch and Linux Bridge, utilize their own agent. The plug-in specific agent runs on each node that manages data packets. This includes all compute nodes as well as nodes running the dedicated agents
quantum-dhcp-agent and quantum-l3-agent.
- Service Node
- The service node exposes the networking API to clients and handles incoming requests before forwarding them to a message queue to be actioned by the other nodes. The service node hosts both the networking service itself and the active networking plug-in.In environments that use controller nodes to host the client-facing APIs and schedulers for all services, the controller node would also fulfil the role of service node as it is applied in this chapter.
- Network Node
- The network node handles the majority of the networking workload. It hosts the DHCP agent, the Layer 3 (L3) agent, the Layer 2 (L2) agent, and the metadata proxy. For plug-ins that require an agent, it also runs an instance of the plug-in agent (as do all other systems that handle data packets in an environment where such plug-ins are in use). Both the Open vSwitch and Linux Bridge plug-ins include an agent.
- Compute Node
- The compute node hosts the compute instances themselves. To connect compute instances to the networking services, compute nodes must also run the L2 agent. Like all other systems that handle data packets, each compute node must run an instance of the plug-in agent.
Warning
Existing Compute environments, whether deployed using the packstack utility or manually, can be reconfigured to use OpenStack Networking. This is, however, currently not recommended for environments where Compute instances have already been created and configured to use Compute networking. If you wish to proceed with such a conversion, you must stop the openstack-nova-network service on each Compute node using the service command before proceeding.
#service openstack-nova-network stop
Additionally, disable the openstack-nova-network service permanently on each node using the chkconfig command.
#chkconfig openstack-nova-network off
Important
Do not run nova-consoleauth on more than one node. Running more than one instance of nova-consoleauth causes a conflict between nodes with regard to token requests, which may cause errors.
All steps in this procedure must be performed while logged in to the database server as the root user.
- Connect to the database service using the
mysql command.
#mysql -u root -p
- Create the database. If you intend to use the:
- Open vSwitch plug-in, the recommended database name is ovs_quantum.
- Linux Bridge plug-in, the recommended database name is quantum_linux_bridge.
This example uses the database name of 'ovs_quantum'.
mysql>CREATE DATABASE ovs_quantum;
- Create a quantum database user and grant it access to the ovs_quantum database.
mysql>GRANT ALL ON ovs_quantum.* TO 'quantum'@'%' IDENTIFIED BY 'PASSWORD';
mysql>GRANT ALL ON ovs_quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
mysql>FLUSH PRIVILEGES;
- Exit the mysql client command.
mysql>quit
- Create the
quantum user, who has the admin role in the services tenant.
- Create the quantum service entry and assign it an endpoint.
- Authenticate as the administrator of the identity service by running the
source command on the keystonerc_admin file containing the required credentials:
#source ~/keystonerc_admin
- Create a user named quantum for the OpenStack networking service to use:
#keystone user-create --name quantum --pass PASSWORD
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | 1df18bcd14404fa9ad954f9d5eb163bc |
| name     | quantum                          |
| tenantId |                                  |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the OpenStack networking service when authenticating with the identity service. Take note of the created user's returned ID as it will be used in subsequent steps.
- Get the ID of the admin role:
#keystone role-get admin
If no admin role exists, create one:
$ keystone role-create --name admin
- Get the ID of the services tenant:
$keystone tenant-list | grep services
If no services tenant exists, create one:
$keystone tenant-create --name services --description "Services Tenant"
This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant.
- Use the keystone user-role-add command to link the quantum user, admin role, and services tenant together:
#keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
Replace the user, role, and tenant IDs with those obtained in the previous steps.
- Create the quantum service entry:
#keystone service-create --name quantum \
--type network \
--description "OpenStack Networking Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking Service     |
| id          | 134e815915f442f89c39d2769e278f9b |
| name        | quantum                          |
| type        | network                          |
+-------------+----------------------------------+
Take note of the created service's returned ID as it will be used in the next step.
- Create the network endpoint entry:
#keystone endpoint-create --service-id SERVICEID \
--publicurl "http://IP:9696" \
--adminurl "http://IP:9696" \
--internalurl "http://IP:9696"
Replace SERVICEID with the ID returned by the keystone service-create command. Replace IP with the IP address or host name of the system that will be acting as the network node.
2.6.32-343.el6.x86_64.
root user to complete the procedure.
- Use the
uname command to identify the kernel that is currently in use on the system.
#uname --kernel-release
- If the output includes the text openstack then the system already has a network namespaces enabled kernel.
2.6.32-358.6.2.openstack.el6.x86_64
No further action is required to install a network namespaces enabled kernel on this system.
- If the output does not include the text openstack then the system does not currently have a network namespaces enabled kernel and further action must be taken.
2.6.32-358.el6.x86_64
Further action is required to install a network namespaces enabled kernel on this system. Follow the remaining steps outlined in this procedure to perform this task.
Note
Note that the release field may contain a higher value than 358. As new kernel updates are released this value is increased.
- Install the updated kernel with network namespaces support using the
yum command.
#yum install "kernel-2.6.*.openstack.el6.x86_64"
The use of the wildcard character (*) ensures that the latest kernel release available will be installed.
- Reboot the system to ensure that the new kernel is running before proceeding with OpenStack networking installation.
#reboot
- Run the uname command again once the system has rebooted to confirm that the newly installed kernel is running.
#uname --kernel-release
2.6.32-358.6.2.openstack.el6.x86_64
OpenStack Networking does not currently support systems that have the Network Manager (NetworkManager) service enabled. The Network Manager service is currently enabled by default on Red Hat Enterprise Linux installations where one of these package groups was selected during installation:
- Desktop
- Software Development Workstation
- Basic Server
- Database Server
- Web Server
- Identity Management Server
- Virtualization Host
- Minimal Install
These steps must be performed while logged in as the root user on each system in the environment that will handle network traffic. This includes the system that will host the OpenStack Networking service, all network nodes, and all compute nodes.
This procedure ensures that the NetworkManager service is disabled and replaced by the standard network service for all interfaces that will be used by OpenStack Networking.
- Verify Network Manager is currently enabled using the
chkconfig command.
#chkconfig --list NetworkManager
The output displayed by the chkconfig command indicates whether or not the Network Manager service is enabled.
- The system displays an error if the Network Manager service is not currently installed:
error reading information on service NetworkManager: No such file or directory
If this error is displayed then no further action is required to disable the Network Manager service. - The system displays a list of numerical run levels along with a value of
on or off indicating whether the Network Manager service is enabled when the system is operating in the given run level.
NetworkManager 0:off 1:off 2:off 3:off 4:off 5:off 6:off
If the value displayed for all run levels is off then the Network Manager service is disabled and no further action is required. If the value displayed for any of the run levels is on then the Network Manager service is enabled and further action is required.
- Ensure that the Network Manager service is stopped using the
servicecommand.#service NetworkManager stop - Ensure that the Network Manager service is disabled using the
chkconfigcommand.#chkconfig NetworkManager off - Open each interface configuration file on the system in a text editor. Interface configuration files are found in the
/etc/sysconfig/network-scripts/ directory and have names of the format ifcfg-X, where X is replaced by the name of the interface. Valid interface names include eth0, p1p5, and em1. In each file ensure that the NM_CONTROLLED configuration key is set to no and the ONBOOT configuration key is set to yes (a complete example file appears after this procedure).
NM_CONTROLLED=no
ONBOOT=yes
This action ensures that the standard network service will take control of the interfaces and automatically activate them on boot. - Ensure that the network service is started using the
servicecommand.#service network start - Ensure that the network service is enabled using the
chkconfigcommand.#chkconfig network on
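As an illustration only, a complete interface configuration file meeting these requirements might look as follows; the interface name and static addressing shown are assumptions and must be adjusted to match your environment:
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0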
- openstack-quantum
- Provides the networking service and associated configuration files.
- openstack-quantum-
PLUGIN - Provides a networking plug-in. Replace
PLUGINwith one of the recommended plug-ins (openvswitchandlinuxbridge). - openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
Install the packages using the yum command as the root user:
#yum install -y openstack-quantum \
openstack-quantum-PLUGIN \
openstack-utils \
openstack-selinux
Replace PLUGIN with openvswitch or linuxbridge (this determines which plug-in is installed).
The OpenStack Networking service accepts incoming API connections on TCP port 9696. The firewall on the service node must be configured to allow network traffic on this port.
All steps in this procedure must be performed while logged in to the service node as the root user.
- Open the
/etc/sysconfig/iptablesfile in a text editor. - Add an INPUT rule allowing TCP traffic on port
9696 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
-A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptablesfile. - Restart the
iptablesservice to ensure that the change takes effect.#service iptables restart
The iptables firewall is now configured to allow incoming connections to the networking service on port 9696.
Prerequisites:
The Networking service is configured by means of settings in the /etc/quantum/quantum.conf file.
All steps in this procedure must be performed on the server hosting the Networking service while logged in as the root user.
Setting the Identity Values
The Networking service must be explicitly configured to use the Identity service for authentication.- Set the authentication strategy (
auth_strategy) configuration key tokeystoneusing theopenstack-configcommand.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT auth_strategy keystone - Set the authentication host (
auth_hostconfiguration key) to the IP address or host name of the Identity server.#openstack-config --set /etc/quantum/quantum.conf \keystone_authtoken auth_hostIPReplaceIPwith the IP address or host name of the Identity server. - Set the administration tenant name (
admin_tenant_name) configuration key to the name of the tenant that was created for the use of the Networking service. Examples in this guide useservices.#openstack-config --set /etc/quantum/quantum.conf \keystone_authtoken admin_tenant_nameservices - Set the administration user name (
admin_user) configuration key to the name of the user that was created for the use of the networking services. Examples in this guide usequantum.#openstack-config --set /etc/quantum/quantum.conf \keystone_authtoken admin_userquantum - Set the administration password (
admin_password) configuration key to the password that is associated with the user specified in the previous step.#openstack-config --set /etc/quantum/quantum.conf \keystone_authtoken admin_passwordPASSWORD
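For reference, a minimal sketch of the values these commands write to /etc/quantum/quantum.conf is shown below; the host name and password are illustrative assumptions only (auth_strategy itself is set in the DEFAULT block):
[keystone_authtoken]
auth_host = identity.example.com
admin_tenant_name = services
admin_user = quantum
admin_password = PASSWORD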
The authentication keys used by the Networking service have been set and will be used when the services are started.
Setting the Message Broker
The Networking service must be explicitly configured with the type, location, and authentication details of the message broker.- Use the
openstack-configutility to set the value of therpc_backendconfiguration key to Qpid.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT rpc_backend quantum.openstack.common.rpc.impl_qpid - Use the
openstack-configutility to set the value of theqpid_hostnameconfiguration key to the host name of the Qpid server.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT qpid_hostnameIPReplaceIPwith the IP address or host name of the message broker. - If you have configured Qpid to authenticate incoming connections, you must provide the details of a valid Qpid user in the networking configuration.
- Use the
openstack-configutility to set the value of theqpid_usernameconfiguration key to the username of the Qpid user that the Networking service must use when communicating with the message broker.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT qpid_usernameUSERNAMEReplaceUSERNAMEwith the required Qpid user name. - Use the
openstack-configutility to set the value of theqpid_passwordconfiguration key to the password of the Qpid user that the Networking service must use when communicating with the message broker.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT qpid_passwordPASSWORDReplacePASSWORDwith the password of the Qpid user.
- If you configured Qpid to use SSL, you must inform the Networking service of this choice. Use
openstack-configutility to set the value of theqpid_protocolconfiguration key tossl.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT qpid_protocol sslThe value of theqpid_portconfiguration key must be set to5671as Qpid listens on this different port when SSL is in use.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT qpid_port 5671Important
To communicate with a Qpid message broker that uses SSL the node must also have:- The nss package installed.
- The certificate of the relevant certificate authority installed in the system NSS database (
/etc/pki/nssdb/).
Thecerttoolcommand is able to import certificates into the NSS database. See thecerttoolmanual page for more information (man certtool).
The OpenStack Networking service has been configured to use the message broker and any authentication schemes that it presents.Setting the Plug-in
Additional configuration settings must be applied to enable the desired plug-in.Open vSwitch
- Create a symbolic link between the
/etc/quantum/plugin.inipath referred to by the Networking service and the plug-in specific configuration file.#ln -s /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini \/etc/quantum/plugin.ini - Update the value of the
tenant_network_type configuration key in the /etc/quantum/plugin.ini file to refer to the type of network that must be used for tenant networks. Supported values are flat, vlan, and local. The default is local but this is not recommended for real deployments.
#openstack-config --set /etc/quantum/plugin.ini \
OVS tenant_network_type TYPE
Replace TYPE with the chosen tenant network type.
- If flat or vlan networking was chosen, the value of the network_vlan_ranges configuration key must also be set. This configuration key maps physical networks to VLAN ranges. Mappings are of the form NAME:START:END where NAME is replaced by the name of the physical network, START is replaced by the VLAN identifier that starts the range, and END is replaced by the VLAN identifier that ends the range.
#openstack-config --set /etc/quantum/plugin.ini \
OVS network_vlan_ranges NAME:START:END
Multiple ranges can be specified using a comma separated list, for example:
physnet1:1000:2999,physnet2:3000:3999
- Update the value of the
core_pluginconfiguration key in the/etc/quantum/quantum.conffile to refer to the Open vSwitch plug-in.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT core_plugin \quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
Linux Bridge
- Create a symbolic link between the
/etc/quantum/plugin.inipath referred to by the Networking service and the plug-in specific configuration file.#ln -s /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini \/etc/quantum/plugin.ini - Update the value of the
tenant_network_type configuration key in the /etc/quantum/plugin.ini file to refer to the type of network that must be used for tenant networks. Supported values are flat, vlan, and local. The default is local but this is not recommended for real deployments.
#openstack-config --set /etc/quantum/plugin.ini \
VLAN tenant_network_type TYPE
Replace TYPE with the chosen tenant network type.
- If flat or vlan networking was chosen, the value of the network_vlan_ranges configuration key must also be set. This configuration key maps physical networks to VLAN ranges. Mappings are of the form NAME:START:END where NAME is replaced by the name of the physical network, START is replaced by the VLAN identifier that starts the range, and END is replaced by the VLAN identifier that ends the range.
#openstack-config --set /etc/quantum/plugin.ini \
LINUX_BRIDGE network_vlan_ranges NAME:START:END
Multiple ranges can be specified using a comma separated list, for example:
physnet1:1000:2999,physnet2:3000:3999
- Update the value of the
core_pluginconfiguration key in the/etc/quantum/quantum.conffile to refer to the Linux Bridge plug-in.#openstack-config --set /etc/quantum/quantum.conf \DEFAULT core_plugin \quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2
Setting the Database Connection String
The database connection string used by the networking service is defined in the/etc/quantum/plugin.inifile. It must be updated to point to a valid database server before starting the service.- Use the
openstack-config command to set the value of the sql_connection configuration key.
#openstack-config --set /etc/quantum/plugin.ini \
DATABASE sql_connection mysql://USER:PASS@IP/DB
Replace:
USER with the database user name the networking service is to use, usually quantum.
PASS with the password of the chosen database user.
IP with the IP address or host name of the database server.
DB with the name of the database that has been created for use by the networking service (ovs_quantum was used as the example in the previous Creating the OpenStack Networking Database section).
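For example, with purely illustrative credentials and database host (values not defined elsewhere in this guide), the command might be:
#openstack-config --set /etc/quantum/plugin.ini \
DATABASE sql_connection mysql://quantum:Secret123@192.0.2.20/ovs_quantum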
Start the Networking Service
- Start the OpenStack Networking service using the
service command.
#service quantum-server start
- Enable the Networking service permanently using the chkconfig command.
#chkconfig quantum-server on
Important
force_gateway_on_subnet configuration key to True in the /etc/quantum/quantum.conf file.
Prerequisites:
All steps in this procedure must be performed while logged in as the root user on the system hosting the DHCP agent.
Configuring Authentication
The DHCP agent must be explicitly configured to use the identity service for authentication.- Set the authentication strategy (
auth_strategy) configuration key tokeystoneusing theopenstack-configcommand.#openstack-config --set /etc/quantum/dhcp_agent.ini \DEFAULT auth_strategy keystone - Set the authentication host (
auth_hostconfiguration key) to the IP address or host name of the identity server.#openstack-config --set /etc/quantum/dhcp_agent.ini \keystone_authtoken auth_hostIPReplaceIPwith the IP address or host name of the identity server. - Set the administration tenant name (
admin_tenant_name) configuration key to the name of the tenant that was created for the use of the networking services. Examples in this guide useservices.#openstack-config --set /etc/quantum/dhcp_agent.ini \keystone_authtoken admin_tenant_nameservices - Set the administration user name (
admin_user) configuration key to the name of the user that was created for the use of the networking services. Examples in this guide usequantum.#openstack-config --set /etc/quantum/dhcp_agent.ini \keystone_authtoken admin_userquantum - Set the administration password (
admin_password) configuration key to the password that is associated with the user specified in the previous step.#openstack-config --set /etc/quantum/dhcp_agent.ini \keystone_authtoken admin_passwordPASSWORD
Configuring the Interface Driver
Set the value of theinterface_driverconfiguration key in the/etc/quantum/dhcp_agent.inifile based on the networking plug-in being used. Execute only the configuration step that applies to the plug-in used in your environment.Open vSwitch Interface Driver
#openstack-config --set /etc/quantum/dhcp_agent.ini \DEFAULT interface_driver quantum.agent.linux.interface.OVSInterfaceDriverLinux Bridge Interface Driver
#openstack-config --set /etc/quantum/dhcp_agent.ini \DEFAULT interface_driver quantum.agent.linux.interface.BridgeInterfaceDriver
Starting the DHCP Agent
- Use the
servicecommand to start thequantum-dhcp-agentservice.#service quantum-dhcp-agent start - Use the
chkconfigcommand to ensure that thequantum-dhcp-agentservice will be started automatically in the future.#chkconfig quantum-dhcp-agent on
Prerequisites:
The first method, which uses an external bridge (br-ex) directly, is only supported when the Open vSwitch plug-in is in use. The second method, which is supported by both the Open vSwitch plug-in and the Linux Bridge plug-in, is to use an external provider network.
These steps must be performed on a system with access to a keystonerc_admin file containing the authentication details of the OpenStack administrative user.
- Use the
sourcecommand to load the credentials of the administrative user.$source~/keystonerc_admin - Use the
net-createaction of thequantumcommand line client to create a new provider network.$quantum net-createEXTERNAL_NAME\--router:external True \--provider:network_typeTYPE\--provider:physical_networkPHYSICAL_NAME\--provider:segmentation_idVLAN_TAGReplace these strings with the appropriate values for your environment:- Replace
EXTERNAL_NAMEwith a name for the new external network provider. - Replace
PHYSICAL_NAMEwith a name for the physical network. This is not applicable if you intend to use a local network type. - Replace
TYPEwith the type of provider network you wish to use. Supported values areflat(for flat networks),vlan(for VLAN networks), andlocal(for local networks). - Replace
VLAN_TAGwith the VLAN tag that will be used to identify network traffic. The VLAN tag specified must have been defined by the network administrator.If thenetwork_typewas set to a value other thanvlanthen this parameter is not required.
Take note of the unique external network identifier returned, this will be required in subsequent steps. - Use the
subnet-createaction of the command line client to create a new subnet for the new external provider network.$quantum subnet-create --gatewayGATEWAY\--allocation-pool start=IP_RANGE_START,end=IP_RANGE_END\--disable-dhcpEXTERNAL_NAMEEXTERNAL_CIDRReplace these strings with the appropriate values for your environment:- Replace
GATEWAYwith the IP address or hostname of the system that is to act as the gateway for the new subnet. - Replace
IP_RANGE_STARTwith the IP address that denotes the start of the range of IP addresses within the new subnet that floating IP addresses will be allocated from. - Replace
IP_RANGE_ENDwith the IP address that denotes the end of the range of IP addresses within the new subnet that floating IP addresses will be allocated from. - Replace
EXTERNAL_NAMEwith the name of the external network the subnet is to be associated with. This must match the name that was provided to thenet-createaction in the previous step. - Replace
EXTERNAL_CIDRwith the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses the subnet represents. An example would be192.168.100.0/24.
Take note of the unique subnet identifier returned, this will be required in subsequent steps.Important
The IP address used to replace the string GATEWAY must be within the block of IP addresses specified in place of the EXTERNAL_CIDR string but outside of the block of IP addresses specified by the range started by IP_RANGE_START and ended by IP_RANGE_END. The block of IP addresses specified by the range started by IP_RANGE_START and ended by IP_RANGE_END must also fall within the block of IP addresses specified by EXTERNAL_CIDR. - Use the
router-createaction of thequantumcommand line client to create a new router.$quantum router-createNAMEReplaceNAMEwith the name to give the new router. Take note of the unique router identifier returned, this will be required in subsequent steps. - Use the
router-gateway-setaction of thequantumcommand line client to link the newly created router to the external provider network.$quantum router-gateway-setROUTERNETWORKReplaceROUTERwith the unique identifier of the router, replaceNETWORKwith the unique identifier of the external provider network. - Use the
router-interface-addaction of thequantumcommand line client to link the newly created router to the subnet.$quantum router-interface-addROUTERSUBNETReplaceROUTERwith the unique identifier of the router, replaceSUBNETwith the unique identifier of the subnet.
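The following sketch walks through the same procedure with purely illustrative names and addresses (none of these values are defined elsewhere in this guide); substitute the identifiers actually returned by your own net-create, subnet-create, and router-create commands for the EXT_NET_ID and SUBNET_ID placeholders:
$quantum net-create ext-net --router:external True \
--provider:network_type flat --provider:physical_network physext
$quantum subnet-create --gateway 203.0.113.1 \
--allocation-pool start=203.0.113.10,end=203.0.113.50 \
--disable-dhcp ext-net 203.0.113.0/24
$quantum router-create ext-router
$quantum router-gateway-set ext-router EXT_NET_ID
$quantum router-interface-add ext-router SUBNET_ID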
Prerequisites:
- Confirm that the openvswitch package is installed. This is normally installed as a dependency of the openstack-quantum-openvswitch package.
#rpm -qa | grep openvswitch
openvswitch-1.10.0-1.el6.x86_64
openstack-quantum-openvswitch-2013.1-3.el6.noarch
- Start the
openvswitchservice.#service openvswitch start - Enable the
openvswitchservice permanently.#chkconfig openvswitch on - Each host running the Open vSwitch agent also requires an Open vSwitch bridge named
br-int. This bridge is used for private network traffic. Use theovs-vsctlcommand to create this bridge before starting the agent.#ovs-vsctl add-br br-intWarning
Thebr-intbridge is required for the agent to function correctly. Once created do not remove or otherwise modify thebr-intbridge. - Ensure that the
br-intdevice persists on reboot by creating a/etc/sysconfig/network-scripts/ifcfg-br-intfile with these contents:DEVICE=br-int DEVICETYPE=ovs TYPE=OVSBridge ONBOOT=yes BOOTPROTO=none
- Set the value of the
bridge_mappingsconfiguration key. This configuration key must contain a list of physical networks and the network bridges associated with them.The format for each entry in the comma separated list is:
PHYSNET:BRIDGE
Where PHYSNET is replaced with the name of a physical network, and BRIDGE is replaced by the name of the network bridge. The physical network must have been defined in the network_vlan_ranges configuration variable on the OpenStack Networking server.
#openstack-config --set /etc/quantum/plugin.ini \
OVS bridge_mappings MAPPINGS
Replace MAPPINGS with the physical network to bridge mappings (a worked example appears after this procedure). - Use the
servicecommand to start thequantum-openvswitch-agentservice.#service quantum-openvswitch-agent start - Use the
chkconfigcommand to ensure that thequantum-openvswitch-agentservice is started automatically in the future.#chkconfig quantum-openvswitch-agent on - Use the
chkconfigcommand to ensure that thequantum-ovs-cleanupservice is started automatically on boot. When started at boot time this service ensures that the OpenStack Networking agents maintain full control over the creation and management of tap devices.#chkconfig quantum-ovs-cleanup on
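As a purely illustrative example of the bridge_mappings value referred to above (the physical network name physnet1 and bridge br-eth1 are assumptions; the physical network must match network_vlan_ranges and the bridge must already exist), a single mapping might be set as follows:
#openstack-config --set /etc/quantum/plugin.ini \
OVS bridge_mappings physnet1:br-eth1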
Prerequisites:
- Set the value of the
physical_interface_mappingsconfiguration key. This configuration key must contain a list of physical networks and the VLAN ranges associated with them that are available for allocation to tenant networks.The format for each entry in the comma separated list is:
WherePHYSNET:VLAN_START:VLAN_ENDPHYSNETis replaced with the name of a physical network,VLAN_STARTis replaced by an identifier indicating the start of the VLAN range, andVLAN_ENDis replaced by an identifier indicating the end of the VLAN range.The physical networks must have been defined in thenetwork_vlan_rangesconfiguration variable on the OpenStack Networking server.#openstack-config --set /etc/quantum/plugin.ini \LINUX_BRIDGE physical_interface_mappingsMAPPINGSReplaceMAPPINGSwith the physical network to VLAN range mappings. - Use the service command to start the
quantum-linuxbridge-agentservice.#service quantum-linuxbridge-agent start - Use the chkconfig command to ensure that the
quantum-linuxbridge-agentservice is started automatically in the future.#chkconfig quantum-linuxbridge-agent on
Prerequisites:
All steps in this procedure must be performed while logged in as the root user.
Configuring Authentication
- Set the authentication strategy (
auth_strategy) configuration key tokeystoneusing theopenstack-configcommand.#openstack-config --set /etc/quantum/metadata_agent.ini \DEFAULT auth_strategy keystone - Set the authentication host (
auth_hostconfiguration key to the IP address or host name of the identity server.#openstack-config --set /etc/quantum/metadata_agent.ini \keystone_authtoken auth_hostIPReplaceIPwith the IP address or host name of the identity server. - Set the administration tenant name (
admin_tenant_name) configuration key to the name of the tenant that was created for the use of the networking services. Examples in this guide useservices.#openstack-config --set /etc/quantum/metadata_agent.ini \keystone_authtoken admin_tenant_nameservices - Set the administration user name (
admin_user) configuration key to the name of the user that was created for the use of the networking services. Examples in this guide usequantum.#openstack-config --set /etc/quantum/metadata_agent.ini \keystone_authtoken admin_userquantum - Set the administration password (
admin_password) configuration key to the password that is associated with the user specified in the previous step.#openstack-config --set /etc/quantum/metadata_agent.ini \keystone_authtoken admin_passwordPASSWORD
Configuring the Interface Driver
Set the value of theinterface_driverconfiguration key in the/etc/quantum/l3_agent.inifile based on the networking plug-in being used. Execute only the configuration step that applies to the plug-in used in your environment.Open vSwitch Interface Driver
#openstack-config --set /etc/quantum/l3_agent.ini \DEFAULT interface_driver quantum.agent.linux.interface.OVSInterfaceDriverLinux Bridge Interface Driver
#openstack-config --set /etc/quantum/l3_agent.ini \DEFAULT interface_driver quantum.agent.linux.interface.BridgeInterfaceDriver
Configuring External Network Access
The L3 agent connects to external networks using either an external bridge or an external provider network. When using the Open vSwitch plug-in either approach is supported. When using the Linux Bridge plug-in only the use of an external provider network is supported. Choose the approach that is most appropriate for the environment.Using an External Bridge
To use an external bridge you must create and configure it. Finally the OpenStack networking configuration must be updated to use it. This must be done on each system hosting an instance of the L3 agent.- Use the
ovs-vsctl command to create the external bridge named br-ex.
#ovs-vsctl add-br br-ex
- Ensure that the
br-exdevice persists on reboot by creating a/etc/sysconfig/network-scripts/ifcfg-br-exfile with these contents:DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge ONBOOT=yes BOOTPROTO=none
- Ensure that the value of the
external_network_bridgeconfiguration key in the/etc/quantum/l3_agent.inifile isbr-ex. This ensures that the L3 agent will use the external bridge.#openstack-config --set /etc/quantum/l3_agent.ini \DEFAULT external_network_bridge br-ex
Using a Provider Network
To connect the L3 agent to external networks using a provider network you must first have created the provider network. You must also have created a subnet and router to associate with it. The unique identifier of the router will be required to complete these steps.- Ensure that the value of the
external_network_bridgeconfiguration key in the/etc/quantum/l3_agent.inifile is blank. This ensures that the L3 agent does not attempt to use an external bridge.#openstack-config --set /etc/quantum/l3_agent.ini \DEFAULT external_network_bridge "" - Set the value of the
router_idconfiguration key in the/etc/quantum/l3_agent.inifile to the identifier of the external router that must be used by the L3 agent when accessing the external provider network.#openstack-config --set /etc/quantum/l3_agent.ini \DEFAULT router_idROUTERReplaceROUTERwith the unique identifier of the router that has been defined for use when accessing the external provider network.
Starting the L3 Agent
- Use the
servicecommand to start thequantum-l3-agentservice.#service quantum-l3-agent start - Use the
chkconfigcommand to ensure that thequantum-l3-agentservice will be started automatically in the future.#chkconfig quantum-l3-agent on
Starting the Metadata Agent
The OpenStack networking metadata agent allows virtual machine instances to communicate with the compute metadata service. It runs on the same hosts as the Layer 3 (L3) agent.- Use the
servicecommand to start thequantum-metadata-agentservice.#service quantum-metadata-agent start - Use the
chkconfigcommand to ensure that thequantum-metadata-agentservice will be started automatically in the future.#chkconfig quantum-metadata-agent on
All Nodes
- Verify that the customized Red Hat Enterprise Linux kernel intended for use with Red Hat Enterprise Linux OpenStack Platform is running:
$uname --kernel-release
2.6.32-358.6.2.openstack.el6.x86_64
If the kernel release value returned does not contain the string openstack then update the kernel and reboot the system.
- Ensure that the installed IP utilities support network namespaces:
$ip netns
If an error indicating that the argument is not recognised or supported is returned then update the system using yum.
Service Nodes
- Ensure that the
quantum-server service is running:
$service quantum-server status
quantum-server (pid 3011) is running...
Network Nodes
- Ensure that the DHCP agent is running:
$service quantum-dhcp-agent status
quantum-dhcp-agent (pid 3012) is running...
- Ensure that the L3 agent is running:
$service quantum-l3-agent status
quantum-l3-agent (pid 3013) is running...
- Ensure that the plug-in agent, if applicable, is running:
$service quantum-PLUGIN-agent status
quantum-PLUGIN-agent (pid 3014) is running...
Replace PLUGIN with the appropriate plug-in for the environment. Valid values include openvswitch and linuxbridge.
- Ensure that the metadata agent is running:
$service quantum-metadata-agent status
quantum-metadata-agent (pid 3015) is running...
These checks must be performed on each Compute node while logged in as the root user.
- Use the
grep command to check for the presence of the svm or vmx CPU extensions by inspecting the /proc/cpuinfo file generated by the kernel:
#grep -E 'svm|vmx' /proc/cpuinfo
If any output is shown after running this command then the CPU is hardware virtualization capable and the functionality is enabled in the system BIOS.
- Use the lsmod command to list the loaded kernel modules and verify that the kvm modules are loaded:
#lsmod | grep kvm
If the output includes kvm_intel or kvm_amd then the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for the OpenStack Compute Service.
openstack-nova-xvpvncproxy service).
root user.
- Install the VNC proxy utilities and the console authentication service:
- Install the openstack-nova-novncproxy package using the
yumcommand:#yum install -y openstack-nova-novncproxy - Install the openstack-nova-console package using the
yumcommand:#yum install -y openstack-nova-console
The openstack-nova-novncproxy service listens on TCP port 6080 and the openstack-nova-xvpvncproxy service listens on TCP port 6081.
root user.
- Edit the
/etc/sysconfig/iptablesfile and add the following on a new line underneath the -A INPUT -i lo -j ACCEPT line and before any -A INPUT -j REJECT rules:-A INPUT -m state --state NEW -m tcp -p tcp --dport 6080 -j ACCEPT
- Save the file and exit the editor.
- Similarly, when using the
openstack-nova-xvpvncproxyservice, enable traffic on TCP port 6081 with the following on a new line in the same location:-A INPUT -m state --state NEW -m tcp -p tcp --dport 6081 -j ACCEPT
Restart the iptables service as the root user to apply the changes:
# service iptables restart
# iptables-save
The /etc/nova/nova.conf file holds the following VNC options:
- vnc_enabled - Default is true.
- vncserver_listen - The IP address to which VNC services will bind.
- vncserver_proxyclient_address - The IP address of the compute host used by proxies to connect to instances.
- novncproxy_base_url - The browser address where clients connect to instances.
- novncproxy_port - The port listening for browser VNC connections. Default is 6080.
- xvpvncproxy_port - The port to bind for traditional VNC clients. Default is 6081.
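As a purely illustrative sketch (the IP address is an assumption, not a value used elsewhere in this guide), the VNC settings on a compute host might look like this:
vnc_enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.0.2.30
novncproxy_base_url = http://192.0.2.30:6080/vnc_auto.html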
As the root user, use the service command to start the console authentication service:
#service openstack-nova-consoleauth start
Use the chkconfig command to permanently enable the service:
#chkconfig openstack-nova-consoleauth on
As the root user, use the service command on the nova node to start the browser-based service:
#service openstack-nova-novncproxy start
Use the chkconfig command to permanently enable the service:
#chkconfig openstack-nova-novncproxy on
/etc/nova/nova.conf file to access instance consoles.
All steps in this procedure must be performed on the database server while logged in as the root user.
- Connect to the database service using the
mysqlcommand.#mysql -u root -p - Create the
novadatabase.mysql>CREATE DATABASE nova; - Create a
novadatabase user and grant it access to thenovadatabase.mysql>GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';mysql>GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';ReplacePASSWORDwith a secure password that will be used to authenticate with the database server as this user. - Flush the database privileges to ensure that they take effect immediately.
mysql>FLUSH PRIVILEGES; - Exit the
mysqlclient command.mysql>quit
- Create the
computeuser, who has theadminrole in theservicestenant. - Create the
computeservice entry and assign it an endpoint.
- Authenticate as the administrator of the Identity service by running the
source command on the keystonerc_admin file containing the required credentials:
#source ~/keystonerc_admin
- Create a user named compute for the OpenStack Compute service to use:
#keystone user-create --name compute --pass PASSWORD
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | 96cd855e5bfe471ce4066794bbafb615 |
| name     | compute                          |
| tenantId |                                  |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the Compute service when authenticating against the Identity service. Take note of the created user's returned ID as it will be used in subsequent steps.
- Get the ID of the admin role:
#keystone role-get admin
If no admin role exists, create one:
$ keystone role-create --name admin
- Get the ID of the services tenant:
$keystone tenant-list | grep services
If no services tenant exists, create one:
$keystone tenant-create --name services --description "Services Tenant"
This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant.
- Use the keystone user-role-add command to link the compute user, admin role, and services tenant together:
#keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
Replace the user, role, and tenant IDs with those obtained in the previous steps.
- Create the compute service entry:
#keystone service-create --name compute \
--type compute \
--description "OpenStack Compute Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute Service        |
| id          | 8dea97f5ee254b309c1792d2bd821e59 |
| name        | compute                          |
| type        | compute                          |
+-------------+----------------------------------+
Take note of the created service's returned ID as it will be used in the next step.
- Create the compute endpoint entry:
#keystone endpoint-create --service-id SERVICEID \
--publicurl "http://IP:8774/v2/\$(tenant_id)s" \
--adminurl "http://IP:8774/v2/\$(tenant_id)s" \
--internalurl "http://IP:8774/v2/\$(tenant_id)s"
Replace:
SERVICEID with the ID returned by the keystone service-create command.
IP with the IP address or host name of the system that will be acting as the compute node.
- openstack-nova-api
- Provides the OpenStack Compute API service. At least one node in the environment must host an instance of the API service. This must be the node pointed to by the Identity service endpoint definition for the Compute service.
- openstack-nova-compute
- Provides the OpenStack Compute service.
- openstack-nova-conductor
- Provides the Compute conductor service. The conductor handles database requests made by Compute nodes, ensuring that individual Compute nodes do not require direct database access. At least one node in each environment must act as a Compute conductor.
- openstack-nova-scheduler
- Provides the Compute scheduler service. The scheduler handles scheduling of requests made to the API across the available Compute resources. At least one node in each environment must act as a Compute scheduler.
- python-cinderclient
- Provides client utilities for accessing storage managed by the OpenStack Block Storage service. This package is not required if you do not intend to attach block storage volumes to your instances or you intend to manage such volumes using a service other than the OpenStack Block Storage service.
Install the packages using the yum command as the root user:
#yum install -y openstack-nova-api openstack-nova-compute \
openstack-nova-conductor openstack-nova-scheduler \
python-cinderclient
Note
All steps in this procedure must be performed while logged in as the root user.
- Set the authentication strategy (
auth_strategy) configuration key tokeystoneusing theopenstack-configcommand.#openstack-config --set /etc/nova/nova.conf \DEFAULT auth_strategy keystone - Set the authentication host (
auth_host) configuration key to the IP address or host name of the identity server.#openstack-config --set /etc/nova/api-paste.ini \filter:authtoken auth_hostIPReplaceIPwith the IP address or host name of the identity server. - Set the administration tenant name (
admin_tenant_name) configuration key to the name of the tenant that was created for the use of the Compute service. In this guide, examples useservices.#openstack-config --set /etc/nova/api-paste.ini \filter:authtoken admin_tenant_nameservices - Set the administration user name (
admin_user) configuration key to the name of the user that was created for the use of the Compute service. In this guide, examples usenova.#openstack-config --set /etc/nova/api-paste.ini \filter:authtoken admin_usernova - Set the administration password (
admin_password) configuration key to the password that is associated with the user specified in the previous step.#openstack-config --set /etc/nova/api-paste.ini \filter:authtoken admin_passwordPASSWORD
The database connection string used by the Compute service is defined in the /etc/nova/nova.conf file. It must be updated to point to a valid database server before starting the service.
Database access is handled by the Compute conductor service (openstack-nova-conductor). Compute nodes communicate with the conductor using the messaging infrastructure; the conductor in turn orchestrates communication with the database. As a result individual compute nodes no longer require direct access to the database. This procedure only needs to be followed on nodes that will host the conductor service. There must be at least one instance of the conductor service in any compute environment.
These steps must be performed while logged in as the root user on the server hosting the Compute service.
- Use the
openstack-configcommand to set the value of thesql_connectionconfiguration key.#openstack-config --set /etc/nova/nova.conf \DEFAULT sql_connection mysql://USER:PASS@IP/DBReplace:USERwith the database user name the Compute service is to use, usuallynova.PASSwith the password of the chosen database user.IPwith the IP address or host name of the database server.DBwith the name of the database that has been created for use by the compute, usuallynova.
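For example, with purely illustrative credentials and database host (assumptions, not values defined elsewhere in this guide):
#openstack-config --set /etc/nova/nova.conf \
DEFAULT sql_connection mysql://nova:Secret123@192.0.2.20/nova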
These steps must be performed while logged in as the root user:
General Settings
Use theopenstack-configutility to set the value of therpc_backendconfiguration key to Qpid.#openstack-config --set /etc/nova/nova.conf \DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpidConfiguration Key
Use theopenstack-configutility to set the value of theqpid_hostnameconfiguration key to the host name of the Qpid server.#openstack-config --set /etc/nova/nova.conf \DEFAULT qpid_hostnameIPReplaceIPwith the IP address or host name of the message broker.Authentication Settings
If you have configured Qpid to authenticate incoming connections, you must provide the details of a valid Qpid user in the Compute configuration:- Use the
openstack-configutility to set the value of theqpid_usernameconfiguration key to the username of the Qpid user that the Compute services must use when communicating with the message broker.#openstack-config --set /etc/nova/nova.conf \DEFAULT qpid_usernameUSERNAMEReplaceUSERNAMEwith the required Qpid user name. - Use the
openstack-configutility to set the value of theqpid_passwordconfiguration key to the password of the Qpid user that the Compute services must use when communicating with the message broker.#openstack-config --set /etc/nova/nova.conf \DEFAULT qpid_passwordPASSWORDReplacePASSWORDwith the password of the Qpid user.
Encryption Settings
If you configured Qpid to use SSL, you must inform the Compute services of this choice. Useopenstack-configutility to set the value of theqpid_protocolconfiguration key tossl.#openstack-config --set /etc/nova/nova.conf \DEFAULT qpid_protocol sslThe value of theqpid_portconfiguration key must be set to5671as Qpid listens on this different port when SSL is in use.#openstack-config --set /etc/nova/nova.conf \DEFAULT qpid_port 5671Important
To communicate with a Qpid message broker that uses SSL the node must also have:- The nss package installed.
- The certificate of the relevant certificate authority installed in the system NSS database (
/etc/pki/nssdb/).
Thecerttoolcommand is able to import certificates into the NSS database. See thecerttoolmanual page for more information (man certtool).
Important
- Default CPU overcommit ratio - 16
- Default memory overcommit ratio - 1.5
- The default CPU overcommit ratio of 16 means that up to 16 virtual cores can be assigned to a node for each physical core.
- The default memory overcommit ratio of 1.5 means that instances can be assigned to a physical node if the total instance memory usage is less than 1.5 times the amount of physical memory available.
Use the cpu_allocation_ratio and ram_allocation_ratio directives in /etc/nova/nova.conf to change these default settings.
The amount of physical resources reserved for the host itself is controlled by these directives in /etc/nova/nova.conf:
- reserved_host_memory_mb - Defaults to 512MB.
- reserved_host_disk_mb - Defaults to 0MB.
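As an illustration only (the values below are arbitrary assumptions, not recommendations from this guide), these settings could be changed with the openstack-config utility used elsewhere in this chapter:
#openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 8.0
#openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 1.0
#openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 1024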
When using OpenStack Networking, the nova-network service must not run. Instead all network related decisions are delegated to the OpenStack Networking service.
The use of nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, is not supported with OpenStack Networking.
Important
Stop and disable nova-network and reboot any physical nodes that were running nova-network before using them to run OpenStack Networking. Inadvertently running the nova-network process while using the OpenStack Networking service can cause problems, as can stale iptables rules pushed down by a previously running nova-network.
These steps must be performed while logged in as the root user.
- Modify the
network_api_classconfiguration key to indicate that the OpenStack Networking service is in use.#openstack-config --set /etc/nova/nova.conf \DEFAULT network_api_class nova.network.quantumv2.api.API - Set the value of the
quantum_urlconfiguration key to point to the endpoint of the networking API.#openstack-config --set /etc/nova/nova.conf \DEFAULT quantum_url http://IP:9696/ReplaceIPwith the IP address or host name of the server hosting the API of the OpenStack Networking service. - Set the value of the
quantum_admin_tenant_nameconfiguration key to the name of the tenant used by the OpenStack Networking service. Examples in this guide useservices.#openstack-config --set /etc/nova/nova.conf \DEFAULT quantum_admin_tenant_nameservices - Set the value of the
quantum_admin_usernameconfiguration key to the name of the administrative user for the OpenStack Networking service. Examples in this guide usequantum.#openstack-config --set /etc/nova/nova.conf \DEFAULT quantum_admin_usernamequantum - Set the value of the
quantum_admin_passwordconfiguration key to the password associated with the administrative user for the networking service.#openstack-config --set /etc/nova/nova.conf \DEFAULT quantum_admin_passwordPASSWORD - Set the value of the
quantum_admin_auth_urlconfiguration key to the URL associated with the identity service endpoint.#openstack-config --set /etc/nova/nova.conf \DEFAULT quantum_admin_auth_url http://IP:35357/v3ReplaceIPwith the IP address or host name of the Identity service endpoint. - Set the value of the
security_group_apiconfiguration key toquantum.#openstack-config --set /etc/nova/nova.conf \DEFAULT security_group_api quantumThis enables the use of OpenStack Networking security groups. - Set the value of the
firewall_driverconfiguration key tonova.virt.firewall.NoopFirewallDriver.#openstack-config --set /etc/nova/nova.conf \DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriverThis must be done when OpenStack Networking security groups are in use.
When nova-compute creates an instance, it must 'plug' each of the vNICs associated with the instance into an OpenStack networking controlled virtual switch. It must also inform the virtual switch of the OpenStack networking port identifier associated with each vNIC.
The virtual interface driver is selected using the libvirt_vif_driver field in the /etc/nova/nova.conf configuration file. In Red Hat Enterprise Linux OpenStack Platform 3 a generic virtual interface driver, nova.virt.libvirt.vif.LibvirtGenericVIFDriver, is provided. This driver relies on OpenStack networking being able to return the type of virtual interface binding required. These plug-ins support this operation:
- Linux Bridge
- Open vSwitch
- NEC
- BigSwitch
- CloudBase Hyper-V
- brocade
Use the openstack-config command to set the value of the libvirt_vif_driver configuration key appropriately:
#openstack-config --set /etc/nova/nova.conf \DEFAULT libvirt_vif_driver \nova.virt.libvirt.vif.LibvirtGenericVIFDriver
Important
nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver, instead of the generic driver.
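If your environment requires this hybrid driver, it can be set with the same openstack-config approach used above; a sketch:
# openstack-config --set /etc/nova/nova.conf \
  DEFAULT libvirt_vif_driver \
  nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver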
Important
/etc/libvirt/qemu.conf file to ensure that the virtual machine launches properly:
user = "root" group = "root" cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", ]
8773, 8774, and 8775.
5900 to 5999.
root user. Repeat the process for each compute node.
- Open the
/etc/sysconfig/iptablesfile in a text editor. - Add an INPUT rule allowing TCP traffic on ports in the ranges
5900to5999and8773to8775by adding these lines to the file.-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT -A INPUT -p tcp -m multiport --dports 8773,8774,8775 -j ACCEPT
The new rule must appear before any INPUT rules that REJECT traffic. - Save the changes to the
/etc/sysconfig/iptablesfile. - Restart the
iptablesservice to ensure that the change takes effect.#service iptables restart
iptables firewall is now configured to allow incoming connections to the Compute services.
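After the restart, one quick way to confirm that the new rules are active is to list the INPUT chain and look for the Compute port ranges; this check is illustrative:
# iptables -L INPUT -n | grep -E "5900:5999|8773"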
Important
- Use the
sucommand to switch to thenovauser.#su nova -s /bin/sh - Run the
nova-manage db synccommand to initialize and populate the database identified in/etc/nova/nova.conf.$nova-manage db sync
Starting the Message Bus Service
Libvirt requires that the messagebus service be enabled and running. - Use the
servicecommand to start themessagebusservice.#service messagebus start - Use the
chkconfigcommand to enable themessagebusservice permanently.#chkconfig messagebus on
Starting the Libvirtd Service
The Compute service requires that the libvirtd service be enabled and running. - Use the
servicecommand to start thelibvirtdservice.#service libvirtd start - Use the
chkconfigcommand to enable thelibvirtdservice permanently.#chkconfig libvirtd on
Starting the API Service
Start the API service on each system that will be hosting an instance of it. Note that each API instance should either have its own endpoint defined in the identity service database or be pointed to by a load balancer that is acting as the endpoint.- Use the
servicecommand to start theopenstack-nova-apiservice.#service openstack-nova-api start - Use the
chkconfigcommand to enable theopenstack-nova-apiservice permanently.#chkconfig openstack-nova-api on
Starting the Scheduler
Start the scheduler on each system that will be hosting an instance of it.- Use the
servicecommand to start theopenstack-nova-schedulerservice.#service openstack-nova-scheduler start - Use the
chkconfigcommand to enable theopenstack-nova-schedulerservice permanently.#chkconfig openstack-nova-scheduler on
Starting the Conductor
The conductor is intended to minimize or eliminate the need for Compute nodes to access the database directly. Compute nodes instead communicate with the conductor via a message broker, and the conductor handles database access. Start the conductor on each system that is intended to host an instance of it. Note that it is recommended that this service not be run on every Compute node, as doing so would eliminate the security benefit of restricting direct database access from the Compute nodes. - Use the
servicecommand to start theopenstack-nova-conductorservice.#service openstack-nova-conductor start - Use the
chkconfigcommand to enable theopenstack-nova-conductorservice permanently.#chkconfig openstack-nova-conductor on
Starting the Compute Service
Start the Compute service on every system that is intended to host virtual machine instances.- Use the
servicecommand to start theopenstack-nova-computeservice.#service openstack-nova-compute start - Use the
chkconfigcommand to enable theopenstack-nova-computeservice permanently.#chkconfig openstack-nova-compute on
Starting Optional Services
Depending on your environment configuration, you may also need to start these services: openstack-nova-cert - The X509 certificate service, required if you intend to use the EC2 API to the Compute service.
openstack-nova-network- The Nova networking service. Note that you must not start this service if you have installed and configured, or intend to install and configure, OpenStack networking.
openstack-nova-objectstore- The Nova object storage service. It is recommended that the OpenStack Object Storage service (Swift) is used for new deployments.
- The system hosting the Dashboard service must have:
- The following already installed:
httpd,mod_wsgi, andmod_ssl(for security purposes). - A connection to the Identity service, as well as to the other OpenStack API services (OpenStack Compute, Block Storage, Object Storage, Image, and Networking services).
- The installer must know the URL of the Identity service endpoint.
Note
To install mod_wsgi, httpd, and mod_ssl, execute as the root user:
# yum install -y mod_wsgi httpd mod_ssl
Note
- openstack-dashboard
- Provides the OpenStack Dashboard service.
- memcached
- Memory-object caching system, which speeds up dynamic web applications by alleviating database load.
- python-memcached
- Python interface to the memcached daemon.
root user.
- Install the memcached object caching system:
#yum install -y memcached python-memcached - Install the Dashboard package:
#yum install -y openstack-dashboard
httpd service. To start the service, execute the following commands as the root user:
- To start the
service, execute on the command line:#service httpd start - To ensure that the httpd service starts automatically in the future, execute:
#chkconfig httpd on - You can confirm that
httpdis running by executing:#service --status-all | grep httpd
/etc/openstack-dashboard/local_settings file (refer to the sample file in the Appendix):
- Cache Backend - As the
rootuser, update theCACHESsettings with the memcached values:SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION' : 'memcacheURL:port', } }Where:memcacheURLis the host on which memcache was installedportis the value from thePORTparameter in the/etc/sysconfig/memcachedfile.
- Dashboard Host - Specify the host URL for your OpenStack Identity service endpoint. For example:
OPENSTACK_HOST="127.0.0.1" - Time Zone - To change the dashboard's timezone, update the following (the time zone can also be changed using the dashboard GUI):
TIME_ZONE="UTC" - To ensure the configuration changes take effect, restart the Apache web server.
Note
HORIZON_CONFIG dictionary contains all the settings for the Dashboard. Whether or not a service is in the Dashboard depends on the Service Catalog configuration in the Identity service. For a full listing, refer to http://docs.openstack.org/developer/horizon/topics/settings.html (Horizon Settings and Configuration).
- Edit the
/etc/openstack-dashboard/local_settingsfile, and uncomment the following parameters:SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https') CSRF_COOKIE_SECURE = True SESSION_COOKIE_SECURE = TrueThe latter two settings instruct the browser to only send dashboard cookies over HTTPS connections, ensuring that sessions will not work over HTTP. - Edit the
/etc/httpd/conf/httpd.conffile, and add the following line:NameVirtualHost *:443
- Edit the
/etc/httpd/conf.d/openstack-dashboard.conf file, and replace the 'Before' configuration with the 'After' configuration:
Before:
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /static /usr/share/openstack-dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  <IfModule mod_deflate.c>
    SetOutputFilter DEFLATE
    <IfModule mod_headers.c>
      # Make sure proxies don’t deliver the wrong content
      Header append Vary User-Agent env=!dont-vary
    </IfModule>
  </IfModule>
  Order allow,deny
  Allow from all
</Directory>
After:
<VirtualHost *:80>
  ServerName openstack.example.com
  RedirectPermanent / https://openstack.example.com
</VirtualHost>
<VirtualHost *:443>
  ServerName openstack.example.com
  SSLEngine On
  SSLCertificateFile /etc/httpd/SSL/openstack.example.com.crt
  SSLCACertificateFile /etc/httpd/SSL/openstack.example.com.crt
  SSLCertificateKeyFile /etc/httpd/SSL/openstack.example.com.key
  SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
  WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
  WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
  Alias /static /usr/share/openstack-dashboard/static/
  <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>
In the 'After' configuration, Apache listens on port 443 and redirects all non-secured requests to the HTTPS protocol. In the secured section, the private key, the public key, and the certificate are defined for usage. - As the
rootuser, restart Apache and memcached:
#service httpd restart
#service memcached restart
If the HTTP version of the dashboard is now accessed via the browser, the user should be redirected to the HTTPS version of the page.
- Log in to the system on which your
keystonerc_adminfile resides and authenticate as the Identity administrator:#source~/keystonerc_admin - Use the
keystone role-createcommand to create theMemberrole:#keystone role-create --nameMember+----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | id |8261ac4eabcc4da4b01610dbad6c038a| | name | Member | +----------+----------------------------------+
Note
Member role, change the value of the OPENSTACK_KEYSTONE_DEFAULT_ROLE configuration key, which is stored in:
/etc/openstack-dashboard/local_settings
httpd service must be restarted for the change to take effect.
- Use the
getenforcecommand to check the status of SELinux on the system:#getenforce - If the resulting value is 'Enforcing' or 'Permissive', use the
setseboolcommand as therootuser to allowhttpd-Identity service connections:#setsebool -P httpd_can_network_connect on
Note
# sestatus
SELinux status: enabled
SELinuxfs mount: /selinux
Current mode: permissive
Mode from config file: enforcing
Policy version: 24
Policy from config file: targeted
For more information, refer to the Security-Enhanced Linux User Guide for Red Hat Enterprise Linux.
Note
root user:
- Edit the
/etc/sysconfig/iptablesconfiguration file:- Allow incoming connections using just HTTPS by adding this firewall rule to the file:
-A INPUT -p tcp --dport 443 -j ACCEPT
- Allow incoming connections using both HTTP and HTTPS by adding this firewall rule to the file:
-A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
- Restart the iptables service for the changes to take effect.
#service iptables restart
Important
- No shared storage across processes or workers.
- No persistence after a process terminates.
/etc/openstack-dashboard/local_settings file:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
}
}
root user to initialize the database and configure it for use:
- Start the MySQL command-line client, by executing:
#mysql -u root -p - Specify the MySQL root user's password when prompted.
- Create the
dashdatabase:mysql>CREATE DATABASE dash; - Create a MySQL user for the newly-created dash database who has full control of the database.
mysql>GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'PASSWORD';
Replacemysql>GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'PASSWORD';PASSWORDwith a secure password for the new database user to authenticate with. - Enter quit at the
mysql>prompt to exit the MySQL client. - In the
/etc/openstack-dashboard/local_settings file, change the following options to refer to the new MySQL database:
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
DATABASES = {
    'default': {
        # Database configuration here
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'PASSWORD',
        'HOST': 'HOST',
        'default-character-set': 'utf8'
    }
}
Replace PASSWORD with the password of the dash database user and replace HOST with the IP address or fully qualified domain name of the database server. - Populate the new database by executing:
Note: You will be asked to create an admin account; this is not required.#cd /usr/share/openstack-dashboard#python manage.py syncdbAs a result, the following should be displayed:Installing custom SQL ... Installing indexes ... DEBUG:django.db.backends:(0.008) CREATE INDEX `django_session_c25c2c28` ON `django_session` (`expire_date`);; args=() No fixtures found.
- Restart Apache to pick up the default site and symbolic link settings:
#service httpd restart - Restart the
openstack-nova-apiservice to ensure the API server can connect to the Dashboard and to avoid an error displayed in the Dashboard.#service openstack-nova-api restart
cached_db session backend can be used, which utilizes both the database and caching infrastructure to perform write-through caching and efficient retrieval.
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
- Advantages:
- Does not require additional dependencies or infrastructure overhead.
- Scales indefinitely as long as the quantity of session data being stored fits into a normal cookie.
- Disadvantages:
- Places session data into storage on the user’s machine and transports it over the wire.
- Limits the quantity of session data which can be stored.
Note
- In the
/etc/openstack-dashboard/local_settingsfile, set:SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies" - Add a randomly-generated
SECRET_KEYto the project by executing on the command line:$django-admin.py startprojectNote
The SECRET_KEY is a text string, which can be specified manually or automatically generated (as in this procedure). You just need to ensure that the key is unique (that is, it does not match any other password on the machine).
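For illustration only, a manually specified key in /etc/openstack-dashboard/local_settings takes the following form; the value shown is a placeholder, and a suitably random string can be produced with, for example, openssl rand -hex 32:
SECRET_KEY = 'replace-with-a-long-unique-random-string'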
Replace HOSTNAME with the host name or IP address of the server on which you installed the Dashboard service:
- HTTPS
https://HOSTNAME/dashboard/
- HTTP
http://HOSTNAME/dashboard/
Table of Contents
- Installed the dashboard (refer to Installing the Dashboard).
- Have an available image for use (refer to Obtaining a Test Disk Image).
- Click Images & Snapshots in the menu.
- Configure the settings that define your instance on the Details tab.
- Enter a name for the image.
- Use the location of your image file in the Image Location field.
- Select the correct type from the drop-down menu in the Format field (for example,
QCOW2). - Leave the Minimum Disk (GB) and Minimum RAM (MB) fields empty.
- Select the Public box.
- Click the button.
See Also:
Note
- When a keypair is created, a keypair file is automatically downloaded through the browser. You can optionally load this key into your SSH agent, for command-line ssh connections, by executing:
#ssh-add ~/.ssh/NAME.pem - To delete an existing keypair, click the keypair's button on the Keypairs tab.
See Also:
- Installed the dashboard (refer to Installing the Dashboard).
- Installed OpenStack Networking Services (refer to Installing the OpenStack Networking Service).
- Log in to the dashboard.
- Click Networks in the menu.
- By default, the dialog opens to the Network tab. You have the option of specifying a network name.
- To define the network's subnet, click on the Subnet and Subnet Detail tabs. Click into each field for field tips.
Note
You do not have to initially specify a subnet (although this will result in any attached instance having the status of 'error'). If you do not define a specific subnet, clear the Create Subnet check box. - Click the button.
See Also:
- Uploaded an image to use as the basis for your instances (refer to Uploading a Disk Image).
- Created a network (refer to Creating a Network).
- Log in to the dashboard.
- Click Instances in the menu.
- By default, the dialog opens to the Details tab:
- Select an Instance Source for your instance. Available values are:
- Image
- Snapshot
- Select an Image or Snapshot to use when launching your instance. The image selected defines the operating system and architecture of your instance.
- Enter an Instance Name to identify your instance.
- Select a Flavor for your instance. The flavor determines the compute resources available to your instance. After a flavor is selected, its resources are displayed in the Flavor Details pane for preview.
- Enter an Instance Count. This determines how many instances to launch using the selected options.
- Click the Access & Security tab and configure the security settings for your instance:
- Either select an existing keypair from the Keypair drop down box, or click the + button to upload a new keypair.
- Select the Security Groups that you wish to apply to your instances. By default, only the default security group will be available.
- Click the button.
Note
- On the Instances tab, click the name of your instance. The Instance Detail page is displayed.
- Click the tab on the resultant page.
See Also:
- Installed the dashboard (refer to Installing the Dashboard).
- Installed the Block Storage service (refer to Installing OpenStack Block Storage).
- Log in to the dashboard.
- Click Volumes in the menu.
- To configure the volume:
- Enter a Volume Name to identify your new volume by.
- Enter a Description to further describe your new volume.
- Enter the Size of your new volume in gigabytes (GB).
Important
In this guide, LVM storage is configured as thecinder-volumesvolume group (refer to Configuring for LVM Storage Backend). There must be enough free disk space in thecinder-volumesvolume group for your new volume to be allocated. - Click the button to create the new volume.
- Launched an instance (refer to Launching an Instance).
- Created a volume (refer to Creating a Volume).
- Log in to the dashboard as a user.
- Click Volumes in the menu.
- Select the instance for the volume in the Attach to Instance field.
- Specify the device name in the Device Name field (for example, '/dev/vdc').
- Click the button.
- Log in to the dashboard.
- Click Instances in the menu.
- Click the button on the row associated with the instance of which you want to take a snapshot.
The Create Snapshot dialog is displayed. - Enter a descriptive name for your snapshot in the Snapshot Name field.
- Click the button to create the snapshot.Your new snapshot will appear in the Image Snapshots table in the Images & Snapshots screen.
See Also:
See Also:
- Create an external network for the pool:
#quantum net-createnetworkName--router:external=TrueExample 13.1. Defining an External Network
#quantum net-create ext-net --router:external=True Created a new network: +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 3a53e3be-bd0e-4c05-880d-2b11aa618aff | | name | ext-net | | provider:network_type | local | | provider:physical_network | | | provider:segmentation_id | | | router:external | True | | shared | False | | status | ACTIVE | | subnets | | | tenant_id | 6b406408dff14a2ebf6d2cd7418510b2 | +---------------------------+--------------------------------------+
- Create the pool of floating IP addresses:
$quantum subnet-create --allocation-pool start=IPStart,end=IPEnd --gateway GatewayIP --disable-dhcp networkName CIDR Example 13.2. Defining a Pool of Floating IP Addresses
$quantum subnet-create --allocation-pool start=10.38.15.128,end=10.38.15.159 --gateway 10.38.15.254 --disable-dhcp ext-net 10.38.15.0/24 Created a new subnet: +------------------+--------------------------------------------------+ | Field | Value | +------------------+--------------------------------------------------+ | allocation_pools | {"start": "10.38.15.128", "end": "10.38.15.159"} | | cidr | 10.38.15.0/24 | | dns_nameservers | | | enable_dhcp | False | | gateway_ip | 10.38.15.254 | | host_routes | | | id | 6a15f954-935c-490f-a1ab-c2a1c1b1529d | | ip_version | 4 | | name | | | network_id | 4ad5e73b-c575-4e32-b581-f9207a22eb09 | | tenant_id | e5be83dc0a474eeb92ad2cda4a5b94d5 | +------------------+--------------------------------------------------+
- Installed the dashboard (refer to Installing the Dashboard).
- Created an external network (refer to Defining a Floating IP-Address Pool).
- Created an internal network (refer to Creating a Network).
- Log in to the dashboard.
- Click Routers in the Manage Network menu.
- Specify the router's name and click the button. The new router is now displayed in the router list.
- Click the new router's button.
- Specify the network to which the router will connect in the External Network field, and click the button.
- To connect a private network to the newly created router:
See Also:
- Created a pool of floating IP addresses (refer to Defining a Floating IP-Address Pool).
- Launched an instance (refer to Launching an Instance).
- Log in to the dashboard as a user that has the
Memberrole. - Click Access & Security in the menu.
- Click the button. The Allocate Floating IP window is displayed.
- Select a pool of addresses from the Pool list.
- Click the button. The allocated IP address will appear in the Floating IPs table.
- Locate the newly allocated IP address in the Floating IPs table. On the same row click the button to assign the IP address to a specific instance.
- The IP Address field is automatically set to the selected floating IP address.Select the instance (with which to associate the floating IP address) from the Port to be associated list.
- Click the button to associate the IP address with the selected instance.
Note
To disassociate a floating IP address from an instance when it is no longer required use the button.
- Installed the dashboard (refer to Installing the Dashboard).
- Installed OpenStack Networking (refer to Installing the OpenStack Networking Service).
Note
- Log into the dashboard.
- Click Access & Security in the menu.
- In the Security Groups pane, click the button on the row for the
defaultsecurity group. The Edit Security Group Rules window is displayed. - Click the button. The Add Rule window is displayed.
- Configure the rule:
- Select the protocol to which the rule must apply from the IP Protocol list.
- Define the port or ports to which the rule will apply using the Open field:
Port - Define a specific port in the Port field. Port Range - Define the port range using the From Port and To Port fields.
- Define the IP address from which connections should be accepted on the defined port using the Source field:
CIDR - Enter a specific IP address range in the CIDR field using Classless Inter-Domain Routing (CIDR) notation. A value of 0.0.0.0/0 allows connections from all IP addresses. Security Group - Select an existing security group from the Source Group drop-down list. This allows connections from any instance in the specified security group.
- Click the button to add the new rule to the security group.
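For reference, an equivalent rule can typically also be created from the command line with the nova client; the following sketch allows SSH (TCP port 22) from any address into the default security group:
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0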
See Also:
Table of Contents
- nagios
- Nagios program that monitors hosts and services on the network, and which can send email or page alerts when a problem arises and when a problem is resolved.
- nagios-devel
- Includes files which can be used by Nagios-related applications.
- nagios-plugins*
- Nagios plugins for Nagios-related applications (including ping and nrpe).
- gd and gd-devel
- gd Graphics Library, for dynamically creating images, and the gd development libraries for gd.
- php
- HTML-embedded scripting language, used by Nagios for the web interface.
- gcc, glibc, and glibc-common
- GNU compiler collection, together with standard programming libraries and binaries (including locale support).
- openssl
- OpenSSL toolkit, which provides support for secure communication between machines.
root user, using the yum command:
# yum install nagios nagios-devel nagios-plugins* gd gd-devel php gcc glibc glibc-common openssl
Note
# subscription-manager repos --enable rhel-6-server-optional-rpms
root user, execute the following:
# yum install -y nrpe nagios-plugins* openssl
/usr/lib64/nagios/plugins directory (depending on the machine, they may be in /usr/lib/nagios/plugins).
Note
- Check web-interface user name and password, and check basic configuration.
- Add OpenStack monitoring to the local server.
- If the OpenStack cloud includes distributed hosts:
- Install and configure NRPE on each remote machine (that has services to be monitored).
- Tell Nagios which hosts are being monitored.
- Tell Nagios which services are being monitored for each host.
Table 14.1. Nagios Configuration Files
| File Name | Description |
|---|---|
/etc/nagios/nagios.cfg
|
Main Nagios configuration file.
|
/etc/nagios/cgi.cfg
|
CGI configuration file.
|
/etc/httpd/conf.d/nagios.conf
|
Nagios configuration for httpd.
|
/etc/nagios/passwd
|
Password file for Nagios users.
|
/usr/local/nagios/etc/ResourceName.cfg
|
Contains user-specific settings.
|
/etc/nagios/objects/ObjectsDir/ObjectsFile.cfg
|
Object definition files that are used to store information about items such as services or contact groups.
|
/etc/nagios/nrpe.cfg
|
NRPE configuration file.
|
nagiosadmin / nagiosadmin. This value can be viewed in the /etc/nagios/cgi.cfg file.
root user:
- To change the default password for the user nagiosadmin, execute:
#htpasswd -c /etc/nagios/passwdnagiosadminNote
To create a new user, use the following command with the new user's name:#htpasswd /etc/nagios/passwdnewUserName - Update the
nagiosadminemail address in/etc/nagios/objects/contacts.cfgdefine contact{ contact_name nagiosadmin ; Short name of user [...snip...] email yourName@example.com ; <<*****CHANGE THIS****** } - Verify that the basic configuration is working:
#nagios -v /etc/nagios/nagios.cfgIf errors occur, check the parameters set in/etc/nagios/nagios.cfg - Ensure that Nagios is started automatically when the system boots.
#chkconfig --add nagios#chkconfig nagios on - Start up Nagios and restart httpd:
#service httpd restart#service nagios start
Note
/etc/nagios/objects/localhost.cfg file is used to define services for basic local statistics (for example, swap usage or the number of current users). You can always comment these services out if they are no longer needed by prefacing each line with a '#' character. This same file can be used to add new OpenStack monitoring services.
Note
cfg_file parameter in the /etc/nagios/nagios.cfg file.
root user:
- Write a short script for the item to be monitored (for example, whether a service is running), and place it in the
/usr/lib64/nagios/pluginsdirectory.For example, the following script checks the number of Compute instances, and is stored in a file namednova-list:#!/bin/env bash export OS_USERNAME=
userNameexport OS_TENANT_NAME=tenantNameexport OS_PASSWORD=passwordexport OS_AUTH_URL=http://identityURL:35357/v2.0/ data=$(nova list 2>&1) rv=$? if [ "$rv" != "0" ] ; then echo $data exit $rv fi echo "$data" | grep -v -e '--------' -e '| Status |' -e '^$' | wc -l - Ensure the script is executable:
#chmod u+x nova-list - In the
/etc/nagios/objects/commands.cfgfile, specify a command section for each new script:define command { command_line /usr/lib64/nagios/plugins/nova-list command_name nova-list } - In the
/etc/nagios/objects/localhost.cfgfile, define a service for each new item, using the defined command. For example:define service { check_command nova-list host_namelocalURLname nova-list normal_check_interval 5 service_description Number of nova vm instances use generic-service } - Restart nagios using:
#service nagios restart
root user:
- In the
/etc/nagios/nrpe.cfgfile, add the central Nagios server IP address in theallowed_hostsline:allowed_hosts=127.0.0.1,
NagiosServerIP - In the
/etc/nagios/nrpe.cfgfile, add any commands to be used to monitor the OpenStack services. For example:command[keystone]=/usr/lib64/nagios/plugins/check_procs -c 1: -w 3: -C keystone-all
Each defined command can then be specified in theservices.cfgfile on the Nagios monitoring server (refer to Creating Service Definitions).Note
Any complicated monitoring can be placed into a script, and then referred to in the command definition. For an OpenStack script example, refer to Configuring OpenStack Services. - Open up the machine's NRPE port:
#iptables -I INPUT -p tcp --dport 5666 -j ACCEPT#iptables-save > /etc/sysconfig/iptables - Start the NRPE service:
#service nrpe start
root user:
- In the
/etc/nagios/objectsdirectory, create ahosts.cfgfile. - In the file, specify a
hostsection for each machine on which an OpenStack service is running and should be monitored:define host{ use linux-server host_nameremoteHostNamealiasremoteHostAliasaddressremoteAddress}where:host_name= Name of the remote machine to be monitored (typically listed in the local/etc/hostsfile). This name is used to reference the host in service and host group definitions.alias= Name used to easily identify the host (typically the same as thehost_name).address= Host address (typically its IP address, although a FQDN can be used instead, just make sure that DNS services are available).
For example:define host{ host_name Server-ABC alias OS-ImageServices address 192.168.1.254 } - In the
/etc/nagios/nagios.cfgfile, under theOBJECT CONFIGURATION FILESsection, specify the following line:cfg_file=/etc/nagios/objects/hosts.cfg
/etc/nagios/objects/services.cfg (as the root user):
- In the
/etc/nagios/objects/commands.cfgfile, specify the following to handle the use of thecheck_nrpeplugin with remote scripts or plugins:define command{ command_name check_nrpe command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ } - In the
/etc/nagios/objectsdirectory, create theservices.cfgfile. - In the file, specify the following
servicesections for each remote OpenStack host to be monitored:##Basic remote checks############# ##Remember that
remoteHostNameis defined in the hosts.cfg file. define service{ use generic-service host_nameremoteHostNameservice_description PING check_command check_ping!100.0,20%!500.0,60% } define service{ use generic-service host_nameremoteHostNameservice_description Load Average check_command check_nrpe!check_load } ##OpenStack Service Checks####### define service{ use generic-service host_nameremoteHostNameservice_description Identity Service check_command check_nrpe!keystone }The above sections ensure that a server heartbeat, load check, and the OpenStack Identity service status are reported back to the Nagios server. All OpenStack services can be reported, just ensure that a matching command is specified in the remote server'snrpe.cfgfile. - In the
/etc/nagios/nagios.cfgfile, under theOBJECT CONFIGURATION FILESsection, specify the following line:cfg_file=/etc/nagios/objects/services.cfg
root user:
- Verify that the updated configuration is working:
#nagios -v /etc/nagios/nagios.cfgIf errors occur, check the parameters set in/etc/nagios/nagios.cfg,/etc/nagios/services.cfg, and/etc/nagios/hosts.cfg. - Restart Nagios:
#service nagios restart - Log into the Nagios dashboard again by using the following URL in your browser, and using the
nagiosadminuser and the password that was set in Step 1:http://nagiosHostURL/nagios
rsyslog service provides facilities both for running a centralized logging server and for configuring individual systems to send their log files to the centralized logging server. This is referred to as configuring the systems for "remote logging".
- While logged in as the
rootuser install the rsyslog package. Using theyumcommand.#yum install rsyslog
root user.
- Configure SELinux to allow rsyslog traffic.
# semanage port -a -t syslogd_port_t -p udp 514
- Configure the
iptablesfirewall to allow rsyslog traffic.- Open the
/etc/sysconfig/iptablesfile in a text editor. - Add an
INPUTrule allowing UDP traffic on port514to the file. The new rule must appear before anyINPUTrules thatREJECTtraffic.-A INPUT -m state --state NEW -m udp -p udp --dport 514 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptablesfile. - Restart the
iptablesservice for the firewall changes to take effect.#service iptables restart
- Open the
/etc/rsyslog.conf file in a text editor. - Add these lines to the file, defining the templates and the location logs will be saved to:
$template TmplAuth, "/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
$template TmplMsg, "/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
authpriv.*   ?TmplAuth
*.info;mail.none;authpriv.none;cron.none   ?TmplMsg
- Remove the comment character (#) from the beginning of these lines in the file:
#$ModLoad imudp #$UDPServerRun 514
Save the changes to the/etc/rsyslog.conffile.
root user.
- Edit the
/etc/rsyslog.conf, and specify the address of your centralized log server by adding the following:*.* @
YOURSERVERADDRESS:YOURSERVERPORTReplaceYOURSERVERADDRESSwith the address of the centralized logging server. ReplaceYOURSERVERPORTwith the port on which thersyslogservice is listening. For example:*.* @
192.168.20.254:514Or:*.* @
log-server.company.com:514The single@specifies the UDP protocol for transmission. Use a double@@to specify the TCP protocol for transmission.Important
The use of the wildcard * character in these example configurations indicates torsyslogthat log entries from all log facilities and of all log priorities must be sent to the remotersyslogserver.For information on applying more precise filtering of log files refer to the manual page for thersyslogconfiguration file,rsyslog.conf. Access the manual page by running the commandman rsyslog.conf.
rsyslog service is started or restarted the system will send all log messages to the centralized logging server.
rsyslog service must be running on both the centralized logging server and the systems attempting to log to it.
root user.
- Use the
servicecommand to start thersyslogservice.#service rsyslog start - Use the
chkconfigcommand to ensure thersyslogservice starts automatically in future.#chkconfig rsyslog on
rsyslog service has been started. The service will start sending or receiving log messages based on its local configuration.
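To confirm that remote logging is working once both sides are configured, one simple check (assuming the template path configured earlier on the centralized server) is to emit a test message from a client:
# logger -p user.info "remote logging test"
A matching entry should then appear on the centralized logging server in a log file under the /var/log/HOSTNAME/ directory, where HOSTNAME is the client's host name.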
Table of Contents
regionOne.
--region argument when adding service endpoints.
$keystone endpoint-create --regionREGION\--service-idSERVICEID\--publicurlPUBLICURL--adminurlADMINURL--internalurlINTERNALURL
Replace REGION with the name of the region to which the endpoint belongs. When sharing an endpoint between regions, create an endpoint entry containing the same URLs for each applicable region. For information on setting the URLs for each service, refer to the identity service configuration information of the service in question.
Example 16.1. Endpoints within Discrete Regions
APAC and EMEA regions share an identity server (identity.example.com) endpoint while providing region specific compute API endpoints.
$ keystone endpoint-list
+---------+--------+------------------------------------------------------+
| id | region | publicurl |
+---------+--------+------------------------------------------------------+
| 0d8b... | APAC | http://identity.example.com:5000/v3 |
| 769f... | EMEA | http://identity.example.com:5000/v3 |
| 516c... | APAC | http://nova-apac.example.com:8774/v2/$(tenant_id)s |
| cf7e... | EMEA | http://nova-emea.example.com:8774/v2/$(tenant_id)s |
+---------+--------+------------------------------------------------------+
openstack-nova-compute) service. Once the Compute service is configured and running, it communicates with other nodes in the environment, including Compute API endpoints and Compute conductors, via the message broker.
openstack-nova-conductor) then you must also ensure that the service is configured to access the compute database, refer to Section 10.3.4.2, “Setting the Database Connection String” for more information.
nova service-list while authenticated as the OpenStack administrator (using a keystonerc file) to confirm the status of the new node.
- Use the
sourcecommand to load the administrative credentials from thekeystonerc_adminfile.$source ~/keystonerc_admin - Use the
nova service-listcommand to identify the compute node to be removed.$+------------------+----------+----------+---------+-------+ | Binary | Host | Zone | Status | State | +------------------+----------+----------+---------+-------+ | nova-cert | node0001 | internal | enabled | up | | nova-compute | node0001 | nova | enabled | up | | nova-conductor | node0001 | internal | enabled | up | | nova-consoleauth | node0001 | internal | enabled | up | | nova-network | node0001 | internal | enabled | up | | nova-scheduler | node0001 | internal | enabled | up | | ... | ... | ... | ... | ... | +------------------+----------+----------+---------+-------+nova service-list - Use the
nova service-disablecommand to disable thenova-computeservice on the node. This prevents new instances from being scheduled to run on the host.$+----------+--------------+----------+ | Host | Binary | Status | +----------+--------------+----------+ | node0001 | nova-compute | disabled | +----------+--------------+----------+nova service-disableHOSTnova-computeReplaceHOSTwith the name of the node to disable as indicated in the output of thenova service-listcommand in the previous step. - Use the
nova service-listcommand to verify that the relevant instance of thenova-computeservice is now disabled.$+------------------+----------+----------+----------+-------+ | Binary | Host | Zone | Status | State | +------------------+----------+----------+----------+-------+ | nova-cert | node0001 | internal | enabled | up | | nova-compute | node0001 | nova | disabled | up | | nova-conductor | node0001 | internal | enabled | up | | nova-consoleauth | node0001 | internal | enabled | up | | nova-network | node0001 | internal | enabled | up | | nova-scheduler | node0001 | internal | enabled | up | | ... | ... | ... | ... | ... | +------------------+----------+----------+----------+-------+nova service-list - Use the
nova migratecommand to migrate running instances to other compute nodes.$nova migrateHOSTReplaceHOSTwith the name of the host being removed as indicated by thenova service-listcommand in the previous steps.
- Has been built with the cloud-init package, it can automatically access metadata passed via config drive.
- Does not have the cloud-init package installed, it must be customized to run a script that mounts the config drive on boot, reads the data from the drive, and takes appropriate action such as adding the public key to an account.
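The following is a minimal sketch of such a boot script for a Linux guest without cloud-init; it assumes the standard config-2 volume label and the openstack/latest/meta_data.json layout described later in this chapter, and injects any public keys into the root account:
#!/bin/bash
# Mount the config drive (assumes the standard config-2 label).
mkdir -p /mnt/config
mount /dev/disk/by-label/config-2 /mnt/config || exit 1

# Append any public keys from the metadata to root's authorized_keys.
mkdir -p /root/.ssh && chmod 700 /root/.ssh
python -c '
import json
with open("/mnt/config/openstack/latest/meta_data.json") as f:
    meta = json.load(f)
for key in meta.get("public_keys", {}).values():
    print(key.strip())
' >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys

umount /mnt/config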
- Use the
--config-drive=trueparameter when callingnova boot.The following complex example enables the config drive as well as passing user data, two files, and two key/value metadata pairs, all of which are accessible from the config drive.$nova boot --config-drive=true --image my-image-name \--key-name mykey --flavor 1 --user-data ./my-user-data.txt myinstance \--file /etc/network/interfaces=/home/myuser/instance-interfaces \--file known_hosts=/home/myuser/.ssh/known_hosts --meta role=webservers \--meta essential=false - Configure the Compute service to automatically create a config drive when booting by setting the following option in
/etc/nova/nova.conf:force_config_drive=trueNote
If a user uses the--config-drive=trueflag with thenova bootcommand, an administrator cannot disable the config drive.
Warning
genisoimage program must be installed on each Compute host before attempting to use config drive (or the instance will not boot properly).
- ISO 9660 format, add the following line to
/etc/nova/nova.conf:config_drive_format=iso9660 - VFAT format, add the following line to
/etc/nova/nova.conf:config_drive_format=vfatNote
For legacy reasons, the config drive can be configured to use VFAT format instead of ISO 9660. However, it is unlikely that you would require VFAT format, since ISO 9660 is widely supported across operating systems. If you use the VFAT format, the config drive will be 64 MBs.
nova boot options for config drive:
Table 16.1. Description of configuration options for config drive
| Configuration option=Default value | (Type) Description |
|---|---|
|
config_drive_cdrom=False
|
(BoolOpt) Whether Compute attaches the Config Drive image as a cdrom drive instead of a disk drive.
|
|
config_drive_format=iso9660
|
(StrOpt) Config drive format (valid options: iso9660 or vfat).
|
|
config_drive_inject_password=False
|
(BoolOpt) Sets the administrative password in the config drive image.
|
|
config_drive_skip_versions=1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
|
(StrOpt) List of metadata versions to skip placing into the config drive.
|
|
config_drive_tempdir=None
|
(StrOpt) Where to put temporary files associated with config drive creation.
|
|
force_config_drive=False
|
(StrOpt) Whether Compute automatically creates config drive on startup.
|
|
mkisofs_cmd=genisoimage
|
(StrOpt) Name and optional path of the tool used for ISO image creation. Ensure that the specified tool is installed on each Compute host before attempting to use config drive (or the instance will not boot properly).
|
config-2 volume label.
Procedure 16.1. Mount the Drive by Label
/dev/disk/by-label/config-2 device. For example:
- Create the directory to use for access:
#mkdir -p /mnt/config - Mount the device:
#mount /dev/disk/by-label/config-2 /mnt/config
Procedure 16.2. Mount the Drive using Disk Identification
udev, then the /dev/disk/by-label directory will not be present.
- Use the
blkidcommand to identify the block device that corresponds to the config drive. For example, when booting the cirros image with the m1.tiny flavor, the device will be/dev/vdb:#blkid -t LABEL="config-2" -odevice/dev/vdb - After you have identified the disk, the device can then be mounted:
- Create the directory to use for access:
#mkdir -p /mnt/config - Mount the device:
#mount /dev/vdb /mnt/config
Note
- If using a Windows guest, the config drive is automatically displayed as the next available drive letter (for example, 'D:/').
- When accessing the config drive, do not rely on the presence of the EC2 metadata (files under the
ec2directory) to be present in the config drive (this content may be removed in a future release). - When creating images that access config drive data, if there are multiple directories under the
openstackdirectory, always select the highest API version by date that your consumer supports. For example, if your guest image can support versions 2012-03-05, 2012-08-05, 2013-04-13, it is best to try 2013-04-13 first and only revert to an earlier version if 2013-04-13 is absent.
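A guest-side sketch of that selection logic, assuming the drive is already mounted at /mnt/config as in the procedures above:
# Use the newest dated metadata directory; fall back to an earlier one
# only if your consumer cannot parse it.
latest=$(ls /mnt/config/openstack | grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2}$' | sort | tail -n 1)
cat "/mnt/config/openstack/${latest}/meta_data.json"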
Procedure 16.3. View the Mounted Config Drive
- Move to the newly mounted drive's files. For example:
$cd /mnt/config - The files in the resulting config drive vary depending on the arguments that were passed to
nova boot. Based on the example in Enabling Config Drive, the contents of the config drive would be:$lsec2/2013-04-13/meta-data.json ec2/2013-04-13/user-data ec2/latest/meta-data.json ec2/latest/user-data openstack/2013-08-10/meta_data.json openstack/2013-08-10/user_data openstack/content openstack/content/0000 openstack/content/0001 openstack/latest/meta_data.json openstack/latest/user_data
openstack/2012-08-10/meta_data.json and openstack/latest/meta_data.json (these two files are identical), formatted to improve readability:
{
"availability_zone": "nova",
"files": [
{
"content_path": "/content/0000",
"path": "/etc/network/interfaces"
},
{
"content_path": "/content/0001",
"path": "known_hosts"
}
],
"hostname": "test.novalocal",
"launch_index": 0,
"name": "test",
"meta": {
"role": "webservers"
"essential": "false"
},
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
},
"uuid": "83679162-1378-4288-a2d4-70e13ec132aa"
}
Note
--file /etc/network/interfaces=/home/myuser/instance-interfaces is used with the nova boot command, the contents of this file are contained in the file openstack/content/0000 file on the config drive, and the path is specified as /etc/network/interfaces in the meta_data.json file.
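To make that mapping concrete, a guest that has mounted the drive at /mnt/config (as in the earlier procedures) could restore the injected file with a single copy; this is illustrative only:
# Copy the injected content to the path recorded in meta_data.json
cp /mnt/config/openstack/content/0000 /etc/network/interfaces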
ec2/2009-04-04/meta-data.json and ec2/latest/meta-data.json, formatted to improve readability (the two files are identical):
{
"ami-id": "ami-00000001",
"ami-launch-index": 0,
"ami-manifest-path": "FIXME",
"block-device-mapping": {
"ami": "sda1",
"ephemeral0": "sda2",
"root": "/dev/sda1",
"swap": "sda3"
},
"hostname": "test.novalocal",
"instance-action": "none",
"instance-id": "i-00000001",
"instance-type": "m1.tiny",
"kernel-id": "aki-00000002",
"local-hostname": "test.novalocal",
"local-ipv4": null,
"placement": {
"availability-zone": "nova"
},
"public-hostname": "test.novalocal",
"public-ipv4": "",
"public-keys": {
"0": {
"openssh-key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
}
},
"ramdisk-id": "ari-00000003",
"reservation-id": "r-7lfps8wj",
"security-groups": [
"default"
]
}
--user-data flag was passed to nova boot, and will contain the contents of the user data file passed as the argument.
openstack/2013-08-10/user_data
openstack/latest/user_data
ec2/2013-04-13/user-data
ec2/latest/user-data
Note
- Updating Compute Quotas using the Command Line
- Updating Block Storage Quotas using the Command Line
- Select the Admin > Projects option in the navigation sidebar.
- Click the project's button, and then select Modify Quotas. The Edit Project window is displayed.
- Edit quota values on the Quota tab, and click the button. The table below provides parameter descriptions (listed in order of appearance).
Table 17.1. Compute Quota Descriptions
| Quota | Description | Service |
|---|---|---|
|
Metadata Items
|
Number of metadata items allowed per instance.
|
Compute
|
|
VCPUs
|
Number of instance cores allowed per tenant.
|
Compute
|
|
Instances
|
Number of instances allowed per tenant.
|
Compute
|
|
Injected Files
|
Number of injected files allowed per tenant.
|
Compute
|
|
Injected File Content Bytes
|
Number of content bytes allowed per injected file.
|
Compute
|
|
Volumes
|
Number of volumes allowed per tenant.
|
Block Storage
|
|
Gigabytes
|
Number of volume gigabytes allowed per tenant.
|
Block Storage
|
|
RAM (MB)
|
Megabytes of instance RAM allowed per tenant.
|
Compute
|
|
Floating IPs
|
Number of floating IP addresses allowed per tenant.
|
Compute
|
|
Fixed IPs
|
Number of fixed IP addresses allowed per tenant. This number should equal at least the number of allowed instances.
|
Compute
|
|
Security Groups
|
Number of security groups allowed per tenant.
|
Compute
|
|
Security Group Rules
|
Number of rules per security group.
|
Compute
|
- Because Compute quotas are managed per tenant, use the following to obtain a tenant list:
#keystone tenant-list +----------------------------------+----------+---------+ | id | name | enabled | +----------------------------------+----------+---------+ | a981642d22c94e159a4a6540f70f9f8d | admin | True | | 934b662357674c7b9f5e4ec6ded4d0e7 | redhat01 | True | | 7bc1dbfd7d284ec4a856ea1eb82dca80 | redhat02 | True | | 9c554aaef7804ba49e1b21cbd97d218a | services | True | +----------------------------------+----------+---------+ - To update a particular Compute quota for the tenant, use:
For example:#nova-manage project quotatenantName--keyquotaName--valuequotaValue#nova-manage project quota redhat01 --key floating_ips --value 20 metadata_items: 128 injected_file_content_bytes: 10240 ram: 51200 floating_ips: 20 security_group_rules: 20 instances: 10 key_pairs: 100 injected_files: 5 cores: 20 fixed_ips: unlimited injected_file_path_bytes: 255 security_groups: 10 - Restart the Compute service:
#service openstack-nova-api restart
Note
/etc/nova/nova.conf file.
Table 17.2. Compute Quota Descriptions
| Quota | Description | Parameter |
|---|---|---|
|
Injected File Content Bytes
|
Number of bytes allowed per injected file.
|
injected_file_content_bytes
|
|
Metadata Items
|
Number of metadata items allowed per instance
|
metadata_items
|
|
Ram
|
Megabytes of instance ram allowed per tenant.
|
ram
|
|
Floating Ips
|
Number of floating IP addresses allowed per tenant.
|
floating_ips
|
|
Key Pairs
|
Number of key pairs allowed per user.
|
key_pairs
|
|
Injected File Path Bytes
|
Number of bytes allowed per injected file path.
|
injected_file_path_bytes
|
|
Instances
|
Number of instances allowed per tenant.
|
instances
|
|
Security Group Rules
|
Number of rules per security group.
|
security_group_rules
|
|
Injected Files
|
Number of allowed injected files.
|
injected_files
|
|
Cores
|
Number of instance cores allowed per tenant.
|
cores
|
|
Fixed Ips
|
Number of fixed IP addresses allowed per tenant. This number should equal at least the number of allowed instances.
|
fixed_ips
|
|
Security Groups
|
Number of security groups per tenant.
|
security_groups
|
- Because Block Storage quotas are managed per tenant, use the following to obtain a tenant list:
#keystone tenant-list +----------------------------------+----------+---------+ | id | name | enabled | +----------------------------------+----------+---------+ | a981642d22c94e159a4a6540f70f9f8d | admin | True | | 934b662357674c7b9f5e4ec6ded4d0e7 | redhat01 | True | | 7bc1dbfd7d284ec4a856ea1eb82dca80 | redhat02 | True | | 9c554aaef7804ba49e1b21cbd97d218a | services | True | +----------------------------------+----------+---------+ - To view Block Storage quotas for a tenant, use:
For example:#cinder quota-showtenantName#cinder quota-show redhat01 +-----------+-------+ | Property | Value | +-----------+-------+ | gigabytes | 1000 | | snapshots | 10 | | volumes | 10 | +-----------+-------+ - To update a particular quota value, use:
For example:#cinder quota-updatetenantName--quotaKey=NewValue#cinder quota-update redhat01 --volumes=15#cinder quota-show redhat01 +-----------+-------+ | Property | Value | +-----------+-------+ | gigabytes | 1000 | | snapshots | 10 | | volumes | 15 | +-----------+-------+
Note
/etc/cinder/cinder.conf file.
Note
rootaccess to the host machine (to install components, as well other administrative tasks such as updating the firewall).- Administrative access to the Identity service.
- Administrative access to the database (ability to add both databases and users).
Table A.1. OpenStack Installation-General
| Item | Description | Value/Verified |
|---|---|---|
|
Hardware Requirements
|
Requirements in section 3.2 Hardware Requirements must be verified.
|
Yes | No
|
|
Operating System
|
Red Hat Enterprise Linux 6.5 Server
|
Yes | No
|
|
Red Hat Subscription
|
You must have a subscription to:
|
Yes | No
|
| Administrative access on all installation machines | Almost all procedures in this guide must be performed as the root user, so the installer must have root access. | Yes | No |
|
Red Hat Subscriber Name/Password
|
You must know the Red Hat subscriber name and password.
|
|
|
Machine addresses
|
You must know the host IP address of the machine or machines on which any OpenStack components and supporting software will be installed.
|
Provide host addresses for the following:
|
Table A.2. OpenStack Identity Service
| Item | Description | Value |
|---|---|---|
|
Host Access
|
The system hosting the Identity service must have:
|
Verify whether the system has:
|
|
SSL Certificates
|
If using external SSL certificates, you must know where the database and certificates are located, and have access to them.
|
Yes | No
|
| LDAP Information | If using LDAP, you must have administrative access to configure a new directory server schema. | Yes | No |
| Connections | The system hosting the Identity service must have a connection to all other OpenStack services. | Yes | No |
Table A.3. OpenStack Object Storage Service
| Item | Description | Value |
|---|---|---|
|
File System
|
Red Hat currently supports the
XFS and ext4 file systems for object storage; one of these must be available.
|
|
|
Mount Point
|
The /srv/node mount point must be available.
|
Yes | No
|
| Connections | For the cloud installed in this guide, the system hosting the Object Storage service will need a connection to the Identity service. | Yes | No |
Table A.4. OpenStack Image Service
| Item | Description | Value |
|---|---|---|
|
Backend Storage
|
The Image service supports a number of storage backends. You must decide on one of the following:
|
Storage:
|
| Connections | The system hosting the Image service must have a connection to the Identity, Dashboard , and Compute services, as well as to the Object Storage service if using OpenStack Object Storage as its backend. | Yes | No |
Table A.5. OpenStack Block Storage Service
| Item | Description | Value |
|---|---|---|
|
Backend Storage
|
The Block Storage service supports a number of storage backends. You must decide on one of the following:
|
Storage:
|
| Connections | The system hosting the Block Storage service must have a connection to the Identity, Dashboard, and Compute services. | Yes | No |
Table A.6. OpenStack Networking Service
| Item | Description | Value |
|---|---|---|
|
Plugin agents
|
In addition to the standard OpenStack Networking components, a wide choice of plugin agents are also available that implement various networking mechanisms.
You'll need to decide which of these apply to your network and install them.
|
Circle appropriate:
|
| Connections | The system hosting the OpenStack Networking service must have a connection to the Identity, Dashboard, and Compute services. | Yes | No |
Table A.7. OpenStack Compute Service
| Item | Description | Value |
|---|---|---|
|
Hardware virtualization support
|
The OpenStack Compute service requires hardware virtualization support. Note: a procedure is included in this Guide to verify this (refer to Checking for Hardware Virtualization Support).
|
Yes | No
|
|
VNC client
|
The Compute service supports the Virtual Network Computing (VNC) console to instances through a web browser. You must decide whether this will be provided to your users.
|
Yes | No
|
|
Resources: CPU and Memory
|
OpenStack supports overcommitting of CPU and memory resources on Compute nodes (refer to Configuring Resource Overcommitment).
|
Decide:
|
|
Resources:Host
|
You can reserve resources for the host, to prevent a given amount of memory and disk resources from being automatically assigned to other resources on the host (refer to Reserving Host Resources).
|
Decide:
|
| libvirt Version | You will need to know the version of your libvirt for the configuration of Virtual Interface Plugging (refer to Configuring Virtual Interface Plugging). | Version: |
| Connections | The system hosting the Compute service must have a connection to all other OpenStack services. | Yes | No |
Table A.8. OpenStack Dashboard Service
| Item | Description | Value |
|---|---|---|
|
Host software
|
The system hosting the Dashboard service must have the following already installed:
|
Yes | No
|
|
Connections
|
The system hosting the Dashboard service must have a connection to all other OpenStack services.
|
Yes | No
|
ERROR message is displayed. Determining the actual cause of the failure requires the use of the command line tools.
Use nova list to locate the unique identifier of the instance. Then use this identifier as an argument to the nova show command. One of the items returned will be the error condition. The most common value is NoValidHost.
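A sketch of that sequence; the instance identifier shown here is illustrative:
$ nova list
$ nova show 83679162-1378-4288-a2d4-70e13ec132aa
The error condition (for example, NoValidHost) appears in the output of nova show for the failed instance.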
Note
nova console-log command. Sometimes however the log of a running instance will either appear to be completely empty or contain a single errant character, often a ? character.
Append console=tty0 console=ttyS0,115200n8 to the list of kernel arguments specified in the boot loader.
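For example, on a Red Hat Enterprise Linux 6 guest the kernel line in /boot/grub/grub.conf might be extended as follows; the kernel version and existing arguments shown are placeholders:
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rhgb quiet console=tty0 console=ttyS0,115200n8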
keystone) is unable to contact the identity service it returns an error:
$ keystone ACTION
Unable to communicate with identity service: [Errno 113] No route to host. (HTTP 400)
Replace ACTION with any valid identity service client action such as user-list or service-list. When the service is unreachable, any identity client command that requires it will fail.
- Service Status
- On the system hosting the identity service check the service status:
$keystone (pidservice openstack-keystone status2847) is running...If the service is not running then log in as therootuser and start it.#service openstack-keystone start - Firewall Rules
- On the system hosting the identity service check that the firewall allows TCP traffic on ports
5000 and 35357. This command must be run while logged in as the root user.
# iptables --list -n | grep -P "(5000|35357)"
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 5000,35357
If no rule is found, add one to the /etc/sysconfig/iptables file:
-A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
Restart theiptablesservice for the change to take effect.#service iptables restart - Service Endpoints
- On the system hosting the identity service check that the endpoints are defined correctly.
- Obtain the administration token:
$admin_token =grep admin_token /etc/keystone/keystone.conf0292d404a88c4f269383ff28a3839ab4 - Determine the correct administration endpoint for the identity service:
http://
IP:35357/VERSIONReplaceIPwith the IP address or host name of the system hosting the identity service. ReplaceVERSIONwith the API version (v2.0, orv3) that is in use. - Unset any pre-defined identity service related environment variables:
$unset OS_USERNAME OS_TENANT_NAME OS_PASSWORD OS_AUTH_URL - Use the administration token and endpoint to authenticate with the identity service. Confirm that the identity service endpoint is correct:
$keystone --os-token=TOKEN\--os-endpoint=ENDPOINT\endpoint-listVerify that the listedpublicurl,internalurl, andadminurlfor the identity service are correct. In particular ensure that the IP addresses and port numbers listed within each endpoint are correct and reachable over the network.If these values are incorrect then refer to Section 5.6, “Creating the Identity Service Endpoint” for information on adding the correct endpoint. Once the correct endpoints have been added remove any incorrect endpoints using theendpoint-deleteaction of thekeystonecommand.$keystone --os-token=TOKEN\--os-endpoint=ENDPOINT\endpoint-deleteIDReplaceTOKENandENDPOINTwith the values identified previously. ReplaceIDwith the identity of the endpoint to remove as listed by theendpoint-listaction.
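The following sketch shows the same sequence with illustrative values; the token, address, and endpoint identifier are placeholders and will differ in your environment:
$ grep admin_token /etc/keystone/keystone.conf
admin_token = 0292d404a88c4f269383ff28a3839ab4
$ unset OS_USERNAME OS_TENANT_NAME OS_PASSWORD OS_AUTH_URL
$ keystone --os-token=0292d404a88c4f269383ff28a3839ab4 \
    --os-endpoint=http://192.0.2.10:35357/v2.0 \
    endpoint-list
$ keystone --os-token=0292d404a88c4f269383ff28a3839ab4 \
    --os-endpoint=http://192.0.2.10:35357/v2.0 \
    endpoint-delete ENDPOINT_ID
Replace ENDPOINT_ID with the identifier of an incorrect endpoint as reported by the endpoint-list action.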
The log files of the Block Storage services are stored in the /var/log/cinder/ directory of the host on which each service runs.
Table C.1. Log Files
| File name | Description |
|---|---|
| api.log | The log of the API service (openstack-cinder-api). |
| cinder-manage.log | The log of the management interface (cinder-manage). |
| scheduler.log | The log of the scheduler service (openstack-cinder-scheduler). |
| volume.log | The log of the volume service (openstack-cinder-volume). |
The log files of the Compute services are stored in the /var/log/nova/ directory of the host on which each service runs. An example of reviewing these logs follows the table.
Table C.2. Log Files
| File name | Description |
|---|---|
| api.log | The log of the API service (openstack-nova-api). |
| cert.log | The log of the X509 certificate service (openstack-nova-cert). This service is only required by the EC2 API to the Compute service. |
| compute.log | The log file of the Compute service itself (openstack-nova-compute). |
| conductor.log | The log file of the conductor (openstack-nova-conductor) that provides database query support to the Compute service. |
| consoleauth.log | The log file of the console authentication service (openstack-nova-consoleauth). |
| network.log | The log of the network service (openstack-nova-network). Note that this service only runs in environments that are not configured to use OpenStack networking. |
| nova-manage.log | The log of the management interface (nova-manage). |
| scheduler.log | The log of the scheduler service (openstack-nova-scheduler). |
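As an illustrative sketch (the paths assume the default locations listed above), recent errors across the Compute service logs can be reviewed with standard shell tools:
# grep -i error /var/log/nova/*.log | tail -n 20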
The dashboard is served by the Apache HTTP Server (httpd). As a result, the log files for the dashboard are stored in the /var/log/httpd directory.
Table C.3. Log Files
| File name | Description |
|---|---|
| access_log | The access log details all attempts to access the web server. |
| error_log | The error log details all unsuccessful attempts to access the web server and the reason for each failure. |
The log file of the identity service is stored in the /var/log/keystone/ directory of the host on which it runs.
Table C.4. Log File
| File name | Description |
|---|---|
| keystone.log | The log of the identity service (openstack-keystone). |
The log files of the Image services are stored in the /var/log/glance/ directory of the host on which each service runs.
Table C.5. Log Files
| File name | Description |
|---|---|
| api.log | The log of the API service (openstack-glance-api). |
| registry.log | The log of the image registry service (openstack-glance-registry). |
The log files of the OpenStack Networking services are stored in the /var/log/quantum/ directory of the host on which each service runs.
Table C.6. Log Files
| File name | Description |
|---|---|
| dhcp-agent.log | The log for the DHCP agent (quantum-dhcp-agent). |
| l3-agent.log | The log for the L3 agent (quantum-l3-agent). |
| lbaas-agent.log | The log for the Load Balancer as a Service (LBaaS) agent (quantum-lbaas-agent). |
| linuxbridge-agent.log | The log for the Linux Bridge agent (quantum-linuxbridge-agent). |
| metadata-agent.log | The log for the metadata agent (quantum-metadata-agent). |
| openvswitch-agent.log | The log for the Open vSwitch agent (quantum-openvswitch-agent). |
| server.log | The log for the OpenStack networking service itself (quantum-server). |
The configuration of the dashboard is stored in the /etc/openstack-dashboard/local_settings file. This file must be modified after installation; a sketch of the typical post-installation changes follows the listing.
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
DEBUG = True
TEMPLATE_DEBUG = DEBUG
# Required for Django 1.5.
# If horizon is running in production (DEBUG is False), set this
# with the list of host/domain names that the application can serve.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
#ALLOWED_HOSTS = ['horizon.example.com', ]
# Set SSL proxy settings:
# For Django 1.4+ pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True
# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
# NOTE: The version should be formatted as it appears in the URL for the
# service API. For example, The identity service APIs have inconsistent
# use of the decimal point, so valid options would be "2.0" or "3".
# OPENSTACK_API_VERSIONS = {
# "identity": 3
# }
# Set this to True if running on multi-domain model. When this is enabled, it
# will require user to enter the Domain name in addition to username for login.
# OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
# Overrides the default domain used when running on single-domain model
# with Keystone V3. All entities will be created in the default domain.
# OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
# Default OpenStack Dashboard configuration.
HORIZON_CONFIG = {
'dashboards': ('project', 'admin', 'settings',),
'default_dashboard': 'project',
'user_home': 'openstack_dashboard.views.get_user_home',
'ajax_queue_limit': 10,
'auto_fade_alerts': {
'delay': 3000,
'fade_duration': 1500,
'types': ['alert-success', 'alert-info']
},
'help_url': "http://docs.openstack.org",
'exceptions': {'recoverable': exceptions.RECOVERABLE,
'not_found': exceptions.NOT_FOUND,
'unauthorized': exceptions.UNAUTHORIZED},
}
# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG["password_validator"] = {
# "regex": '.*',
# "help_text": _("Your password does not meet the requirements.")
# }
# Disable simplified floating IP address management for deployments with
# multiple floating IP pools or complex network requirements.
# HORIZON_CONFIG["simple_ip_management"] = False
# Turn off browser autocompletion for the login form if so desired.
# HORIZON_CONFIG["password_autocomplete"] = "off"
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
# default secret key that is unique on this machine, e.i. regardless of the
# amount of Python WSGI workers (if used behind Apache+mod_wsgi): However, there
# may be situations where you would want to set this explicitly, e.g. when
# multiple dashboard instances are distributed on different machines (usually
# behind a load-balancer). Either you have to make sure that a session gets all
# requests routed to the same dashboard instance or you set the same SECRET_KEY
# for all of them.
from horizon.utils import secret_key
SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))
# We recommend you use memcached for development; otherwise after every reload
# of the django development server, you will have to login again. To use
# memcached set CACHES to something like
# CACHES = {
# 'default': {
# 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
# 'LOCATION' : '127.0.0.1:11211',
# }
#}
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
}
}
# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
# Configure these for your outgoing email host
# EMAIL_HOST = 'smtp.my-company.com'
# EMAIL_PORT = 25
# EMAIL_HOST_USER = 'djangomail'
# EMAIL_HOST_PASSWORD = 'top-secret!'
# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
# ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
# ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
# ]
OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
# Disable SSL certificate checks (useful for self-signed certificates):
# OPENSTACK_SSL_NO_VERIFY = True
# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True,
'can_edit_project': True
}
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': True,
# NOTE: as of Grizzly this is not yet supported in Nova so enabling this
# setting will not do anything useful
'can_encrypt_volumes': False
}
# The OPENSTACK_QUANTUM_NETWORK settings can be used to enable optional
# services provided by quantum. Currently only the load balancer service
# is available.
OPENSTACK_QUANTUM_NETWORK = {
'enable_lb': False
}
# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'internalURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"
# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "UTC"
LOGGING = {
'version': 1,
# When set to True this will disable all logging except
# for loggers specified in this configuration dictionary. Note that
# if nothing is specified here and disable_existing_loggers is True,
# django.db.backends will still log unless it is disabled explicitly.
'disable_existing_loggers': False,
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'django.utils.log.NullHandler',
},
'console': {
# Set the level to "DEBUG" for verbose output logging.
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},
'loggers': {
# Logging from django.db.backends is VERY verbose, send to null
# by default.
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},
'requests': {
'handlers': ['null'],
'propagate': False,
},
'horizon': {
'handlers': ['console'],
'propagate': False,
},
'openstack_dashboard': {
'handlers': ['console'],
'propagate': False,
},
'novaclient': {
'handlers': ['console'],
'propagate': False,
},
'keystoneclient': {
'handlers': ['console'],
'propagate': False,
},
'glanceclient': {
'handlers': ['console'],
'propagate': False,
},
'nose.plugins.manager': {
'handlers': ['console'],
'propagate': False,
}
}
}
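As a minimal sketch of the post-installation changes referred to above (the host name and IP address are placeholders; the exact values depend on your deployment), the settings most commonly edited are ALLOWED_HOSTS and OPENSTACK_HOST, after which the web server must be restarted:
ALLOWED_HOSTS = ['horizon.example.com', 'localhost']
OPENSTACK_HOST = "192.0.2.10"
# service httpd restart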
The following is an example of the /etc/cinder/api-paste.ini file.
#############
# OpenStack #
#############
[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2
[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv1
keystone = faultwrap sizelimit authtoken keystonecontext apiv1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv1
[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv2
keystone = faultwrap sizelimit authtoken keystonecontext apiv2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv2
[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory
[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory
[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory
[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory
[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory
[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp
[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
signing_dir = /var/lib/cinder
The following is an example of the /etc/cinder/cinder.conf file.
[DEFAULT]
logdir = /var/log/cinder
state_path = /var/lib/cinder
lock_path = /var/lib/cinder/tmp
volumes_dir = /etc/cinder/volumes
iscsi_helper = tgtadm
sql_connection = mysql://cinder:f1ec062ca512429d@10.38.15.166/cinder
rpc_backend = cinder.openstack.common.rpc.impl_qpid
rootwrap_config = /etc/cinder/rootwrap.conf
qpid_hostname=10.38.15.166
api_paste_config=/etc/cinder/api-paste.ini
rabbit_port=5672
rabbit_password=
iscsi_ip_address=10.38.15.166
volume_group=cinder-volumes
rabbit_userid=nova
verbose=False
rabbit_virtual_host=/
glance_host=10.38.15.166
rabbit_host=127.0.0.1
auth_strategy=keystone
[keystone_authtoken]
admin_tenant_name = services
admin_user = cinder
admin_password = 5a9d3e08713f4b2c
auth_host = 10.38.15.166
auth_port = 35357
auth_protocol = http
signing_dirname = /tmp/keystone-signing-cinder
service_host=10.38.15.166
service_protocol=http
service_port=5000
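Individual settings such as these can be edited directly in the file or, assuming the openstack-utils package (which provides the openstack-config utility) is installed, set from the command line; a minimal sketch with placeholder values:
# openstack-config --set /etc/cinder/cinder.conf DEFAULT sql_connection \
    mysql://cinder:PASSWORD@192.0.2.10/cinder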
The /etc/cinder/policy.json file defines additional access controls that apply to the Block Storage service.
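Each key in the file names an API action and its value lists the rule sets a caller must satisfy. As a minimal illustrative sketch (not part of the shipped defaults shown below), restricting volume creation to administrators would replace the empty rule list as follows:
"volume:create": [["rule:admin_api"]],
The example policy file follows.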
{
"context_is_admin": [["role:admin"]],
"admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],
"admin_api": [["is_admin:True"]],
"volume:create": [],
"volume:get_all": [],
"volume:get_volume_metadata": [],
"volume:get_snapshot": [],
"volume:get_all_snapshots": [],
"volume_extension:types_manage": [["rule:admin_api"]],
"volume_extension:types_extra_specs": [["rule:admin_api"]],
"volume_extension:extended_snapshot_attributes": [],
"volume_extension:volume_image_metadata": [],
"volume_extension:quotas:show": [],
"volume_extension:quotas:update_for_project": [["rule:admin_api"]],
"volume_extension:quotas:update_for_user": [["rule:admin_or_projectadmin"]],
"volume_extension:quota_classes": [],
"volume_extension:volume_admin_actions:reset_status": [["rule:admin_api"]],
"volume_extension:snapshot_admin_actions:reset_status": [["rule:admin_api"]],
"volume_extension:volume_admin_actions:force_delete": [["rule:admin_api"]],
"volume_extension:snapshot_admin_actions:force_delete": [["rule:admin_api"]],
"volume_extension:volume_host_attribute": [["rule:admin_api"]],
"volume_extension:volume_tenant_attribute": [["rule:admin_api"]],
"volume_extension:hosts": [["rule:admin_api"]],
"volume_extension:services": [["rule:admin_api"]],
"volume:services": [["rule:admin_api"]]
}
The /etc/cinder/rootwrap.conf file defines configuration values used by the rootwrap script that is used by the Block Storage service when it needs to escalate its privileges to those of the root user.
# Configuration for cinder-rootwrap
# This file should be owned by (and only-writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
The following is an example of the /etc/nova/api-paste.ini file.
############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta
[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp
[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory
#######
# EC2 #
#######
[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud
[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor
[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory
[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory
[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory
[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory
[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory
[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory
[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory
[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory
[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory
#############
# Openstack #
#############
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2
[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2
[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory
[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory
[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory
[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp
[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the nova process is running as.
#signing_dir = /var/lib/nova/keystone-signing
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
The following is an example of the /etc/nova/nova.conf file. Note that in this example configuration file Nova networking is in use.
[DEFAULT]
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
volumes_dir = /etc/nova/volumes
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /etc/nova/nova.conf
force_dhcp_release = false
injected_network_template = /usr/share/nova/interfaces.template
libvirt_nonblocking = True
libvirt_inject_partition = -1
network_manager = nova.network.manager.FlatDHCPManager
iscsi_helper = tgtadm
sql_connection = mysql://nova:463f1c1aeea04c98@10.38.15.166/nova
compute_driver = libvirt.LibvirtDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
rpc_backend = nova.openstack.common.rpc.impl_qpid
rootwrap_config = /etc/nova/rootwrap.conf
glance_api_servers=10.38.15.166:9292
osapi_compute_listen=0.0.0.0
image_service=nova.image.glance.GlanceImageService
api_paste_config=/etc/nova/api-paste.ini
metadata_listen=0.0.0.0
ec2_listen=0.0.0.0
qpid_hostname=10.38.15.166
service_down_time=60
auth_strategy=keystone
volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
rabbit_host=localhost
metadata_host=10.38.15.166
osapi_volume_listen=0.0.0.0
verbose=false
novncproxy_port=6080
flat_interface=lo
auto_assign_floating_ip=False
floating_range=10.3.4.0/22
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
vncserver_listen=10.38.15.166
novncproxy_base_url=http://10.38.15.166:6080/vnc_auto.html
network_host=10.38.15.166
fixed_range=192.168.32.0/22
flat_network_bridge=br100
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.5
public_interface=eth1
connection_type=libvirt
vnc_enabled=true
novncproxy_host=0.0.0.0
vncserver_proxyclient_address=10.38.15.166
dhcp_domain=novalocal
flat_injected=false
libvirt_type=qemu
libvirt_cpu_mode=none
default_floating_pool=nova
[keystone_authtoken]
admin_tenant_name = services
admin_user = nova
admin_password = ea2f8908e4d74d87
auth_host = 10.38.15.166
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-nova
The /etc/nova/policy.json file defines additional access controls that apply to the Compute service.
{
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"compute:create": "",
"compute:create:attach_network": "",
"compute:create:attach_volume": "",
"compute:create:forced_host": "is_admin:True",
"compute:get_all": "",
"compute:get_all_tenants": "",
"admin_api": "is_admin:True",
"compute_extension:accounts": "rule:admin_api",
"compute_extension:admin_actions": "rule:admin_api",
"compute_extension:admin_actions:pause": "rule:admin_or_owner",
"compute_extension:admin_actions:unpause": "rule:admin_or_owner",
"compute_extension:admin_actions:suspend": "rule:admin_or_owner",
"compute_extension:admin_actions:resume": "rule:admin_or_owner",
"compute_extension:admin_actions:lock": "rule:admin_api",
"compute_extension:admin_actions:unlock": "rule:admin_api",
"compute_extension:admin_actions:resetNetwork": "rule:admin_api",
"compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
"compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
"compute_extension:admin_actions:migrateLive": "rule:admin_api",
"compute_extension:admin_actions:resetState": "rule:admin_api",
"compute_extension:admin_actions:migrate": "rule:admin_api",
"compute_extension:aggregates": "rule:admin_api",
"compute_extension:agents": "rule:admin_api",
"compute_extension:attach_interfaces": "",
"compute_extension:baremetal_nodes": "rule:admin_api",
"compute_extension:cells": "rule:admin_api",
"compute_extension:certificates": "",
"compute_extension:cloudpipe": "rule:admin_api",
"compute_extension:cloudpipe_update": "rule:admin_api",
"compute_extension:console_output": "",
"compute_extension:consoles": "",
"compute_extension:coverage_ext": "rule:admin_api",
"compute_extension:createserverext": "",
"compute_extension:deferred_delete": "",
"compute_extension:disk_config": "",
"compute_extension:evacuate": "rule:admin_api",
"compute_extension:extended_server_attributes": "rule:admin_api",
"compute_extension:extended_status": "",
"compute_extension:extended_availability_zone": "",
"compute_extension:extended_ips": "",
"compute_extension:fixed_ips": "rule:admin_api",
"compute_extension:flavor_access": "",
"compute_extension:flavor_disabled": "",
"compute_extension:flavor_rxtx": "",
"compute_extension:flavor_swap": "",
"compute_extension:flavorextradata": "",
"compute_extension:flavorextraspecs:index": "",
"compute_extension:flavorextraspecs:show": "",
"compute_extension:flavorextraspecs:create": "rule:admin_api",
"compute_extension:flavorextraspecs:update": "rule:admin_api",
"compute_extension:flavorextraspecs:delete": "rule:admin_api",
"compute_extension:flavormanage": "rule:admin_api",
"compute_extension:floating_ip_dns": "",
"compute_extension:floating_ip_pools": "",
"compute_extension:floating_ips": "",
"compute_extension:floating_ips_bulk": "rule:admin_api",
"compute_extension:fping": "",
"compute_extension:fping:all_tenants": "rule:admin_api",
"compute_extension:hide_server_addresses": "is_admin:False",
"compute_extension:hosts": "rule:admin_api",
"compute_extension:hypervisors": "rule:admin_api",
"compute_extension:image_size": "",
"compute_extension:instance_actions": "",
"compute_extension:instance_actions:events": "rule:admin_api",
"compute_extension:instance_usage_audit_log": "rule:admin_api",
"compute_extension:keypairs": "",
"compute_extension:multinic": "",
"compute_extension:networks": "rule:admin_api",
"compute_extension:networks:view": "",
"compute_extension:networks_associate": "rule:admin_api",
"compute_extension:quotas:show": "",
"compute_extension:quotas:update": "rule:admin_api",
"compute_extension:quota_classes": "",
"compute_extension:rescue": "",
"compute_extension:security_group_default_rules": "rule:admin_api",
"compute_extension:security_groups": "",
"compute_extension:server_diagnostics": "rule:admin_api",
"compute_extension:server_password": "",
"compute_extension:services": "rule:admin_api",
"compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
"compute_extension:simple_tenant_usage:list": "rule:admin_api",
"compute_extension:users": "rule:admin_api",
"compute_extension:virtual_interfaces": "",
"compute_extension:virtual_storage_arrays": "",
"compute_extension:volumes": "",
"compute_extension:volume_attachments:index": "",
"compute_extension:volume_attachments:show": "",
"compute_extension:volume_attachments:create": "",
"compute_extension:volume_attachments:delete": "",
"compute_extension:volumetypes": "",
"compute_extension:availability_zone:list": "",
"compute_extension:availability_zone:detail": "rule:admin_api",
"volume:create": "",
"volume:get_all": "",
"volume:get_volume_metadata": "",
"volume:get_snapshot": "",
"volume:get_all_snapshots": "",
"volume_extension:types_manage": "rule:admin_api",
"volume_extension:types_extra_specs": "rule:admin_api",
"volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
"volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
"volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
"network:get_all": "",
"network:get": "",
"network:create": "",
"network:delete": "",
"network:associate": "",
"network:disassociate": "",
"network:get_vifs_by_instance": "",
"network:allocate_for_instance": "",
"network:deallocate_for_instance": "",
"network:validate_networks": "",
"network:get_instance_uuids_by_ip_filter": "",
"network:get_instance_id_by_floating_address": "",
"network:setup_networks_on_host": "",
"network:get_backdoor_port": "",
"network:get_floating_ip": "",
"network:get_floating_ip_pools": "",
"network:get_floating_ip_by_address": "",
"network:get_floating_ips_by_project": "",
"network:get_floating_ips_by_fixed_address": "",
"network:allocate_floating_ip": "",
"network:deallocate_floating_ip": "",
"network:associate_floating_ip": "",
"network:disassociate_floating_ip": "",
"network:release_floating_ip": "",
"network:migrate_instance_start": "",
"network:migrate_instance_finish": "",
"network:get_fixed_ip": "",
"network:get_fixed_ip_by_address": "",
"network:add_fixed_ip_to_instance": "",
"network:remove_fixed_ip_from_instance": "",
"network:add_network_to_project": "",
"network:get_instance_nw_info": "",
"network:get_dns_domains": "",
"network:add_dns_entry": "",
"network:modify_dns_entry": "",
"network:delete_dns_entry": "",
"network:get_dns_entries_by_address": "",
"network:get_dns_entries_by_name": "",
"network:create_private_dns_domain": "",
"network:create_public_dns_domain": "",
"network:delete_dns_domain": ""
}
The /etc/nova/rootwrap.conf file defines configuration values used by the rootwrap script that is used by the Compute service when it needs to escalate its privileges to those of the root user.
# Configuration for nova-rootwrap
# This file should be owned by (and only-writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
The following is an example of the identity service configuration file, /etc/keystone/keystone.conf.
[DEFAULT]
log_file = /var/log/keystone/keystone.log
admin_token = e6bdb221b674074ff76e
# A "shared secret" between keystone and other openstack services
# admin_token = ADMIN
# The IP address of the network interface to listen on
# bind_host = 0.0.0.0
# The port number which the public service listens on
# public_port = 5000
# The port number which the public admin listens on
# admin_port = 35357
# The base endpoint URLs for keystone that are advertised to clients
# (NOTE: this does NOT affect how keystone listens for connections)
# public_endpoint = http://localhost:%(public_port)d/
# admin_endpoint = http://localhost:%(admin_port)d/
# The port number which the OpenStack Compute service listens on
# compute_port = 8774
# Path to your policy definition containing identity actions
# policy_file = policy.json
# Rule to check if no matching policy definition is found
# FIXME(dolph): This should really be defined as [policy] default_rule
# policy_default_rule = admin_required
# Role for migrating membership relationships
# During a SQL upgrade, the following values will be used to create a new role
# that will replace records in the user_tenant_membership table with explicit
# role grants. After migration, the member_role_id will be used in the API
# add_user_to_project, and member_role_name will be ignored.
# member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab
# member_role_name = _member_
# === Logging Options ===
# Print debugging output
# (includes plaintext request logging, potentially including passwords)
debug = True
# Print more verbose output
verbose = True
# Name of log file to output to. If not set, logging will go to stdout.
# log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile)
# log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files.
# log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql]
connection = mysql://keystone:Redhat123@sgordon-cinder-api.usersys.redhat.com/keystone
# The SQLAlchemy connection string used to connect to the database
# connection = sqlite:///keystone.db
# the timeout before idle sql connections are reaped
# idle_timeout = 200
[identity]
driver = keystone.identity.backends.sql.Identity
# driver = keystone.identity.backends.sql.Identity
# This references the domain to use for all Identity API v2 requests (which are
# not aware of domains). A domain with this ID will be created for you by
# keystone-manage db_sync in migration 008. The domain referenced by this ID
# cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API.
# There is nothing special about this domain, other than the fact that it must
# exist to order to maintain support for your v2 clients.
# default_domain_id = default
[trust]
# driver = keystone.trust.backends.sql.Trust
# delegation and impersonation features can be optionally disabled
# enabled = True
[catalog]
template_file = /etc/keystone/default_catalog.templates
driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token]
driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
[policy]
# driver = keystone.policy.backends.sql.Policy
[ec2]
driver = keystone.contrib.ec2.backends.sql.Ec2
# driver = keystone.contrib.ec2.backends.kvs.Ec2
[ssl]
#enable = True
#certfile = /etc/keystone/ssl/certs/keystone.pem
#keyfile = /etc/keystone/ssl/private/keystonekey.pem
#ca_certs = /etc/keystone/ssl/certs/ca.pem
#cert_required = True
[signing]
#token_format = PKI
#certfile = /etc/keystone/ssl/certs/signing_cert.pem
#keyfile = /etc/keystone/ssl/private/signing_key.pem
#ca_certs = /etc/keystone/ssl/certs/ca.pem
#key_size = 1024
#valid_days = 3650
#ca_password = None
[ldap]
# url = ldap://localhost
# user = dc=Manager,dc=example,dc=com
# password = None
# suffix = cn=example,cn=com
# use_dumb_member = False
# allow_subtree_delete = False
# dumb_member = cn=dumb,dc=example,dc=com
# Maximum results per page; a value of zero ('0') disables paging (default)
# page_size = 0
# The LDAP dereferencing option for queries. This can be either 'never',
# 'searching', 'always', 'finding' or 'default'. The 'default' option falls
# back to using default dereferencing configured by your ldap.conf.
# alias_dereferencing = default
# The LDAP scope for queries, this can be either 'one'
# (onelevel/singleLevel) or 'sub' (subtree/wholeSubtree)
# query_scope = one
# user_tree_dn = ou=Users,dc=example,dc=com
# user_filter =
# user_objectclass = inetOrgPerson
# user_domain_id_attribute = businessCategory
# user_id_attribute = cn
# user_name_attribute = sn
# user_mail_attribute = email
# user_pass_attribute = userPassword
# user_enabled_attribute = enabled
# user_enabled_mask = 0
# user_enabled_default = True
# user_attribute_ignore = tenant_id,tenants
# user_allow_create = True
# user_allow_update = True
# user_allow_delete = True
# user_enabled_emulation = False
# user_enabled_emulation_dn =
# tenant_tree_dn = ou=Groups,dc=example,dc=com
# tenant_filter =
# tenant_objectclass = groupOfNames
# tenant_domain_id_attribute = businessCategory
# tenant_id_attribute = cn
# tenant_member_attribute = member
# tenant_name_attribute = ou
# tenant_desc_attribute = desc
# tenant_enabled_attribute = enabled
# tenant_attribute_ignore =
# tenant_allow_create = True
# tenant_allow_update = True
# tenant_allow_delete = True
# tenant_enabled_emulation = False
# tenant_enabled_emulation_dn =
# role_tree_dn = ou=Roles,dc=example,dc=com
# role_filter =
# role_objectclass = organizationalRole
# role_id_attribute = cn
# role_name_attribute = ou
# role_member_attribute = roleOccupant
# role_attribute_ignore =
# role_allow_create = True
# role_allow_update = True
# role_allow_delete = True
# group_tree_dn =
# group_filter =
# group_objectclass = groupOfNames
# group_id_attribute = cn
# group_name_attribute = ou
# group_member_attribute = member
# group_desc_attribute = desc
# group_attribute_ignore =
# group_allow_create = True
# group_allow_update = True
# group_allow_delete = True
[auth]
methods = password,token
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
[filter:debug]
paste.filter_factory = keystone.common.wsgi:Debug.factory
[filter:token_auth]
paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory
[filter:admin_token_auth]
paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory
[filter:xml_body]
paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory
[filter:json_body]
paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory
[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory
[filter:crud_extension]
paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory
[filter:ec2_extension]
paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory
[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory
[filter:url_normalize]
paste.filter_factory = keystone.middleware:NormalizingFilter.factory
[filter:sizelimit]
paste.filter_factory = keystone.middleware:RequestBodySizeLimiter.factory
[filter:stats_monitoring]
paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory
[filter:stats_reporting]
paste.filter_factory = keystone.contrib.stats:StatsExtension.factory
[filter:access_log]
paste.filter_factory = keystone.contrib.access:AccessLogMiddleware.factory
[app:public_service]
paste.app_factory = keystone.service:public_app_factory
[app:service_v3]
paste.app_factory = keystone.service:v3_app_factory
[app:admin_service]
paste.app_factory = keystone.service:admin_app_factory
[pipeline:public_api]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service
[pipeline:admin_api]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service
[pipeline:api_v3]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension service_v3
[app:public_version_service]
paste.app_factory = keystone.service:public_version_app_factory
[app:admin_version_service]
paste.app_factory = keystone.service:admin_version_app_factory
[pipeline:public_version_api]
pipeline = access_log sizelimit stats_monitoring url_normalize xml_body public_version_service
[pipeline:admin_version_api]
pipeline = access_log sizelimit stats_monitoring url_normalize xml_body admin_version_service
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
The following is an example of the Image service registry configuration file, /etc/glance/glance-registry.conf. This file must be modified after installation.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True
# Show debugging output in logs (sets DEBUG log level output)
debug = False
# Address to bind the registry server
bind_host = 0.0.0.0
# Port the bind the registry server to
bind_port = 9191
# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/registry.log
# Backlog requests when creating socket
backlog = 4096
# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle = 600
# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
sql_connection = mysql://glance:YOUR_GLANCEDB_PASSWORD@192.168.206.130/glance
# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600
# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
api_limit_max = 1000
# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
limit_param_default = 25
# Role used to identify an authenticated user as administrator
#admin_role = admin
# ================= Syslog Options ============================
# Send logs to syslog (/dev/log) instead of to file specified
# by `log_file`
use_syslog = False
# Facility to use. If unset defaults to LOG_USER.
#syslog_log_facility = LOG_LOCAL1
# ================= SSL Options ===============================
# Certificate file to use when starting registry server securely
#cert_file = /path/to/certfile
# Private key file to use when starting registry server securely
#key_file = /path/to/keyfile
# CA certificate file to use to verify connecting clients
#ca_file = /path/to/cafile
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = secret
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
config_file = /etc/glance/glance-registry-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-api-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone
The following is an example of the /etc/glance/glance-registry-paste.ini file.
# Use this pipeline for no auth - DEFAULT
# [pipeline:glance-registry]
# pipeline = unauthenticated-context registryapp
# Use this pipeline for keystone auth
[pipeline:glance-registry-keystone]
pipeline = authtoken context registryapp
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = services
admin_user = glance
admin_password = secret
[app:registryapp]
paste.app_factory = glance.registry.api.v1:API.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
The following is an example of the /etc/glance/glance-api.conf file.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True
# Show debugging output in logs (sets DEBUG log level output)
debug = False
# Which backend scheme should the Image service use by default? Known schemes are determined
# by the known_stores option below.
# Default: 'file'
default_store = file
# List of which store classes and store class locations are
# currently known to glance at startup.
#known_stores = glance.store.filesystem.Store,
# glance.store.http.Store,
# glance.store.rbd.Store,
# glance.store.s3.Store,
# glance.store.swift.Store,
# Maximum image size (in bytes) that may be uploaded through the
# Glance API server. Defaults to 1 TB.
# WARNING: this value should only be increased after careful consideration
# and must be set to a value under 8 EB (9223372036854775808).
#image_size_cap = 1099511627776
# Address to bind the API server
bind_host = 0.0.0.0
# Port the bind the API server to
bind_port = 9292
# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/api.log
# Backlog requests when creating socket
backlog = 4096
# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle = 600
# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
# sql_connection = sqlite:///glance.sqlite
# sql_connection = mysql://glance:YOUR_GLANCEDB_PASSWORD@192.168.206.130/glance
# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600
# Number of Glance API worker processes to start.
# On machines with more than one CPU increasing this value
# may improve performance (especially if using SSL with
# compression turned on). It is typically recommended to set
# this value to the number of CPUs present on your machine.
workers = 1
# Role used to identify an authenticated user as administrator
#admin_role = admin
# Allow unauthenticated users to access the API with read-only
# privileges. This only applies when using ContextMiddleware.
#allow_anonymous_access = False
# Allow access to version 1 of glance api
#enable_v1_api = True
# Allow access to version 2 of glance api
#enable_v2_api = True
# ================= Syslog Options ============================
# Send logs to syslog (/dev/log) instead of to file specified
# by `log_file`
use_syslog = False
# Facility to use. If unset defaults to LOG_USER.
#syslog_log_facility = LOG_LOCAL0
# ================= SSL Options ===============================
# Certificate file to use when starting API server securely
#cert_file = /path/to/certfile
# Private key file to use when starting API server securely
#key_file = /path/to/keyfile
# CA certificate file to use to verify connecting clients
#ca_file = /path/to/cafile
# ================= Security Options ==========================
# AES key for encrypting store 'location' metadata, including
# -- if used -- Swift or S3 credentials
# Should be set to a random string of length 16, 24 or 32 bytes
#metadata_encryption_key = <16, 24 or 32 char registry metadata key>
# ============ Registry Options ===============================
# Address to find the registry server
registry_host = 0.0.0.0
# Port the registry server is listening on
registry_port = 9191
# What protocol to use when connecting to the registry server?
# Set to https for secure HTTP communication
registry_client_protocol = http
# The path to the key file to use in SSL connections to the
# registry server, if any. Alternately, you may set the
# GLANCE_CLIENT_KEY_FILE environ variable to a filepath of the key file
#registry_client_key_file = /path/to/key/file
# The path to the cert file to use in SSL connections to the
# registry server, if any. Alternately, you may set the
# GLANCE_CLIENT_CERT_FILE environ variable to a filepath of the cert file
#registry_client_cert_file = /path/to/cert/file
# The path to the certifying authority cert file to use in SSL connections
# to the registry server, if any. Alternately, you may set the
# GLANCE_CLIENT_CA_FILE environ variable to a filepath of the CA cert file
#registry_client_ca_file = /path/to/ca/file
# ============ Notification System Options =====================
# Notifications can be sent when images are create, updated or deleted.
# There are three methods of sending notifications, logging (via the
# log_file directive), rabbit (via a rabbitmq queue), qpid (via a Qpid
# message queue), or noop (no notifications sent, the default)
notifier_strategy = noop
# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
rabbit_notification_topic = glance_notifications
rabbit_durable_queues = False
# Configuration options if sending notifications via Qpid (these are
# the defaults)
qpid_notification_exchange = glance
qpid_notification_topic = glance_notifications
qpid_host = localhost
qpid_port = 5672
qpid_username =
qpid_password =
qpid_sasl_mechanisms =
qpid_reconnect_timeout = 0
qpid_reconnect_limit = 0
qpid_reconnect_interval_min = 0
qpid_reconnect_interval_max = 0
qpid_reconnect_interval = 0
qpid_heartbeat = 5
# Set to 'ssl' to enable SSL
qpid_protocol = tcp
qpid_tcp_nodelay = True
# ============ Filesystem Store Options ========================
# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /var/lib/glance/images/
# ============ Swift Store Options =============================
# Version of the authentication service to use
# Valid versions are '2' for keystone and '1' for swauth and rackspace
swift_store_auth_version = 2
# Address where the Swift authentication service lives
# Valid schemes are 'http://' and 'https://'
# If no scheme specified, default to 'https://'
# For swauth, use something like '127.0.0.1:8080/v1.0/'
swift_store_auth_address = 127.0.0.1:5000/v2.0/
# User to authenticate against the Swift authentication service
# If you use Swift authentication service, set it to 'account':'user'
# where 'account' is a Swift storage account and 'user'
# is a user in that account
swift_store_user = jdoe:jdoe
# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key = a86850deb2742ec3cb41518e26aa2d89
# Container within the account that the account should use
# for storing images in Swift
swift_store_container = glance
# Do we create the container if it does not exist?
swift_store_create_container_on_put = False
# What size, in MB, should Glance start chunking image files
# and do a large object manifest in Swift? By default, this is
# the maximum object size in Swift, which is 5GB
swift_store_large_object_size = 5120
# When doing a large object manifest, what size, in MB, should
# Glance write chunks to Swift? This amount of data is written
# to a temporary disk buffer during the process of chunking
# the image file, and the default is 200MB
swift_store_large_object_chunk_size = 200
# Whether to use ServiceNET to communicate with the Swift storage servers.
# (If you aren't RACKSPACE, leave this False!)
#
# To use ServiceNET for authentication, prefix hostname of
# `swift_store_auth_address` with 'snet-'.
# Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/
swift_enable_snet = False
# If set to True enables multi-tenant storage mode which causes Glance images
# to be stored in tenant specific Swift accounts.
#swift_store_multi_tenant = False
# A list of tenants that will be granted read/write access on all Swift
# containers created by Glance in multi-tenant mode.
#swift_store_admin_tenants = []
# The region of the swift endpoint to be used for single tenant. This setting
# is only necessary if the tenant has multiple swift endpoints.
#swift_store_region =
# ============ S3 Store Options =============================
# Address where the S3 authentication service lives
# Valid schemes are 'http://' and 'https://'
# If no scheme specified, default to 'http://'
s3_store_host = 127.0.0.1:8080/v1.0/
# User to authenticate against the S3 authentication service
s3_store_access_key = <20-char AWS access key>
# Auth key for the user authenticating against the
# S3 authentication service
s3_store_secret_key = <40-char AWS secret key>
# Container within the account that the account should use
# for storing images in S3. Note that S3 has a flat namespace,
# so you need a unique bucket name for your glance images. An
# easy way to do this is append your AWS access key to "glance".
# S3 buckets in AWS *must* be lowercased, so remember to lowercase
# your AWS access key if you use it in your bucket name below!
s3_store_bucket = <lowercased 20-char aws access key>glance
# Do we create the bucket if it does not exist?
s3_store_create_bucket_on_put = False
# When sending images to S3, the data will first be written to a
# temporary buffer on disk. By default the platform's temporary directory
# will be used. If required, an alternative directory can be specified here.
#s3_store_object_buffer_dir = /path/to/dir
# When forming a bucket url, boto will either set the bucket name as the
# subdomain or as the first token of the path. Amazon's S3 service will
# accept it as the subdomain, but Swift's S3 middleware requires it be
# in the path. Set this to 'path' or 'subdomain' - defaults to 'subdomain'.
#s3_store_bucket_url_format = subdomain
# ============ RBD Store Options =============================
# Ceph configuration file path
# If using cephx authentication, this file should
# include a reference to the right keyring
# in a client.<USER> section
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# RADOS user to authenticate as (only applicable if using cephx)
rbd_store_user = glance
# RADOS pool in which images are stored
rbd_store_pool = images
# Images will be chunked into objects of this size (in megabytes).
# For best performance, this should be a power of two
rbd_store_chunk_size = 8
# ============ Delayed Delete Options =============================
# Turn on/off delayed delete
delayed_delete = False
# Delayed delete time in seconds
scrub_time = 43200
# Directory that the scrubber will use to remind itself of what to delete
# Make sure this is also set in glance-scrubber.conf
scrubber_datadir = /var/lib/glance/scrubber
# =============== Image Cache Options =============================
# Base directory that the Image Cache uses
image_cache_dir = /var/lib/glance/image-cache/
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = secret
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
config_file = /etc/glance/glance-api-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-api-keystone], you would configure the flavor below
# as 'keystone'.
flavor=keystone
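As an illustrative check (assuming a keystonerc file containing valid credentials has already been sourced), the Image service configured above can be exercised with the glance client:
$ glance image-list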
The following is an example of the /etc/glance/glance-api-paste.ini file.
# Use this pipeline for no auth or image caching - DEFAULT
# [pipeline:glance-api]
# pipeline = versionnegotiation unauthenticated-context rootapp
# Use this pipeline for image caching and no auth
# [pipeline:glance-api-caching]
# pipeline = versionnegotiation unauthenticated-context cache rootapp
# Use this pipeline for caching w/ management interface but no auth
# [pipeline:glance-api-cachemanagement]
# pipeline = versionnegotiation unauthenticated-context cache cachemanage rootapp
# Use this pipeline for keystone auth
[pipeline:glance-api-keystone]
pipeline = versionnegotiation authtoken context rootapp
# Use this pipeline for keystone auth with image caching
# [pipeline:glance-api-keystone+caching]
# pipeline = versionnegotiation authtoken context cache rootapp
# Use this pipeline for keystone auth with caching and cache management
# [pipeline:glance-api-keystone+cachemanagement]
# pipeline = versionnegotiation authtoken context cache cachemanage rootapp
[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v1: apiv1app
/v2: apiv2app
[app:apiversions]
paste.app_factory = glance.api.versions:create_resource
[app:apiv1app]
paste.app_factory = glance.api.v1.router:API.factory
[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory
[filter:cachemanage]
paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
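The flavor value set in glance-api.conf is appended to the service name to select one of the [pipeline:...] sections above, so switching to a cached variant only requires changing that single value. A minimal sketch, again assuming openstack-config is available and the Red Hat service name openstack-glance-api:
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone+caching
service openstack-glance-api restart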
/etc/glance/glance-scrubber.conf
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True
# Show debugging output in logs (sets DEBUG log level output)
debug = False
# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/scrubber.log
# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False
# Delayed delete time in seconds
scrub_time = 43200
# Should we run our own loop or rely on cron/scheduler to run us
daemon = False
# Loop time between checking the registry for new items to schedule for delete
wakeup_time = 300
[app:glance-scrubber]
paste.app_factory = glance.store.scrubber:app_factory
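Because the example sets daemon = False, glance-scrubber only removes pending-delete images when it is invoked, so it is typically driven from cron. A minimal sketch of a root crontab entry follows; the hourly interval and the /usr/bin path are assumptions, and the interval should be shorter than scrub_time.
0 * * * * /usr/bin/glance-scrubber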
/etc/quantum/api-paste.ini
[composite:quantum]
use = egg:Paste#urlmap
/: quantumversions
/v2.0: quantumapi_v2_0
[composite:quantumapi_v2_0]
use = call:quantum.auth:pipeline_factory
noauth = extensions quantumapiapp_v2_0
keystone = authtoken keystonecontext extensions quantumapiapp_v2_0
[filter:keystonecontext]
paste.filter_factory = quantum.auth:QuantumKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
[filter:extensions]
paste.filter_factory = quantum.api.extensions:plugin_aware_extension_middleware_factory
[app:quantumversions]
paste.app_factory = quantum.api.versions:Versions.factory
[app:quantumapiapp_v2_0]
paste.app_factory = quantum.api.v2.router:APIRouter.factory
/etc/quantum/dhcp_agent.ini
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = true
# The DHCP agent will resync its state with Quantum to recover from any
# transient notification or rpc errors. The interval is the number of
# seconds between attempts.
# resync_interval = 5
# The DHCP agent requires that an interface driver be set. Choose the one that
# best matches your plugin.
# OVS based plugins (OVS, Ryu, NEC, NVP, BigSwitch/Floodlight)
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
# OVS based plugins (Ryu, NEC, NVP, BigSwitch/Floodlight) that use OVS
# as OpenFlow switch and check port status
# ovs_use_veth = True
# LinuxBridge
# interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
# The agent can use other DHCP drivers. Dnsmasq is the simplest and requires
# no additional setup of the DHCP server.
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
# Allow overlapping IP (Must have kernel built with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces).
# use_namespaces = True
# The DHCP server can assist with providing metadata support on isolated
# networks. Setting this value to True will cause the DHCP server to append
# specific host routes to the DHCP request. The metadata service will only
# be activated when the subnet gateway_ip is None. The guest instance must
# be configured to request host routes via DHCP (Option 121).
# enable_isolated_metadata = False
# Allows for serving metadata requests coming from a dedicated metadata
# access network whose cidr is 169.254.169.254/16 (or larger prefix), and
# is connected to a Quantum router from which the VMs send metadata
# requests. In this case DHCP Option 121 will not be injected in VMs, as
# they will be able to reach 169.254.169.254 through a router.
# This option requires enable_isolated_metadata = True
# enable_metadata_network = False
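After changing the DHCP agent configuration, the agent must be restarted and should re-register with the Quantum server. A minimal check follows, assuming the Red Hat service name quantum-dhcp-agent and a keystonerc file containing administrative credentials (both names are assumptions for illustration):
source ~/keystonerc_admin
service quantum-dhcp-agent restart
quantum agent-list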
/etc/quantum/l3_agent.ini
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = True
# L3 requires that an interface driver be set. Choose the one that best
# matches your plugin.
# OVS based plugins (OVS, Ryu, NEC) that support the L3 agent
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
# OVS based plugins (Ryu, NEC) that use OVS
# as OpenFlow switch and check port status
# ovs_use_veth = True
# LinuxBridge
# interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
# Allow overlapping IP (Must have kernel built with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces).
# use_namespaces = True
# If use_namespaces is set to False then the agent can only configure one
# router. This is done by setting the specific router_id.
# router_id =
# Each L3 agent can be associated with at most one external network. This
# value should be set to the UUID of that external network. If empty,
# the agent will enforce that only a single external network exists and
# use that external network id.
# gateway_external_network_id =
# Indicates that this L3 agent should also handle routers that do not have
# an external network gateway configured. This option should be True only
# for a single agent in a Quantum deployment, and may be False for all agents
# if all routers must have an external network gateway.
# handle_internal_only_routers = True
# Name of the bridge used for external network traffic. This should be set to
# an empty value for the Linux bridge.
# external_network_bridge = br-ex
# TCP Port used by the Quantum metadata server
# metadata_port = 9697
# Send this many gratuitous ARPs for HA setup. Set it to 0 or less
# to disable this feature.
# send_arp_for_ha = 3
# Seconds between re-syncing routers' data, if needed
# periodic_interval = 40
# Seconds to wait after starting the agent before syncing routers' data
# periodic_fuzzy_delay = 5
# enable_metadata_proxy, which is True by default, can be set to False
# if the Nova metadata server is not available
# enable_metadata_proxy = True
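The default external_network_bridge, br-ex, must already exist on the network node before the L3 agent can pass external traffic. A minimal Open vSwitch sketch follows; eth1 is an assumed name for the interface attached to the external network, and quantum-l3-agent is the Red Hat service name.
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
service quantum-l3-agent restart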
/etc/quantum/lbaas_agent.ini
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = true
# The LBaaS agent will resync its state with Quantum to recover from any
# transient notification or rpc errors. The interval is the number of
# seconds between attempts.
# periodic_interval = 10
# OVS based plugins (OVS, Ryu, NEC, NVP, BigSwitch/Floodlight)
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
# OVS based plugins (Ryu, NEC, NVP, BigSwitch/Floodlight) that use OVS
# as OpenFlow switch and check port status
# ovs_use_veth = True
# LinuxBridge
# interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
# The agent requires a driver to manage the load balancer. HAProxy is the
# open source version.
device_driver = quantum.plugins.services.agent_loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
# Allow overlapping IP (Must have kernel built with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces).
# use_namespaces = True
# The user group
# user_group = nogroup
/etc/quantum/metadata_agent.ini
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = True
# The Quantum user information for accessing the Quantum API.
auth_url = http://localhost:35357/v2.0
auth_region = RegionOne
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
# Network service endpoint type to pull from the keystone catalog
# endpoint_type = adminURL
# IP address used by the Nova metadata server
# nova_metadata_ip = 127.0.0.1
# TCP Port used by the Nova metadata server
# nova_metadata_port = 8775
# When proxying metadata requests, Quantum signs the Instance-ID header with a
# shared secret to prevent spoofing. You may select any string for a secret,
# but it must match here and in the configuration used by the Nova Metadata
# Server. NOTE: Nova uses a different key: quantum_metadata_proxy_shared_secret
# metadata_proxy_shared_secret =
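The metadata proxy shared secret must be identical on the metadata agent and in nova.conf, where the option name differs as noted in the comment above. A minimal sketch using openstack-config follows; METADATA_SECRET is a placeholder for a value you choose yourself, and quantum-metadata-agent is the Red Hat service name.
openstack-config --set /etc/quantum/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
openstack-config --set /etc/nova/nova.conf DEFAULT quantum_metadata_proxy_shared_secret METADATA_SECRET
service quantum-metadata-agent restart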
/etc/quantum/policy.json
{
"context_is_admin": "role:admin",
"admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
"admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network_tenant_id)s",
"admin_only": "rule:context_is_admin",
"regular_user": "",
"shared": "field:networks:shared=True",
"external": "field:networks:router:external=True",
"default": "rule:admin_or_owner",
"subnets:private:read": "rule:admin_or_owner",
"subnets:private:write": "rule:admin_or_owner",
"subnets:shared:read": "rule:regular_user",
"subnets:shared:write": "rule:admin_only",
"create_subnet": "rule:admin_or_network_owner",
"get_subnet": "rule:admin_or_owner or rule:shared",
"update_subnet": "rule:admin_or_network_owner",
"delete_subnet": "rule:admin_or_network_owner",
"create_network": "",
"get_network": "rule:admin_or_owner or rule:shared or rule:external",
"get_network:router:external": "rule:regular_user",
"get_network:provider:network_type": "rule:admin_only",
"get_network:provider:physical_network": "rule:admin_only",
"get_network:provider:segmentation_id": "rule:admin_only",
"get_network:queue_id": "rule:admin_only",
"create_network:shared": "rule:admin_only",
"create_network:router:external": "rule:admin_only",
"create_network:provider:network_type": "rule:admin_only",
"create_network:provider:physical_network": "rule:admin_only",
"create_network:provider:segmentation_id": "rule:admin_only",
"update_network": "rule:admin_or_owner",
"update_network:provider:network_type": "rule:admin_only",
"update_network:provider:physical_network": "rule:admin_only",
"update_network:provider:segmentation_id": "rule:admin_only",
"delete_network": "rule:admin_or_owner",
"create_port": "",
"create_port:mac_address": "rule:admin_or_network_owner",
"create_port:fixed_ips": "rule:admin_or_network_owner",
"create_port:port_security_enabled": "rule:admin_or_network_owner",
"create_port:binding:host_id": "rule:admin_only",
"create_port:mac_learning_enabled": "rule:admin_or_network_owner",
"get_port": "rule:admin_or_owner",
"get_port:queue_id": "rule:admin_only",
"get_port:binding:vif_type": "rule:admin_only",
"get_port:binding:capabilities": "rule:admin_only",
"get_port:binding:host_id": "rule:admin_only",
"get_port:binding:profile": "rule:admin_only",
"update_port": "rule:admin_or_owner",
"update_port:fixed_ips": "rule:admin_or_network_owner",
"update_port:port_security_enabled": "rule:admin_or_network_owner",
"update_port:binding:host_id": "rule:admin_only",
"update_port:mac_learning_enabled": "rule:admin_or_network_owner",
"delete_port": "rule:admin_or_owner",
"create_service_type": "rule:admin_only",
"update_service_type": "rule:admin_only",
"delete_service_type": "rule:admin_only",
"get_service_type": "rule:regular_user",
"create_qos_queue": "rule:admin_only",
"get_qos_queue": "rule:admin_only",
"update_agent": "rule:admin_only",
"delete_agent": "rule:admin_only",
"get_agent": "rule:admin_only",
"get_agents": "rule:admin_only",
"create_dhcp-network": "rule:admin_only",
"delete_dhcp-network": "rule:admin_only",
"get_dhcp-networks": "rule:admin_only",
"create_l3-router": "rule:admin_only",
"delete_l3-router": "rule:admin_only",
"get_l3-routers": "rule:admin_only",
"get_dhcp-agents": "rule:admin_only",
"get_l3-agents": "rule:admin_only",
"create_router": "rule:regular_user",
"get_router": "rule:admin_or_owner",
"update_router:add_router_interface": "rule:admin_or_owner",
"update_router:remove_router_interface": "rule:admin_or_owner",
"delete_router": "rule:admin_or_owner",
"create_floatingip": "rule:regular_user",
"update_floatingip": "rule:admin_or_owner",
"delete_floatingip": "rule:admin_or_owner",
"get_floatingip": "rule:admin_or_owner"
}
/etc/quantum/quantum.conf
[DEFAULT]
# Default log level is INFO
# verbose and debug have the same result.
# One of them will set DEBUG log level output
# debug = False
# verbose = False
# Where to store Quantum state files. This directory must be writable by the
# user executing the agent.
# state_path = /var/lib/quantum
# Where to store lock files
lock_path = $state_path/lock
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog -> syslog
# log_file and log_dir -> log_dir/log_file
# (not log_file) and log_dir -> log_dir/{binary_name}.log
# use_stderr -> stderr
# (not use_stderr) and (not log_file) -> stdout
# publish_errors -> notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
# log_dir =
# publish_errors = False
# Address to bind the API server
bind_host = 0.0.0.0
# Port to bind the API server to
bind_port = 9696
# Path to the extensions. Note that this can be a colon-separated list of
# paths. For example:
# api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
# The __path__ of quantum.extensions is appended to this, so if your
# extensions are in there you don't need to specify them here
# api_extensions_path =
# Quantum plugin provider module
# core_plugin =
# Advanced service modules
# service_plugins =
# Paste configuration file
api_paste_config = api-paste.ini
# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
# auth_strategy = keystone
# Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be
# randomly generated.
# 3 octet
# base_mac = fa:16:3e:00:00:00
# 4 octet
# base_mac = fa:16:3e:4f:00:00
# Maximum amount of retries to generate a unique MAC address
# mac_generation_retries = 16
# DHCP Lease duration (in seconds)
# dhcp_lease_duration = 120
# Allow sending resource operation notification to DHCP agent
# dhcp_agent_notification = True
# Enable or disable bulk create/update/delete operations
# allow_bulk = True
# Enable or disable pagination
# allow_pagination = False
# Enable or disable sorting
# allow_sorting = False
# Enable or disable overlapping IPs for subnets
# Attention: the following parameter MUST be set to False if Quantum is
# being used in conjunction with nova security groups
# allow_overlapping_ips = False
# Ensure that configured gateway is on subnet
# force_gateway_on_subnet = False
# RPC configuration options. Defined in rpc __init__
# The messaging module to use, defaults to kombu.
# rpc_backend = quantum.openstack.common.rpc.impl_kombu
# Size of RPC thread pool
# rpc_thread_pool_size = 64
# Size of RPC connection pool
# rpc_conn_pool_size = 30
# Seconds to wait for a response from call or multicall
# rpc_response_timeout = 60
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# rpc_cast_timeout = 30
# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
# allowed_rpc_exception_modules = quantum.openstack.common.exception, nova.exception
# AMQP exchange to connect to if using RabbitMQ or QPID
control_exchange = quantum
# If passed, use a fake RabbitMQ provider
# fake_rabbit = False
# Configuration options if sending notifications via kombu rpc (these are
# the defaults)
# SSL version to use (valid only if SSL enabled)
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled)
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled)
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled)
# kombu_ssl_ca_certs =
# IP address of the RabbitMQ installation
# rabbit_host = localhost
# Password of the RabbitMQ server
# rabbit_password = guest
# Port where RabbitMQ server is running/listening
# rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
# rabbit_hosts = localhost:5672
# User ID used for RabbitMQ connections
# rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
# rabbit_virtual_host = /
# Maximum retries with trying to connect to RabbitMQ
# (the default of 0 implies an infinite retry count)
# rabbit_max_retries = 0
# RabbitMQ connection retry interval
# rabbit_retry_interval = 1
# Use HA queues in RabbitMQ (x-ha-policy: all). You need to
# wipe RabbitMQ database when changing this option. (boolean value)
# rabbit_ha_queues = false
# QPID
# rpc_backend=quantum.openstack.common.rpc.impl_qpid
# Qpid broker hostname
# qpid_hostname = localhost
# Qpid broker port
# qpid_port = 5672
# Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
# qpid_hosts = localhost:5672
# Username for qpid connection
# qpid_username = ''
# Password for qpid connection
# qpid_password = ''
# Space separated list of SASL mechanisms to use for auth
# qpid_sasl_mechanisms = ''
# Seconds between connection keepalive heartbeats
# qpid_heartbeat = 60
# Transport to use, either 'tcp' or 'ssl'
# qpid_protocol = tcp
# Disable Nagle algorithm
# qpid_tcp_nodelay = True
# ZMQ
# rpc_backend=quantum.openstack.common.rpc.impl_zmq
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address.
# rpc_zmq_bind_address = *
# ============ Notification System Options =====================
# Notifications can be sent when a network/subnet/port is created, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = quantum.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = quantum.openstack.common.notifier.log_notifier
# RPC driver. DHCP agents need it.
notification_driver = quantum.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic name(s) or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications
# Default maximum number of items returned in a single response. A value of
# 'infinite' or a negative value means no maximum limit; otherwise the value
# must be greater than 0. If the number of items requested is greater than
# pagination_max_limit, the server returns at most pagination_max_limit items.
# pagination_max_limit = -1
# Maximum number of DNS nameservers per subnet
# max_dns_nameservers = 5
# Maximum number of host routes per subnet
# max_subnet_host_routes = 20
# Maximum number of fixed ips per port
# max_fixed_ips_per_port = 5
# =========== items for agent management extension =============
# Seconds to regard the agent as down.
# agent_down_time = 5
# =========== end of items for agent management extension =====
# =========== items for agent scheduler extension =============
# Driver to use for scheduling network to DHCP agent
# network_scheduler_driver = quantum.scheduler.dhcp_agent_scheduler.ChanceScheduler
# Driver to use for scheduling router to a default L3 agent
# router_scheduler_driver = quantum.scheduler.l3_agent_scheduler.ChanceScheduler
# Allow auto scheduling of networks to the DHCP agent. Non-hosted networks will
# be scheduled to the first DHCP agent that sends a get_active_networks message
# to the quantum server.
# network_auto_schedule = True
# Allow auto scheduling of routers to the L3 agent. Non-hosted routers will be
# scheduled to the first L3 agent that sends a sync_routers message to the quantum server.
# router_auto_schedule = True
# Number of DHCP agents scheduled to host a network. This enables redundant
# DHCP agents for configured networks.
# dhcp_agents_per_network = 1
# =========== end of items for agent scheduler extension =====
# =========== WSGI parameters related to the API server ==============
# Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when
# starting API server. Not supported on OS X.
#tcp_keepidle = 600
# Number of seconds to keep retrying to listen
#retry_until_window = 30
# Number of backlog requests to configure the socket with.
#backlog = 4096
# Enable SSL on the API server
#use_ssl = False
# Certificate file to use when starting API server securely
#ssl_cert_file = /path/to/certfile
# Private key file to use when starting API server securely
#ssl_key_file = /path/to/keyfile
# CA certificate file to use when starting API server securely to
# verify connecting clients. This is an optional parameter only required if
# API clients need to authenticate to the API server using SSL certificates
# signed by a trusted CA
#ssl_ca_file = /path/to/cafile
# ======== end of WSGI parameters related to the API server ==========
[QUOTAS]
# resource name(s) that are supported in quota features
# quota_items = network,subnet,port
# default number of resources allowed per tenant; a negative value means unlimited
# default_quota = -1
# number of networks allowed per tenant; a negative value means unlimited
# quota_network = 10
# number of subnets allowed per tenant; a negative value means unlimited
# quota_subnet = 10
# number of ports allowed per tenant; a negative value means unlimited
# quota_port = 50
# number of security groups allowed per tenant; a negative value means unlimited
# quota_security_group = 10
# number of security group rules allowed per tenant; a negative value means unlimited
# quota_security_group_rule = 100
# default driver to use for quota checks
# quota_driver = quantum.quota.ConfDriver
[DEFAULT_SERVICETYPE]
# Description of the default service type (optional)
# description = "default service type"
# Enter a service definition line for each advanced service provided
# by the default service type.
# Each service definition should be in the following format:
# <service>:<plugin>[:driver]
[AGENT]
# Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real
# root filter facility.
# Change to "sudo" to skip the filtering and just run the comand directly
# root_helper = sudo
# =========== items for agent management extension =============
# seconds between nodes reporting state to server, should be less than
# agent_down_time
# report_interval = 4
# =========== end of items for agent management extension =====
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
signing_dir = /var/lib/quantum/keystone-signing
[LBAAS]
# ==================================================================================================
# driver_fqn is the fully qualified name of the lbaas driver that will be loaded by the lbaas plugin
# ==================================================================================================
#driver_fqn = quantum.plugins.services.agent_loadbalancer.drivers.noop.noop_driver.NoopLbaaSDriver
/etc/quantum/rootwrap.conf
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/quantum/rootwrap.d,/usr/share/quantum/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to the system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
[XENAPI]
# XenAPI configuration is only required by the L2 agent if it is to
# target a XenServer/XCP compute host's dom0.
xenapi_connection_url=<None>
xenapi_connection_username=root
xenapi_connection_password=<None>
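To have the agents execute privileged commands through the rootwrap filters listed above instead of plain sudo, point root_helper in the [AGENT] section of quantum.conf at quantum-rootwrap, as described in the comments in that file. A minimal sketch using openstack-config, with Red Hat service names assumed for the restarts:
openstack-config --set /etc/quantum/quantum.conf AGENT root_helper "sudo quantum-rootwrap /etc/quantum/rootwrap.conf"
service quantum-dhcp-agent restart
service quantum-l3-agent restart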
/etc/swift/account-server.conf
[DEFAULT]
bind_ip = 1.2.3.4
bind_port = 6002
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]
/etc/swift/container-server.conf
[DEFAULT]
bind_ip = 1.2.3.4
bind_port = 6001
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
/etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
workers = 8
user = swift
log_level = debug
[pipeline:main]
pipeline = healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin, swift
is_admin = true
cache = swift.cache
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = services
admin_user = swift
admin_password = Redhat123
auth_host = sgordon-quantum.usersys.redhat.com
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift
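Once the proxy and storage services are running, the object store can be checked end-to-end with the swift command line client. The following is a minimal sketch reusing the service credentials from the example above; the public Identity port 5000 and the services tenant are assumptions, so substitute the account you actually use.
swift -V 2 -A http://sgordon-quantum.usersys.redhat.com:5000/v2.0 -U services:swift -K Redhat123 stat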
Revision History
- Revision 3-40, Thu Feb 13 2014
- Revision 3-39, Wed Nov 20 2013
- Revision 3-38, Mon Sep 16 2013
- Revision 3-37, Fri Sep 06 2013
- Revision 3-36, Fri Sep 06 2013
- Revision 3-35, Tue Sep 03 2013
- Revision 3-34, Mon Sep 02 2013
- Revision 3-33, Thu Aug 08 2013
- Revision 3-32, Tue Aug 06 2013
- Revision 3-31, Tue Aug 06 2013
- Revision 3-30, Wed Jul 17 2013
- Revision 3-29, Mon Jul 08 2013
- Revision 3-26, Mon Jul 01 2013
- Revision 3-24, Mon Jun 24 2013
- Revision 3-23, Thu Jun 20 2013
- Revision 3-22, Tue Jun 18 2013
- Revision 3-21, Thu Jun 13 2013
- Revision 3-13, Wed May 29 2013