1.4. Service Details

1.4.1. Dashboard Service Overview

The Dashboard service provides a graphical user interface for end users and administrators, allowing operations such as creating and launching instances, managing networking, and setting access controls. Its modular design allows interfacing with other products such as billing, monitoring, and additional management tools. The service provides three basic dashboards: user, system, and settings.
The following screen capture displays a user's dashboard after OpenStack is first installed:

Figure 1.2. User Dashboard

The identity of the logged-in user determines which dashboards and panels are visible.

Table 1.2. Dashboard service components

Component Description
openstack-dashboard
A Django (Python) web application that provides access to the dashboard from any web browser.
An Apache HTTP server (httpd service)
Hosts the application.
The following diagram provides an overview of the dashboard architecture: the dashboard service interacts with the OpenStack Identity service for authentication and authorization, with a session backend for storing session state, with the httpd service that hosts the application, and with all the other OpenStack services for API calls.

Figure 1.3. OpenStack Dashboard Architecture
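
For reference, the dashboard's behavior is driven by its Django settings file (local_settings.py). The following is a minimal sketch of the kind of settings involved; the host addresses, port, and role name shown here are illustrative assumptions, not values from this guide:

    # Illustrative excerpt from the dashboard's local_settings.py (placeholder values).
    OPENSTACK_HOST = "192.0.2.10"                 # controller running the Identity service
    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"  # default role assumed for dashboard users

    # Session backend: keep session state in memcached instead of signed cookies.
    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '192.0.2.10:11211',
        },
    }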

1.4.2. Identity Service Overview

The Identity service authenticates and authorizes OpenStack users; the service is used by all OpenStack components. The service supports multiple forms of authentication including user name and password credentials, token-based systems, and AWS-style logins (Amazon Web Services).
The Identity service also provides a central catalog of services and endpoints running in a particular OpenStack cloud, which acts as a service directory for other OpenStack systems. OpenStack services use the following endpoints:
  • adminURL, the URL for the administrative endpoint for the service. Only the Identity service might use a value here that is different from publicURL; all other services will use the same value.
  • internalURL, the URL of an internal-facing endpoint for the service (typically the same as the publicURL).
  • publicURL, the URL of the public-facing endpoint for the service.
  • region, in which the service is located. By default, if a region is not specified, the 'RegionOne' location is used.
The Identity service uses the following concepts:
  • Users, which have associated information (such as a name and password). In addition to custom users, a user must be defined for each cataloged service (for example, the 'glance' user for the Image service).
  • Tenants, which are generally the user's group, project, or organization.
  • Roles, which determine a user's permissions.
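The following sketch shows how these concepts fit together when using the python-keystoneclient library (v2.0 API): a tenant, user, and role are created, and the service catalog is queried for an endpoint. The credentials, names, and URLs are placeholder assumptions:

    from keystoneclient.v2_0 import client as keystone_client

    # Authenticate as an administrative user (credentials are placeholders).
    keystone = keystone_client.Client(username='admin',
                                      password='secret',
                                      tenant_name='admin',
                                      auth_url='http://192.0.2.10:5000/v2.0')

    # Tenants group users; roles determine what those users may do.
    tenant = keystone.tenants.create(tenant_name='demo',
                                     description='Demo project',
                                     enabled=True)
    user = keystone.users.create(name='alice', password='secret',
                                 tenant_id=tenant.id)
    role = keystone.roles.create(name='Member')
    keystone.roles.add_user_role(user, role, tenant)

    # The service catalog resolves a service type to an endpoint URL.
    image_url = keystone.service_catalog.url_for(service_type='image',
                                                 endpoint_type='publicURL')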

Table 1.3. Identity Service components

Component Description
keystone
Provides the administrative and public APIs.
Databases
For each of the internal services.

1.4.3. OpenStack Networking Service Overview

The OpenStack Networking service handles the creation and management of a virtual networking infrastructure in the OpenStack cloud. Elements include networks, subnets, and routers; advanced services such as firewalls or virtual private networks (VPN) can also be used.
Because OpenStack Networking is software-defined, it can easily and quickly react to changing network needs (for example, creating and assigning new IP addresses). Advantages include:
  • Users can create networks, control traffic, and connect servers and devices to one or more networks.
  • OpenStack offers flexible networking models, so that administrators can adjust the model to suit the network's volume and tenancy requirements.
  • IPs can be dedicated or floating; floating IPs allow dynamic traffic rerouting.
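As an illustration of these operations, the following sketch uses the python-neutronclient library to create a tenant network with a subnet and to allocate a floating IP. The credentials, CIDR, and external network ID are placeholder assumptions:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://192.0.2.10:5000/v2.0')

    # Create a tenant network with one IPv4 subnet.
    network = neutron.create_network({'network': {'name': 'private'}})
    net_id = network['network']['id']
    neutron.create_subnet({'subnet': {'network_id': net_id,
                                      'ip_version': 4,
                                      'cidr': '10.0.0.0/24',
                                      'name': 'private-subnet'}})

    # Allocate a floating IP from an external network (UUID is a placeholder).
    floating_ip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': 'EXTERNAL-NET-UUID'}})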

Table 1.4. OpenStack Networking Service components

Component Description
neutron-server
A Python daemon that manages user requests and exposes the API. It is configured with a plug-in that implements the OpenStack Networking API operations using a specific set of networking mechanisms. A wide choice of plug-ins is available; for example, the openvswitch and linuxbridge plug-ins use native Linux networking mechanisms, while other plug-ins interface with external devices or SDN controllers.
neutron-l3-agent
An agent providing L3/NAT forwarding.
neutron-*-agent
A plug-in agent that runs on each node to perform local networking configuration for the node's virtual machines and networking services.
neutron-dhcp-agent
An agent providing DHCP services to tenant networks.
RabbitMQ server (rabbitmq-server)
Provides the AMQP message queue. This server (also used by Block Storage) handles the OpenStack transaction management, including queuing, distribution, security, management, clustering, and federation. Messaging becomes especially important when an OpenStack deployment is scaled and its services are running on multiple machines.
Database
Provides persistent storage.

1.4.4. Block Storage Service Overview

The Block Storage (or volume) service provides persistent block storage management for virtual hard drives. Block Storage allows the user to create and delete block devices, and to manage the attachment of block devices to servers. The actual attachment and detachment of devices is handled through integration with the Compute service. Both regions and zones can be used to handle distributed block storage hosts (for details, see Section 1.4.7, “Object Storage Service Overview”).
Block storage is appropriate for performance-sensitive scenarios such as database storage, expandable file systems, or providing a server with access to raw block-level storage. Additionally, snapshots can be taken to either restore data or to create new block storage volumes (snapshots are dependent upon driver support).
Basic operations include:
  • Create, list, and delete volumes.
  • Create, list, and delete snapshots.
  • Attach and detach volumes to running virtual machines.
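A minimal sketch of these basic operations, using the python-cinderclient and python-novaclient libraries (credentials, sizes, names, and the server UUID are placeholder assumptions), might look like this:

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    cinder = cinder_client.Client('demo', 'secret', 'demo',
                                  'http://192.0.2.10:5000/v2.0')
    nova = nova_client.Client('demo', 'secret', 'demo',
                              'http://192.0.2.10:5000/v2.0')

    # Create a 10 GB volume and take a snapshot of it.
    volume = cinder.volumes.create(size=10, display_name='db-volume')
    snapshot = cinder.volume_snapshots.create(volume.id,
                                              display_name='db-volume-snap')

    # Attaching is requested through Compute, which performs the actual attach.
    nova.volumes.create_server_volume('SERVER-UUID', volume.id, '/dev/vdb')

    # List volumes, then detach the volume from the instance.
    cinder.volumes.list()
    nova.volumes.delete_server_volume('SERVER-UUID', volume.id)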

Table 1.5. Block Storage Service components

Component Description
openstack-cinder-volume
Carves out storage for virtual machines on demand. A number of drivers are included for interaction with storage providers.
openstack-cinder-api
Responds to and handles requests, and places them in the message queue.
openstack-cinder-backup
Provides the ability to back up a Block Storage volume to an external storage repository.
openstack-cinder-scheduler
Assigns tasks to the queue and determines which volume server will provision the requested volume.
Database
Provides state information.
RabbitMQ server (rabbitmq-server)
Provides the AMQP message queue. This server handles the OpenStack transaction management, including queuing, distribution, security, management, clustering, and federation. Messaging becomes especially important when an OpenStack deployment is scaled and its services are running on multiple machines.

1.4.5. Compute Service Overview

The Compute service is the heart of the OpenStack cloud, providing virtual machines on demand. Compute schedules virtual machines to run on a set of nodes, using drivers that interact with the underlying virtualization mechanisms, and exposes this functionality to the other OpenStack components.
Compute interacts with the Identity service for authentication, Image service for images, and the Dashboard service for the user and administrative interface. Access to images is limited by project and by user; quotas are limited per project (for example, the number of instances). The Compute service is designed to scale horizontally on standard hardware, and can download images to launch instances as required.
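For example, booting an instance through the Compute API with the python-novaclient library might look like the following sketch; the credentials, image name, and flavor are placeholder assumptions:

    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('demo', 'secret', 'demo',
                              'http://192.0.2.10:5000/v2.0')

    # Pick an image and a flavor, then ask the scheduler to place an instance.
    image = nova.images.find(name='cirros-0.3.1')
    flavor = nova.flavors.find(name='m1.small')
    server = nova.servers.create(name='web01', image=image, flavor=flavor)

    # Poll until the instance reaches the ACTIVE state.
    print(nova.servers.get(server.id).status)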

Table 1.6. Ways to Segregate the Cloud

Concept Description
Regions
Each service cataloged in the Identity service is identified by its region, which typically represents a geographical location, and its endpoint. In a cloud with multiple Compute deployments, regions allow for the discrete separation of services, and are a robust way to share some infrastructure between Compute installations, while allowing for a high degree of failure tolerance.
Cells
A cloud's Compute hosts can be partitioned into groups called cells (to handle large deployments or geographically separate installations). Cells are configured in a tree. The top-level cell ('API cell') runs the nova-api service, but no nova-compute services. In contrast, each child cell runs all of the other typical nova-* services found in a regular installation, except for the nova-api service. Each cell has its own message queue and database service, and also runs nova-cells, which manages the communication between the API cell and its child cells.
This means that:
  • A single API server can be used to control access to multiple Compute installations.
  • A second level of scheduling at the cell level is available (in addition to host scheduling), which provides greater flexibility and control over where virtual machines run.
Host Aggregates and Availability Zones
A single Compute deployment can be partitioned into logical groups (for example, into multiple groups of hosts that share common resources like storage and network, or which have a special property such as trusted computing hardware).
If the user is:
  • An administrator, the group is presented as a Host Aggregate, which has assigned Compute hosts and associated metadata. An aggregate's metadata is commonly used to provide information for use with nova-scheduler (for example, limiting specific flavors or images to a subset of hosts).
  • A user, the group is presented as an Availability Zone. The user cannot view the group's metadata, nor which hosts make up the zone.
Aggregates, or zones, can be used to:
  • Handle load balancing and instance distribution.
  • Provide some form of physical isolation and redundancy from other zones (such as by using a separate power supply or network equipment).
  • Identify a set of servers that have some common attribute.
  • Separate out different classes of hardware.
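As a sketch of how an administrator might create such a host aggregate with the python-novaclient library (the credentials, host name, availability zone name, and metadata key are placeholder assumptions):

    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://192.0.2.10:5000/v2.0')

    # Create an aggregate that users see as the 'ssd' availability zone.
    aggregate = nova.aggregates.create('fast-storage', 'ssd')
    nova.aggregates.add_host(aggregate.id, 'compute01.example.com')

    # Aggregate metadata can be matched by scheduler filters, for example
    # to restrict certain flavors or images to this group of hosts.
    nova.aggregates.set_metadata(aggregate.id, {'ssd': 'true'})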

Table 1.7. Compute Service components

Component Description
openstack-nova-api
Handles requests and provides access to the Compute services (such as booting an instance).
openstack-nova-cert
Provides the certificate manager.
openstack-nova-compute
Creates and terminates virtual instances. Interacts with the Hypervisor to bring up new instances, and ensures that the state is maintained in the Compute database.
openstack-nova-conductor
Provides database-access support for Compute nodes (thereby reducing security risks).
openstack-nova-consoleauth
Handles console authentication.
openstack-nova-network
Handles Compute network traffic (both private and public access). Handles such tasks as assigning an IP address to a new virtual instance, and implementing security group rules.
openstack-nova-novncproxy
Provides a VNC proxy for browsers (enabling VNC consoles to access virtual machines).
openstack-nova-scheduler
Dispatches requests for new virtual machines to the correct node.
RabbitMQ server (rabbitmq-server)
Provides the AMQP message queue. This server (also used by Block Storage) handles the OpenStack transaction management, including queuing, distribution, security, management, clustering, and federation. Messaging becomes especially important when an OpenStack deployment is scaled and its services are running on multiple machines.
libvirtd
The driver for the hypervisor. Enables the creation of virtual machines.
KVM Linux hypervisor
Creates virtual machines and enables their live migration from node to node.
Database
Provides build-time and run-time infrastructure state.

1.4.6. Image Service Overview

The Image service acts as a registry for virtual disk images. Users can add new images or take a snapshot (copy) of an existing server for immediate storage. Snapshots can be used as backups or as templates for new servers. Registered images can be stored in the Object Storage service, as well as in other locations (for example, in simple file systems or external web servers).
The following image formats are supported:
  • raw (unstructured format)
  • aki/ami/ari (Amazon kernel, ramdisk, or machine image)
  • iso (archive format for optical discs; for example, CD)
  • qcow2 (Qemu/KVM, supports Copy on Write)
  • vhd (Hyper-V, common for virtual machine monitors from VMware, Xen, Microsoft, VirtualBox, and others)
  • vdi (Qemu/VirtualBox)
  • vmdk (VMware)
The Image service also supports container formats; the container format determines the type of metadata stored in the image about the actual virtual machine. The following formats are supported:
  • bare (no metadata is included)
  • ovf (OVF format)
  • aki/ami/ari (Amazon kernel, ramdisk, or machine image)
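For example, registering a qcow2 image in the 'bare' container format with the python-glanceclient library might look like the following sketch; the credentials, file name, and image name are placeholder assumptions:

    import glanceclient
    from keystoneclient.v2_0 import client as keystone_client

    # Authenticate with the Identity service and reuse the token for Glance.
    keystone = keystone_client.Client(username='admin', password='secret',
                                      tenant_name='admin',
                                      auth_url='http://192.0.2.10:5000/v2.0')
    glance_url = keystone.service_catalog.url_for(service_type='image',
                                                  endpoint_type='publicURL')
    glance = glanceclient.Client('1', endpoint=glance_url,
                                 token=keystone.auth_token)

    # Register a qcow2 disk image with no container metadata ('bare').
    with open('fedora.qcow2', 'rb') as image_file:
        image = glance.images.create(name='fedora-20',
                                     disk_format='qcow2',
                                     container_format='bare',
                                     data=image_file)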

Table 1.8. Image Service components

Component Description
openstack-glance-api
Handles requests and image delivery (interacts with storage back-ends for retrieval and storage). Uses the registry to retrieve image information (the registry service is never, and should never be, accessed directly).
openstack-glance-registry
Manages all metadata associated with each image.
Database
Stores image metadata.
RabbitMQ server (rabbitmq-server)
Provides the AMQP message queue. This server (also used by Block Storage) handles the OpenStack transaction management, including queuing, distribution, security, management, clustering, and federation. Messaging becomes especially important when an OpenStack deployment is scaled and its services are running on multiple machines.

1.4.7. Object Storage Service Overview

The Object Storage service provides a storage system for large amounts of data, accessible through HTTP. Static entities such as videos, images, emails, files, or VM images can all be stored. Objects are stored as binaries on the underlying file system, with metadata kept in each file's extended attributes (xattrs). The service's distributed architecture supports horizontal scaling; redundancy, as protection against failures, is provided through software-based data replication.
Because the service supports asynchronous, eventually consistent replication, it is well suited to deployments that span multiple data centers. Object Storage uses the following concepts:
  • Storage replicas, which are used to maintain the state of objects in the case of outage. A minimum of three replicas is recommended.
  • Storage zones, which are used to host replicas. Zones ensure that each replica of a given object can be stored separately. A zone might represent an individual disk drive or array, a server, all the servers in a rack, or even an entire data center.
  • Storage regions, which are essentially a group of zones sharing a location. Regions can be, for example, servers or server farms usually located in the same geographical area. Regions have a separate API endpoint per Object Storage service installation, which allows for a discrete separation of services.
The Object Storage service relies on other OpenStack services and components. For example, the Identity Service (keystone), the rsync daemon, and a load balancer are all required.
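A minimal sketch of storing and retrieving an object through the proxy server, using the python-swiftclient library (the credentials, container name, and object name are placeholder assumptions):

    from swiftclient.client import Connection

    # Authenticate against the Identity service (v2.0) and talk to the proxy server.
    conn = Connection(authurl='http://192.0.2.10:5000/v2.0',
                      user='demo', key='secret',
                      tenant_name='demo', auth_version='2.0')

    # Create a container and upload an object into it.
    conn.put_container('backups')
    with open('database.dump', 'rb') as data:
        conn.put_object('backups', 'database.dump', contents=data)

    # Retrieve the object; headers carry the metadata, body is the content.
    headers, body = conn.get_object('backups', 'database.dump')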

Table 1.9. Object Storage Service components

Component Description
openstack-swift-proxy
Exposes the public API, provides authentication, and is responsible for handling requests and routing them accordingly. Objects are streamed through the proxy server to the user (not spooled).
openstack-swift-object
Stores, retrieves, and deletes objects.
openstack-swift-account
Responsible for listings of containers, using the account database.
openstack-swift-container
Handles listings of objects (what objects are in a specific container), using the container database.
Ring files
Contain details of all the storage devices, and are used to deduce where a particular piece of data is stored (they map the names of stored entities to their physical locations). One ring file is created for the object service, one for the account service, and one for the container service.
Account database
Stores account data.
Container database
Stores container data.
ext4 or XFS file system
Used for object storage.
Housekeeping processes
Replication, auditing, and updating processes.

1.4.8. Telemetry Service Overview

The Telemetry service provides user-level usage data for OpenStack-based clouds, which can be used for customer billing, system monitoring, or alerts. Data can be collected by notifications sent by existing OpenStack components (for example, usage events emitted from Compute) or by polling the infrastructure (for example, libvirt).
Telemetry includes a storage daemon that communicates with authenticated agents through a trusted messaging system, to collect and aggregate data. Additionally, the service uses a plug-in system, which makes it easy to add new monitors.
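For example, the collected data can be queried through the Telemetry API with the python-ceilometerclient library, as in the following sketch; the credentials, meter name, and instance UUID are placeholder assumptions:

    from ceilometerclient import client as ceilometer_client

    ceilometer = ceilometer_client.get_client(
        '2',
        os_username='admin', os_password='secret',
        os_tenant_name='admin',
        os_auth_url='http://192.0.2.10:5000/v2.0')

    # Average CPU utilisation statistics for one instance.
    query = [{'field': 'resource_id', 'op': 'eq', 'value': 'INSTANCE-UUID'}]
    for stat in ceilometer.statistics.list(meter_name='cpu_util', q=query):
        print(stat.avg, stat.period_start, stat.period_end)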

Table 1.10. Telemetry service components

Component Description
ceilometer-agent-compute
An agent that runs on each Compute node to poll for resource utilization statistics.
ceilometer-agent-central
An agent that runs on a central management server to poll for utilization statistics about resources not tied to instances or Compute nodes.
ceilometer-collector
An agent that runs on one or more central management servers to monitor the message queues. Notification messages are processed, turned into Telemetry messages, and sent back onto the message bus under the appropriate topic. Telemetry messages are written to the data store without modification.
ceilometer-notification
An agent that pushes metrics to the collector service from various OpenStack services.
MongoDB database
For collected usage data from collector agents. Only the collector agents and the API server have access to the database.
API Server
Runs on one or more central management servers to provide access to data in the database.
RabbitMQ server (rabbitmq-server)
Provides the AMQP message queue. This server (also used by Block Storage) handles the OpenStack transaction management, including queuing, distribution, security, management, clustering, and federation. Messaging becomes especially important when an OpenStack deployment is scaled and its services are running on multiple machines.
Telemetry service dependencies
  • Each nova-compute node must have a ceilometer-agent-compute instance deployed and running.
  • All nodes running the ceilometer-api service must have firewall rules granting appropriate access.
  • The ceilometer-agent-central service cannot currently be horizontally scaled, so only a single instance of it should be running at any given time.
  • The remaining Telemetry agents can be placed on any host, because all intra-agent communication is based on either AMQP or REST calls to the ceilometer-api service; the same is true for the ceilometer-alarm-evaluator service.

1.4.9. Orchestration Service Overview

The Orchestration service provides a template-based way to create and manage cloud resources such as storage, networking, instances, or applications.
Templates are used to create stacks, which are collections of resources (for example, instances, floating IPs, volumes, security groups, or users). The service offers access to all OpenStack core services using a single modular template, with additional orchestration capabilities such as auto-scaling and basic high availability.
Features include:
  • A single template provides access to all underlying service APIs.
  • Templates are modular (resource oriented).
  • Templates can be recursively defined, and therefore reusable (nested stacks). This means that the cloud infrastructure can be defined and reused in a modular way.
  • Resource implementation is pluggable, which allows for custom resources.
  • Autoscaling functionality (automatically adding or removing resources depending upon usage).
  • Basic high availability functionality.
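The following sketch shows a single-resource HOT template being launched as a stack with the python-heatclient library; the credentials, image, and flavor names are placeholder assumptions:

    from heatclient import client as heat_client
    from keystoneclient.v2_0 import client as keystone_client

    keystone = keystone_client.Client(username='demo', password='secret',
                                      tenant_name='demo',
                                      auth_url='http://192.0.2.10:5000/v2.0')
    heat_url = keystone.service_catalog.url_for(service_type='orchestration',
                                                endpoint_type='publicURL')
    heat = heat_client.Client('1', endpoint=heat_url,
                              token=keystone.auth_token)

    # A minimal HOT template defining a single instance resource.
    template = '''
    heat_template_version: 2013-05-23
    resources:
      server:
        type: OS::Nova::Server
        properties:
          image: fedora-20
          flavor: m1.small
    '''

    # Launch the template as a stack; heat-engine orchestrates the resources.
    heat.stacks.create(stack_name='demo-stack', template=template)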

Table 1.11. Orchestration service components

Component Description
heat
A CLI tool that communicates with the heat-api to execute AWS CloudFormation APIs.
heat-api
An OpenStack-native REST API that processes API requests by sending them to the heat-engine over RPC.
heat-api-cfn
Provides an AWS-Query API that is compatible with AWS CloudFormation and processes API requests by sending them to the heat-engine over RPC.
heat-engine
Orchestrates the launching of templates and provides events back to the API consumer.
heat-api-cloudwatch
Provides monitoring (metrics collection) for the Orchestration service.
heat-cfntools
A package of helper scripts (for example, cfn-hup, which handles updates to metadata and executes custom hooks).

Note

The heat-cfntools package is only installed on images that are launched by heat into Compute servers.