Cloud Administrator Guide
Managing and troubleshooting a Red Hat Enterprise Linux OpenStack Platform environment
Legal Notice
Abstract
- 1. Identity management
- 1.1. Identity concepts
- 1.2. Certificates for PKI
- 1.3. Configure the Identity Service with SSL
- 1.4. External authentication with Identity
- 1.5. Integrate Identity with LDAP
- 1.6. Configure Identity service for token binding
- 1.7. User CRUD
- 1.8. Logging
- 1.9. Monitoring
- 1.10. Start the Identity services
- 1.11. Example usage
- 1.12. Authentication middleware with user name and password
- 1.13. Identity API protection with role-based access control (RBAC)
- 1.14. Troubleshoot the Identity service
- 2. Dashboard
- 3. Compute
- 3.1. System architecture
- 3.2. Images and instances
- 3.3. Networking with nova-network
- 3.3.1. Networking concepts
- 3.3.2. DHCP server: dnsmasq
- 3.3.3. Configure Compute to use IPv6 addresses
- 3.3.4. Metadata service
- 3.3.5. Enable ping and SSH on VMs
- 3.3.6. Configure public (floating) IP addresses
- 3.3.7. Remove a network from a project
- 3.3.8. Multiple interfaces for your instances (multinic)
- 3.3.9. Troubleshoot Networking
- 3.4. System administration
- 3.4.1. Manage Compute users
- 3.4.2. Manage Volumes
- 3.4.3. Flavors
- 3.4.4. Compute service node firewall requirements
- 3.4.5. Inject administrator password
- 3.4.6. Manage the cloud
- 3.4.7. Manage logs
- 3.4.8. Secure with root wrappers
- 3.4.9. Configure migrations using KVM
- 3.4.10. Migrate instances
- 3.4.11. Configure remote console access
- 3.4.12. Configure Compute service groups
- 3.4.13. Security hardening
- 3.4.14. Recover from a failed compute node
- 3.5. Troubleshoot Compute
- 4. Object Storage
- 4.1. Introduction to Object Storage
- 4.2. Features and benefits
- 4.3. Object Storage characteristics
- 4.4. Components
- 4.5. Ring-builder
- 4.6. Cluster architecture
- 4.7. Replication
- 4.8. Account reaper
- 4.9. Configure tenant-specific image locations with Object Storage
- 4.10. Object Storage monitoring
- 4.11. Troubleshoot Object Storage
- 5. Block Storage
- 5.1. Introduction to Block Storage
- 5.2. Increase Block Storage API service throughput
- 5.3. Manage volumes
- 5.3.1. Boot from volume
- 5.3.2. Configure an NFS storage back end
- 5.3.3. Configure a GlusterFS back end
- 5.3.4. Configure a multiple-storage back-end
- 5.3.5. Back up Block Storage service disks
- 5.3.6. Migrate volumes
- 5.3.7. Gracefully remove a GlusterFS volume from usage
- 5.3.8. Back up and restore volumes
- 5.3.9. Export and import backup metadata
- 5.4. Troubleshoot your installation
- 5.4.1. Troubleshoot the Block Storage configuration
- 5.4.2. Multipath Call Failed Exit
- 5.4.3. Addressing discrepancies in reported volume sizes for EqualLogic storage
- 5.4.4. Failed to Attach Volume, Missing sg_scan
- 5.4.5. Failed to attach volume after detaching
- 5.4.6. Duplicate 3PAR host
- 5.4.7. Failed to attach volume after detaching
- 5.4.8. Failed to attach volume, systool is not installed
- 5.4.9. Failed to connect volume in FC SAN
- 5.4.10. Cannot find suitable emulator for x86_64
- 5.4.11. Non-existent host
- 5.4.12. Non-existent VLUN
- 6. Networking
- 6.1. Introduction to Networking
- 6.2. Networking architecture
- 6.3. Configure Identity Service for Networking
- 6.4. Networking scenarios
- 6.5. Advanced configuration options
- 6.6. Scalable and highly available DHCP agents
- 6.7. Use Networking
- 6.8. Advanced features through API extensions
- 6.9. Advanced operational features
- 6.10. Authentication and authorization
- 6.11. High availability
- 6.12. Plug-in pagination and sorting support
- A. Revision History
Chapter 1. Identity management
You configure the Identity Service through the etc/keystone.conf configuration file and, possibly, a separate logging configuration file. You initialize data into Identity by using the keystone command-line client.
1.1. Identity concepts
1.1.1. User management
- User. Represents a human user. Has associated information such as user name, password, and email. This example creates a user named alice:
  $ keystone user-create --name=alice --pass=mypassword123 --email=alice@example.com
- Tenant. A project, group, or organization. When you make requests to OpenStack services, you must specify a tenant. For example, if you query the Compute service for a list of running instances, you get a list of all running instances in the tenant that you specified in your query. This example creates a tenant named acme:
  $ keystone tenant-create --name=acme
  Note: Because the term project was used instead of tenant in earlier versions of OpenStack Compute, some command-line tools use --project_id instead of --tenant-id or --os-tenant-id to refer to a tenant ID.
- Role. Captures the operations that a user can perform in a given tenant. This example creates a role named compute-user:
  $ keystone role-create --name=compute-user
  Note: Individual services, such as Compute and the Image Service, assign meaning to roles. In the Identity Service, a role is simply a name.
The Identity Service assigns a tenant and a role to a user. You might assign the compute-user role to the alice user in the acme tenant:
$ keystone user-list
+--------+---------+-------------------+-------+
|   id   | enabled |       email       |  name |
+--------+---------+-------------------+-------+
| 892585 |   True  | alice@example.com | alice |
+--------+---------+-------------------+-------+
$ keystone role-list
+--------+--------------+
|   id   |     name     |
+--------+--------------+
| 9a764e | compute-user |
+--------+--------------+
$ keystone tenant-list
+--------+------+---------+
|   id   | name | enabled |
+--------+------+---------+
| 6b8fd2 | acme |   True  |
+--------+------+---------+
$ keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2
A user can have different roles in different tenants: for example, Alice might also have the admin role in the Cyberdyne tenant. A user can also have multiple roles in the same tenant.
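For example, continuing with the IDs listed above, you could grant alice a second role in the acme tenant with another user-role-add call; the role ID 4f8a21 here is hypothetical and would come from your own keystone role-list output:
$ keystone user-role-add --user=892585 --role=4f8a21 --tenant-id=6b8fd2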
The /etc/[SERVICE_CODENAME]/policy.json file controls the tasks that users can perform for a given service. For example, /etc/nova/policy.json specifies the access policy for the Compute service, /etc/glance/policy.json specifies the access policy for the Image Service, and /etc/keystone/policy.json specifies the access policy for the Identity Service.
The default policy.json files in the Compute, Identity, and Image Service recognize only the admin role: all operations that do not require the admin role are accessible by any user that has any role in a tenant.
If you wish to restrict users from performing operations in, for example, the Compute service, you need to create a role in the Identity Service and then modify /etc/nova/policy.json so that this role is required for Compute operations.
For example, this line in /etc/nova/policy.json specifies that there are no restrictions on which users can create volumes: if the user has any role in a tenant, they can create volumes in that tenant.
"volume:create": [],
To restrict the creation of volumes to users who have the compute-user role in a particular tenant, you would add "role:compute-user", like so:
"volume:create": ["role:compute-user"],
{
"admin_or_owner": [
[
"role:admin"
],
[
"project_id:%(project_id)s"
]
],
"default": [
[
"rule:admin_or_owner"
]
],
"compute:create": [
"role:compute-user"
],
"compute:create:attach_network": [
"role:compute-user"
],
"compute:create:attach_volume": [
"role:compute-user"
],
"compute:get_all": [
"role:compute-user"
],
"compute:unlock_override": [
"rule:admin_api"
],
"admin_api": [
[
"role:admin"
]
],
"compute_extension:accounts": [
[
"rule:admin_api"
]
],
"compute_extension:admin_actions": [
[
"rule:admin_api"
]
],
"compute_extension:admin_actions:pause": [
[
"rule:admin_or_owner"
]
],
"compute_extension:admin_actions:unpause": [
[
"rule:admin_or_owner"
]
],
"compute_extension:admin_actions:suspend": [
[
"rule:admin_or_owner"
]
],
"compute_extension:admin_actions:resume": [
[
"rule:admin_or_owner"
]
],
"compute_extension:admin_actions:lock": [
[
"rule:admin_or_owner"
]
],
"compute_extension:admin_actions:unlock": [
[
"rule:admin_or_owner"
]
],
"compute_extension:admin_actions:resetNetwork": [
[
"rule:admin_api"
]
],
"compute_extension:admin_actions:injectNetworkInfo": [
[
"rule:admin_api"
]
],
"compute_extension:admin_actions:createBackup": [
[
"rule:admin_or_owner"
]
],
"compute_extension:admin_actions:migrateLive": [
[
"rule:admin_api"
]
],
"compute_extension:admin_actions:migrate": [
[
"rule:admin_api"
]
],
"compute_extension:aggregates": [
[
"rule:admin_api"
]
],
"compute_extension:certificates": [
"role:compute-user"
],
"compute_extension:cloudpipe": [
[
"rule:admin_api"
]
],
"compute_extension:console_output": [
"role:compute-user"
],
"compute_extension:consoles": [
"role:compute-user"
],
"compute_extension:createserverext": [
"role:compute-user"
],
"compute_extension:deferred_delete": [
"role:compute-user"
],
"compute_extension:disk_config": [
"role:compute-user"
],
"compute_extension:evacuate": [
[
"rule:admin_api"
]
],
"compute_extension:extended_server_attributes": [
[
"rule:admin_api"
]
],
"compute_extension:extended_status": [
"role:compute-user"
],
"compute_extension:flavorextradata": [
"role:compute-user"
],
"compute_extension:flavorextraspecs": [
"role:compute-user"
],
"compute_extension:flavormanage": [
[
"rule:admin_api"
]
],
"compute_extension:floating_ip_dns": [
"role:compute-user"
],
"compute_extension:floating_ip_pools": [
"role:compute-user"
],
"compute_extension:floating_ips": [
"role:compute-user"
],
"compute_extension:hosts": [
[
"rule:admin_api"
]
],
"compute_extension:keypairs": [
"role:compute-user"
],
"compute_extension:multinic": [
"role:compute-user"
],
"compute_extension:networks": [
[
"rule:admin_api"
]
],
"compute_extension:quotas": [
"role:compute-user"
],
"compute_extension:rescue": [
"role:compute-user"
],
"compute_extension:security_groups": [
"role:compute-user"
],
"compute_extension:server_action_list": [
[
"rule:admin_api"
]
],
"compute_extension:server_diagnostics": [
[
"rule:admin_api"
]
],
"compute_extension:simple_tenant_usage:show": [
[
"rule:admin_or_owner"
]
],
"compute_extension:simple_tenant_usage:list": [
[
"rule:admin_api"
]
],
"compute_extension:users": [
[
"rule:admin_api"
]
],
"compute_extension:virtual_interfaces": [
"role:compute-user"
],
"compute_extension:virtual_storage_arrays": [
"role:compute-user"
],
"compute_extension:volumes": [
"role:compute-user"
],
"compute_extension:volume_attachments:index": [
"role:compute-user"
],
"compute_extension:volume_attachments:show": [
"role:compute-user"
],
"compute_extension:volume_attachments:create": [
"role:compute-user"
],
"compute_extension:volume_attachments:delete": [
"role:compute-user"
],
"compute_extension:volumetypes": [
"role:compute-user"
],
"volume:create": [
"role:compute-user"
],
"volume:get_all": [
"role:compute-user"
],
"volume:get_volume_metadata": [
"role:compute-user"
],
"volume:get_snapshot": [
"role:compute-user"
],
"volume:get_all_snapshots": [
"role:compute-user"
],
"network:get_all_networks": [
"role:compute-user"
],
"network:get_network": [
"role:compute-user"
],
"network:delete_network": [
"role:compute-user"
],
"network:disassociate_network": [
"role:compute-user"
],
"network:get_vifs_by_instance": [
"role:compute-user"
],
"network:allocate_for_instance": [
"role:compute-user"
],
"network:deallocate_for_instance": [
"role:compute-user"
],
"network:validate_networks": [
"role:compute-user"
],
"network:get_instance_uuids_by_ip_filter": [
"role:compute-user"
],
"network:get_floating_ip": [
"role:compute-user"
],
"network:get_floating_ip_pools": [
"role:compute-user"
],
"network:get_floating_ip_by_address": [
"role:compute-user"
],
"network:get_floating_ips_by_project": [
"role:compute-user"
],
"network:get_floating_ips_by_fixed_address": [
"role:compute-user"
],
"network:allocate_floating_ip": [
"role:compute-user"
],
"network:deallocate_floating_ip": [
"role:compute-user"
],
"network:associate_floating_ip": [
"role:compute-user"
],
"network:disassociate_floating_ip": [
"role:compute-user"
],
"network:get_fixed_ip": [
"role:compute-user"
],
"network:add_fixed_ip_to_instance": [
"role:compute-user"
],
"network:remove_fixed_ip_from_instance": [
"role:compute-user"
],
"network:add_network_to_project": [
"role:compute-user"
],
"network:get_instance_nw_info": [
"role:compute-user"
],
"network:get_dns_domains": [
"role:compute-user"
],
"network:add_dns_entry": [
"role:compute-user"
],
"network:modify_dns_entry": [
"role:compute-user"
],
"network:delete_dns_entry": [
"role:compute-user"
],
"network:get_dns_entries_by_address": [
"role:compute-user"
],
"network:get_dns_entries_by_name": [
"role:compute-user"
],
"network:create_private_dns_domain": [
"role:compute-user"
],
"network:create_public_dns_domain": [
"role:compute-user"
],
"network:delete_dns_domain": [
"role:compute-user"
]
}
1.1.2. Service management
The Identity Service provides identity, token, catalog, and policy services. It consists of:
- keystone-all. Starts both the service and administrative APIs in a single process to provide Catalog, Authorization, and Authentication services for OpenStack.
- Identity Service functions. Each has a pluggable back end that allows different ways to use the particular service. Most support standard back ends like LDAP or SQL.
1.1.3. Groups
- Create a group
- Delete a group
- Update a group (change its name or description)
- Add a user to a group
- Remove a user from a group
- List group members
- List groups for a user
- Assign a role on a tenant to a group
- Assign a role on a domain to a group
- Query role assignments to groups
- Group A is granted Role A on Tenant A. If User A is a member of Group A, when User A gets a token scoped to Tenant A, the token also includes Role A.
- Group B is granted Role B on Domain B. If User B is a member of Group B, when User B gets a token scoped to Domain B, the token also includes Role B.
1.1.4. Domains
1.2. Certificates for PKI
The certificate and key files used to sign PKI tokens must be placed in the locations specified in keystone.conf, as described in the following section. Additionally, the private key should only be readable by the system user that will run the Identity Service.
The values that specify where to read the certificates are under the [signing] section of the configuration file. The configuration values are:
- token_format - Determines the algorithm used to generate tokens. Can be either UUID or PKI. Defaults to PKI.
- certfile - Location of the certificate used to verify tokens. Default is /etc/keystone/ssl/certs/signing_cert.pem.
- keyfile - Location of the private key used to sign tokens. Default is /etc/keystone/ssl/private/signing_key.pem.
- ca_certs - Location of the certificate for the authority that issued the above certificate. Default is /etc/keystone/ssl/certs/ca.pem.
- key_size - Default is 1024.
- valid_days - Default is 3650.
- ca_password - Password required to read the ca_file. Default is None.
If token_format=UUID, a typical token looks like 53f7f6ef0cc344b5be706bcc8b1479e1. If token_format=PKI, a typical token is a much longer string, such as:
MIIKtgYJKoZIhvcNAQcCoIIKpzCCCqMCAQExCTAHBgUrDgMCGjCCCY8GCSqGSIb3DQEHAaCCCYAEggl8eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNS0z MFQxNTo1MjowNi43MzMxOTgiLCAiZXhwaXJlcyI6ICIyMDEzLTA1LTMxVDE1OjUyOjA2WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVs bCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4MDBlMDYiLCAibmFtZSI6ICJkZW1vIn19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRw b2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6ODc3NC92Mi9jMmM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiIsICJyZWdpb24iOiAiUmVnaW9u T25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc0L3YyL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2IiwgImlkIjogIjFmYjMzYmM5M2Y5 ODRhNGNhZTk3MmViNzcwOTgzZTJlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6ODc3NC92Mi9jMmM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiJ9XSwg ImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3 LjEwMDozMzMzIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjMzMzMiLCAiaWQiOiAiN2JjMThjYzk1NWFiNDNkYjhm MGU2YWNlNDU4NjZmMzAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozMzMzIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUi OiAiczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6OTI5MiIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjog Imh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo5MjkyIiwgImlkIjogIjczODQzNTJhNTQ0MjQ1NzVhM2NkOTVkN2E0YzNjZGY1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4x MDA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6 Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDov LzE5Mi4xNjguMjcuMTAwOjg3NzYvdjEvYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4MDBlMDYiLCAiaWQiOiAiMzQ3ZWQ2ZThjMjkxNGU1MGFlMmJiNjA2YWQxNDdjNTQiLCAicHVi bGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBl IjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4NzczL3NlcnZpY2VzL0FkbWluIiwg InJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjg3NzMvc2VydmljZXMvQ2xvdWQiLCAiaWQiOiAiMmIwZGMyYjNlY2U4NGJj YWE1NDAzMDMzNzI5YzY3MjIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0 eXBlIjogImVjMiIsICJuYW1lIjogImVjMiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJS ZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvdjIuMCIsICJpZCI6ICJiNTY2Y2JlZjA2NjQ0ZmY2OWMyOTMxNzY2Yjc5MTIyOSIsICJw dWJsaWNVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0 b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiZGVtbyIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiZTVhMTM3NGE4YTRmNDI4NWIzYWQ3MzQ1MWU2MDY4YjEiLCAicm9sZXMi OiBbeyJuYW1lIjogImFub3RoZXJyb2xlIn0sIHsibmFtZSI6ICJNZW1iZXIifV0sICJuYW1lIjogImRlbW8ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsi YWRiODM3NDVkYzQzNGJhMzk5ODllNjBjOTIzYWZhMjgiLCAiMzM2ZTFiNjE1N2Y3NGFmZGJhNWUwYTYwMWUwNjM5MmYiXX19fTGB-zCB-AIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYD VQQIEwVVbnNldDEOMAwGA1UEBxMFVW5zZXQxDjAMBgNVBAoTBVVuc2V0MRgwFgYDVQQDEw93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEgYCAHLpsEs2R 
nouriuiCgFayIqCssK3SVdhOMINiuJtqv0sE-wBDFiEj-Prcudqlz-n+6q7VgV4mwMPszz39-rwp+P5l4AjrJasUm7FrO-4l02tPLaaZXU1gBQ1jUG5e5aL5jPDP08HbCWuX6wr-QQQB SrWY8lF3HrTcJT23sZIleg==
1.2.1. Sign certificate issued by external CA
A signing certificate issued by an external CA must satisfy the following conditions:
- all certificate and key files must be in Privacy Enhanced Mail (PEM) format
- private key files must not be protected by a password
When you use a signing certificate issued by an external CA, you do not need to specify key_size, valid_days, and ca_password as they will be ignored.
- Request Signing Certificate from External CA
- Convert certificate and private key to PEM if needed
- Install External Signing Certificate
1.2.2. Request a signing certificate from an external CA
Create a certificate request configuration file. For example, create the cert_req.conf file, as follows:
[ req ]
default_bits = 1024
default_keyfile = keystonekey.pem
default_md = sha1
prompt = no
distinguished_name = distinguished_name

[ distinguished_name ]
countryName = US
stateOrProvinceName = CA
localityName = Sunnyvale
organizationName = OpenStack
organizationalUnitName = Keystone
commonName = Keystone Signing
emailAddress = keystone@example.com
Then generate a certificate request with the OpenSSL CLI. Do not encrypt the generated private key (the -nodes option). For example:
$ openssl req -newkey rsa:1024 -keyout signing_key.pem -keyform PEM \
  -out signing_cert_req.pem -outform PEM -config cert_req.conf -nodes
If everything is successful, you end up with signing_cert_req.pem and signing_key.pem. Send signing_cert_req.pem to your CA to request a token signing certificate and make sure to ask for the certificate in PEM format. Also, make sure your trusted CA certificate chain is in PEM format.
1.2.3. Install an external signing certificate
Assuming you have the following files:
- signing_cert.pem - (Keystone token) signing certificate in PEM format
- signing_key.pem - corresponding (non-encrypted) private key in PEM format
- cacert.pem - trust CA certificate chain in PEM format
# mkdir -p /etc/keystone/ssl/certs
# cp signing_cert.pem /etc/keystone/ssl/certs/
# cp signing_key.pem /etc/keystone/ssl/certs/
# cp cacert.pem /etc/keystone/ssl/certs/
# chmod -R 700 /etc/keystone/ssl/certs
If your certificate directory path is different from the default /etc/keystone/ssl/certs, make sure it is reflected in the [signing] section of the configuration file.
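With the files installed as above, the [signing] section of keystone.conf might look like the following sketch; the paths match this example layout and should be adjusted to your own:
[signing]
token_format = PKI
certfile = /etc/keystone/ssl/certs/signing_cert.pem
keyfile = /etc/keystone/ssl/certs/signing_key.pem
ca_certs = /etc/keystone/ssl/certs/cacert.pem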
1.3. Configure the Identity Service with SSL
You can configure the Identity Service to support two-way SSL. You must obtain the x509 certificates externally and configure them. The Identity Service provides a set of sample certificates in the examples/pki/certs and examples/pki/private directories:
Certificate types
- cacert.pem
- Certificate Authority chain to validate against.
- ssl_cert.pem
- Public certificate for Identity Service server.
- middleware.pem
- Public and private certificate for Identity Service middleware/client.
- cakey.pem
- Private key for the CA.
- ssl_key.pem
- Private key for the Identity Service server.
1.3.1. SSL configuration
To enable SSL with client authentication, modify the [ssl] section in the etc/keystone.conf file. The following SSL configuration example uses the included sample certificates:
[ssl]
enable = True
certfile = <path to keystone.pem>
keyfile = <path to keystonekey.pem>
ca_certs = <path to ca.pem>
cert_required = True
Options
- enable. True enables SSL. Default is False.
- certfile. Path to the Identity Service public certificate file.
- keyfile. Path to the Identity Service private certificate file. If you include the private key in the certfile, you can omit the keyfile.
- ca_certs. Path to the CA trust chain.
- cert_required. Requires client certificate. Default is False.
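As an illustration, a filled-in [ssl] section using the sample certificate names listed above might look like this; the paths are assumptions and depend on where you install the files:
[ssl]
enable = True
certfile = /etc/keystone/ssl/certs/ssl_cert.pem
keyfile = /etc/keystone/ssl/private/ssl_key.pem
ca_certs = /etc/keystone/ssl/certs/cacert.pem
cert_required = True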
1.4. External authentication with Identity
When Identity runs in apache-httpd, you can use external authentication methods that differ from the authentication provided by the identity store back end. For example, you can use an SQL identity back end together with X.509 authentication, Kerberos, and so on instead of using the user name and password combination.
1.4.1. Use HTTPD authentication
The web server performs the authentication and passes the authenticated user to Identity by using the REMOTE_USER environment variable. This user must already exist in the Identity back end to get a token from the controller. To use this method, Identity should run on apache-httpd.
1.4.2. Use X.509
The following Apache configuration snippet authenticates the user based on a valid X.509 certificate from a known CA:
<VirtualHost _default_:5000>
SSLEngine on
SSLCertificateFile /etc/ssl/certs/ssl.cert
SSLCertificateKeyFile /etc/ssl/private/ssl.key
SSLCACertificatePath /etc/ssl/allowed_cas
SSLCARevocationPath /etc/ssl/allowed_cas
SSLUserName SSL_CLIENT_S_DN_CN
SSLVerifyClient require
SSLVerifyDepth 10
(...)
</VirtualHost>
1.5. Integrate Identity with LDAP
When the Identity service is configured to use an LDAP back end, you should enable the authlogin_nsswitch_use_ldap boolean value for SELinux on the Identity server. To enable it and make the option persistent across reboots:
# setsebool -P authlogin_nsswitch_use_ldap on
To integrate Identity with LDAP, set options in the /etc/keystone/keystone.conf file. Modify these examples as needed.
Procedure 1.1. To integrate Identity with LDAP
- Enable the LDAP driver in the keystone.conf file:
  [identity]
  #driver = keystone.identity.backends.sql.Identity
  driver = keystone.identity.backends.ldap.Identity
- Define the destination LDAP server in the keystone.conf file:
  [ldap]
  url = ldap://localhost
  user = cn=Manager,dc=example,dc=org
  password = samplepassword
  suffix = dc=example,dc=org
  use_dumb_member = False
  allow_subtree_delete = False
- Create the organizational units (OU) in the LDAP directory, and define their corresponding location in the keystone.conf file:
  [ldap]
  user_tree_dn = ou=Users,dc=example,dc=org
  user_objectclass = inetOrgPerson
  tenant_tree_dn = ou=Groups,dc=example,dc=org
  tenant_objectclass = groupOfNames
  role_tree_dn = ou=Roles,dc=example,dc=org
  role_objectclass = organizationalRole
  Note: These schema attributes are extensible for compatibility with various schemas. For example, this entry maps to the person attribute in Active Directory:
  user_objectclass = person
- A read-only implementation is recommended for LDAP integration. These permissions are applied to object types in the keystone.conf file:
  [ldap]
  user_allow_create = False
  user_allow_update = False
  user_allow_delete = False
  tenant_allow_create = False
  tenant_allow_update = False
  tenant_allow_delete = False
  role_allow_create = False
  role_allow_update = False
  role_allow_delete = False
- Restart the Identity service:
  # service keystone restart
  Warning: During service restart, authentication and authorization are unavailable.
Additional LDAP integration settings are also configured in the keystone.conf file.
- Filters
- Use filters to control the scope of data presented through LDAP.
[ldap]
user_filter = (memberof=cn=openstack-users,ou=workgroups,dc=example,dc=org)
tenant_filter =
role_filter =
- LDAP Account Status
- Mask account status values for compatibility with various directory services. Superfluous accounts are filtered with user_filter. For example, you can mask Active Directory account status attributes in the keystone.conf file:
  [ldap]
  user_enabled_attribute = userAccountControl
  user_enabled_mask = 2
  user_enabled_default = 512
1.5.1. Separate role authorization and user authentication
Procedure 1.2. Separating role authorization and user authentication through Assignments
- Configure the Identity service to authenticate users through the LDAP driver. To do so, first find the [identity] section in the /etc/keystone/keystone.conf configuration file. Then, set the driver configuration key in that section to keystone.identity.backends.ldap.Identity:
  [identity]
  driver = keystone.identity.backends.ldap.Identity
- Next, enable the Assignment driver. To do so, find the [assignment] section in the /etc/keystone/keystone.conf configuration file. Then, set the driver configuration key in that section to keystone.assignment.backends.sql.Assignment:
  [assignment]
  driver = keystone.assignment.backends.sql.Assignment
Alternatively, set both drivers with the openstack-config command:
# openstack-config --set /etc/keystone/keystone.conf \
  identity driver keystone.identity.backends.ldap.Identity
# openstack-config --set /etc/keystone/keystone.conf \
  assignment driver keystone.assignment.backends.sql.Assignment
1.5.2. Secure the OpenStack Identity service connection to an LDAP back end
Procedure 1.3. Configuring TLS encryption on LDAP traffic
- Open the /etc/keystone/keystone.conf configuration file.
- Find the [ldap] section.
- In the [ldap] section, set the use_tls configuration key to True. Doing so will enable TLS.
- Configure the Identity service to use your certificate authorities file. To do so, set the tls_cacertfile configuration key in the [ldap] section to the certificate authorities file's path.
  Note: You can also set the tls_cacertdir (also in the [ldap] section) to the directory where all certificate authorities files are kept. If both tls_cacertfile and tls_cacertdir are set, then the latter will be ignored.
- Specify what client certificate checks to perform on incoming TLS sessions from the LDAP server. To do so, set the tls_req_cert configuration key in the [ldap] section to demand, allow, or never:
  - demand: a certificate will always be requested from the LDAP server. The session will be terminated if no certificate is provided, or if the certificate provided cannot be verified against the existing certificate authorities file.
  - allow: a certificate will always be requested from the LDAP server. The session will proceed as normal even if a certificate is not provided. If a certificate is provided but it cannot be verified against the existing certificate authorities file, the certificate will be ignored and the session will proceed as normal.
  - never: a certificate will never be requested.
You can also set these options with the openstack-config command:
# openstack-config --set /etc/keystone/keystone.conf \
  ldap use_tls True
# openstack-config --set /etc/keystone/keystone.conf \
  ldap tls_cacertfile CA_FILE
# openstack-config --set /etc/keystone/keystone.conf \
  ldap tls_req_cert CERT_BEHAVIOR
Where:
- CA_FILE is the absolute path to the certificate authorities file that should be used to encrypt LDAP traffic.
- CERT_BEHAVIOR specifies what client certificate checks to perform on an incoming TLS session from the LDAP server (demand, allow, or never).
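The resulting [ldap] options in keystone.conf would then resemble the following sketch; the CA file path is illustrative:
[ldap]
use_tls = True
tls_cacertfile = /etc/keystone/ssl/certs/cacert.pem
tls_req_cert = demand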
1.6. Configure Identity service for token binding
To enable token binding, specify the external authentication mechanism in the keystone.conf file:
[token] bind = kerberos
[token] bind = x509
Currently, only kerberos and x509 are supported.
To enforce checking of token binding, set the enforce_token_bind option to one of these modes:
- disabled - Disables token bind checking.
- permissive - Enables bind checking. If a token is bound to an unknown authentication mechanism, the server ignores it. This is the default mode.
- strict - Enables bind checking. If a token is bound to an unknown authentication mechanism, the server rejects it.
- required - Enables bind checking. Requires use of at least one authentication mechanism for tokens.
- kerberos - Enables bind checking. Requires use of kerberos as the authentication mechanism for tokens:
  [token]
  enforce_token_bind = kerberos
- x509 - Enables bind checking. Requires use of X.509 as the authentication mechanism for tokens:
  [token]
  enforce_token_bind = x509
1.7. User CRUD
To enable the user CRUD extension, define a user_crud_extension filter, insert it after the *_body middleware and before the public_service application in the public_api WSGI pipeline in keystone.conf. For example:
[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory

[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service
Each user can then change their own password with an HTTP PATCH:
$ curl -X PATCH http://localhost:5000/v2.0/OS-KSCRUD/users/USERID \
  -H "Content-type: application/json" \
  -H "X_Auth_Token: AUTHTOKENID" \
  -d '{"user": {"password": "ABCD", "original_password": "DCBA"}}'
1.8. Logging
You configure logging externally to the rest of Identity. The file specifying the logging configuration is set in the [DEFAULT] section of the keystone.conf file under log_config. To route logging through syslog, set the use_syslog=true option in the [DEFAULT] section.
A sample logging configuration file is available with the project as etc/logging.conf.sample. Like other OpenStack projects, Identity uses the Python logging module, which includes extensive configuration options that let you define the output levels and formats.
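A minimal sketch of the relevant [DEFAULT] options, assuming the sample file has been copied to /etc/keystone/logging.conf (an illustrative path):
[DEFAULT]
log_config = /etc/keystone/logging.conf
use_syslog = true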
Review the etc/keystone.conf sample configuration files that are distributed with the Identity Service. For example, each server application has its own configuration file.
For services that have separate paste-deploy .ini files, you can configure the auth_token middleware in the [keystone_authtoken] section in the main configuration file, such as nova.conf. For example, in Compute, you can remove the middleware parameters from api-paste.ini, as follows:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
Then set these values in the nova.conf file:
[DEFAULT]
...
auth_strategy=keystone

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_user = admin
admin_password = SuperSekretPassword
admin_tenant_name = service
Note: Middleware parameters in the paste config take priority. You must remove them to use the values in the [keystone_authtoken] section.
1.9. Monitoring
Enable data collection by defining a stats_monitoring filter and including it at the beginning of any desired WSGI pipelines:
[filter:stats_monitoring]
paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory

[pipeline:public_api]
pipeline = stats_monitoring [...] public_service
Enable the reporting of collected data by defining a stats_reporting filter and including it near the end of your admin_api WSGI pipeline (after *_body middleware and before *_extension filters is recommended):
[filter:stats_reporting]
paste.filter_factory = keystone.contrib.stats:StatsExtension.factory

[pipeline:admin_api]
pipeline = [...] json_body stats_reporting ec2_extension [...] admin_service
Query the admin API for statistics:
$ curl -H 'X-Auth-Token: ADMIN' http://localhost:35357/v2.0/OS-STATS/stats
Reset the collected data:
$ curl -H 'X-Auth-Token: ADMIN' -X DELETE \
  http://localhost:35357/v2.0/OS-STATS/stats
1.10. Start the Identity services
To start the Identity services, run the following command:
$ keystone-all
This command starts two wsgi.Server instances configured by the keystone.conf file as described previously. One of these wsgi servers is admin (the administration API) and the other is main (the primary/public API interface). Both run in a single process.
1.11. Example usage
The keystone client is set up to expect commands in the general form of keystone command argument, followed by flag-like keyword arguments to provide additional (often optional) information. For example, the user-list and tenant-create commands can be invoked as follows:
# Using token auth env variables
export OS_SERVICE_ENDPOINT=http://127.0.0.1:5000/v2.0/
export OS_SERVICE_TOKEN=secrete_token
keystone user-list
keystone tenant-create --name=demo

# Using token auth flags
keystone --os-token=secrete --os-endpoint=http://127.0.0.1:5000/v2.0/ user-list
keystone --os-token=secrete --os-endpoint=http://127.0.0.1:5000/v2.0/ tenant-create --name=demo

# Using user + password + tenant_name env variables
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=admin
keystone user-list
keystone tenant-create --name=demo

# Using user + password + tenant_name flags
keystone --username=admin --password=secrete --tenant_name=admin user-list
keystone --username=admin --password=secrete --tenant_name=admin tenant-create --name=demo
1.12. Authentication middleware with user name and password
You can also configure the Identity authentication middleware by using the admin_user and admin_password options. When using the admin_user and admin_password options, the admin_token parameter is optional. If admin_token is specified, it is used only if the specified token is still valid.
For services that have a separate paste-deploy .ini file, you can configure the authentication middleware in the [keystone_authtoken] section of the main configuration file, such as nova.conf. In Compute, for example, you can remove the middleware parameters from api-paste.ini, as follows:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
Then set these values in nova.conf as follows:
[DEFAULT]
...
auth_strategy=keystone

[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_user = admin
admin_password = SuperSekretPassword
admin_tenant_name = service
This sample paste config filter makes use of the admin_user and admin_password options:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_port = 5000
service_host = 127.0.0.1
auth_port = 35357
auth_host = 127.0.0.1
auth_token = 012345SECRET99TOKEN012345
admin_user = admin
admin_password = keystone123
1.13. Identity API protection with role-based access control (RBAC)
Identity stores a reference to a policy JSON file in the main Identity configuration file, keystone.conf. Typically this file is named policy.json, and it contains the rules for which roles have access to certain actions in defined services.
Each Identity API v3 call has a line in the policy file that dictates which level of governance of access applies:
API_NAME: RULE_STATEMENT or MATCH_STATEMENT
where RULE_STATEMENT can contain RULE_STATEMENT or MATCH_STATEMENT.
MATCH_STATEMENT is a set of identifiers that must match between the token provided by the caller of the API and the parameters or target entities of the API call in question. For example:
"identity:create_user": [["role:admin", "domain_id:%(user.domain_id)s"]]
This rule indicates that to create a user, you must have the admin role in your token, and the domain_id in your token (which implies this must be a domain-scoped token) must match the domain_id in the user object that you are trying to create. In other words, you must have the admin role on the domain in which you are creating the user, and the token that you use must be scoped to that domain.
Each component of a match statement uses this format:
ATTRIB_FROM_TOKEN:CONSTANT or ATTRIB_RELATED_TO_API_CALL
The attributes available from the token are: user_id, the domain_id or project_id depending on the scope, and the list of roles you have within that scope.
Attributes related to the API call include any parameters passed into the call and any filters specified in the query string. You reference them by using object.attribute syntax (such as user.domain_id). The target objects of an API are also available using a target.object.attribute syntax. For instance:
"identity:delete_user": [["role:admin", "domain_id:%(target.user.domain_id)s"]]
This rule ensures that Identity only deletes the user object in the same domain as the provided token. The following target object attributes are available:
role:
target.role.id
target.role.name
user:
target.user.default_project_id
target.user.description
target.user.domain_id
target.user.enabled
target.user.id
target.user.name
group:
target.group.description
target.group.domain_id
target.group.id
target.group.name
domain:
target.domain.enabled
target.domain.id
target.domain.name
project:
target.project.description
target.project.domain_id
target.project.enabled
target.project.id
target.project.name
The default policy.json file supplied provides a somewhat basic example of API protection, and does not assume any particular use of domains. Refer to policy.v3cloudsample.json as an example of multi-domain configuration installations where a cloud provider wants to delegate administration of the contents of a domain to a particular admin domain. This example policy file also shows the use of an admin_domain to allow a cloud provider to enable cloud administrators to have wider access across the APIs.
A clean installation could start with the standard policy file, to allow creation of the admin_domain with the first users within it. You could then obtain the domain_id of the admin domain, paste the ID into a modified version of policy.v3cloudsample.json, and then enable it as the main policy file.
1.14. Troubleshoot the Identity service
To troubleshoot the Identity service, review the logs in the /var/log/keystone/keystone.log file.
Use the /etc/keystone/logging.conf file to configure the location of log files.
If you do not see the request in the logs, run keystone with the --debug parameter. Pass the --debug parameter before the command parameters.
1.14.1. Debug PKI middleware
If you receive an Invalid OpenStack Identity Credentials message when you talk to an OpenStack service, it might be caused by the changeover from UUID tokens to PKI tokens in the Grizzly release. This section explains how to troubleshoot the error.
The PKI-based token validation scheme relies on certificates from Identity that are fetched through HTTP and stored in a local directory. The location of this directory is specified by the signing_dir configuration option. In your service's configuration file, look for a section like this:
[keystone_authtoken]
signing_dir = /var/cache/glance/api
auth_uri = http://127.0.0.1:5000/
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
The first thing to check is that the signing_dir does, in fact, exist. If it does, check for the presence of the certificate files inside there:
$ ls -la /var/cache/glance/api/
total 24
drwx------. 2 ayoung root   4096 Jul 22 10:58 .
drwxr-xr-x. 4 root   root   4096 Nov  7  2012 ..
-rw-r-----. 1 ayoung ayoung 1424 Jul 22 10:58 cacert.pem
-rw-r-----. 1 ayoung ayoung   15 Jul 22 10:58 revoked.pem
-rw-r-----. 1 ayoung ayoung 4518 Jul 22 10:58 signing_cert.pem
If these files are not present, your service cannot fetch them from Identity. To troubleshoot, try to talk to Identity to make sure it is serving the certificate files correctly:
$ curl http://localhost:35357/v2.0/certificates/signing
This command fetches the signing certificate:
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=US, ST=Unset, L=Unset, O=Unset, CN=www.example.com
Validity
Not Before: Jul 22 14:57:31 2013 GMT
Not After : Jul 20 14:57:31 2023 GMT
Subject: C=US, ST=Unset, O=Unset, CN=www.example.com
Note the expiration dates of the certificate:
Not Before: Jul 22 14:57:31 2013 GMT
Not After : Jul 20 14:57:31 2023 GMT
To see more detail, set debug = True and verbose = True in your Identity configuration file and restart the Identity server. The Identity log then shows the request:
(keystone.common.wsgi): 2013-07-24 12:18:11,461 DEBUG wsgi __call__
arg_dict: {}
(access): 2013-07-24 12:18:11,462 INFO core __call__ 127.0.0.1 - - [24/Jul/2013:16:18:11 +0000]
"GET http://localhost:35357/v2.0/certificates/signing HTTP/1.0" 200 4518- Your service is configured incorrectly and cannot talk to Identity. Check the
auth_portandauth_hostvalues and make sure that you can talk to that service through cURL, as shown previously. - Your signing directory is not writable. Use the chmod command to change its permissions so that the service (POSIX) user can write to it. Verify the change through su and touch commands.
- The SELinux policy is denying access to the directory.
If you are using a sub-directory of the /var/cache/ directory, run the following command to relabel it:
# restorecon /var/cache/
If you are not using a /var/cache sub-directory, you should. Modify the signing_dir configuration option for your service and restart.
Set back to setenforce enforcing to confirm that your changes solve the problem.
1.14.2. Debug signing key file errors
If an error occurs when the signing key file opens, it is possible that the person who ran the keystone-manage pki_setup command to generate certificates and keys did not use the correct user. When you run keystone-manage pki_setup, Identity generates a set of certificates and keys in /etc/keystone/ssl*, which is owned by root:root.
If you do not run keystone-manage pki_setup with the --keystone-user and --keystone-group parameters, you get an error, as follows:
2012-07-31 11:10:53 ERROR [keystone.common.cms] Error opening signing key file
/etc/keystone/ssl/private/signing_key.pem
140380567730016:error:0200100D:system library:fopen:Permission
denied:bss_file.c:398:fopen('/etc/keystone/ssl/private/signing_key.pem','r')
140380567730016:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
unable to load signing key file
1.14.3. Flush expired tokens from the token database table
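As tokens are generated, the token table on the Identity server grows. One common approach, assuming the keystone-manage utility is available on the Identity host, is to flush expired tokens periodically (for example, from a daily cron job):
# keystone-manage token_flush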
Chapter 2. Dashboard
- Section 2.1, “Customize the dashboard”, for customizing the dashboard.
- Section 2.2, “Set up session storage for the dashboard”, for setting up session storage for the dashboard.
- To launch and manage instances, see the OpenStack dashboard chapter in the Red Hat Enterprise Linux OpenStack Platform 5 End User Guide available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
2.1. Customize the dashboard
The dashboard is installed by the openstack-dashboard package. You can customize the dashboard with your own colors, logo, and site title through a CSS file.
- Create a graphical logo with a transparent background. Use 200×27 for the logged-in banner graphic, and 365×50 for the login screen graphic.
- Set the HTML title, which appears at the top of the browser window, by adding the following line to /etc/openstack-dashboard/local_settings.py:
  SITE_BRANDING = "Example, Inc. Cloud"
- Upload your new graphic files to the following location:
  /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/
- Create a CSS style sheet in the following directory:
  /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/css/
- Change the colors and image file names as appropriate, though the relative directory paths should be the same. The following example file shows you how to customize your CSS file:
  /*
   * New theme colors for dashboard that override the defaults:
   *  dark blue: #355796 / rgb(53, 87, 150)
   *  light blue: #BAD3E1 / rgb(186, 211, 225)
   *
   * By Preston Lee <plee@tgen.org>
   */
  h1.brand {
    background: #355796 repeat-x top left;
    border-bottom: 2px solid #BAD3E1;
  }
  h1.brand a {
    background: url(../img/my_cloud_logo_small.png) top left no-repeat;
  }
  #splash .login {
    background: #355796 url(../img/my_cloud_logo_medium.png) no-repeat center 35px;
  }
  #splash .login .modal-header {
    border-top: 1px solid #BAD3E1;
  }
  .btn-primary {
    background-image: none !important;
    background-color: #355796 !important;
    border: none !important;
    box-shadow: none;
  }
  .btn-primary:hover, .btn-primary:active {
    border: none;
    box-shadow: none;
    background-color: #BAD3E1 !important;
    text-decoration: none;
  }
- Open the following HTML template in an editor:
  /usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html
- Add a line to include your custom.css file:
  ...
  <link href='{{ STATIC_URL }}bootstrap/css/bootstrap.min.css' media='screen' rel='stylesheet' />
  <link href='{{ STATIC_URL }}dashboard/css/{% choose_css %}' media='screen' rel='stylesheet' />
  <link href='{{ STATIC_URL }}dashboard/css/custom.css' media='screen' rel='stylesheet' />
  ...
- Restart the httpd service:
  # service httpd restart
- Reload the dashboard in your browser to view your changes.
2.2. Set up session storage for the dashboard
The dashboard uses the Django sessions framework to handle user session data; you can use any available session back end. You customize the session back end through the SESSION_ENGINE setting in your local_settings file (/etc/openstack-dashboard/local_settings).
2.2.1. Local memory cache
Local memory storage is the quickest and easiest session back end to set up because it has no external dependencies. However, it has two significant drawbacks:
- No shared storage across processes or workers.
- No persistence after a process terminates.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default' : {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
}
}
2.2.2. Key-value stores
2.2.2.1. Memcached
- Memcached service running and accessible.
- Python module python-memcached installed.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'my_memcached_host:11211',
}
}
2.2.2.2. Redis
- Redis service running and accessible.
- Python modules redis and django-redis installed.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
"default": {
"BACKEND": "redis_cache.cache.RedisCache",
"LOCATION": "127.0.0.1:6379:1",
"OPTIONS": {
"CLIENT_CLASS": "redis_cache.client.DefaultClient",
}
}
}
2.2.3. Initialize and configure the database
- Start the mysql command-line client:
  $ mysql -u root -p
- Enter the MySQL root user's password when prompted.
- To configure the MySQL database, create the dash database:
  mysql> CREATE DATABASE dash;
- Create a MySQL user for the newly created dash database that has full control of the database. Replace DASH_DBPASS with a password for the new user:
  mysql> GRANT ALL PRIVILEGES ON dash.* TO 'dash'@'%' IDENTIFIED BY 'DASH_DBPASS';
  mysql> GRANT ALL PRIVILEGES ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'DASH_DBPASS';
- Enter quit at the mysql> prompt to exit MySQL.
- In the local_settings file (/etc/openstack-dashboard/local_settings), change these options:
  SESSION_ENGINE = 'django.contrib.sessions.backends.db'
  DATABASES = {
      'default': {
          # Database configuration here
          'ENGINE': 'django.db.backends.mysql',
          'NAME': 'dash',
          'USER': 'dash',
          'PASSWORD': 'DASH_DBPASS',
          'HOST': 'localhost',
          'default-character-set': 'utf8'
      }
  }
- After configuring the local_settings as shown, you can run the manage.py syncdb command to populate this newly created database:
  $ /usr/share/openstack-dashboard/manage.py syncdb
  As a result, the following output is returned:
  Installing custom SQL ...
  Installing indexes ...
  DEBUG:django.db.backends:(0.008) CREATE INDEX `django_session_c25c2c28` ON `django_session` (`expire_date`);; args=()
  No fixtures found.
- Restart Apache to pick up the default site and symbolic link settings:
  # service httpd restart
  # service apache2 restart
2.2.4. Cached database
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
2.2.5. Cookies
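One option here is Django's signed-cookies back end, which stores session data client side in a cryptographically signed cookie and so avoids any server-side session storage; a minimal sketch for /etc/openstack-dashboard/local_settings:
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"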
Chapter 3. Compute
3.1. System architecture
OpenStack Compute contains several main components:
- The cloud controller represents the global state and interacts with the other components. The API server acts as the web services front end for the cloud controller. The compute controller provides compute server resources and usually also contains the Compute service.
- The object store is an optional component that provides storage services; you can also instead use OpenStack Object Storage.
- An auth manager provides authentication and authorization services when used with the Compute system; you can also instead use OpenStack Identity as a separate authentication service.
- A volume controller provides fast and permanent block-level storage for the compute servers.
- The network controller provides virtual networks to enable compute servers to interact with each other and with the public network. You can also instead use OpenStack Networking.
- The scheduler is used to select the most suitable compute controller to host an instance.
Compute uses a messaging-based, shared nothing architecture. All major components exist on multiple servers, including the compute, volume, and network controllers, and the object store or image service. The state of the entire system is stored in a database. The cloud controller communicates with the internal object store using HTTP, but it communicates with the scheduler, network controller, and volume controller using AMQP (Advanced Message Queuing Protocol). To avoid blocking a component while waiting for a response, Compute uses asynchronous calls, with a callback that is triggered when a response is received.
3.1.1. Hypervisors
3.1.2. Tenants, users, and roles
Users can specify the tenant by appending :project_id to their access key. If no tenant is specified in the API request, Compute attempts to use a tenant with the same ID as the user.
For tenants, quota controls are available to limit the:
- Number of volumes that may be created.
- Number of processor cores and the amount of RAM that can be allocated.
- Floating IP addresses assigned to any instance when it launches. This allows instances to have the same publicly accessible IP addresses.
- Fixed IP addresses assigned to the same instance when it launches. This allows instances to have the same publicly or privately accessible IP addresses.
Roles control the actions that a user is allowed to perform. By default, most actions do not require a particular role, but you can configure them by editing the policy.json file for user roles. For example, a rule can be defined so that a user must have the admin role in order to be able to allocate a public IP address.
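A minimal sketch of such a rule in /etc/nova/policy.json, reusing the compute_extension:floating_ips rule name shown in the policy example earlier in this guide; restricting it to role:admin is an illustrative choice, not the shipped default:
"compute_extension:floating_ips": [
    "role:admin"
],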
Note: Earlier versions of OpenStack used the term project instead of tenant. Because of this legacy terminology, some command-line tools use --project_id where you would normally expect to enter a tenant ID.
3.1.3. Block storage
All default flavors, except m1.tiny, also provide an additional ephemeral block device of between 20 and 160 GB. These sizes can be configured to suit your environment. This is presented as a raw block device with no partition table or file system. Cloud-aware operating system images can discover, format, and mount these storage devices. This is a feature of the guest operating system you are using, and is not an OpenStack mechanism. OpenStack only provisions the raw storage.
3.1.4. EC2 compatibility API
- Euca2ools
- A popular open source command-line tool for interacting with the EC2 API. This is convenient for multi-cloud environments where EC2 is the common API, or for transitioning from EC2-based clouds to OpenStack. For more information, see the euca2ools site.
- Hybridfox
- A Firefox browser add-on that provides a graphical interface to many popular public and private cloud technologies, including OpenStack. For more information, see the hybridfox site.
- boto
- A Python library for interacting with Amazon Web Services. It can be used to access OpenStack through the EC2 compatibility API. For more information, see the boto project page on GitHub.
- fog
- A Ruby cloud services library. It provides methods for interacting with a large number of cloud and virtualization platforms, including OpenStack. For more information, see the fog site.
- php-opencloud
- A PHP SDK designed to work with most OpenStack-based cloud deployments, as well as Rackspace public cloud. For more information, see the php-opencloud site.
3.1.5. Building blocks
$ nova image-list
+--------------------------------------+--------------------------------+--------+--------+
| ID                                   | Name                           | Status | Server |
+--------------------------------------+--------------------------------+--------+--------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Red Hat Enterprise Linux amd64 | ACTIVE |        |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Red Hat Enterprise Linux amd64 | ACTIVE |        |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins                        | ACTIVE |        |
+--------------------------------------+--------------------------------+--------+--------+
- ID - Automatically generated UUID of the image
- Name - Free form, human-readable name for the image
- Status - The status of the image. Images marked ACTIVE are available for use.
- Server - For images that are created as snapshots of running instances, this is the UUID of the instance the snapshot derives from. For uploaded images, this field is blank.
Virtual hardware templates are called flavors. The default installation provides five flavors. By default, these are configurable by admin users; however, that behavior can be changed by redefining the access controls for compute_extension:flavormanage in /etc/nova/policy.json on the compute-api server.
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 1    | N/A       | 0    | 1     |             |
| 2  | m1.small  | 2048      | 20   | N/A       | 0    | 1     |             |
| 3  | m1.medium | 4096      | 40   | N/A       | 0    | 2     |             |
| 4  | m1.large  | 8192      | 80   | N/A       | 0    | 4     |             |
| 5  | m1.xlarge | 16384     | 160  | N/A       | 0    | 8     |             |
+----+-----------+-----------+------+-----------+------+-------+-------------+
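A sketch of redefining the flavor-management access control mentioned above in /etc/nova/policy.json; the flavor-admin role name is hypothetical and would need to exist in Identity and be assigned to the intended users:
"compute_extension:flavormanage": [
    ["role:flavor-admin"]
],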
3.1.6. Compute service architecture
API server
Message queue
Compute worker
- Run instances
- Terminate instances
- Reboot instances
- Attach volumes
- Detach volumes
- Get console output
Network Controller
- Allocate fixed IP addresses
- Configure VLANs for projects
- Configure networks for compute nodes
3.2. Images and instances
When you launch an instance, you must choose a flavor, which represents a set of virtual resources. Flavors define how many virtual CPUs an instance has and the amount of RAM and size of its ephemeral disks. OpenStack provides a number of predefined flavors that you can edit or add to. Users must select from the set of available flavors defined on their cloud.
- For basic usage information about images, refer to the Upload and manage images section in the OpenStack dashboard chapter, in the End User Guide from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
- For more information about image configuration options, refer to the Image Service chapter in the Configuration Reference Guide from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
- For more information about flavors, see Section 3.4.3, “Flavors”.
The example used in this chapter is of a typical virtual system within an OpenStack cloud. It uses the cinder-volume service, which provides persistent block storage, instead of the ephemeral storage provided by the selected instance flavor.
Additionally, the cinder-volume service provides a number of predefined volumes.
Figure 3.1. Base image state with no running instances

To launch an instance, select an image, a flavor, and any optional attributes. The selected flavor provides a root volume, labeled vda in this diagram, and additional ephemeral storage, labeled vdb. In this example, the cinder-volume store is mapped to the third virtual disk on this instance, vdc.
Figure 3.2. Instance creation from image and runtime state

The base image is copied from the image store to the local disk, which is the first disk the instance accesses and is labeled vda. By using smaller images, your instances start up faster as less data needs to be copied across the network.
A new empty disk, labeled vdb, is also created. This is an empty ephemeral disk, which is destroyed when you delete the instance.
The compute node attaches to the requested cinder-volume using iSCSI, which maps to the third disk, vdc. The vCPU and memory resources are provisioned and the instance is booted from vda. The instance runs and changes data on the disks as indicated in red in the diagram.
Note that the storage for vda and vdb could be backed by network storage rather than a local disk.
Figure 3.3. End state of image and volume after instance exits

3.2.1. Image management
- File system
- The OpenStack Image Service stores virtual machine images in the file system back end by default. This simple back end writes image files to the local file system.
- Object Storage service
- The OpenStack highly available service for storing objects.
- S3
- The Amazon S3 service.
- HTTP
- OpenStack Image Service can read virtual machine images that are available on the internet using HTTP. This store is read only.
- Rados block device (RBD)
- Stores images inside of a Ceph storage cluster using Ceph's RBD interface.
- GridFS
- Stores images using MongoDB.
3.2.2. Image property protection
Procedure 3.1. To configure property protection
- Define roles in the policy.json file.
- Define which roles can manage which properties in the /etc/glance/property-protections.conf file.
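A minimal sketch of /etc/glance/property-protections.conf using roles; the x_billing_code property name and the billing role are hypothetical examples:
[x_billing_code]
create = admin,billing
read = admin,billing
update = admin
delete = admin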
3.2.3. Instance building blocks
$ nova image-list
+--------------------------------------+--------------------------------+--------+--------+
| ID                                   | Name                           | Status | Server |
+--------------------------------------+--------------------------------+--------+--------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Red Hat Enterprise Linux amd64 | ACTIVE |        |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Red Hat Enterprise Linux amd64 | ACTIVE |        |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins                        | ACTIVE |        |
+--------------------------------------+--------------------------------+--------+--------+
- ID - Automatically generated UUID of the image.
- Name - Free form, human-readable name for the image.
- Status - The status of the image. Images marked ACTIVE are available for use.
- Server - For images that are created as snapshots of running instances, this is the UUID of the instance the snapshot derives from. For uploaded images, this field is blank.
Virtual hardware templates are called flavors. The default installation provides five flavors. By default, these are configurable by administrative users. However, you can change this behavior by redefining the access controls for compute_extension:flavormanage in /etc/nova/policy.json on the compute-api server.
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 1    | N/A       | 0    | 1     |             |
| 2  | m1.small  | 2048      | 20   | N/A       | 0    | 1     |             |
| 3  | m1.medium | 4096      | 40   | N/A       | 0    | 2     |             |
| 4  | m1.large  | 8192      | 80   | N/A       | 0    | 4     |             |
| 5  | m1.xlarge | 16384     | 160  | N/A       | 0    | 8     |             |
+----+-----------+-----------+------+-----------+------+-------+-------------+
3.2.4. Instance management tools
$ nova --debug list
connect: (10.0.0.15, 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 10.0.0.15:5000\r\nContent-Length: 116\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n{"auth": {"tenantName": "demoproject", "passwordCredentials": {"username": "demouser", "password": "demopassword"}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
header: Vary: X-Auth-Token
header: Date: Thu, 13 Sep 2012 20:27:36 GMT
header: Transfer-Encoding: chunked
connect: (128.52.128.15, 8774)
send: u'GET /v2/fa9dccdeadbeef23ae230969587a14bf/servers/detail HTTP/1.1\r\nHost: 10.0.0.15:8774\r\nx-auth-project-id: demoproject\r\nx-auth-token: deadbeef9998823afecc3d552525c34c\r\naccept-encoding: gzip, deflate\r\naccept: application/json\r\nuser-agent: python-novaclient\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: X-Compute-Request-Id: req-bf313e7d-771a-4c0b-ad08-c5da8161b30f
header: Content-Type: application/json
header: Content-Length: 15
header: Date: Thu, 13 Sep 2012 20:27:36 GMT
3.2.5. Control where instances run
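As one illustration, an administrative user can typically pin an instance to a specific compute node by passing a zone:host pair to nova boot; the image, flavor, zone, and host names below are placeholders:
$ nova boot --image RHEL-6.5 --flavor m1.small \
  --availability-zone nova:compute-host-01 pinned-instance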
3.3. Networking with nova-network
Compute can use the legacy nova-network service for networking between VMs, or the OpenStack Networking service (neutron). To configure Compute networking options with OpenStack Networking, see Chapter 6, Networking.
3.3.1. Networking concepts
Each VM instance is assigned a private IP address. (Currently, Compute with nova-network only supports Linux bridge networking, which enables the virtual interfaces to connect to the outside network through the physical interface.) Compute makes a distinction between fixed IPs and floating IPs. Fixed IPs are IP addresses that are assigned to an instance on creation and stay the same until the instance is explicitly terminated. By contrast, floating IPs are addresses that can be dynamically associated with an instance. A floating IP address can be disassociated and associated with another instance at any time. A user can reserve a floating IP for their project.
The network controller with nova-network provides virtual networks to enable compute servers to interact with each other and with the public network. Compute with nova-network supports the following network modes, which are implemented as “Network Manager” types.
- Flat Network Manager
- In Flat mode, a network administrator specifies a subnet. IP addresses for VM instances are assigned from the subnet, and then injected into the image on launch. Each instance receives a fixed IP address from the pool of available addresses. A system administrator must create the Linux networking bridge (typically named
br100, although this is configurable) on the systems running the nova-network service. All instances of the system are attached to the same bridge, and this is configured manually by the network administrator.
Note: Configuration injection currently only works on Linux-style systems that keep networking configuration in /etc/network/interfaces.
- Flat DHCP Network Manager
- In FlatDHCP mode, OpenStack starts a DHCP server (
dnsmasq) to allocate IP addresses to VM instances from the specified subnet, in addition to manually configuring the networking bridge. IP addresses for VM instances are assigned from a subnet specified by the network administrator.
Like Flat Mode, all instances are attached to a single bridge on the compute node. Additionally, a DHCP server is running to configure instances (depending on single-/multi-host mode, alongside each nova-network). In this mode, Compute does a bit more configuration in that it attempts to bridge into an ethernet device (flat_interface, eth0 by default). For every instance, Compute allocates a fixed IP address and configures dnsmasq with the MAC/IP pair for the VM. Dnsmasq does not take part in the IP address allocation process; it only hands out IPs according to the mapping done by Compute. Instances receive their fixed IPs by doing a dhcpdiscover. These IPs are not assigned to any of the host's network interfaces, only to the VM's guest-side interface.
In any setup with flat networking, the hosts providing the nova-network service are responsible for forwarding traffic from the private network. They also run and configure dnsmasq as a DHCP server listening on this bridge, usually on IP address 10.0.0.1 (see DHCP server: dnsmasq). Compute can determine the NAT entries for each network, although sometimes NAT is not used, such as when configured with all public IPs or a hardware router is used (one of the HA options). Such hosts need to have br100 configured and physically connected to any other nodes that are hosting VMs. You must set the flat_network_bridge option or create networks with the bridge parameter in order to avoid raising an error. Compute nodes have iptables/ebtables entries created for each project and instance to protect against IP/MAC address spoofing and ARP poisoning.
Note: In single-host Flat DHCP mode you will be able to ping VMs through their fixed IP from the nova-network node, but you cannot ping them from the compute nodes. This is expected behavior.
- VLAN Network Manager
- VLANManager mode is the default mode for OpenStack Compute. In this mode, Compute creates a VLAN and bridge for each tenant. For multiple-machine installation, the VLAN Network Mode requires a switch that supports VLAN tagging (IEEE 802.1Q). The tenant gets a range of private IPs that are only accessible from inside the VLAN. In order for a user to access the instances in their tenant, a special VPN instance (code named cloudpipe) needs to be created. Compute generates a certificate and key for the user to access the VPN and starts the VPN automatically. It provides a private network segment for each tenant's instances that can be accessed through a dedicated VPN connection from the Internet. In this mode, each tenant gets its own VLAN, Linux networking bridge, and subnet.The subnets are specified by the network administrator, and are assigned dynamically to a tenant when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the tenant. All instances belonging to one tenant are bridged into the same VLAN for that tenant. OpenStack Compute creates the Linux networking bridges and VLANs when required.
The network driver (mostly l3.py and linux_net.py) makes use of iptables, route, and other network management facilities, and libvirt's network filtering facilities. The driver is not tied to any particular network manager; all network managers use the same driver. The driver usually initializes (creates bridges and so on) only when the first VM lands on this host node.
In single-host mode, a single nova-network service provides a default gateway for VMs and hosts a single DHCP server (dnsmasq). In multi-host mode, each compute node runs its own nova-network service. In both cases, all traffic between VMs and the outside world flows through nova-network.
All network managers automatically create VM virtual interfaces; some network managers can also create network bridges, such as br100.
All machines must have a public and internal network interface (controlled by the public_interface option for the public interface, and flat_interface and vlan_interface for the internal interface with flat or VLAN managers). This guide refers to the public network as the external network and the private network as the internal or tenant network.
$nova network-create vmnet \
  --fixed-range-v4=10.0.0.0/24 --fixed-cidr=10.20.0.0/16 --bridge=br100
- --fixed-range-v4 specifies the network subnet.
- --fixed-cidr specifies a range of fixed IP addresses to allocate, and can be a subset of the --fixed-range-v4 argument.
- --bridge specifies the bridge device to which this network is connected on every compute node.
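To confirm the result, the networks defined for Compute can be listed with the nova client; a brief usage sketch (the exact output columns vary by release):
$nova network-list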
3.3.2. DHCP server: dnsmasq
The nova-network service is responsible for starting dnsmasq processes.
The behavior of dnsmasq can be customized by creating a dnsmasq configuration file. Specify the configuration file using the dnsmasq_config_file configuration option. For example:
dnsmasq_config_file=/etc/dnsmasq-nova.conf
For an example of how to change the behavior of dnsmasq using a dnsmasq configuration file, see the Configuration Reference Guide from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
The dnsmasq documentation also has a more comprehensive dnsmasq configuration file example.
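As an illustration only, a minimal /etc/dnsmasq-nova.conf might set a couple of standard dnsmasq options; the values shown here are examples, not defaults required by Compute:
# Illustrative /etc/dnsmasq-nova.conf
domain=novalocal
dhcp-lease-max=1000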
dnsmasq also acts as a caching DNS server for instances. You can explicitly specify the DNS server that dnsmasq should use by setting the dns_server configuration option in /etc/nova/nova.conf. The following example would configure dnsmasq to use Google's public DNS server:
dns_server=8.8.8.8
Logging output from dnsmasq goes to the /var/log/messages file. dnsmasq logging output can be useful for troubleshooting if VM instances boot successfully but are not reachable over the network.
As an administrator, run nova-manage fixed reserve --address=x.x.x.x to specify the starting point IP address (x.x.x.x) to reserve with the DHCP server. This reservation only affects which IP address the VMs start at, not the fixed IP addresses that the nova-network service places on the bridges.
3.3.3. Configure Compute to use IPv6 addresses
With nova-network, you can put Compute into IPv4/IPv6 dual-stack mode, so that it uses both IPv4 and IPv6 addresses for communication. In IPv4/IPv6 dual-stack mode, instances can acquire their IPv6 global unicast address by using a stateless address auto-configuration mechanism [RFC 4862/2462]. IPv4/IPv6 dual-stack mode works with both VlanManager and FlatDHCPManager networking modes. In VlanManager, each project uses a different 64-bit global routing prefix. In FlatDHCPManager, all instances use one 64-bit global routing prefix.
Every node that runs a nova-* service must have python-netaddr and radvd installed.
Procedure 3.2. Switch into IPv4/IPv6 dual-stack mode
- On all nodes running a
nova-* service, install python-netaddr:
#yum install python-netaddr
- On all
nova-network nodes, install radvd and configure IPv6 networking:
#yum install radvd
#echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
#echo 0 > /proc/sys/net/ipv6/conf/all/accept_ra
- Edit the
nova.conf file on all nodes to specify use_ipv6 = True.
- Restart all
nova-* services.
You can add a fixed range for IPv6 addresses to the nova network-create command. Specify public or private after the network-create parameter.
$nova network-create public --fixed-range-v4 fixed_range_v4 --vlan vlan_id --vpn vpn_start --fixed-range-v6 fixed_range_v6
You can set the IPv6 global routing prefix by using the --fixed_range_v6 parameter. The default value for the parameter is fd00::/48.
- When you use
FlatDHCPManager, the command uses the original --fixed_range_v6 value. For example:
$nova network-create public --fixed-range-v4 10.0.2.0/24 --fixed-range-v6 fd00:1::/48
- When you use
VlanManager, the command increments the subnet ID to create subnet prefixes. Guest VMs use this prefix to generate their IPv6 global unicast address. For example:
$nova network-create public --fixed-range-v4 10.0.1.0/24 --vlan 100 --vpn 1000 --fixed-range-v6 fd00:1::/48
Table 3.1. Description of configuration options for ipv6
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| fixed_range_v6 = fd00::/48 | (StrOpt) Fixed IPv6 address block |
| gateway_v6 = None | (StrOpt) Default IPv6 gateway |
| ipv6_backend = rfc2462 | (StrOpt) Backend to use for IPv6 generation |
| use_ipv6 = False | (BoolOpt) Use IPv6 |
3.3.4. Metadata service
Introduction
Instances access the metadata service at http://169.254.169.254. The metadata service supports two sets of APIs: an OpenStack metadata API and an EC2-compatible API. Each of the APIs is versioned by date.
To retrieve a list of supported versions for the OpenStack metadata API, make a GET request to http://169.254.169.254/openstack. For example:
$curl http://169.254.169.254/openstack
2012-08-10
latest
To list supported versions for the EC2-compatible metadata API, make a GET request to http://169.254.169.254. For example:
$curl http://169.254.169.254
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest
OpenStack metadata API
To retrieve the metadata, make a GET request to http://169.254.169.254/openstack/2012-08-10/meta_data.json. For example:
$curl http://169.254.169.254/openstack/2012-08-10/meta_data.json
{
"uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38",
"availability_zone": "nova",
"hostname": "test.novalocal",
"launch_index": 0,
"meta": {
"priority": "low",
"role": "webserver"
},
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova\n"
},
"name": "test"
}
Instances can also retrieve user data (passed as the user_data parameter in the API call or by the --user_data flag in the nova boot command) through the metadata service, by making a GET request to http://169.254.169.254/openstack/2012-08-10/user_data. For example:
$curl http://169.254.169.254/openstack/2012-08-10/user_data
#!/bin/bash
echo 'Extra user data here'
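The user data itself is supplied when the instance is booted. A brief sketch with the nova client (the script, image, and flavor names here are placeholders):
$nova boot --user-data ./myscript.sh --image cirros --flavor m1.tiny test-userdata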
EC2 metadata API
A listing of the EC2 metadata elements can be retrieved by making a GET query to http://169.254.169.254/2009-04-04/meta-data/. For example:
$curl http://169.254.169.254/2009-04-04/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
$curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/
ami
$curl http://169.254.169.254/2009-04-04/meta-data/placement/
availability-zone
$curl http://169.254.169.254/2009-04-04/meta-data/public-keys/
0=mykey
Instances can retrieve the public SSH key (identified by keypair name when a user requests a new instance) by making a GET request to http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key. For example:
$curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDYVEprvtYJXVOBN0XNKVVRNCRX6BlnNbI+USLGais1sUWPwtSg7z9K9vhbYAPUZcq8c/s5S9dg5vTHbsiyPCIDOKyeHba4MUJq8Oh5b2i71/3BISpyxTBH/uZDHdslW2a+SrPDCeuMMoss9NFhBdKtDkdG9zyi0ibmCP6yMdEX8Q== Generated by Nova
Instances can retrieve user data by making a GET request to http://169.254.169.254/2009-04-04/user-data. For example:
$curl http://169.254.169.254/2009-04-04/user-data
#!/bin/bash
echo 'Extra user data here'
Run the metadata service
The metadata service is implemented by either the nova-api service or the nova-api-metadata service. (The nova-api-metadata service is generally only used when running in multi-host mode; it retrieves instance-specific metadata.) If you are running the nova-api service, you must have metadata as one of the elements of the list in the enabled_apis configuration option in /etc/nova/nova.conf. The default enabled_apis configuration setting includes the metadata service, so you should not need to modify it.
Instances access the metadata service at 169.254.169.254:80, and this is translated to metadata_host:metadata_port by an iptables rule established by the nova-network service. In multi-host mode, you can set metadata_host to 127.0.0.1.
The nova-network service configures iptables to NAT port 80 of the 169.254.169.254 address to the IP address specified in metadata_host (default $my_ip, which is the IP address of the nova-network service) and the port specified in metadata_port (default 8775) in /etc/nova/nova.conf.
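Conceptually, the NAT rule that nova-network installs resembles the following; this is only a sketch, in which 192.168.1.10 stands in for metadata_host, 8775 is the default metadata_port, and the chain itself is managed by Compute:
-A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.10:8775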
The metadata_host configuration option must be an IP address, not a host name.
The default Compute service settings assume that the nova-network service and the nova-api service are running on the same host. If this is not the case, you must make this change in the /etc/nova/nova.conf file on the host running the nova-network service:
Set the metadata_host configuration option to the IP address of the host where the nova-api service runs.
Table 3.2. Description of configuration options for metadata
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| metadata_host = $my_ip | (StrOpt) The IP address for the metadata API server |
| metadata_listen = 0.0.0.0 | (StrOpt) The IP address on which the metadata API will listen. |
| metadata_listen_port = 8775 | (IntOpt) The port on which the metadata API will listen. |
| metadata_manager = nova.api.manager.MetadataManager | (StrOpt) OpenStack metadata service manager |
| metadata_port = 8775 | (IntOpt) The port for the metadata API port |
| metadata_workers = None | (IntOpt) Number of workers for metadata service. The default will be the number of CPUs available. |
| vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData | (StrOpt) Driver to use for vendor data |
| vendordata_jsonfile_path = None | (StrOpt) File to load json formatted vendor data from |
3.3.5. Enable ping and SSH on VMs
Run these commands as root only if the credentials used to interact with nova-api are in /root/.bashrc. If the EC2 credentials are in the .bashrc file for another user, you must run these commands as that user.
Enable ping and SSH with nova commands:
$nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
Enable ping and SSH with euca2ools:
$euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 default
$euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
If you have run these commands and still cannot ping or SSH your instances, check the number of dnsmasq processes that are running. If you have a running instance, check to see that TWO dnsmasq processes are running. If not, perform the following commands as root:
#killall dnsmasq
#service nova-network restart
3.3.6. Configure public (floating) IP addresses
If you use nova-network instead of OpenStack Networking (neutron) for networking in OpenStack, use the procedures in this section to configure floating IP addresses. For instructions on how to configure OpenStack Networking (neutron) to provide access to instances through floating IP addresses, see Section 6.8.2, “L3 routing and NAT”.
3.3.6.1. Private and public IP addresses
Edit the /etc/nova/nova.conf file to specify the interface to which the nova-network service binds public IP addresses, as follows:
public_interface=vlan100
If you make changes to the /etc/nova/nova.conf file while the nova-network service is running, you must restart the service.
dmz_cidr=x.x.x.x/y
The x.x.x.x/y value specifies the range of floating IPs for each pool of floating IPs that you define. If the VMs in the source group have floating IPs, this configuration is also required.
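For example, if one of your floating IP pools covered 68.99.26.160/27 (an illustrative range, not a recommendation), the corresponding setting would be:
dmz_cidr=68.99.26.160/27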
3.3.6.2. Enable IP forwarding
IP forwarding only needs to be enabled on the nodes that run the nova-network service. If you use multi_host mode, ensure that you enable it on all compute nodes. Otherwise, enable it only on the node that runs the nova-network service.
To check whether IP forwarding is enabled:
$cat /proc/sys/net/ipv4/ip_forward
0
Alternatively, run:
$sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
In this example, IP forwarding is disabled. To enable it dynamically, run one of the following commands as root:
#sysctl -w net.ipv4.ip_forward=1
#echo 1 > /proc/sys/net/ipv4/ip_forward
To make the change permanent, edit the /etc/sysctl.conf file and update the IP forwarding setting:
net.ipv4.ip_forward = 1
#sysctl -p
#service network restart
3.3.6.3. Create a list of available floating IP addresses
#nova-manage floating create --pool=nova --ip_range=68.99.26.170/31
- #nova-manage floating list: Lists the floating IP addresses in the pool.
- #nova-manage floating create --pool=[pool name] --ip_range=[CIDR]: Creates specific floating IPs for either a single address or a subnet.
- #nova-manage floating delete [CIDR]: Removes floating IP addresses using the same parameters as the create command.
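Once the pool has been populated, users can allocate an address and attach it to an instance with the nova client. A brief usage sketch (the instance name and address here are placeholders):
$nova floating-ip-create nova
$nova add-floating-ip test-vm1 68.99.26.170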
3.3.6.4. Automatically add floating IPs
You can configure the nova-network service to automatically allocate and assign a floating IP address to virtual instances when they are launched. Add the following line to the /etc/nova/nova.conf file and restart the nova-network service:
auto_assign_floating_ip=True
3.3.7. Remove a network from a project
#nova-manage project scrub --project=<id>
3.3.8. Multiple interfaces for your instances (multinic)
The multinic feature allows you to use more than one interface with your instances. This is useful in several scenarios:
- SSL configurations (VIPs)
- Services failover/HA
- Bandwidth allocation
- Administrative/public access to your instances
Figure 3.4. multinic flat manager

Figure 3.5. multinic flatdhcp manager

Figure 3.6. multinic VLAN manager

3.3.8.1. Use the multinic feature
In order to use the multinic feature, first create two networks, and attach them to your tenant:
$nova network-create first-net --fixed-range-v4=20.20.0.0/24 --project-id=$your-project
$nova network-create second-net --fixed-range-v4=20.20.10.0/24 --project-id=$your-project
Now every time you spawn a new instance, it gets two IP addresses from the respective DHCP servers:
$nova list
+-----+------------+--------+----------------------------------------+
| ID  | Name       | Status | Networks                               |
+-----+------------+--------+----------------------------------------+
| 124 | Server 124 | ACTIVE | network2=20.20.0.3; private=20.20.10.14|
+-----+------------+--------+----------------------------------------+
Make sure you start the second interface on the instance, or it will not be reachable through its second IP address. The following example, applied inside the image, brings up both interfaces in the /etc/network/interfaces file:
# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp
You can specify which network each interface attaches to by using the --nic flag when invoking the nova boot command:
$nova boot --image ed8b2a37-5535-4a5f-a615-443513036d71 --flavor 1 --nic net-id=<id of first network> --nic net-id=<id of second network> test-vm1
3.3.9. Troubleshoot Networking
Cannot reach floating IPs
- Ensure the default security group allows ICMP (ping) and SSH (port 22), so that you can reach the instances:
$nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
- Ensure the NAT rules have been added to
iptables on the node that nova-network is running on, as root:
#iptables -L -nv -t nat
-A nova-network-PREROUTING -d 68.99.26.170/32 -j DNAT --to-destination 10.0.0.3
-A nova-network-floating-snat -s 10.0.0.3/32 -j SNAT --to-source 68.99.26.170
- Check that the public address, in this example "68.99.26.170", has been added to your public interface. You should see the address in the listing when you enter "ip addr" at the command prompt.
$ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether xx:xx:xx:17:4b:c2 brd ff:ff:ff:ff:ff:ff
    inet 13.22.194.80/24 brd 13.22.194.255 scope global eth0
    inet 68.99.26.170/32 scope global eth0
    inet6 fe80::82b:2bf:fe1:4b2/64 scope link
    valid_lft forever preferred_lft forever
Note that you cannot SSH to an instance with a public IP from within the same server, as the routing configuration won't allow it.
- You can use tcpdump to identify if packets are being routed to the inbound interface on the compute host. If the packets are reaching the compute hosts but the connection is failing, the issue may be that the packet is being dropped by reverse path filtering. Try disabling reverse-path filtering on the inbound interface. For example, if the inbound interface is
eth2, as root, run:
#sysctl -w net.ipv4.conf.eth2.rp_filter=0
If this solves your issue, add the following line to /etc/sysctl.conf so that the reverse-path filter is disabled the next time the compute host reboots:
net.ipv4.conf.eth2.rp_filter=0
Disable firewall
To help debug networking issues when reaching VMs, you can disable the firewall on the nova compute node by setting the following option in /etc/nova/nova.conf:
firewall_driver=nova.virt.firewall.NoopFirewallDriver
Packet loss from instances to nova-network server (VLANManager mode)
Packet loss can be caused by Linux networking configuration settings related to bridges. Certain settings can cause packets to be dropped between the VLAN interface (for example, vlan100) and the associated bridge interface (for example, br100) on the host running the nova-network service.
- In the first terminal, on the host running nova-network, use tcpdump on the VLAN interface to monitor DNS-related traffic (UDP, port 53). As root, run:
#tcpdump -K -p -i vlan100 -v -vv udp port 53
- In the second terminal, also on the host running nova-network, use tcpdump to monitor DNS-related traffic on the bridge interface. As root, run:
#tcpdump -K -p -i br100 -v -vv udp port 53
- In the third terminal, SSH inside of the instance and generate DNS requests by using the nslookup command:
$nslookup www.google.com
The symptoms may be intermittent, so try running nslookup multiple times. If the network configuration is correct, the command should return immediately each time. If it is not functioning properly, the command hangs for several seconds.
- If the nslookup command sometimes hangs, and there are packets that appear in the first terminal but not the second, then the problem may be due to filtering done on the bridges. Try to disable filtering; run the following commands as root:
#sysctl -w net.bridge.bridge-nf-call-arptables=0
#sysctl -w net.bridge.bridge-nf-call-iptables=0
#sysctl -w net.bridge.bridge-nf-call-ip6tables=0
If this solves your issue, add the following lines to /etc/sysctl.conf so that these changes take effect the next time the host reboots:
net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0
3.4. System administration
Compute consists of services with names of the form nova-* that reside persistently on the host machine or machines. These binaries can all run on the same machine or be spread out on multiple boxes in a large deployment. The responsibilities of services and drivers are:
- Services:
- nova-api. Receives XML requests and sends them to the rest of the system. It is a WSGI app that routes and authenticates requests. It supports the EC2 and OpenStack APIs. There is a nova-api.conf file created when you install Compute.
- nova-cert. Provides the certificate manager.
- nova-compute. Responsible for managing virtual machines. It loads a Service object which exposes the public methods on ComputeManager through Remote Procedure Call (RPC).
- nova-conductor. Provides database-access support for Compute nodes (thereby reducing security risks).
- nova-consoleauth. Handles console authentication.
- nova-objectstore. An ultra simple file-based storage system for images that replicates most of the S3 API. It can be replaced with the OpenStack Image Service and a simple image manager, or use OpenStack Object Storage as the virtual machine image storage facility. It must reside on the same node as nova-compute.
- nova-network. Responsible for managing floating and fixed IPs, DHCP, bridging and VLANs. It loads a Service object which exposes the public methods on one of the subclasses of NetworkManager. Different networking strategies are available to the service by changing the network_manager configuration option to FlatManager, FlatDHCPManager, or VlanManager (default is VLAN if no other is specified).
- nova-scheduler. Dispatches requests for new virtual machines to the correct node.
- nova-novncproxy. Provides a VNC proxy for browsers (enabling VNC consoles to access virtual machines).
- Some services have drivers that change how the service implements the core of its functionality. For example, the
nova-compute service supports drivers that let you choose with which hypervisor type it will talk. nova-network and nova-scheduler also have drivers.
3.4.1. Manage Compute users
3.4.2. Manage Volumes
volume-attach Attach a volume to a server.
volume-create Add a new volume.
volume-delete Remove a volume.
volume-detach Detach a volume from a server.
volume-list List all the volumes.
volume-show Show details about a volume.
volume-snapshot-create Add a new snapshot.
volume-snapshot-delete Remove a snapshot.
volume-snapshot-list List all the snapshots.
volume-snapshot-show Show details about a snapshot.
volume-type-create Create a new volume type.
volume-type-delete Delete a specific volume type.
volume-type-list Print a list of available 'volume types'.
volume-update Update an attached volume.
$nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 1af4cb93-d4c6-4ee3-89a0-4b7885a3337e | available | PerfBlock    | 1    | Performance |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
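For example, to attach the volume listed above to a running server, a command of the following form could be used; SERVER_NAME and the device path are placeholders, not values from this environment:
$nova volume-attach SERVER_NAME 1af4cb93-d4c6-4ee3-89a0-4b7885a3337e /dev/vdb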
3.4.3. Flavors
$nova help | grep flavor-
flavor-access-add     Add flavor access for the given tenant.
flavor-access-list    Print access information about the given flavor.
flavor-access-remove  Remove flavor access for the given tenant.
flavor-create         Create a new flavor
flavor-delete         Delete a specific flavor
flavor-key            Set or unset extra_spec for a flavor.
flavor-list           Print a list of available 'flavors' (sizes of servers).
flavor-show           Show details about the given flavor.
- Configuration rights can be delegated to additional users by redefining the access controls for
compute_extension:flavormanage in /etc/nova/policy.json on the nova-api server.
- To modify an existing flavor in the dashboard, you must delete the flavor and create a modified one with the same name.
Table 3.3. Flavor parameters
| Element | Description |
|---|---|
| Name | A descriptive name. XX.SIZE_NAME is typically not required, though some third-party tools may rely on it. |
| Memory_MB | Virtual machine memory in megabytes. |
| Disk | Virtual root disk size in gigabytes. This is an ephemeral disk that the base image is copied into. When booting from a persistent volume it is not used. The "0" size is a special case which uses the native base image size as the size of the ephemeral root volume. |
| Ephemeral | Specifies the size of a secondary ephemeral data disk. This is an empty, unformatted disk and exists only for the life of the instance. |
| Swap | Optional swap space allocation for the instance. |
| VCPUs | Number of virtual CPUs presented to the instance. |
| RXTX_Factor | Optional property that allows created servers to have a different bandwidth cap than that defined in the network they are attached to. This factor is multiplied by the rxtx_base property of the network. Default value is 1.0 (that is, the same as the attached network). |
| Is_Public | Boolean value that defines whether the flavor is available to all users or private to the tenant it was created in. Defaults to True. |
| extra_specs | Key and value pairs that define on which compute nodes a flavor can run. These pairs must match corresponding pairs on the compute nodes. Use to implement special resources, such as flavors that run only on compute nodes with GPU hardware. |
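As a usage sketch, a flavor built from these parameters could be created as follows; the name and sizes are illustrative, and auto asks Compute to generate the flavor ID:
$nova flavor-create m1.custom auto 4096 40 2
Here 4096 is Memory_MB, 40 is the root Disk in GB, and 2 is VCPUs.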
The libvirt driver enables quotas on CPUs available to a VM, disk tuning, bandwidth I/O, watchdog behavior, random-number generator device control, and instance VIF traffic control. These limits are set as flavor extra_specs, as shown in the following examples.
- CPU limits
- You can configure the CPU limits with control parameters with the nova client. For example, to configure the I/O limit, use:
$nova flavor-key m1.small set quota:read_bytes_sec=10240000
$nova flavor-key m1.small set quota:write_bytes_sec=10240000
There are optional CPU control parameters for weight shares, enforcement intervals for runtime quotas, and a quota for maximum allowed bandwidth:
- cpu_shares specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.
- cpu_period specifies the enforcement interval (unit: microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more than the quota worth of runtime. The value should be in range [1000, 1000000]. A period with value 0 means no value.
- cpu_quota specifies the maximum allowed bandwidth (unit: microseconds). A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed. For example:
$nova flavor-key m1.low_cpu set quota:cpu_quota=10000
$nova flavor-key m1.low_cpu set quota:cpu_period=20000
In this example, the instance of m1.low_cpu can only consume a maximum of 50% CPU of a physical CPU computing capability.
- Disk tuning
- Using disk I/O quotas, you can set maximum disk write to 10 MB per second for a VM user. For example:
$nova flavor-key m1.medium set quota:disk_write_bytes_sec=10485760
The disk I/O options are:
- disk_read_bytes_sec
- disk_read_iops_sec
- disk_write_bytes_sec
- disk_write_iops_sec
- disk_total_bytes_sec
- disk_total_iops_sec
The vif I/O options are:
- vif_inbound_average
- vif_inbound_burst
- vif_inbound_peak
- vif_outbound_average
- vif_outbound_burst
- vif_outbound_peak
- Bandwidth I/O
- Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most one inbound and at most one outbound child element. If you leave any of these child elements out, no quality of service (QoS) is applied on that traffic direction. So, if you want to shape only the network's incoming traffic, use inbound only (and vice versa). Each element has one mandatory attribute, average, which specifies the average bit rate on the interface being shaped. There are also two optional attributes (integer):
peak, which specifies the maximum rate at which the bridge can send data (kilobytes/second), and burst, the amount of bytes that can be burst at peak speed (kilobytes). The rate is shared equally within domains connected to the network.
The following example configures a bandwidth limit for instance network traffic:
$nova flavor-key m1.small set quota:vif_inbound_average=10240
$nova flavor-key m1.small set quota:vif_outbound_average=10240
- Watchdog behavior
- For the
libvirt driver, you can enable and set the behavior of a virtual hardware watchdog device for each flavor. Watchdog devices keep an eye on the guest server, and carry out the configured action if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled.
To set the behavior, use:
$nova flavor-key FLAVOR-NAME set hw_watchdog_action=ACTION
Valid ACTION values are:
- disabled: (default) The device is not attached.
- reset: Forcefully reset the guest.
- poweroff: Forcefully power off the guest.
- pause: Pause the guest.
- none: Only enable the watchdog; do nothing if the server hangs.
Note: Watchdog behavior set using a specific image's properties will override behavior set using flavors.
- Random-number generator
- If a random-number generator device has been added to the instance through its image properties, the device can be enabled and configured using:
$nova flavor-key FLAVOR-NAME set hw_rng:allowed=True
$nova flavor-key FLAVOR-NAME set hw_rng:rate_bytes=RATE-BYTES
$nova flavor-key FLAVOR-NAME set hw_rng:rate_period=RATE-PERIOD
Where:
- RATE-BYTES: (Integer) Allowed amount of bytes that the guest can read from the host's entropy per period.
- RATE-PERIOD: (Integer) Duration of the read period in seconds.
- Instance VIF traffic control
- Use the vif_* quota keys listed above to shape instance network interface traffic.
- Project private flavors
- Flavors can also be assigned to particular projects. By default, a flavor is public and available to all projects. Private flavors are only accessible to those on the access list and are invisible to other projects. To create and assign a private flavor to a project, run these commands:
$nova flavor-create --is-public false p1.medium auto 512 40 4
$nova flavor-access-add 259d06a0-ba6d-4e60-b42d-ab3144411d58 86f94150ed744e08be565c2ff608eef9
3.4.4. Compute service node firewall requirements
Console connections for virtual machines, whether direct or through a proxy, are received on ports 5900 to 5999. You must configure the firewall on each Compute service node to enable network traffic on these ports.
Procedure 3.3. Configure the service-node firewall
- On the server that hosts the Compute service, log in as
root. - Edit the
/etc/sysconfig/iptablesfile. - Add an INPUT rule that allows TCP traffic on ports that range from
5900 to 5999:
-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
The new rule must appear before any INPUT rules that REJECT traffic. - Save the changes to the
/etc/sysconfig/iptablesfile. - Restart the
iptables service to ensure that the change takes effect:
$service iptables restart
The iptables firewall now enables incoming connections to the Compute services. Repeat this process for each Compute service node.
3.4.5. Inject administrator password
Compute can generate a random administrator (root) password and inject that password into an instance. If this feature is enabled, a user can ssh to an instance without an ssh keypair. The random password appears in the output of the nova boot command. You can also view and set the admin password from the dashboard.
Dashboard
The dashboard is configured by default to display the admin password and allow the user to modify it.
If you do not want to support password injection, disable the password fields by editing the dashboard's local_settings file, /etc/openstack-dashboard/local_settings:
OPENSTACK_HYPERVISOR_FEATURES = {
...
'can_set_password': False,
}
Libvirt-based hypervisors (KVM, QEMU, LXC)
For hypervisors that use the libvirt back end (such as KVM, QEMU, and LXC), admin password injection is disabled by default. To enable it, set the following option in /etc/nova/nova.conf:
[libvirt]
inject_password=true
When enabled, Compute modifies the password of the admin account by editing the /etc/shadow file inside of the virtual machine instance.
Users can only ssh to the instance by using the admin password if:
- The virtual machine image is a Linux distribution
- The virtual machine has been configured to allow users to ssh as the root user.
Windows images (all hypervisors)
To support the admin password for Windows virtual machines, you must configure the Windows image to retrieve the admin password on boot by installing an agent such as cloudbase-init.
3.4.6. Manage the cloud
Procedure 3.4. To use the nova client
- Installing the python-novaclient package gives you a
nova shell command that enables Compute API interactions from the command line. Install the client, provide your user name and password (typically set as environment variables for convenience), and you can then send commands to your cloud from the command line.
To install python-novaclient, download the tarball from http://pypi.python.org/pypi/python-novaclient/2.6.3#downloads and then install it in your favorite python environment.
$curl -O http://pypi.python.org/packages/source/p/python-novaclient/python-novaclient-2.6.3.tar.gz
$tar -zxvf python-novaclient-2.6.3.tar.gz
$cd python-novaclient-2.6.3
As root, execute:
#python setup.py install
- Confirm the installation by running:
$nova help
usage: nova [--version] [--debug] [--os-cache] [--timings]
            [--timeout <seconds>] [--os-username <auth-user-name>]
            [--os-password <auth-password>]
            [--os-tenant-name <auth-tenant-name>]
            [--os-tenant-id <auth-tenant-id>] [--os-auth-url <auth-url>]
            [--os-region-name <region-name>] [--os-auth-system <auth-system>]
            [--service-type <service-type>] [--service-name <service-name>]
            [--volume-service-name <volume-service-name>]
            [--endpoint-type <endpoint-type>]
            [--os-compute-api-version <compute-api-ver>]
            [--os-cacert <ca-certificate>] [--insecure]
            [--bypass-url <bypass-url>]
            <subcommand> ...
Note: This command returns a list of nova commands and parameters. To obtain help for a subcommand, run:
$nova help subcommand
For a complete listing of nova commands and parameters, see the Command-line Interface Reference Guide from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
- Set the required parameters as environment variables to make running commands easier. For example, you can add
--os-username as a nova option, or set it as an environment variable. To set the user name, password, and tenant as environment variables, use:
$export OS_USERNAME=joecool
$export OS_PASSWORD=coolword
$export OS_TENANT_NAME=coolu
- Using the Identity Service, you are supplied with an authentication endpoint, which Compute recognizes as the
OS_AUTH_URL.
$export OS_AUTH_URL=http://hostname:5000/v2.0
$export NOVA_VERSION=1.1
3.4.6.1. Use the euca2ools commands
3.4.6.2. Show usage statistics for hosts and instances
3.4.6.2.1. Show host usage statistics
The following examples show the host usage statistics for a host called devstack.
- List the hosts and the nova-related services that run on them:
$nova host-list
+-----------+-------------+----------+
| host_name | service     | zone     |
+-----------+-------------+----------+
| devstack  | conductor   | internal |
| devstack  | compute     | nova     |
| devstack  | cert        | internal |
| devstack  | network     | internal |
| devstack  | scheduler   | internal |
| devstack  | consoleauth | internal |
+-----------+-------------+----------+
- Get a summary of resource usage of all of the instances running on the host:
$nova host-describe devstack
+----------+----------------------------------+-----+-----------+---------+
| HOST     | PROJECT                          | cpu | memory_mb | disk_gb |
+----------+----------------------------------+-----+-----------+---------+
| devstack | (total)                          | 2   | 4003      | 157     |
| devstack | (used_now)                       | 3   | 5120      | 40      |
| devstack | (used_max)                       | 3   | 4608      | 40      |
| devstack | b70d90d65e464582b6b2161cf3603ced | 1   | 512       | 0       |
| devstack | 66265572db174a7aa66eba661f58eb9e | 2   | 4096      | 40      |
+----------+----------------------------------+-----+-----------+---------+
The cpu column shows the sum of the virtual CPUs for instances running on the host.
The memory_mb column shows the sum of the memory (in MB) allocated to the instances that run on the host.
The disk_gb column shows the sum of the root and ephemeral disk sizes (in GB) of the instances that run on the host.
The row that has the value used_now in the PROJECT column shows the sum of the resources allocated to the instances that run on the host, plus the resources allocated to the virtual machine of the host itself.
The row that has the value used_max in the PROJECT column shows the sum of the resources allocated to the instances that run on the host.
3.4.6.2.2. Show instance usage statistics
- Get CPU, memory, I/O, and network statistics for an instance.
- List instances:
$nova list
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks         |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
| 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
| 8a99547e-7385-4ad1-ae50-4ecfaaad5f42 | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
+--------------------------------------+----------------------+--------+------------+-------------+------------------+
- Get diagnostic statistics:
$nova diagnostics myCirrosServer
+------------------+----------------+
| Property         | Value          |
+------------------+----------------+
| vnet1_rx         | 1210744        |
| cpu0_time        | 19624610000000 |
| vda_read         | 0              |
| vda_write        | 0              |
| vda_write_req    | 0              |
| vnet1_tx         | 863734         |
| vnet1_tx_errors  | 0              |
| vnet1_rx_drop    | 0              |
| vnet1_tx_packets | 3855           |
| vnet1_tx_drop    | 0              |
| vnet1_rx_errors  | 0              |
| memory           | 2097152        |
| vnet1_rx_packets | 5485           |
| vda_read_req     | 0              |
| vda_errors       | -1             |
+------------------+----------------+
- Get summary statistics for each tenant:
$nova usage-list
Usage from 2013-06-25 to 2013-07-24:
+----------------------------------+-----------+--------------+-----------+---------------+
| Tenant ID                        | Instances | RAM MB-Hours | CPU Hours | Disk GB-Hours |
+----------------------------------+-----------+--------------+-----------+---------------+
| b70d90d65e464582b6b2161cf3603ced | 1         | 344064.44    | 672.00    | 0.00          |
| 66265572db174a7aa66eba661f58eb9e | 3         | 671626.76    | 327.94    | 6558.86       |
+----------------------------------+-----------+--------------+-----------+---------------+
3.4.7. Manage logs
Logging module
To customize logging, specify a logging configuration file in the /etc/nova/nova.conf file. To change the logging level (such as DEBUG, INFO, WARNING, or ERROR), use:
log-config=/etc/nova/logging.conf
The logging configuration file is an INI-style configuration file, which must contain a section called logger_nova. This section controls the behavior of the logging facility in the nova-* services. For example:
[logger_nova]
level = INFO
handlers = stderr
qualname = nova
This example sets the logging level of the nova services to INFO (which is less verbose than the default DEBUG setting).
- For more details on the logging configuration syntax, including the meaning of the
handlers and qualname variables, see the Python documentation on the logging configuration file format.
- For an example
logging.conf file with various defined handlers, see the Configuration Reference.
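As a rough sketch only (not taken from the Configuration Reference), a complete logging.conf must also declare the loggers, handlers, and formatters that it references; for example:
[loggers]
keys = root, nova

[handlers]
keys = stderr

[formatters]
keys = default

[logger_root]
level = WARNING
handlers = stderr

[logger_nova]
level = INFO
handlers = stderr
qualname = nova

[handler_stderr]
class = StreamHandler
args = (sys.stderr,)
formatter = default

[formatter_default]
format = %(asctime)s %(levelname)s %(name)s %(message)s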
Syslog
You can configure the OpenStack services to send logging information to syslog. This is useful if you want to use rsyslog, which forwards the logs to a remote machine. You need to separately configure the Compute service (nova), the Identity service (keystone), the Image Service (glance), and, if you are using it, the Block Storage service (cinder) to send log messages to syslog. To do so, add the following lines to:
- /etc/nova/nova.conf
- /etc/keystone/keystone.conf
- /etc/glance/glance-api.conf
- /etc/glance/glance-registry.conf
- /etc/cinder/cinder.conf
verbose = False
debug = False
use_syslog = True
syslog_log_facility = LOG_LOCAL0
In addition to enabling syslog, these settings also turn off more verbose output and debugging output from the log.
Although this example uses the same local facility for each service (LOG_LOCAL0, which corresponds to syslog facility LOCAL0), we recommend that you configure a separate local facility for each service, as this provides better isolation and more flexibility. For example, you may want to capture logging information at different severity levels for different services. syslog allows you to define up to eight local facilities, LOCAL0, LOCAL1, ..., LOCAL7. For more details, see the syslog documentation.
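As an illustrative split (not a requirement), each service's configuration file could point at its own facility:
# nova.conf
syslog_log_facility = LOG_LOCAL0
# glance-api.conf and glance-registry.conf
syslog_log_facility = LOG_LOCAL1
# cinder.conf
syslog_log_facility = LOG_LOCAL2
# keystone.conf
syslog_log_facility = LOG_LOCAL3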
Rsyslog
rsyslog is a useful tool for setting up a centralized log server across multiple machines. We briefly describe the configuration to set up an rsyslog server; a full treatment of rsyslog is beyond the scope of this document. We assume rsyslog has already been installed on your hosts.
This example provides a minimal configuration for /etc/rsyslog.conf on the log server host, which receives the log files:
# provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 1024
Add a filter rule to /etc/rsyslog.conf that looks for a host name. The example below uses compute-01 as an example of a compute host name:
:hostname, isequal, "compute-01" /mnt/rsyslog/logs/compute-01.log
On each compute host, create a file named /etc/rsyslog.d/60-nova.conf, with the following content:
# prevent debug from dnsmasq with the daemon.none parameter
*.*;auth,authpriv.none,daemon.none,local0.none -/var/log/syslog
# Specify a log level of ERROR
local0.error @@172.20.1.43:1024
Once you have created this file, restart the rsyslog daemon. Error-level log messages on the compute hosts should now be sent to your log server.
3.4.8. Secure with root wrappers
Historically, Compute used a specific sudoers file that listed every command that the Compute user was allowed to run, and used sudo to run that command as root. However, this was difficult to maintain (the sudoers file was in packaging), and did not enable complex filtering of parameters (advanced filters). The rootwrap was designed to solve those issues.
How rootwrap works
no_root_squash option enabled.
Security model
Details of rootwrap.conf
Configure rootwrap in the rootwrap.conf file. Because it is in the trusted security path, it must be owned and writable by only the root user. The file's location is specified both in the sudoers entry and in the nova.conf configuration file with the rootwrap_config= entry.
The rootwrap.conf file uses an INI file format with these sections and parameters:
Table 3.4. rootwrap.conf configuration options
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| filters_path = /etc/nova/rootwrap.d,/usr/share/nova/rootwrap | (ListOpt) Comma-separated list of directories containing filter definition files. Defines where filters for rootwrap are stored. Directories defined on this line should all exist, and be owned and writable only by the root user. |
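Putting the table above together, a minimal rootwrap.conf is simply the [DEFAULT] section with filters_path set; the paths shown here are the defaults from the table, not additional requirements:
[DEFAULT]
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap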
Details of .filters files
Filter definition files contain lists of filters that rootwrap will use to allow or deny a specific command. They are generally suffixed by .filters. Since they are in the trusted security path, they need to be owned and writable only by the root user. Their location is specified in the rootwrap.conf file.
Table 3.5. .filters configuration options
| Configuration option = Default value | Description |
|---|---|
| [Filters] | |
| filter_name = kpartx: CommandFilter, /sbin/kpartx, root | (ListOpt) Comma-separated list containing, first, the Filter class to use, followed by that Filter's arguments (which vary depending on the Filter class selected). |
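As a sketch, a filter definition file uses the same INI layout; the kpartx entry mirrors the example in the table above, and any additional command filters would be listed the same way:
[Filters]
kpartx: CommandFilter, /sbin/kpartx, root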
3.4.9. Configure migrations using KVM
- Migration (or non-live migration). The instance is shut down (and the instance knows that it was rebooted) for a period of time to be moved to another hypervisor.
- Live migration (or true live migration). Almost no instance downtime. Useful when the instances must be kept running during the migration. The types of live migration are listed below. The following procedures describe how to configure your hosts and compute nodes for migration using the KVM hypervisor.
- Shared storage-based live migration. Both hypervisors have access to shared storage.
- Block live migration. No shared storage is required. Incompatible with read-only devices such as CD-ROMs and Configuration Drive (config_drive) (see the Store metadata on a configuration drive section in the OpenStack Command-line clients chapter, in the End User Guide from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/).
- Volume-backed live migration. When instances are backed by volumes rather than ephemeral disk, no shared storage is required, and migration is supported (currently only in libvirt-based hypervisors).
3.4.9.1. Prerequisites
- Hypervisor: KVM with libvirt
- Shared storage:
NOVA-INST-DIR/instances/ (for example, /var/lib/nova/instances) has to be mounted on shared storage. This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.
- Instances: Instances can be migrated with iSCSI-based volumes.
- Because the Compute service does not use the libvirt live migration functionality by default, guests are suspended before migration and might experience several minutes of downtime. For details, see Section 3.4.9.4, “Enable true live migration”.
- This guide assumes the default value for
instances_path in your nova.conf file (NOVA-INST-DIR/instances). If you have changed the state_path or instances_path variables, modify accordingly.
- You must specify
vncserver_listen=0.0.0.0 or live migration does not work correctly.
- If you migrate an instance to a system running a different version of the qemu-kvm package, the
qemu-kvm process can enter an infinite loop and start to overutilize the CPU. To avoid this problem, update the qemu-kvm package on the source host beforehand.
3.4.9.2. Ensure Node Communication
/etc/nova/nova.conf file.
- On both nodes, make
nova a login user:
#usermod -s /bin/bash nova
- On the first compute node, generate a key pair for the
nova user:
#su nova
#ssh-keygen
#echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
The pair, id_rsa and id_rsa.pub, will be generated in /var/lib/nova/.ssh.
- On each additional compute node, copy the created key pair and then add it into SSH with:
#su nova
#mkdir -p /var/lib/nova/.ssh
#cp id_rsa /var/lib/nova/.ssh/
#cat id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys
#echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
- Ensure that the
nova user can now log into the node without a password:
#ssh nova@computeHost
3.4.9.3. Example Compute installation environment
- Prepare at least three servers; for example,
HostA, HostB, and HostC:
HostA is the Cloud Controller, and should run these services: nova-api, nova-scheduler, nova-network, cinder-volume, and nova-objectstore.
HostB and HostC are the compute nodes that run nova-compute.
Ensure that NOVA-INST-DIR (set with state_path in the nova.conf file) is the same on all hosts.
- In this example,
HostA is the NFSv4 server that exports NOVA-INST-DIR/instances, and HostB and HostC mount it.
Procedure 3.5. To configure your system
- Configure your DNS or
/etc/hosts and ensure it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from one another:
$ping HostA
$ping HostB
$ping HostC
- Ensure that the UID and GID of your Compute and libvirt users are identical between each of your servers. This ensures that the permissions on the NFS mount work correctly.
- Export
NOVA-INST-DIR/instances from HostA, and have it readable and writable by the Compute user on HostB and HostC.
HostAby adding the following line to the/etc/exportsfile:NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)
Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses ofHostBandHostC. Then restart the NFS server:#/etc/init.d/nfs-kernel-server restart#/etc/init.d/idmapd restart - Set the 'execute/search' bit on your shared directory.On both compute nodes, make sure to enable the 'execute/search' bit to allow qemu to be able to use the images within the directories. On all hosts, run the following command:
$chmod o+x NOVA-INST-DIR/instances
- Configure NFS at HostB and HostC by adding the following line to the
/etc/fstab file:
HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0
Ensure that the exported directory can be mounted:
$mount -a -v
Check that HostA can see the NOVA-INST-DIR/instances/ directory:
$ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/
Perform the same check at HostB and HostC, paying special attention to the permissions (Compute should be able to write):
$ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
$df -k
Filesystem  1K-blocks  Used      Available Use% Mounted on
/dev/sda1   921514972  4180880   870523828 1%   /
none        16498340   1228      16497112  1%   /dev
none        16502856   0         16502856  0%   /dev/shm
none        16502856   368       16502488  1%   /var/run
none        16502856   0         16502856  0%   /var/lock
none        16502856   0         16502856  0%   /lib/init/rw
HostA:      921515008  101921792 772783104 12%  /var/lib/nova/instances  ( <--- this line is important.)
- Update the libvirt configurations so that the calls can be made securely. These methods enable remote access over TCP and are not documented here; please consult your network administrator for assistance in deciding how to configure access.
- SSH tunnel to libvirtd's UNIX socket
- libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption
- libvirtd TCP socket, with TLS for encryption and x509 client certs for authentication
- libvirtd TCP socket, with TLS for encryption and Kerberos for authentication
Restart libvirt. After you run the command, ensure that libvirt is successfully restarted:
#stop libvirt-bin && start libvirt-bin
$ps -ef | grep libvirt
root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
- Configure your firewall to allow libvirt to communicate between nodes. By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. Based on the secure remote access TCP configuration you chose, be careful choosing what ports you open and understand who has access. For information about ports that are used with libvirt, see the libvirt documentation.
- You can now configure options for live migration. In most cases, you do not need to configure any options. The following chart is for advanced usage only.
Table 3.6. Description of configuration options for livemigration
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| live_migration_retry_count = 30 | (IntOpt) Number of 1 second retries needed in live_migration |
| [libvirt] | |
| live_migration_bandwidth = 0 | (IntOpt) Maximum bandwidth to be used during migration, in Mbps |
| live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER | (StrOpt) Migration flags to be set for live migration |
| live_migration_uri = qemu+tcp://%s/system | (StrOpt) Migration target URI (any included "%s" is replaced with the migration target hostname) |
3.4.9.4. Enable true live migration
By default, the Compute service does not use the libvirt live migration function. To enable this function, add the following line to the nova.conf file:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
3.4.10. Migrate instances
Procedure 3.6. To migrate instances
- Look at the running instances, to get the ID of the instance you wish to migrate.
$nova list
+--------------------------------------+-----+--------+------------+-------------+-----------------+
| ID                                   | Name| Status | Task State | Power State | Networks        |
+--------------------------------------+-----+--------+------------+-------------+-----------------+
| d1df1b5a-70c4-4fed-98b7-423362f2c47c | vm1 | ACTIVE | -          | Running     | private=a.b.c.d |
| d693db9e-a7cf-45ef-a7c9-b3ecb5f22645 | vm2 | ACTIVE | -          | Running     | private=e.f.g.h |
+--------------------------------------+-----+--------+------------+-------------+-----------------+
- Look at information associated with that instance. This example uses 'vm1' from above.
$nova show d1df1b5a-70c4-4fed-98b7-423362f2c47c
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
...
| OS-EXT-SRV-ATTR:host                | HostB                                                    |
...
| flavor                              | m1.tiny (1)                                              |
| hostId                              | 4ccc85b891895ae5322c055bc0cb3c82a58f558d15cc0a58cc9bd834 |
| id                                  | d1df1b5a-70c4-4fed-98b7-423362f2c47c                     |
| image                               | cirros (6f5e047d-794a-42d0-a158-98bcb66f9cd4)            |
| key_name                            | demo_keypair                                             |
| metadata                            | {}                                                       |
| name                                | vm1                                                      |
...
| private network                     | a.b.c.d                                                  |
| security_groups                     | default                                                  |
| status                              | ACTIVE                                                   |
...
+-------------------------------------+----------------------------------------------------------+
In this example, vm1 is running on HostB.
- Select the server to which instances will be migrated:
#nova service-list
+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | HostA | internal | enabled | up    | 2014-03-25T10:33:25.000000 | -               |
| nova-scheduler   | HostA | internal | enabled | up    | 2014-03-25T10:33:25.000000 | -               |
| nova-conductor   | HostA | internal | enabled | up    | 2014-03-25T10:33:27.000000 | -               |
| nova-compute     | HostB | nova     | enabled | up    | 2014-03-25T10:33:31.000000 | -               |
| nova-compute     | HostC | nova     | enabled | up    | 2014-03-25T10:33:31.000000 | -               |
| nova-cert        | HostA | internal | enabled | up    | 2014-03-25T10:33:31.000000 | -               |
+------------------+-------+----------+---------+-------+----------------------------+-----------------+
In this example, HostC can be picked up because nova-compute is running on it.
- Ensure that HostC has enough resources for migration.
#nova host-describe HostC
+-----------+------------+-----+-----------+---------+
| HOST      | PROJECT    | cpu | memory_mb | disk_gb |
+-----------+------------+-----+-----------+---------+
| HostC     | (total)    | 16  | 32232     | 878     |
| HostC     | (used_now) | 13  | 21284     | 442     |
| HostC     | (used_max) | 13  | 21284     | 442     |
| HostC     | p1         | 13  | 21284     | 442     |
| HostC     | p2         | 13  | 21284     | 442     |
+-----------+------------+-----+-----------+---------+
- cpu: the number of CPUs
- memory_mb: total amount of memory (in MB)
- disk_gb: total amount of space for NOVA-INST-DIR/instances (in GB)
- 1st line shows total amount of resources for the physical server.
- 2nd line shows currently used resources.
- 3rd line shows maximum used resources.
- 4th line and under shows the resource for each project.
- Use the nova live-migration command to migrate the instances:
$nova live-migration server host_name
Where server can be either the server's ID or name. For example:
$nova live-migration d1df1b5a-70c4-4fed-98b7-423362f2c47c HostC
Migration of d1df1b5a-70c4-4fed-98b7-423362f2c47c initiated.
Ensure instances are migrated successfully with nova list. If instances are still running on HostB, check log files (src/dest nova-compute and nova-scheduler) to determine why.
Note: Although the nova command is called live-migration, under the default Compute configuration options the instances are suspended before migration. For more details, see the Configuration Reference Guide.
3.4.11. Configure remote console access
3.4.11.1. VNC console proxy
- A user connects to the API and gets an
access_url such as http://ip:port/?token=xyz.
- The user pastes the URL in a browser or uses it as a client parameter.
- The browser or client connects to the proxy.
- The proxy talks to
nova-consoleauth to authorize the token for the user, and maps the token to the private host and port of the VNC server for an instance. The compute host specifies the address that the proxy should use to connect through the nova.conf file option, vncserver_proxyclient_address. In this way, the VNC proxy works as a bridge between the public network and private host network.
- The proxy initiates the connection to the VNC server and continues to proxy until the session ends.
The proxy also tunnels the VNC protocol over WebSockets so that the noVNC client can talk to VNC servers. In general, the VNC proxy:
- Bridges between the public network where the clients live and the private network where VNC servers live.
- Mediates token authentication.
- Transparently deals with hypervisor-specific connection details to provide a uniform client experience.
Figure 3.7. noVNC process

3.4.11.1.1. About nova-consoleauth
Both client proxies leverage a shared service to manage token authentication, called nova-consoleauth. This service must be running for either proxy to work. Many proxies of either type can be run against a single nova-consoleauth service in a cluster configuration.
3.4.11.1.2. Typical deployment
- A
nova-consoleauth process. Typically runs on the controller host.
- One or more
nova-novncproxy services. Supports browser-based noVNC clients. For simple deployments, this service typically runs on the same machine as nova-api because it operates as a proxy between the public network and the private compute host network.
- One or more
nova-xvpvncproxy services. Supports the special Java client discussed here. For simple deployments, this service typically runs on the same machine as nova-api because it acts as a proxy between the public network and the private compute host network.
- One or more compute hosts. These compute hosts must have correctly configured options, as follows.
3.4.11.1.3. VNC configuration options
Table 3.7. Description of configuration options for vnc
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html | (StrOpt) Location of VNC console proxy, in the form "http://127.0.0.1:6080/vnc_auto.html" |
| vnc_enabled = True | (BoolOpt) Enable VNC related features |
| vnc_keymap = en-us | (StrOpt) Keymap for VNC |
| vncserver_listen = 127.0.0.1 | (StrOpt) IP address on which instance vncservers should listen |
| vncserver_proxyclient_address = 127.0.0.1 | (StrOpt) The address to which proxy clients (like nova-xvpvncproxy) should connect |
| [vmware] | |
| vnc_port = 5900 | (IntOpt) VNC starting port |
| vnc_port_total = 10000 | (IntOpt) Total number of VNC ports |
To support live migration, you cannot specify a specific IP address for vncserver_listen, because that IP address does not exist on the destination host.
- The vncserver_proxyclient_address defaults to 127.0.0.1, which is the address of the compute host that Compute instructs proxies to use when connecting to instance servers.
- For multi-host libvirt deployments, set to a host management IP on the same network as the proxies.
3.4.11.1.4. nova-novncproxy (noVNC)
The novnc package provides the nova-novncproxy service. As root, run the following commands to install the package and restart the service:
#yum install novnc
#service novnc restart
nova.conf file, which includes the message queue server address and credentials.
nova-novncproxy binds on 0.0.0.0:6080.
nova.conf file:
vncserver_listen=0.0.0.0
Specifies the address on which the VNC service should bind. Make sure it is assigned one of the compute node interfaces. This address is the one used by your domain file:
<graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
Note
To use live migration, use the 0.0.0.0 address.
vncserver_proxyclient_address=127.0.0.1
The address of the compute host that Compute instructs proxies to use when connecting to instance vncservers.
3.4.11.1.5. Frequently asked questions about VNC access to virtual machines
- Q: What is the difference between nova-xvpvncproxy and nova-novncproxy? A: nova-xvpvncproxy, which ships with OpenStack Compute, is a proxy that supports a simple Java client. nova-novncproxy uses noVNC to provide VNC support through a web browser.
- Q: I want VNC support in the OpenStack dashboard. What services do I need? A: You need nova-novncproxy, nova-consoleauth, and correctly configured compute hosts.
- Q: When I use nova get-vnc-console or click on the VNC tab of the OpenStack dashboard, it hangs. Why? A: Make sure you are running nova-consoleauth (in addition to nova-novncproxy). The proxies rely on nova-consoleauth to validate tokens, and wait for a reply from it until a timeout is reached.
- Q: My VNC proxy worked fine during my all-in-one test, but now it doesn't work on multi host. Why? A: The default options work for an all-in-one install, but changes must be made on your compute hosts once you start to build a cluster. As an example, suppose you have two servers:
PROXYSERVER (public_ip=172.24.1.1, management_ip=192.168.1.1) COMPUTESERVER (management_ip=192.168.1.2)
Your nova-compute configuration file must set the following values:
# These flags help construct a connection data structure
vncserver_proxyclient_address=192.168.1.2
novncproxy_base_url=http://172.24.1.1:6080/vnc_auto.html
xvpvncproxy_base_url=http://172.24.1.1:6081/console

# This is the address where the underlying vncserver (not the proxy)
# will listen for connections.
vncserver_listen=192.168.1.2
Note
novncproxy_base_url and xvpvncproxy_base_url use a public IP; this is the URL that is ultimately returned to clients, which generally do not have access to your private network. Your PROXYSERVER must be able to reach vncserver_proxyclient_address, because that is the address over which the VNC connection is proxied.
- Q: My noVNC does not work with recent versions of web browsers. Why? A: Make sure you have installed python-numpy, which is required to support a newer version of the WebSocket protocol (HyBi-07+).
- Q: How do I adjust the dimensions of the VNC window image in the OpenStack dashboard? A: These values are hard-coded in a Django HTML template. To alter them, edit the _detail_vnc.html template file. Modify the width and height options, as follows:
<iframe src="{{ vnc_url }}" width="720" height="430"></iframe>
3.4.11.2. SPICE console
nova-spicehtml5proxy service by using SPICE-over-websockets. The nova-spicehtml5proxy service communicates directly with the hypervisor process by using SPICE.
vnc_enabled option to False in the [DEFAULT] section to disable the VNC console.
Table 3.8. Description of configuration options for spice
| Configuration option = Default value | Description |
|---|---|
| [spice] | |
| agent_enabled = True | (BoolOpt) Enable spice guest agent support |
| enabled = False | (BoolOpt) Enable spice related features |
| html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html | (StrOpt) Location of spice HTML5 console proxy, in the form "http://127.0.0.1:6082/spice_auto.html" |
| keymap = en-us | (StrOpt) Keymap for spice |
| server_listen = 127.0.0.1 | (StrOpt) IP address on which instance spice server should listen |
| server_proxyclient_address = 127.0.0.1 | (StrOpt) The address to which proxy clients (like nova-spicehtml5proxy) should connect |
3.4.12. Configure Compute service groups
When a compute worker (running the nova-compute daemon) starts, it calls the join API to join the compute group. Any interested service (for example, the scheduler) can query the group's membership and the status of its nodes. Internally, the ServiceGroup client driver automatically updates the compute worker status.
3.4.12.1. Database ServiceGroup driver
By default, Compute uses the database driver to track whether a node is live. A compute worker periodically updates its record in the database with a timestamp, and Compute uses a pre-defined timeout (service_down_time) to determine whether a node is dead.
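The check itself is simple. The following minimal Python sketch (not nova's actual code; the 60-second value mirrors the service_down_time default) shows the comparison the database driver effectively performs:
import datetime

SERVICE_DOWN_TIME = 60  # seconds; matches the service_down_time default

def is_service_up(last_heartbeat, now=None):
    """A service is 'up' only if its last heartbeat is recent enough."""
    now = now or datetime.datetime.utcnow()
    return (now - last_heartbeat).total_seconds() <= SERVICE_DOWN_TIME

# A worker that last checked in 90 seconds ago is reported as down.
stale = datetime.datetime.utcnow() - datetime.timedelta(seconds=90)
print(is_service_up(stale))  # False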
3.4.12.2. ZooKeeper ServiceGroup driver
The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral znodes: on a compute worker, the driver establishes a ZooKeeper session and creates an ephemeral znode in the group directory. If the worker node or the nova-compute daemon crashes, or a network partition is in place between the worker and the ZooKeeper server quorums, the ephemeral znodes are removed automatically. The driver gets the group membership by running the ls command in the group directory.
To use the ZooKeeper driver, you must install two Python libraries: python-zookeeper, the official ZooKeeper Python binding, and evzookeeper, the library that makes the binding work with the eventlet threading model.
The following example assumes the ZooKeeper server addresses are 192.168.2.1:2181, 192.168.2.2:2181, and 192.168.2.3:2181.
The following values in the /etc/nova/nova.conf file (on every node) are required for the ZooKeeper driver:
# Driver for the ServiceGroup service
servicegroup_driver="zk"

[zookeeper]
address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"
Table 3.9. Description of configuration options for zookeeper
| Configuration option = Default value | Description |
|---|---|
| [zookeeper] | |
| address = None | (StrOpt) The ZooKeeper addresses for servicegroup service in the format of host1:port,host2:port,host3:port |
| recv_timeout = 4000 | (IntOpt) The recv_timeout parameter for the zk session |
| sg_prefix = /servicegroups | (StrOpt) The prefix used in ZooKeeper to store ephemeral nodes |
| sg_retry_interval = 5 | (IntOpt) Number of seconds to wait until retrying to join the session |
3.4.12.3. Memcache ServiceGroup driver
The memcache ServiceGroup driver uses memcached, which is a distributed memory object caching system that is often used to increase site performance. For more details, see memcached.org.
To use the memcache driver, you must install memcached. However, because memcached is often used for both OpenStack Object Storage and the OpenStack dashboard, it might already be installed. If memcached is not installed, refer to Deploying OpenStack: Learning Environments (Manual Setup) for more information.
The following values in the /etc/nova/nova.conf file (on every node) are required for the memcache driver:
# Driver for the ServiceGroup service
servicegroup_driver="mc"

# Memcached servers. Use either a list of memcached servers to use for caching (list value),
# or "<None>" for in-process caching (default).
memcached_servers=<None>

# Timeout; maximum time since last check-in for up service (integer value).
# Helps to define whether a node is dead
service_down_time=60
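Conceptually, the memcache driver treats a key with a limited lifetime as the heartbeat: the worker keeps refreshing the key, and the node is considered up while the key has not expired. The following in-process Python sketch only illustrates that idea; it is not nova's implementation and uses a plain dictionary in place of memcached:
import time

SERVICE_DOWN_TIME = 60  # seconds, as in the configuration above

_cache = {}  # stands in for memcached: key -> expiry time

def report_state(host):
    """Heartbeat: refresh the key with a service_down_time lifetime."""
    _cache['compute:%s' % host] = time.time() + SERVICE_DOWN_TIME

def is_up(host):
    """A node is up while its key has not expired."""
    expiry = _cache.get('compute:%s' % host)
    return expiry is not None and expiry > time.time()

report_state('HostB')
print(is_up('HostB'))  # True until the key expires
print(is_up('HostC'))  # False: HostC never joined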
3.4.13. Security hardening
3.4.13.1. Trusted compute pools
- Compute nodes boot with Intel TXT technology enabled.
- The compute node BIOS, hypervisor, and OS are measured.
- Measured data is sent to the attestation server when challenged by the attestation server.
- The attestation server verifies those measurements against a known-good database to determine node trustworthiness.

3.4.13.1.1. Configure Compute to use trusted compute pools
- Enable scheduling support for trusted compute pools by adding the following lines to the DEFAULT section of the /etc/nova/nova.conf file:
[DEFAULT]
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter
- Specify the connection information for your attestation service by adding the following lines to the trusted_computing section of the /etc/nova/nova.conf file:
[trusted_computing]
server=10.1.71.206
port=8443
server_ca_file=/etc/nova/ssl.10.1.71.206.crt
# If using OAT v1.5, use this api_url:
api_url=/AttestationService/resources
# If using OAT pre-v1.5, use this api_url:
#api_url=/OpenAttestationWebServices/V1.0
auth_blob=i-am-openstack
Where:- server
- Host name or IP address of the host that runs the attestation service.
- port
- HTTPS port for the attestation service.
- server_ca_file
- Certificate file used to verify the attestation server's identity.
- api_url
- The attestation service's URL path.
- auth_blob
- An authentication blob, which is required by the attestation service.
- Restart the nova-compute and nova-scheduler services.
3.4.13.1.1.1. Configuration reference
Table 3.10. Description of configuration options for trustedcomputing
| Configuration option = Default value | Description |
|---|---|
| [trusted_computing] | |
| attestation_api_url = /OpenAttestationWebServices/V1.0 | (StrOpt) Attestation web API URL |
| attestation_auth_blob = None | (StrOpt) Attestation authorization blob - must change |
| attestation_auth_timeout = 60 | (IntOpt) Attestation status cache valid period length |
| attestation_port = 8443 | (StrOpt) Attestation server port |
| attestation_server = None | (StrOpt) Attestation server HTTP |
| attestation_server_ca_file = None | (StrOpt) Attestation server Cert file for Identity verification |
3.4.13.1.2. Specify trusted flavors
- Configure one or more flavors as trusted by using the nova flavor-key set command. For example, to set the m1.tiny flavor as trusted:
$nova flavor-key m1.tiny set trust:trusted_host trusted
- Request that your instance be run on a trusted host by specifying a trusted flavor when booting the instance. For example:
$nova boot --flavor m1.tiny --key_name myKeypairName --image myImageID --nic net-id=myNetID newInstanceName
Figure 3.8. Trusted compute pool

3.4.13.2. Encrypt Compute metadata traffic
OpenStack supports encrypting Compute metadata traffic with HTTPS. To enable SSL encryption, set the following options in the metadata_agent.ini file:
- Enable the HTTPS protocol:
nova_metadata_protocol = https
- Determine whether insecure SSL connections are accepted for Compute metadata server requests. The default value is False:
nova_metadata_insecure = False
- Specify the path to the client certificate:
nova_client_cert = PATH_TO_CERT
- Specify the path to the private key:
nova_client_priv_key = PATH_TO_KEY
3.4.14. Recover from a failed compute node
3.4.14.1. Evacuate instances
3.4.14.2. Manual recovery
Procedure 3.7. Review host information
- Identify the VMs on the affected hosts, using tools such as a combination of nova list and nova show or euca-describe-instances. For example, the following output displays information about instance i-000015b9 that is running on node np-rcc54:
$euca-describe-instances
i-000015b9 at3-ui02 running nectarkey (376, np-rcc54) 0 m1.xxlarge 2012-06-19T00:48:11.000Z 115.146.93.60
- Review the status of the host by querying the Compute database. Some of the important information is highlighted below. The following example converts an EC2 API instance ID into an OpenStack ID; if you used the nova commands, you can substitute the ID directly. You can find the credentials for your database in /etc/nova.conf.
mysql> SELECT * FROM instances WHERE id = CONV('15b9', 16, 10) \G;
*************************** 1. row ***************************
created_at: 2012-06-19 00:48:11
updated_at: 2012-07-03 00:35:11
deleted_at: NULL
...
id: 5561
...
power_state: 5
vm_state: shutoff
...
hostname: at3-ui02
host: np-rcc54
...
uuid: 3f57699a-e773-4650-a443-b4b37eed5a06
...
task_state: NULL
...
Procedure 3.8. Recover the VM
- After you have determined the status of the VM on the failed host, decide to which compute host the affected VM should be moved. For example, run the following database command to move the VM to np-rcc46:
mysql> UPDATE instances SET host = 'np-rcc46' WHERE uuid = '3f57699a-e773-4650-a443-b4b37eed5a06';
- If using a hypervisor that relies on libvirt (such as KVM), it is a good idea to update the libvirt.xml file (found in /var/lib/nova/instances/[instance ID]). The important changes to make are:
- Change the DHCPSERVER value to the host IP address of the compute host that is now the VM's new home.
- Update the VNC IP, if it isn't already updated, to 0.0.0.0.
- Reboot the VM:
$nova reboot --hard 3f57699a-e773-4650-a443-b4b37eed5a06
nova reboot command are all that is required to recover a VM from a failed host. However, if further problems occur, consider looking at recreating the network filter configuration using virsh, restarting the Compute services or updating the vm_state and power_state in the Compute database.
3.4.14.3. Recover from a UID/GID mismatch
nova-compute hosts, based on the KVM hypervisor, and could help to restore the situation:
Procedure 3.9. To recover from a UID/GID mismatch
- Ensure you do not use numbers that are already used for some other user/group.
- Set the nova UID in /etc/passwd to the same number on all hosts (for example, 112).
- Set the libvirt-qemu UID in /etc/passwd to the same number on all hosts (for example, 119).
- Set the nova group in the /etc/group file to the same number on all hosts (for example, 120).
- Set the libvirtd group in the /etc/group file to the same number on all hosts (for example, 119).
- Stop the services on the compute node.
- Change all the files owned by user nova or by group nova. For example:
#find / -uid 108 -exec chown nova {} \;
# note the 108 here is the old nova UID before the change
#find / -gid 120 -exec chgrp nova {} \;
- Repeat the steps for the libvirt-qemu owned files, if those need to change.
- Restart the services.
- Now you can run the find command to verify that all files use the correct identifiers.
3.4.14.4. Recover cloud after disaster
Disaster recovery example
- A cloud controller (nova-api, nova-objectstore, nova-network)
- A compute node (nova-compute)
- A Storage Area Network (SAN) used by OpenStack Block Storage (cinder-volumes)
- From the SAN to the cloud controller, we have an active iSCSI session (used for the "cinder-volumes" LVM's VG).
- From the cloud controller to the compute node, we also have active iSCSI sessions (managed by cinder-volume).
- For every volume, an iSCSI session is made (so 14 EBS volumes equals 14 sessions).
- From the cloud controller to the compute node, we also have iptables/ebtables rules, which allow access from the cloud controller to the running instance.
- Finally, the database on the cloud controller stores the current state of the instances (in this case "running") and their volume attachments (mount point, volume ID, volume status, and so on).
- From the SAN to the cloud, the iSCSI session no longer exists.
- From the cloud controller to the compute node, the iSCSI sessions no longer exist.
- From the cloud controller to the compute node, the iptables and ebtables are recreated, since at boot,
nova-networkreapplies configurations. - From the cloud controller, instances are in a shutdown state (because they are no longer running).
- In the database, data was not updated at all, since Compute could not have anticipated the crash.
- Get the current relation from a volume to its instance, so that you can recreate the attachment.
- Update the database to clean the stalled state. (After that, you cannot perform the first step).
- Restart the instances. In other words, go from a shutdown to running state.
- After the restart, reattach the volumes to their respective instances (optional).
- SSH into the instances to reboot them.
Recover after a disaster
Procedure 3.10. To perform disaster recovery
Get the instance-to-volume relationship
You must determine the current relationship from a volume to its instance, because you will re-create the attachment. You can find this relationship by running nova volume-list. Note that the nova client includes the ability to get volume information from OpenStack Block Storage.
Update the database
Update the database to clean the stalled state. You must restore every volume, using these queries to clean up the database:
mysql> use cinder;
mysql> update volumes set mountpoint=NULL;
mysql> update volumes set status="available" where status <>"error_deleting";
mysql> update volumes set attach_status="detached";
mysql> update volumes set instance_id=0;
You can then run nova volume-list commands to list all volumes.
Restart instances
Restart the instances using the nova reboot $instance command. At this stage, depending on your image, some instances completely reboot and become reachable, while others stop at the "plymouth" stage.
DO NOT reboot a second time
Do not reboot instances that are stopped at this point. Instance state depends on whether you added an /etc/fstab entry for that volume. Images built with the cloud-init package remain in a pending state, while others skip the missing volume and start. The idea of this stage is only to ask Compute to reboot every instance, so that the stored state is preserved.
Reattach volumes
After the restart, and after Compute has restored the right status, you can reattach the volumes to their respective instances using the nova volume-attach command. The following snippet uses a file of listed volumes to reattach them:
#!/bin/bash
while read line; do
    volume=`echo $line | $CUT -f 1 -d " "`
    instance=`echo $line | $CUT -f 2 -d " "`
    mount_point=`echo $line | $CUT -f 3 -d " "`
    echo "ATTACHING VOLUME FOR INSTANCE - $instance"
    nova volume-attach $instance $volume $mount_point
    sleep 2
done < $volumes_tmp_file
At this stage, instances that were pending on the boot sequence (plymouth) automatically continue their boot and restart normally, while the ones that already booted see the volume.
SSH into instances
If some services depend on the volume, or if a volume has an entry in fstab, you should now simply restart the instance. This restart needs to be made from the instance itself, not through nova. SSH into the instance and perform a reboot:
#shutdown -r now
- Use the errors=remount parameter in the fstab file, which prevents data corruption. The system locks any write to the disk if it detects an I/O error. This configuration option should be added to the cinder-volume server (the one which performs the iSCSI connection to the SAN), and also to the instances' fstab files.
- Do not add the entry for the SAN's disks to the cinder-volume server's fstab file. Some systems hang on that step, which means you could lose access to your cloud controller. To re-run the session manually, run the following commands before performing the mount:
#iscsiadm -m discovery -t st -p $SAN_IP
#iscsiadm -m node --target-name $IQN -p $SAN_IP -l
- For your instances, if you have the whole /home/ directory on the disk, leave a user's directory with the user's bash files and the authorized_keys file (instead of emptying the /home directory and mapping the disk on it). This enables you to connect to the instance even without the volume attached, if you allow only connections through public keys.
Script the DRP
- An array is created for instances and their attached volumes.
- The MySQL database is updated.
- Using
euca2ools, all instances are restarted. - The volume attachment is made.
- An SSH connection is performed into every instance using Compute credentials.
#iscsiadm -m session -u -r 15
-r flag. Otherwise, you close ALL sessions.
3.5. Troubleshoot Compute
3.5.1. Compute service logging
Compute stores a log file for each service in /var/log/nova. For example, nova-compute.log is the log for the nova-compute service. You can set the following options to format log strings for the nova.log module in the nova.conf file:
logging_context_format_string
logging_default_format_string
If the log level is set to debug, you can also specify logging_debug_format_suffix to append extra formatting. For information about what variables are available for the formatter, see http://docs.python.org/library/logging.html#formatter.
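If you want to experiment with a format string before changing nova.conf, the standard Python logging module applies format variables in the same way. The following sketch uses a generic format string (not nova's defaults) to show how %(...)s variables are substituted:
import logging

formatter = logging.Formatter(
    '%(asctime)s %(process)d %(levelname)s %(name)s %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)

log = logging.getLogger('nova.compute')
log.setLevel(logging.DEBUG)
log.addHandler(handler)
log.debug('example log line')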
In nova.conf, include the logfile option to enable logging. Alternatively, you can set use_syslog = 1 so that the nova daemon logs to syslog.
3.5.2. Guru Meditation reports
A Guru Meditation report is sent by the Compute service upon receipt of the SIGUSR1 signal. This report is a general-purpose error report, including a complete report of the service's current state, and is sent to stderr.
nova-api-err.log using nova-api 2>/var/log/nova/nova-api-err.log, resulting in the process ID 8675, you can then run:
#kill -USR1 8675
This command triggers the Guru Meditation report to be printed to /var/log/nova/nova-api-err.log. The report contains the following sections:
- Package - Displays information about the package to which the process belongs, including version information.
- Threads - Displays stack traces and thread IDs for each of the threads within the process.
- Green Threads - Displays stack traces for each of the green threads within the process (green threads do not have thread IDs).
- Configuration - Lists all configuration options currently accessible through the CONF object for the current process.
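The mechanism can be illustrated with plain Python. The sketch below is not nova's implementation, but it shows the same pattern: install a SIGUSR1 handler that dumps the stack trace of every thread to stderr when the signal arrives.
import signal
import sys
import traceback

def dump_report(signum, frame):
    # Print a stack trace for every thread, similar in spirit to the
    # "Threads" section of a Guru Meditation report.
    for thread_id, stack in sys._current_frames().items():
        sys.stderr.write('Thread %s:\n' % thread_id)
        traceback.print_stack(stack, file=sys.stderr)

signal.signal(signal.SIGUSR1, dump_report)  # then: kill -USR1 <pid>
signal.pause()  # wait for the signal (Unix only)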
3.5.3. Common errors and fixes for Compute
3.5.3.1. Credential errors, 401, and 403 forbidden errors
- Manual method. Get the novarc file from the project ZIP file, save existing credentials in case of override, and manually source the novarc file.
- Script method. Generates a novarc file from the project ZIP file and sources it for you.
nova-api the first time, it generates the certificate authority information, including openssl.cnf. If you start the CA services before this, you might not be able to create your ZIP file. Restart the services. When your CA information is available, create your ZIP file.
novarc creation.
3.5.3.2. Instance errors
Sometimes a particular instance shows pending or you cannot SSH to it. Sometimes the image itself is the problem. For example, when you use flat manager networking, you do not have a DHCP server and certain images do not support interface injection; you cannot connect to them. The fix for this problem is to use an image that obtains an IP address correctly with FlatManager network settings.
Check the directory for the particular instance under /var/lib/nova/instances on the nova-compute host and make sure that these files are present:
- libvirt.xml
- disk
- disk-raw
- kernel
- ramdisk
- console.log (after the instance starts)
If any files are missing, empty, or very small, the nova-compute service did not successfully download the images from the Image Service.
nova-compute.log for exceptions. Sometimes they do not appear in the console output.
/var/log/libvirt/qemu directory to see if it exists and has any useful error messages in it.
/var/lib/nova/instances directory for the instance, see if this command returns an error:
#virsh create libvirt.xml
3.5.3.3. Empty log output for Linux instances
console=tty0 console=ttyS0,115200n8
3.5.4. Reset the state of an instance
If an instance remains in an intermediate state, such as deleting, you can use the nova reset-state command to manually reset the state of the instance to an error state. You can then delete the instance. For example:
$nova reset-state c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
$nova delete c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
You can also use the --active parameter to force the instance back to an active state instead of an error state. For example:
$nova reset-state --active c6bbbf26-b40a-47e7-8d5c-eb17bf65c485
3.5.5. Injection problems
If instances do not boot or boot slowly, investigate file injection as a cause. To disable injection in libvirt, set the following in nova.conf:
[libvirt]
inject_partition = -2
Chapter 4. Object Storage
4.1. Introduction to Object Storage
4.2. Features and benefits
| Features | Benefits |
|---|---|
| Leverages commodity hardware | No lock-in, lower price/GB. |
| HDD/node failure agnostic | Self-healing, reliable, data redundancy protects from failures. |
| Unlimited storage | Large and flat namespace, highly scalable read/write access, able to serve content directly from storage system. |
| Multi-dimensional scalability | Scale-out architecture: Scale vertically and horizontally-distributed storage. Backs up and archives large amounts of data with linear performance. |
| Account/container/object structure | No nesting, not a traditional file system: Optimized for scale, it scales to multiple petabytes and billions of objects. |
| Built-in replication 3✕ + data redundancy (compared with 2✕ on RAID) | A configurable number of accounts, containers and object copies for high availability. |
| Easily add capacity (unlike RAID resize) | Elastic data scaling with ease |
| No central database | Higher performance, no bottlenecks |
| RAID not required | Handle many small, random reads and writes efficiently |
| Built-in management utilities | Account management: Create, add, verify, and delete users; Container management: Upload, download, and verify; Monitoring: Capacity, host, network, log trawling, and cluster health. |
| Drive auditing | Detect drive failures preempting data corruption |
| Expiring objects | Users can set an expiration time or a TTL on an object to control access |
| Direct object access | Enable direct browser access to content, such as for a control panel |
| Realtime visibility into client requests | Know what users are requesting. |
| Supports S3 API | Utilize tools that were designed for the popular S3 API. |
| Restrict containers per account | Limit access to control usage by user. |
| Support for NetApp, Nexenta, SolidFire | Unified support for block volumes using a variety of storage systems. |
| Snapshot and backup API for block volumes | Data protection and recovery for VM data. |
| Standalone volume API available | Separate endpoint and API for integration with other compute systems. |
| Integration with Compute | Fully integrated with Compute for attaching block volumes and reporting on usage. |
4.3. Object Storage characteristics
- All objects stored in Object Storage have a URL.
- All objects stored are replicated 3✕ in as-unique-as-possible zones, which can be defined as a group of drives, a node, a rack, and so on.
- All objects have their own metadata.
- Developers interact with the object storage system through a RESTful HTTP API.
- Object data can be located anywhere in the cluster.
- The cluster scales by adding additional nodes without sacrificing performance, which allows a more cost-effective linear storage expansion than fork-lift upgrades.
- Data doesn't have to be migrated to an entirely new storage system.
- New nodes can be added to the cluster without downtime.
- Failed nodes and disks can be swapped out without downtime.
- It runs on industry-standard hardware, such as Dell, HP, and Supermicro.
Figure 4.1. Object Storage (swift)

4.4. Components
- Proxy servers. Handle all of the incoming API requests.
- Rings. Map logical names of data to locations on particular disks.
- Zones. Isolate data from other zones. A failure in one zone doesn’t impact the rest of the cluster because data is replicated across zones.
- Accounts and containers. Each account and container are individual databases that are distributed across the cluster. An account database contains the list of containers in that account. A container database contains the list of objects in that container.
- Objects. The data itself.
- Partitions. A partition stores objects, account databases, and container databases and helps manage locations where data lives in the cluster.
Figure 4.2. Object Storage building blocks

4.4.1. Proxy servers
4.4.2. Rings
Figure 4.3. The ring

4.4.3. Zones
Figure 4.4. Zones

4.4.4. Accounts and containers
Figure 4.5. Accounts and containers

4.4.5. Partitions
Figure 4.6. Partitions

4.4.6. Replicators
Figure 4.7. Replication

4.4.7. Use cases
4.4.7.1. Upload
Figure 4.8. Object Storage in use

4.4.7.2. Download
4.5. Ring-builder
4.5.1. Ring data structure
4.5.2. Partition assignment list
The partition assignment list is a list of array('H') arrays of device IDs. The outermost list contains an array('H') for each replica. Each array('H') has a length equal to the partition count for the ring. Each integer in the array('H') is an index into the above list of devices. The partition list is known internally to the Ring class as _replica2part2dev_id. So, to create a list of device dictionaries assigned to a partition, the Python code would look like:
devices = [self.devs[part2dev_id[partition]] for part2dev_id in self._replica2part2dev_id]
The array('H') type is used for memory conservation, as there may be millions of partitions.
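The following toy example (made-up data, not a real ring) shows the shape of _replica2part2dev_id and how the lookup above resolves a partition to its devices:
from array import array

devs = [{'id': 0, 'zone': 1}, {'id': 1, 'zone': 2}, {'id': 2, 'zone': 3}]

# Three replicas x eight partitions; entry [r][p] is the index into devs
# for replica r of partition p.
_replica2part2dev_id = [
    array('H', [0, 1, 2, 0, 1, 2, 0, 1]),
    array('H', [1, 2, 0, 1, 2, 0, 1, 2]),
    array('H', [2, 0, 1, 2, 0, 1, 2, 0]),
]

partition = 5
devices = [devs[part2dev_id[partition]]
           for part2dev_id in _replica2part2dev_id]
print(devices)  # the three devices holding a replica of partition 5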
4.5.3. Replica counts
$swift-ring-builder account.builder set_replicas 4
$swift-ring-builder account.builder rebalance
$swift-ring-builder object.builder set_replicas 3.01
$swift-ring-builder object.builder rebalance
<distribute rings and wait>...
$swift-ring-builder object.builder set_replicas 3.02
$swift-ring-builder object.builder rebalance
<distribute rings and wait>...
Changes take effect only after the ring is rebalanced. Therefore, if you intend to change from 3 replicas to 3.01 but you accidentally type 2.01, no data is lost.
4.5.4. Partition shift value
The ring partition shift value is known internally to the Ring class as _part_shift. This value is used to shift an MD5 hash to calculate the partition where the data for that hash should reside. Only the top four bytes of the hash are used in this process. For example, to compute the partition for the /account/container/object path, the Python code might look like the following code:
partition = unpack_from('>I',
md5('/account/container/object').digest())[0] >>
self._part_shift
For a ring generated with a partition power of P, the partition shift value is 32 - P.
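As a worked example, assume a ring built with a partition power of 18 (an arbitrary value chosen for illustration): the shift is 32 - 18 = 14, and the top bits of the MD5 hash select one of 2**18 partitions.
from hashlib import md5
from struct import unpack_from

part_power = 18             # assumed for this example
part_shift = 32 - part_power

partition = unpack_from(
    '>I', md5(b'/account/container/object').digest())[0] >> part_shift
print(partition)            # a value in the range 0 .. 2**part_power - 1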
4.5.5. Build the ring
- The utility calculates the number of partitions to assign to each device based on the weight of the device. For example, for a partition at the power of 20, the ring has 1,048,576 partitions. One thousand devices of equal weight will each want 1,048.576 partitions. The devices are sorted by the number of partitions they desire and kept in order throughout the initialization process.NoteEach device is also assigned a random tiebreaker value that is used when two devices desire the same number of partitions. This tiebreaker is not stored on disk anywhere, and so two different rings created with the same parameters will have different partition assignments. For repeatable partition assignments,
RingBuilder.rebalance()takes an optional seed value that seeds the Python pseudo-random number generator. - The ring builder assigns each partition replica to the device that requires most partitions at that point while keeping it as far away as possible from other replicas. The ring builder prefers to assign a replica to a device in a region that does not already have a replica. If no such region is available, the ring builder searches for a device in a different zone, or on a different server. If it does not find one, it looks for a device with no replicas. Finally, if all options are exhausted, the ring builder assigns the replica to the device that has the fewest replicas already assigned.NoteThe ring builder assigns multiple replicas to one device only if the ring has fewer devices than it has replicas.
- When building a new ring from an old ring, the ring builder recalculates the desired number of partitions that each device wants.
- The ring builder unassigns partitions and gathers these partitions for reassignment, as follows:
- The ring builder unassigns any assigned partitions from any removed devices and adds these partitions to the gathered list.
- The ring builder unassigns any partition replicas that can be spread out for better durability and adds these partitions to the gathered list.
- The ring builder unassigns random partitions from any devices that have more partitions than they need and adds these partitions to the gathered list.
- The ring builder reassigns the gathered partitions to devices by using a similar method to the one described previously.
- When the ring builder reassigns a replica to a partition, the ring builder records the time of the reassignment. The ring builder uses this value when it gathers partitions for reassignment so that no partition is moved twice in a configurable amount of time. The RingBuilder class knows this configurable amount of time as
min_part_hours. The ring builder ignores this restriction for replicas of partitions on removed devices because removal of a device happens on device failure only, and reassignment is the only choice.
4.6. Cluster architecture
4.6.1. Access tier
Figure 4.9. Object Storage architecture

4.6.1.1. Factors to consider
4.6.2. Storage nodes
Figure 4.10. Object Storage (swift)

4.6.2.1. Factors to consider
4.7. Replication
get_more_nodes interface for the ring to choose an alternate node with which to synchronize. The replicator can maintain desired levels of replication during disk failures, though some replicas might not be in an immediately usable location.
- Database replication. Replicates containers and objects.
- Object replication. Replicates object data.
4.7.1. Database replication
4.7.2. Object replication
4.8. Account reaper
A reseller marks an account for deletion by issuing a DELETE request on the account's storage URL. This action sets the status column of the account_stat table in the account database and replicas to DELETED, marking the account's data for deletion.
You can set a delay_reaping value in the [account-reaper] section of the account-server.conf file to delay the actual deletion of data. At this time, to undelete you have to update the account database replicas directly, setting the status column to an empty string and updating the put_timestamp to be greater than the delete_timestamp.
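Because each account database replica is a SQLite file, the manual undelete described above amounts to a small update against the account_stat table on every replica. The following sketch is only an illustration of that statement, not a supported tool; the replica path is hypothetical and the timestamp format is an approximation:
import sqlite3
import time

db_path = '/srv/node/sdb1/accounts/.../hash.db'  # hypothetical replica path

conn = sqlite3.connect(db_path)
# Clear the DELETED status and make put_timestamp newer than
# delete_timestamp, as described above.
conn.execute("UPDATE account_stat SET status = '', put_timestamp = ?",
             ('%.5f' % time.time(),))
conn.commit()
conn.close()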
If an account has not been reaped for a long time, the account reaper logs a warning such as "Account <name> has not been reaped since <date>". You can control when this is logged with the reap_warn_after value in the [account-reaper] section of the account-server.conf file. The default value is 30 days.
4.9. Configure tenant-specific image locations with Object Storage
- The tenant who owns the image
- Tenants that are defined in swift_store_admin_tenants and that have admin-level accounts
Procedure 4.1. To configure tenant-specific image locations
- Configure swift as your default_store in the glance-api.conf file.
- Set these configuration options in the glance-api.conf file:
  - swift_store_multi_tenant. Set to True to enable tenant-specific storage locations. Default is False.
  - swift_store_admin_tenants. Specify a list of tenant IDs that can grant read and write access to all Object Storage containers that are created by the Image Service.
4.10. Object Storage monitoring
4.10.1. Swift Recon
The Swift Recon middleware can provide general machine statistics, such as load averages, socket statistics, /proc/meminfo contents, and so on, as well as Swift-specific metrics:
- The MD5 sum of each ring file.
- The most recent object replication time.
- Count of each type of quarantined file: Account, container, or object.
- Count of “async_pendings” (deferred container updates) on disk.
To track async_pendings, you must set up an additional cron job for each object server. You access data by either sending HTTP requests directly to the object server or using the swift-recon command-line client.
A monitoring agent, such as collectd or gmond, probably already runs on the storage node. So, you can choose to either talk to Swift Recon or collect the metrics directly.
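For example, collecting a metric directly amounts to an HTTP GET against the recon endpoint on the storage node. The host name, port, and /recon/async path in this sketch are assumptions to be adapted to your deployment:
import json
import urllib.request

url = 'http://storage-node-1:6000/recon/async'  # adjust host, port, and path
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(data)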
4.10.2. Swift-Informant
Swift-Informant sits in the proxy server's pipeline and, after each request, sends three metrics to a StatsD server:
- A counter increment for a metric like obj.GET.200 or cont.PUT.404.
- Timing data for a metric like acct.GET.200 or obj.GET.200. (The StatsD server turns each timing metric into five derivative metrics with new segments appended, such as acct.GET.200.lower, acct.GET.200.upper, acct.GET.200.mean, acct.GET.200.upper_90, and acct.GET.200.count.)
- A counter increase by the bytes transferred for a metric like tfer.obj.PUT.201.
4.10.3. Statsdlog
4.10.4. Swift StatsD logging
The overhead of sending a metric is extremely low: a sendto of one UDP packet. If that overhead is still too high, the StatsD client library can send only a random portion of samples, and StatsD approximates the actual number when flushing metrics upstream.
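The sampling idea can be sketched in a few lines of Python (this is not Swift's client code): send the UDP packet only sample_rate of the time and tag it with "|@rate" so the StatsD server can scale the count back up.
import random
import socket

def statsd_increment(metric, sample_rate=1.0, host='127.0.0.1', port=8125):
    if sample_rate < 1 and random.random() >= sample_rate:
        return  # skip this sample; StatsD compensates when flushing
    payload = '%s:1|c' % metric
    if sample_rate < 1:
        payload += '|@%s' % sample_rate
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode('utf-8'), (host, port))
    sock.close()

statsd_increment('proxy-server.errors', sample_rate=0.5)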
To enable StatsD logging, configure log_statsd_host in the relevant config file. You can also specify the port and a default sample rate. The specified default sample rate is used unless a specific call to a statsd logging method (see the list below) overrides it. Currently, no logging calls override the sample rate, but it is conceivable that some metrics may require accuracy (sample_rate == 1) while others may not.
[DEFAULT]
...
log_statsd_host = 127.0.0.1
log_statsd_port = 8125
log_statsd_default_sample_rate = 1
The logger object returned by get_logger(), usually stored in self.logger, has these new methods:
- set_statsd_prefix(self, prefix): Sets the client library stat prefix value, which gets prefixed to every metric. The default prefix is the "name" of the logger (such as "object-server", "container-auditor", and so on). This is currently used to turn "proxy-server" into one of "proxy-server.Account", "proxy-server.Container", or "proxy-server.Object" as soon as the Controller object is determined and instantiated for the request.
- update_stats(self, metric, amount, sample_rate=1): Increments the supplied metric by the given amount. This is used when you need to add or subtract more than one from a counter, like incrementing "suffix.hashes" by the number of computed hashes in the object replicator.
- increment(self, metric, sample_rate=1): Increments the given counter metric by one.
- decrement(self, metric, sample_rate=1): Lowers the given counter metric by one.
- timing(self, metric, timing_ms, sample_rate=1): Records that the given metric took the supplied number of milliseconds.
- timing_since(self, metric, orig_time, sample_rate=1): Convenience method to record a timing metric whose value is "now" minus an existing timestamp.
# swift/obj/replicator.py
def update(self, job):
# ...
begin = time.time()
try:
hashed, local_hash = tpool.execute(tpooled_get_hashes, job['path'],
do_listdir=(self.replication_count % 10) == 0,
reclaim_age=self.reclaim_age)
# See tpooled_get_hashes "Hack".
if isinstance(hashed, BaseException):
raise hashed
self.suffix_hash += hashed
self.logger.update_stats('suffix.hashes', hashed)
# ...
finally:
self.partition_times.append(time.time() - begin)
self.logger.timing_since('partition.update.timing', begin)

# swift/container/updater.py
def process_container(self, dbfile):
# ...
start_time = time.time()
# ...
for event in events:
if 200 <= event.wait() < 300:
successes += 1
else:
failures += 1
if successes > failures:
self.logger.increment('successes')
# ...
else:
self.logger.increment('failures')
# ...
# Only track timing data for attempted updates:
self.logger.timing_since('timing', start_time)
else:
self.logger.increment('no_changes')
self.no_changes += 1
4.11. Troubleshoot Object Storage
For Object Storage, everything is logged in /var/log/syslog (or messages on some distros). Several settings enable further customization of logging, such as log_name, log_facility, and log_level, within the object server configuration files.
4.11.1. Drive failure
/var/log/kern.log for hints of drive failure.
4.11.2. Server failure
4.11.3. Emergency recovery of ring builder files
There is no automated way to recover a builder file from a ring.gz file. However, if you have knowledge of Python, it is possible to construct a builder file that is pretty close to the one you have lost. The following is what you will need to do.
>>> from swift.common.ring import RingData, RingBuilder
>>> ring = RingData.load('/path/to/account.ring.gz')
>>> import math
>>> partitions = len(ring._replica2part2dev_id[0])
>>> replicas = len(ring._replica2part2dev_id)
>>> builder = RingBuilder(int(math.log(partitions, 2)), replicas, 1)
>>> builder.devs = ring.devs
>>> builder._replica2part2dev = ring._replica2part2dev_id
>>> builder._last_part_moves_epoch = 0
>>> from array import array
>>> builder._last_part_moves = array('B', (0 for _ in xrange(partitions)))
>>> builder._set_parts_wanted()
>>> for d in builder._iter_devs():
        d['parts'] = 0
>>> for p2d in builder._replica2part2dev:
        for dev_id in p2d:
            builder.devs[dev_id]['parts'] += 1
For min_part_hours you'll either have to remember what the value you used was, or just make up a new one:
>>> builder.change_min_part_hours(24) # or whatever you want it to be
>>> builder.validate()
>>> import pickle
>>> pickle.dump(builder.to_dict(), open('account.builder', 'wb'), protocol=2)
Run swift-ring-builder account.builder write_ring and compare the new account.ring.gz to the account.ring.gz that you started from. They probably won't be byte-for-byte identical, but if you load them up in a REPL and their _replica2part2dev_id and devs attributes are the same (or nearly so), then you're in good shape.
Repeat this procedure for container.ring.gz and object.ring.gz, and you might get usable builder files.
Chapter 5. Block Storage
cinder-* that reside persistently on the host machine or machines. The binaries can all be run from a single node, or spread across multiple nodes. They can also be run on the same node as other OpenStack services.
5.1. Introduction to Block Storage
5.2. Increase Block Storage API service throughput
openstack-cinder-api .
To increase throughput, use the Block Storage API configuration option osapi_volume_workers. This option allows you to specify the number of API service workers (or OS processes) to launch for the Block Storage API service.
To configure this option, open the /etc/cinder/cinder.conf configuration file and set the osapi_volume_workers configuration key to the number of CPU cores/threads on a machine.
#openstack-config --set /etc/cinder/cinder.conf \DEFAULT osapi_volume_workers CORES
5.3. Manage volumes
- You must configure both OpenStack Compute and the OpenStack Block Storage service through the cinder.conf file.
- Create a volume through the cinder create command. This command creates an LV into the volume group (VG) cinder-volumes.
- Attach the volume to an instance through the nova volume-attach command. This command creates a unique iSCSI IQN that is exposed to the compute node.
- The compute node, which runs the instance, now has an active iSCSI session and new local storage (usually a /dev/sdX disk).
- libvirt uses that local storage as storage for the instance. The instance gets a new disk (usually a /dev/vdX disk).
nova-api, nova-scheduler, nova-objectstore, nova-network and cinder-* services. Two additional compute nodes run nova-compute. The walk through uses a custom partitioning scheme that carves out 60 GB of space and labels it as LVM. The network uses the FlatManager and NetworkManager settings for OpenStack Compute.
5.3.1. Boot from volume
5.3.2. Configure an NFS storage back end
cinder volume service.
cinder volume service is named openstack-cinder-volume
Procedure 5.1. Configure Block Storage to use an NFS storage back end
- Log in as root to the system hosting the cinder volume service.
- Create a text file named nfsshares in /etc/cinder/.
- Add an entry to /etc/cinder/nfsshares for each NFS share that the cinder volume service should use for back end storage. Each entry should be a separate line, and should use the following format:
HOST:SHARE
Where:- HOST is the IP address or host name of the NFS server.
- SHARE is the absolute path to an existing and accessible NFS share.
- Set /etc/cinder/nfsshares to be owned by the root user and the cinder group:
#chown root:cinder /etc/cinder/nfsshares
- Set /etc/cinder/nfsshares to be readable by members of the cinder group:
#chmod 0640 /etc/cinder/nfsshares
- Configure the cinder volume service to use the /etc/cinder/nfsshares file created earlier. To do so, open the /etc/cinder/cinder.conf configuration file and set the nfs_shares_config configuration key to /etc/cinder/nfsshares.
Using openstack-config, you can configure this by running the following command instead:
#openstack-config --set /etc/cinder/cinder.conf \
 DEFAULT nfs_shares_config /etc/cinder/nfsshares
- Optionally, provide any additional NFS mount options required in your environment in the nfs_mount_options configuration key of /etc/cinder/cinder.conf. If your NFS shares do not require any additional mount options (or if you are unsure), skip this step.
Using openstack-config, you can configure this by running the following command instead:
#openstack-config --set /etc/cinder/cinder.conf \
 DEFAULT nfs_mount_options OPTIONS
Replace OPTIONS with the mount options to be used when accessing NFS shares. See the manual page for NFS for more information on available mount options (man nfs).
- Configure the cinder volume service to use the correct volume driver, namely cinder.volume.drivers.nfs.NfsDriver. To do so, open the /etc/cinder/cinder.conf configuration file and set the volume_driver configuration key to cinder.volume.drivers.nfs.NfsDriver.
Using openstack-config, you can configure this by running the following command instead:
#openstack-config --set /etc/cinder/cinder.conf \
 DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
- You can now restart the service to apply the configuration. To restart the cinder volume service, run:
#service openstack-cinder-volume restart
The nfs_sparsed_volumes configuration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value is true, which ensures volumes are initially created as sparse files.
Setting nfs_sparsed_volumes to false will result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.
However, should you choose to set nfs_sparsed_volumes to false, you can do so directly in /etc/cinder/cinder.conf.
Using openstack-config, you can configure this by running the following command instead:
#openstack-config --set /etc/cinder/cinder.conf \
 DEFAULT nfs_sparsed_volumes false
If your environment uses SELinux, the virt_use_nfs Boolean should also be enabled if the host requires access to NFS volumes on an instance. To enable this Boolean, run the following command as the root user:
#setsebool -P virt_use_nfs on
5.3.3. Configure a GlusterFS back end
cinder volume service.
cinder volume service is named openstack-cinder-volume .
Procedure 5.2. Configure GlusterFS for OpenStack Block Storage
- Log in as
rootto the GlusterFS server. - Set each Gluster volume to use the same UID and GID as the
cinderuser:#gluster volume set VOL_NAME storage.owner-uid cinder-uid#gluster volume set VOL_NAME storage.owner-gid cinder-gidWhere:- VOL_NAME is the Gluster volume name.
- cinder-uid is the UID of the
cinderuser. - cinder-gid is the GID of the
cinderuser.
- Configure each Gluster volume to accept
libgfapiconnections. To do this, set each Gluster volume to allow insecure ports:#gluster volume set VOL_NAME server.allow-insecure on - Enable client connections from unprivileged ports. To do this, add the following line to
/etc/glusterfs/glusterd.vol:option rpc-auth-allow-insecure on
- Restart the
glusterdservice:#service glusterd restart
Procedure 5.3. Configure Block Storage to use a GlusterFS back end
- Log in as
rootto the system hosting the cinder volume service. - Create a text file named
glusterfsin/etc/cinder/. - Add an entry to
/etc/cinder/glusterfsfor each GlusterFS share that OpenStack Block Storage should use for back end storage. Each entry should be a separate line, and should use the following format:HOST:/VOL_NAME
Where:- HOST is the IP address or host name of the Red Hat Storage server.
- VOL_NAME is the name an existing and accessible volume on the GlusterFS server.
Optionally, if your environment requires additional mount options for a share, you can add them to the share's entry:HOST:/VOL_NAME -o OPTIONS
Replace OPTIONS with a comma-separated list of mount options. - Set
/etc/cinder/glusterfsto be owned by therootuser and thecindergroup.#chown root:cinder /etc/cinder/glusterfs - Set
/etc/cinder/glusterfsto be readable by members of thecindergroup:#chmod 0640 FILE - Configure OpenStack Block Storage to use the
/etc/cinder/glusterfsfile created earlier. To do so, open the/etc/cinder/cinder.confconfiguration file and set theglusterfs_shares_configconfiguration key to/etc/cinder/glusterfs.Using openstack-config, you can configure this by running the following command instead:#openstack-config --set /etc/cinder/cinder.conf \DEFAULT glusterfs_shares_config /etc/cinder/glusterfs - Configure OpenStack Block Storage to use the correct volume driver, namely
cinder.volume.drivers.glusterfs. To do so, open the/etc/cinder/cinder.confconfiguration file and set thevolume_driverconfiguration key tocinder.volume.drivers.glusterfs.Using openstack-config, you can configure this by running the following command instead:#openstack-config --set /etc/cinder/cinder.conf \DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver - You can now restart the service to apply the configuration.To restart the
cindervolume service, run:#service openstack-cinder-volume restart
In /etc/cinder/cinder.conf, the glusterfs_sparsed_volumes configuration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value of this key is true, which ensures volumes are initially created as sparse files.
Setting glusterfs_sparsed_volumes to false will result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.
However, should you choose to set glusterfs_sparsed_volumes to false, you can do so directly in /etc/cinder/cinder.conf.
Using openstack-config, you can configure this by running the following command instead:
#openstack-config --set /etc/cinder/cinder.conf \
 DEFAULT glusterfs_sparsed_volumes false
If your environment uses SELinux, the virt_use_fusefs Boolean should also be enabled if the host requires access to GlusterFS volumes on an instance. To enable this Boolean, run the following command as the root user:
#setsebool -P virt_use_fusefs on
5.3.4. Configure a multiple-storage back-end
cinder-volume for each back-end or back-end pool.
volume_backend_name). Several back-ends can have the same name. In that case, the scheduler properly decides which back-end the volume has to be created in.
volume_backend_name=LVM_iSCSI). When a volume is created, the scheduler chooses an appropriate back-end to handle the request, according to the volume type specified by the user.
Enable multi back-end
To enable a multi back-end configuration, set the enabled_backends flag in the cinder.conf file. This flag defines the names (separated by a comma) of the configuration groups for the different back-ends: one name is associated to one configuration group for a back-end (such as [lvmdriver-1]).
volume_backend_name.
volume_group, volume_driver, and so on) might be used in a configuration group. Configuration values in the [DEFAULT] configuration group are not used.
enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3

[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

[lvmdriver-3]
volume_group=cinder-volumes-3
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b
lvmdriver-1 and lvmdriver-2 have the same volume_backend_name. If a volume creation requests the LVM_iSCSI back-end name, the scheduler uses the capacity filter scheduler to choose the most suitable driver, which is either lvmdriver-1 or lvmdriver-2. The capacity filter scheduler is enabled by default. The next section provides more information. In addition, this example presents a lvmdriver-3 back-end.
enabled_backends=backend1,backend2
san_ssh_port=22
ssh_conn_timeout=30
san_thin_provision=true

[backend1]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend1
san_ip=IP_EQLX1
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL

[backend2]
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name=backend2
san_ip=IP_EQLX2
san_login=SAN_UNAME
san_password=SAN_PW
eqlx_group_name=EQLX_GROUP
eqlx_pool=EQLX_POOL
- Thin provisioning for SAN volumes is enabled (san_thin_provision=true). This is recommended when setting up Dell EqualLogic back-ends.
- Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
- The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
- The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout (in seconds) for CLI commands over SSH.
- The IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Group of backend1 and backend2 through SSH, respectively.
Configure Block Storage scheduler multi back-end
filter_scheduler option to use multi back-end. Filter scheduler acts in two steps:
- The filter scheduler filters the available back-ends. By default, AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter are enabled.
- The filter scheduler weighs the previously filtered back-ends. By default, CapacityWeigher is enabled. The CapacityWeigher assigns higher scores to back-ends with the most available capacity, as illustrated in the sketch after this list.
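The following conceptual Python sketch (not cinder's scheduler code; the capacities are invented) shows the effect of the two steps for the lvmdriver example above: filter on back-end name and requested size, then let the largest free capacity win.
backends = [
    {'name': 'lvmdriver-1', 'volume_backend_name': 'LVM_iSCSI',   'free_gb': 200},
    {'name': 'lvmdriver-2', 'volume_backend_name': 'LVM_iSCSI',   'free_gb': 450},
    {'name': 'lvmdriver-3', 'volume_backend_name': 'LVM_iSCSI_b', 'free_gb': 800},
]

def schedule(requested_gb, backend_name):
    # Filtering step: capability (back-end name) and capacity checks.
    candidates = [b for b in backends
                  if b['volume_backend_name'] == backend_name
                  and b['free_gb'] >= requested_gb]
    if not candidates:
        raise RuntimeError('No valid host was found')
    # Weighing step: CapacityWeigher-style, most free space wins.
    return max(candidates, key=lambda b: b['free_gb'])

print(schedule(100, 'LVM_iSCSI')['name'])  # lvmdriver-2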
Volume type
$cinder --os-username admin --os-tenant-name admin type-create lvm
$cinder --os-username admin --os-tenant-name admin type-key lvm set volume_backend_name=LVM_iSCSI
lvm volume type with volume_backend_name=LVM_iSCSI as extra-specifications.
$cinder --os-username admin --os-tenant-name admin type-create lvm_gold
$cinder --os-username admin --os-tenant-name admin type-key lvm_gold set volume_backend_name=LVM_iSCSI_b
lvm_gold and has LVM_iSCSI_b as back-end name.
$cinder --os-username admin --os-tenant-name admin extra-specs-list
volume_backend_name that does not exist in the Block Storage configuration, the filter_scheduler returns an error that it cannot find a valid host with the suitable back-end.
Usage
$cinder create --volume_type lvm --display_name test_multi_backend 1
Considering the cinder.conf described previously, the scheduler creates this volume on lvmdriver-1 or lvmdriver-2.
$cinder create --volume_type lvm_gold --display_name test_multi_backend 1
lvmdriver-3.
5.3.5. Back up Block Storage service disks
For this example, assume that a volume named volume-00000001 was created for an instance, while only 4 GB are used. This example uses these commands to back up only those 4 GB:
- lvm2 command. Directly manipulates the volumes.
- kpartx command. Discovers the partition table created inside the instance.
- tar command. Creates a minimum-sized backup.
- sha1sum command. Calculates the backup checksum to check its consistency.
Procedure 5.4. To back up Block Storage service disks
Create a snapshot of a used volume
- Use this command to list all volumes:
#lvdisplay - Create the snapshot; you can do this while the volume is attached to an instance:
#lvcreate --size 10G --snapshot --name volume-00000001-snapshot /dev/cinder-volumes/volume-00000001
Use the --snapshot configuration option to tell LVM that you want a snapshot of an already existing volume. The command includes the size of the space reserved for the snapshot volume, the name of the snapshot, and the path of an already existing volume. Generally, this path is /dev/cinder-volumes/$volume_name.
The size does not have to be the same as the volume of the snapshot. The size parameter defines the space that LVM reserves for the snapshot volume. As a precaution, the size should be the same as that of the original volume, even if the whole space is not currently used by the snapshot.
--- Logical volume --- LV Name /dev/cinder-volumes/volume-00000001 VG Name cinder-volumes LV UUID gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr LV Write Access read/write LV snapshot status source of /dev/cinder-volumes/volume-00000026-snap [active] LV Status available # open 1 LV Size 15,00 GiB Current LE 3840 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 251:13 --- Logical volume --- LV Name /dev/cinder-volumes/volume-00000001-snap VG Name cinder-volumes LV UUID HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr LV Write Access read/write LV snapshot status active destination for /dev/cinder-volumes/volume-00000026 LV Status available # open 0 LV Size 15,00 GiB Current LE 3840 COW-table size 10,00 GiB COW-table LE 2560 Allocated to snapshot 0,00% Snapshot chunk size 4,00 KiB Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 251:14
Partition table discovery
- To exploit the snapshot with the tar command, mount your partition on the Block Storage service server. The kpartx utility discovers and maps table partitions. You can use it to view partitions that are created inside the instance. Without using the partitions created inside instances, you cannot see their content and create efficient backups.
#kpartx -av /dev/cinder-volumes/volume-00000001-snapshot
If the tools successfully find and map the partition table, no errors are returned.
- To check the partition table map, run this command:
$ls /dev/mapper/nova*
You can see the cinder--volumes-volume--00000001--snapshot1 partition. If you created more than one partition on that volume, you see several partitions; for example: cinder--volumes-volume--00000001--snapshot2, cinder--volumes-volume--00000001--snapshot3, and so on.
- Mount your partition:
#mount /dev/mapper/cinder--volumes-volume--00000001--snapshot1 /mnt
If the partition mounts successfully, no errors are returned. You can directly access the data inside the instance. If a message prompts you for a partition or you cannot mount it, determine whether enough space was allocated for the snapshot or whether the kpartx command failed to discover the partition table. Allocate more space to the snapshot and try the process again.
Use the tar command to create archives
Create a backup of the volume:
$tar --exclude="lost+found" --exclude="some/data/to/exclude" -czf /backup/destination/volume-00000001.tar.gz -C /mnt/ .
This command creates a tar.gz file that contains the data, and data only. This ensures that you do not waste space by backing up empty sectors.
Checksum calculation
You should always have the checksum for your backup files. When you transfer the same file over the network, you can run a checksum calculation to ensure that your file was not corrupted during its transfer. The checksum is a unique ID for a file. If the checksums are different, the file is corrupted.
Run this command to compute a checksum for your file and save the result to a file:
$sha1sum /backup/destination/volume-00000001.tar.gz > /backup/destination/volume-00000001.checksum
Note: Use the sha1sum command carefully, because the time it takes to complete the calculation is directly proportional to the size of the file. For files larger than around 4 to 6 GB, and depending on your CPU, the process might take a long time.
Clean up after the backup
Now that you have an efficient and consistent backup, use these commands to clean up the file system:
- Unmount the volume:
#umount /mnt
- Delete the partition table:
#kpartx -dv /dev/cinder-volumes/volume-00000001-snapshot
- Remove the snapshot:
#lvremove -f /dev/cinder-volumes/volume-00000001-snapshot
Repeat these steps for all your volumes.
Automate your backups
Because more and more volumes might be allocated to your Block Storage service, you might want to automate your backups. The SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh script assists you with this task. The script performs the operations from the previous example, but also provides a mail report and runs the backup based on the backups_retention_days setting.
Launch this script from the server that runs the Block Storage service.
This example shows a mail report:
Backup Start Time - 07/10 at 01:00:01
Current retention - 7 days
The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-00000019/volume-00000019_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-00000019 - 0 h 1 m and 21 seconds. Size - 3,5G
The backup volume is mounted. Proceed...
Removing old backups... : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_09_2011.tar.gz
/BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G
---------------------------------------
Total backups size - 267G - Used space : 35%
Total execution time - 1 h 75 m and 35 seconds
The script also enables you to SSH to your instances and run a mysqldump command into them. To make this work, enable the connection to the Compute project keys. If you do not want to run the mysqldump command, you can add enable_mysql_dump=0 to the script to turn off this functionality.
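If that script is not available in your environment, the following bash sketch (written for this guide, not shipped with OpenStack) chains the manual steps from Procedure 5.4 for a single volume. The volume name, snapshot size, and backup directory are illustrative assumptions; adjust them before use.
#!/bin/bash
# Minimal sketch: back up one LVM-backed Block Storage volume using the
# snapshot/kpartx/tar/sha1sum steps from Procedure 5.4.
set -e

VOLUME=volume-00000001          # assumed volume name
VG=cinder-volumes               # assumed volume group
SNAP=${VOLUME}-snapshot
BACKUP_DIR=/backups             # assumed backup destination
MNT=/mnt/${SNAP}

# 1. Snapshot the volume (the size is reserved for copy-on-write data).
lvcreate --size 10G --snapshot --name ${SNAP} /dev/${VG}/${VOLUME}

# 2. Map the partitions inside the snapshot and mount the first one.
kpartx -av /dev/${VG}/${SNAP}
mkdir -p ${MNT}
mount /dev/mapper/${VG//-/--}-${SNAP//-/--}1 ${MNT}

# 3. Archive only the used data, then record a checksum.
mkdir -p ${BACKUP_DIR}
tar --exclude="lost+found" -czf ${BACKUP_DIR}/${VOLUME}.tar.gz -C ${MNT} .
sha1sum ${BACKUP_DIR}/${VOLUME}.tar.gz > ${BACKUP_DIR}/${VOLUME}.checksum

# 4. Clean up: unmount, remove the partition mappings, drop the snapshot.
umount ${MNT}
kpartx -dv /dev/${VG}/${SNAP}
lvremove -f /dev/${VG}/${SNAP}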
5.3.6. Migrate volumes
- If the storage can migrate the volume on its own, it is given the opportunity to do so. This allows the Block Storage driver to enable optimizations that the storage might be able to perform. If the back-end is not able to perform the migration, the Block Storage service uses one of two generic flows, as follows.
- If the volume is not attached, the Block Storage service creates a volume and copies the data from the original to the new volume.
Note: While most back-ends support this function, not all do. For more information, see the Volume drivers section in the Block Storage chapter of the Red Hat Enterprise Linux OpenStack Platform 5 Configuration Reference Guide available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
- If the volume is attached to a VM instance, the Block Storage service creates a volume and calls Compute to copy the data from the original to the new volume. Currently this is supported only by the Compute libvirt driver.
List the available back-ends:
#cinder-manage host list
server1@lvmstorage-1    zone1
server2@lvmstorage-2    zone1
$cinder show 6088f80a-f116-4331-ad48-9afb0dfb196c
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                [...]                 |
|       availability_zone        |                zone1                 |
|            bootable            |                False                 |
|           created_at           |      2013-09-01T14:53:22.000000      |
|      display_description       |                 test                 |
|          display_name          |                 test                 |
|               id               | 6088f80a-f116-4331-ad48-9afb0dfb196c |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |         server1@lvmstorage-1         |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   6bdd8f41203e4149b5d559769307365e   |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |                in-use                |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
Note these attributes:
- os-vol-host-attr:host - the volume's current back-end.
- os-vol-mig-status-attr:migstat - the status of this volume's migration (None means that a migration is not currently in progress).
- os-vol-mig-status-attr:name_id - the volume ID that this volume's name on the back-end is based on. Before a volume is ever migrated, its name on the back-end storage may be based on the volume's ID (see the volume_name_template configuration parameter). For example, if volume_name_template is kept as the default value (volume-%s), your first LVM back-end has a logical volume named volume-6088f80a-f116-4331-ad48-9afb0dfb196c. During the course of a migration, if you create a volume and copy over the data, the volume gets the new name but keeps its original ID. This is exposed by the name_id attribute.
Note: If you plan to decommission a Block Storage node, stop the cinder volume service on the node after performing the migration. Run:
#service openstack-cinder-volume stop
#chkconfig openstack-cinder-volume off
Stopping the cinder volume service will prevent volumes from being allocated to the node.
Migrate this volume to the second LVM back-end:
$cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c server2@lvmstorage-2
While the migration is in progress, the migstat attribute shows states such as migrating or completing. On error, migstat is set to None and the host attribute shows the original host. On success, in this example, the output looks like:
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [...] |
| availability_zone | zone1 |
| bootable | False |
| created_at | 2013-09-01T14:53:22.000000 |
| display_description | test |
| display_name | test |
| id | 6088f80a-f116-4331-ad48-9afb0dfb196c |
| metadata | {} |
| os-vol-host-attr:host | server2@lvmstorage-2 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | 133d1f56-9ffc-4f57-8798-d5217d851862 |
| os-vol-tenant-attr:tenant_id | 6bdd8f41203e4149b5d559769307365e |
| size | 2 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| volume_type | None |
+--------------------------------+--------------------------------------+
Here, migstat is None, host is the new host, and name_id holds the ID of the volume created by the migration. If you look at the second LVM back-end, you find the logical volume volume-133d1f56-9ffc-4f57-8798-d5217d851862.
The migration is not visible to non-admin users (for example, through the volume status). However, some operations are not allowed while a migration is taking place, such as attaching/detaching a volume and deleting a volume. If a user performs such an action during a migration, an error is returned.
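Because the migration continues in the background, it can be convenient to poll the os-vol-mig-status-attr:migstat attribute until it leaves the migrating and completing states. The following bash sketch is an illustration only, assuming admin credentials are loaded in the environment and using the example volume ID from this section:
#!/bin/bash
# Sketch: wait for a volume migration to finish by polling the
# os-vol-mig-status-attr:migstat attribute shown by "cinder show".
VOLUME_ID=6088f80a-f116-4331-ad48-9afb0dfb196c

while true; do
    migstat=$(cinder show ${VOLUME_ID} | awk '/os-vol-mig-status-attr:migstat/ {print $4}')
    echo "migstat=${migstat}"
    case "${migstat}" in
        migrating|completing) sleep 10 ;;   # migration still in progress
        *) break ;;                         # None (finished or errored) or another state
    esac
done

# Show which back-end now hosts the volume.
cinder show ${VOLUME_ID} | grep os-vol-host-attr:host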
5.3.7. Gracefully remove a GlusterFS volume from usage
Configuring the cinder volume service to use GlusterFS involves creating a shares file (for example, /etc/cinder/glusterfs). This shares file lists each GlusterFS volume (with its corresponding storage server) that the cinder volume service can use for back-end storage. To remove a GlusterFS volume from usage as a back end, delete the volume's corresponding entry from the shares file, and then restart the Block Storage services:
#for i in api scheduler volume; do service openstack-cinder-$i restart; done
Restarting the Block Storage services will prevent the cinder volume service from exporting the deleted GlusterFS volume. This will prevent any instances from mounting the volume from that point onwards.
5.3.8. Back up and restore volumes
To create a backup of a volume, where VOLUME is the name or ID of the volume:
$cinder backup-create VOLUME
To restore the volume from a backup later, run:
$cinder backup-restore backup_ID
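As a minimal end-to-end sketch (assuming a volume named myvolume, credentials loaded in the environment, and the usual tabular CLI output), the backup ID can be captured and reused like this:
#!/bin/bash
# Sketch: back up a volume with the Block Storage backup service and
# restore it later. "myvolume" is an assumed volume name.
set -e

# Create the backup and capture its ID from the tabular output.
BACKUP_ID=$(cinder backup-create myvolume | awk '/ id /{print $4}')
echo "Created backup ${BACKUP_ID}"

# Wait until the backup reaches the "available" status.
until cinder backup-show ${BACKUP_ID} | grep -q available; do
    sleep 5
done

# Restore the backup (into a new volume by default).
cinder backup-restore ${BACKUP_ID}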
5.3.9. Export and import backup metadata
To export the metadata of a backup, run this command as the admin user (presumably, after creating a volume backup):
$cinder backup-export backup_ID
To import the backup metadata, run this command as admin:
$cinder backup-import metadata
5.4. Troubleshoot your installation
5.4.1. Troubleshoot the Block Storage configuration
To resolve volume creation failures, review these logs:
- cinder-api log (/var/log/cinder/api.log)
- cinder-volume log (/var/log/cinder/volume.log)
The cinder-api log is useful for determining if you have endpoint or connectivity issues. If you send a request to create a volume and it fails, review the cinder-api log to determine whether the request made it to the Block Storage service. If the request is logged and you see no errors or trace-backs, check the cinder-volume log for errors or trace-backs.
Create commands are listed in the cinder-api log.
These entries in the cinder.openstack.common.log section can be used to assist in troubleshooting your Block Storage configuration:
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false

# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
#verbose=false

# Log output to standard error (boolean value)
#use_stderr=true

# Default file mode used when creating log files (string
# value)
#logfile_mode=0644

# format string to use for log messages with context (string
# value)
#logging_context_format_string=%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s

# format string to use for log messages without context
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# data to append to log format when level is DEBUG (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

# prefix each line of exception output with this format
# (string value)
#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s

# list of logger=LEVEL pairs (list value)
#default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN

# If an instance is passed with the log message, format it
# like this (string value)
#instance_format="[instance: %(uuid)s]"

# If an instance UUID is passed with the log message, format
# it like this (string value)
#instance_uuid_format="[instance: %(uuid)s]"

# A logging.Formatter log message format string which may use
# any of the available logging.LogRecord attributes. Default:
# %(default)s (string value)
#log_format=%(asctime)s %(levelname)8s [%(name)s] %(message)s

# Format string for %%(asctime)s in log records. Default:
# %(default)s (string value)
#log_date_format=%Y-%m-%d %H:%M:%S

# (Optional) Name of log file to output to. If not set,
# logging will go to stdout. (string value)
#log_file=<None>

# (Optional) The directory to keep log files in (will be
# prepended to --log-file) (string value)
#log_dir=<None>

# If this option is specified, the logging configuration file
# specified is used and overrides any other logging options
# specified. Please see the Python logging module
# documentation for details on logging configuration files.
# (string value)
#log_config=<None>

# Use syslog for logging. (boolean value)
#use_syslog=false

# syslog facility to receive log lines (string value)
#syslog_log_facility=LOG_USER
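For example, while troubleshooting you might temporarily enable the debug and verbose options shown above in the [DEFAULT] section of /etc/cinder/cinder.conf (a sketch; remember to revert the change, because DEBUG logging is very chatty):
[DEFAULT]
debug=true
verbose=true
Then restart the Block Storage services so that the change takes effect:
#for i in api scheduler volume; do service openstack-cinder-$i restart; done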
- Issues with state_path and volumes_dir settings.
The OpenStack Block Storage service uses tgtd as the default iSCSI helper and implements persistent targets. This means that in the case of a tgt restart, or even a node reboot, your existing volumes on that node will be restored automatically with their original IQN.
To make this possible, the iSCSI target information needs to be stored in a file on creation that can be queried in case of a restart of the tgt daemon. By default, Block Storage uses a state_path variable, which if installing with Yum or APT should be set to /var/lib/cinder/. The next part is the volumes_dir variable; by default, this simply appends a "volumes" directory to the state_path. The result is a file tree: /var/lib/cinder/volumes/.
While this should all be handled by the installer, it can go wrong. If you have trouble creating volumes and this directory does not exist, you should see an error message in the cinder-volume log indicating that the volumes_dir does not exist, and it should provide information about which path it was looking for.
- The persistent tgt include file.
Along with the volumes_dir option, the iSCSI target driver also needs to be configured to look in the correct place for the persist files. This is a simple entry in a file under /etc/tgt/conf.d that you should have set when you installed OpenStack. If issues occur, verify that you have a /etc/tgt/conf.d/cinder.conf file; a verification sketch follows this list. If the file is not present, create it with this command:
#echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf
- No sign of attach call in the cinder-api log.
This is most likely going to be a minor adjustment to your nova.conf file. Make sure that your nova.conf has this entry:
volume_api_class=nova.volume.cinder.API
- Failed to create iscsi target error in the cinder-volume.log file.
2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-137641b2-af72-4a2f-b243-65fdccd38780.
You might see this error in cinder-volume.log after trying to create a volume that is 1 GB. To fix this issue, change the content of the /etc/tgt/targets.conf file from include /etc/tgt/conf.d/*.conf to include /etc/tgt/conf.d/cinder_tgt.conf, as follows:
include /etc/tgt/conf.d/cinder_tgt.conf
include /etc/tgt/conf.d/cinder.conf
default-driver iscsi
Restart the tgt and cinder-* services so that they pick up the new configuration.
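As a quick sanity check of the persistent-target configuration described in the list above, the following bash sketch verifies that the volumes directory exists and that the include line is present in /etc/tgt/conf.d/cinder.conf. The paths are the defaults mentioned above; adjust them if your cinder.conf overrides state_path or volumes_dir.
#!/bin/bash
# Sketch: verify the state_path/volumes_dir layout and the tgt include file.
VOLUMES_DIR=/var/lib/cinder/volumes
TGT_CONF=/etc/tgt/conf.d/cinder.conf

if [ -d "${VOLUMES_DIR}" ]; then
    echo "OK: ${VOLUMES_DIR} exists"
else
    echo "MISSING: ${VOLUMES_DIR} (check state_path and volumes_dir in cinder.conf)"
fi

if grep -q "include ${VOLUMES_DIR}" "${TGT_CONF}" 2>/dev/null; then
    echo "OK: tgt include line found in ${TGT_CONF}"
else
    echo "MISSING: add 'include ${VOLUMES_DIR}/*' to ${TGT_CONF} and restart the tgt service"
fi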
5.4.2. Multipath Call Failed Exit
5.4.2.1. Problem
This warning occurs in the Compute log if you do not have the optional multipath-tools package installed on the compute node. This is an optional package, and volume attachment does work without the multipath tools installed. If the multipath-tools package is installed on the compute node, it is used to perform the volume attachment. The IDs in your message are unique to your system:
WARNING nova.storage.linuxscsi [req-cac861e3-8b29-4143-8f1b-705d0084e571 admin
admin|req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin] Multipath call failed exit
(96)
5.4.2.2. Solution
Run the following command on the compute node to install the multipath-tools packages:
#yum install sysfsutils sg3_utils multipath-tools
5.4.3. Addressing discrepancies in reported volume sizes for EqualLogic storage
5.4.3.1. Problem
Example 5.1. Demonstrating the effects of volume size reporting discrepancies for EQL storage
In this example, the Block Storage service is configured in /etc/cinder/cinder.conf to connect to the EQL array.
- Create a new volume. Note the ID and size of the volume. In the following example, the ID and size are
74cf9c04-4543-47ae-a937-a9b7c6c921e7 and 1GB, respectively:
#cinder create --display-name volume1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-21T18:31:54.248775      |
| display_description |                 None                 |
|     display_name    |               volume1                |
|          id         | 74cf9c04-4543-47ae-a937-a9b7c6c921e7 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
- The EQL command-line interface should display an actual size (VolReserve) as 1.01GB. The EQL Group Manager should also report a volume size of 1.01GB.
a2beqll2> volume select volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
a2beqll2 (volume_volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7)> show
_______________________________ Volume Information ________________________________
Name: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
Size: 1GB
VolReserve: 1.01GB
VolReserveInUse: 0MB
ReplReserveInUse: 0MB
iSCSI Alias: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-19f91850c-067000000b4532c1-volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
ActualMembers: 1
Snap-Warn: 10%
Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 100%
Snap-Reserve-Avail: 100% (1.01GB)
Permission: read-write
DesiredStatus: online
Status: online
Connections: 0
Snapshots: 0
Bind:
Type: not-replicated
ReplicationReserveSpace: 0MB
- Upload this volume to the Image service:
#cinder upload-to-image --disk-format raw \
  --container-format bare volume1 image_from_volume1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|   container_format  |                 bare                 |
|     disk_format     |                 raw                  |
| display_description |                 None                 |
|          id         | 74cf9c04-4543-47ae-a937-a9b7c6c921e7 |
|       image_id      | 3020a21d-ba37-4495-8899-07fc201161b9 |
|      image_name     |          image_from_volume1          |
|         size        |                  1                   |
|        status       |              uploading               |
|      updated_at     |      2014-03-21T18:31:55.000000      |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
- When you uploaded the volume in the previous step, the Image service reported the volume's size as 1 (GB). However, when using glance image-list to list the image, the displayed size is 1085276160 bytes, or roughly 1.01GB:
Table 5.1. Image settings reported by glance image-list for image ID 3020a21d-ba37-4495-8899-07fc201161b9
| Name | Disk Format | Container Format | Size | Status |
|---|---|---|---|---|
| image_from_volume1 | raw | bare | 1085276160 | active |
- Create a new volume using the previous image (image_id 3020a21d-ba37-4495-8899-07fc201161b9 in this example) as the source. Set the target volume size to 1GB; this is the size reported by the cinder tool when you uploaded the volume to the Image service:
#cinder create --display-name volume2 \
  --image-id 3020a21d-ba37-4495-8899-07fc201161b9 1
ERROR: Invalid input received: Size of specified image 2 is larger than volume size 1. (HTTP 400) (Request-ID: req-4b9369c0-dec5-4e16-a114-c0cd16b5d210)
5.4.3.2. Solution
To work around this problem, increase the target volume size; in this example, create the volume with a size of 2GB:
#cinder create --display-name volume2 \
  --image-id 3020a21d-ba37-4495-8899-07fc201161b9 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-21T19:25:31.564482      |
| display_description |                 None                 |
|     display_name    |               volume2                |
|          id         | 64e8eb18-d23f-437b-bcac-b352afa6843a |
|       image_id      | 3020a21d-ba37-4495-8899-07fc201161b9 |
|       metadata      |                  []                  |
|         size        |                  2                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
a2beqll2> volume select volume-64e8eb18-d23f-437b-bcac-b352afa6843a
a2beqll2 (volume_volume-64e8eb18-d23f-437b-bcac-b352afa6843a)> show
______________________________ Volume Information _______________________________
Name: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
Size: 2GB
VolReserve: 2.01GB
VolReserveInUse: 1.01GB
ReplReserveInUse: 0MB
iSCSI Alias: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-e3091850e-eae000000b7532c1-volume-64e8eb18-d23f-437b-bcac-b352afa6843a
ActualMembers: 1
Snap-Warn: 10%
Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 100%
Snap-Reserve-Avail: 100% (2GB)
Permission: read-write
DesiredStatus: online
Status: online
Connections: 1
Snapshots: 0
Bind:
Type: not-replicated
ReplicationReserveSpace: 0MB
5.4.4. Failed to Attach Volume, Missing sg_scan
5.4.4.1. Problem
Failed to attach a volume because the sg_scan file was not found. This warning and error occur when the sg3_utils package is not installed on the compute node. The IDs in your message are unique to your system:
ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin] [instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5] Failed to attach volume 4cc104c4-ac92-4bd6-9b95-c6686746414a at /dev/vdc
TRACE nova.compute.manager [instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5] Stdout: '/usr/local/bin/nova-rootwrap: Executable not found: /usr/bin/sg_scan
5.4.4.2. Solution
Install the sg3_utils package on the compute node:
#yum install sg3_utils
5.4.5. Failed to attach volume after detaching
5.4.5.1. Problem
These errors appear in the cinder-volume.log file:
2013-05-03 15:16:33 INFO [cinder.volume.manager] Updating volume status
2013-05-03 15:16:33 DEBUG [hp3parclient.http]
REQ: curl -i https://10.10.22.241:8080/api/v1/cpgs -X GET -H "X-Hp3Par-Wsapi-Sessionkey: 48dc-b69ed2e5
f259c58e26df9a4c85df110c-8d1e8451" -H "Accept: application/json" -H "User-Agent: python-3parclient"
2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP:{'content-length': 311, 'content-type': 'text/plain',
'status': '400'}
2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP BODY:Second simultaneous read on fileno 13 detected.
Unless you really know what you're doing, make sure that only one greenthread can read any particular socket.
Consider using a pools.Pool. If you do know what you're doing and want to disable this error,
call eventlet.debug.hub_multiple_reader_prevention(False)
2013-05-03 15:16:33 ERROR [cinder.manager] Error during VolumeManager._report_driver_status: Bad request (HTTP 400)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 167, in periodic_tasks task(self, context)
File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 690, in _report_driver_status volume_stats =
self.driver.get_volume_stats(refresh=True)
File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_fc.py", line 77, in get_volume_stats stats =
self.common.get_volume_stats(refresh, self.client)
File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_common.py", line 421, in get_volume_stats cpg =
client.getCPG(self.config.hp3par_cpg)
File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 231, in getCPG cpgs = self.getCPGs()
File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 217, in getCPGs response, body = self.http.get('/cpgs')
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 255, in get return self._cs_request(url, 'GET', **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 224, in _cs_request **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 198, in _time_request resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 192, in request raise exceptions.from_response(resp, body)
HTTPBadRequest: Bad request (HTTP 400)
5.4.5.2. Solution
You need to update your copy of the hp_3par_fc.py driver, which contains the synchronization code.
5.4.6. Duplicate 3PAR host
5.4.6.1. Problem
Duplicate3PARHost: 3PAR Host already exists: Host wwn 50014380242B9750 already used by host cld4b5W (id = 68). The hostname must be called 'cld4b5'.
5.4.6.2. Solution
5.4.7. Failed to attach volume after detaching
5.4.7.1. Problem
5.4.7.2. Solution
You must change the device name on the nova-attach command. The VM might not clean up after a nova-detach command runs. This example shows how the nova-attach command fails when you use the vdb, vdc, or vdd device names:
#ls -al /dev/disk/by-path/
total 0
drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1
5.4.8. Failed to attach volume, systool is not installed
5.4.8.1. Problem
This warning and error occur if you do not have the required sysfsutils package installed on the compute node:
WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool is not installed
ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] [instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-477a-be9b-47c97626555c] Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.
5.4.8.2. Solution
Run the following command on the compute node to install the sysfsutils packages:
#yum install sysfsutils
5.4.9. Failed to connect volume in FC SAN
5.4.9.1. Problem
ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3] Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while attaching at /dev/vdj
TRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3] Traceback (most recent call last):
…f07aa4c3d5f3\] ClientException: The server has either erred or is incapable of performing the requested operation.(HTTP 500)(Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00)
5.4.9.2. Solution
5.4.10. Cannot find suitable emulator for x86_64
5.4.10.1. Problem
When you attempt to create a VM, the error shows that the VM is in a BUILD and then ERROR state.
5.4.10.2. Solution
On the KVM host, run cat /proc/cpuinfo. Make sure the vme and svm flags are set.
5.4.11. Non-existent host
5.4.11.1. Problem
2013-04-19 04:02:02.336 2814 ERROR cinder.openstack.common.rpc.common [-] Returning exception Not found (HTTP 404) NON_EXISTENT_HOST - HOST '10' was not found to caller.
5.4.11.2. Solution
5.4.12. Non-existent VLUN
5.4.12.1. Problem
HTTPNotFound: Not found (HTTP 404) NON_EXISTENT_VLUN - VLUN 'osv-DqT7CE3mSrWi4gZJmHAP-Q' was not found.
5.4.12.2. Solution
The hp3par_domain configuration items either need to be updated to use the domain the 3PAR host currently resides in, or the 3PAR host needs to be moved to the domain that the volume was created in.
Chapter 6. Networking
6.1. Introduction to Networking
6.1.1. Networking API
Table 6.1. Networking resources
| Resource | Description |
|---|---|
| Network | An isolated L2 segment, analogous to VLAN in the physical networking world. |
| Subnet | A block of v4 or v6 IP addresses and associated configuration state. |
| Port | A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port. |
- Enables advanced cloud networking use cases, such as building multi-tiered web applications and enabling migration of applications to the cloud without changing IP addresses.
- Offers flexibility for the cloud administrator to customize network offerings.
- Enables developers to extend the Networking API. Over time, the extended functionality becomes part of the core Networking API.
6.1.2. Configure SSL support for networking API
OpenStack Networking supports SSL for the Networking API server. By default, SSL is disabled, but you can enable it in the neutron.conf file. Set these options to configure SSL:
use_ssl = True - Enables SSL on the networking API server.
ssl_cert_file = /path/to/certfile - Certificate file that is used when you securely start the Networking API server.
ssl_key_file = /path/to/keyfile - Private key file that is used when you securely start the Networking API server.
ssl_ca_file = /path/to/cafile - Optional. CA certificate file that is used when you securely start the Networking API server. This file verifies connecting clients. Set this option when API clients must authenticate to the API server by using SSL certificates that are signed by a trusted CA.
tcp_keepidle = 600 - The value of TCP_KEEPIDLE, in seconds, for each server socket when starting the API server. Not supported on OS X.
retry_until_window = 30 - Number of seconds to keep retrying to listen.
backlog = 4096 - Number of backlog requests with which to configure the socket.
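Putting these options together, a minimal [DEFAULT] snippet in neutron.conf that enables SSL might look like the following sketch; the certificate and key paths are placeholders for your own files:
[DEFAULT]
use_ssl = True
# Placeholder paths; point these at your own certificate and private key
ssl_cert_file = /etc/neutron/ssl/server.crt
ssl_key_file = /etc/neutron/ssl/server.key
# Optional: require and verify client certificates signed by this CA
# ssl_ca_file = /etc/neutron/ssl/ca.crt
tcp_keepidle = 600
Restart the neutron-server service after changing these options so that the API server rebinds with SSL.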
6.1.3. Load Balancing-as-a-Service (LBaaS) overview
- Round robin
- Rotates requests evenly between multiple instances.
- Source IP
- Requests from a unique source IP address are consistently directed to the same instance.
- Least connections
- Allocates requests to the instance with the least number of active connections.
Table 6.2. LBaaS features
| Feature | Description |
|---|---|
| Monitors | LBaaS provides availability monitoring with the ping, TCP, HTTP and HTTPS GET methods. Monitors are implemented to determine whether pool members are available to handle requests. |
| Management | LBaaS is managed using a variety of tool sets. The REST API is available for programmatic administration and scripting. Users perform administrative management of load balancers through either the CLI (neutron) or the OpenStack dashboard. |
| Connection limits | Ingress traffic can be shaped with connection limits. This feature allows workload control, and can also assist with mitigating DoS (Denial of Service) attacks. |
| Session persistence | LBaaS supports session persistence by ensuring incoming requests are routed to the same instance within a pool of multiple instances. LBaaS supports routing decisions based on cookies and source IP address. |
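To illustrate how these pieces fit together, the following neutron CLI sketch (LBaaS v1 commands; the subnet ID and member addresses are placeholders from a hypothetical deployment) creates a pool using the round-robin method, adds two members, and exposes the pool through a VIP:
$neutron lb-pool-create --name mypool --lb-method ROUND_ROBIN \
  --protocol HTTP --subnet-id <subnet-id>
$neutron lb-member-create --address 10.0.0.10 --protocol-port 80 mypool
$neutron lb-member-create --address 10.0.0.11 --protocol-port 80 mypool
$neutron lb-vip-create --name myvip --protocol HTTP --protocol-port 80 \
  --subnet-id <subnet-id> mypool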
6.1.4. Firewall-as-a-Service (FWaaS) overview
Figure 6.1. FWaaS architecture

Enable the FWaaS plug-in in the neutron.conf file:
service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

[fwaas]
driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
enabled = True
FWaaS management options are also available in the OpenStack dashboard. Enable the option in this file, which is typically located on the controller node:
/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
'enable_firewall' = True
Procedure 6.1. Configure Firewall-as-a-Service
- Create a firewall rule:
$neutron firewall-rule-create --protocol <tcp|udp|icmp|any> --destination-port <port-range> --action <allow|deny>
The CLI requires a protocol value; if the rule is protocol agnostic, the 'any' value can be used.
- Create a firewall policy:
$neutron firewall-policy-create --firewall-rules "<firewall-rule IDs or names separated by space>" myfirewallpolicy
The order of the rules specified above is important. You can create a firewall policy without any rules and add rules later, either with the update operation (when adding multiple rules) or with the insert-rule operation (when adding a single rule).
Note: FWaaS always adds a default deny all rule at the lowest precedence of each policy. Consequently, a firewall policy with no rules blocks all traffic by default.
- Create a firewall:
$neutron firewall-create <firewall-policy-uuid>
Note: The firewall remains in the PENDING_CREATE state until a Networking router is created and an interface is attached.
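For instance, a minimal policy that allows only inbound HTTP could be built as follows (a sketch; the rule ID placeholder comes from the output of the first command, and the policy name is illustrative):
$neutron firewall-rule-create --protocol tcp --destination-port 80 --action allow
$neutron firewall-policy-create --firewall-rules "<ID of the rule created above>" web-policy
$neutron firewall-create web-policy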
Allowed-address-pairs allow you to specify mac_address/ip_address(cidr) pairs that pass through a port regardless of subnet. This enables the use of protocols such as VRRP, which floats an IP address between two instances to enable fast data plane failover.
- Create a port with specific allowed-address-pairs:
$neutron port-create net1 --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr>
- Update a port, adding allowed-address-pairs:
$neutron port-update <port-uuid> --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr>
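For example, to let a VRRP virtual IP float between two instances, you might add the shared address to each instance's port (a sketch; the port UUID, MAC address, and the 10.0.0.200/32 virtual IP are placeholders):
$neutron port-update <port-uuid> --allowed-address-pairs type=dict list=true \
  mac_address=<mac_address>,ip_address=10.0.0.200/32
Repeat the command for the second instance's port so that either instance can answer for the virtual IP.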
6.1.5. Plug-in architecture
Table 6.3. Available networking plug-ins
| Plug-in | Documentation |
|---|---|
| Big Switch Plug-in (Floodlight REST Proxy) | This guide and http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin |
| Brocade Plug-in | This guide and https://wiki.openstack.org/wiki/Brocade-neutron-plugin |
| Cisco | http://wiki.openstack.org/cisco-neutron |
| Cloudbase Hyper-V Plug-in | http://www.cloudbase.it/quantum-hyper-v-plugin/ |
| Linux Bridge Plug-in | http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin |
| Mellanox Plug-in | https://wiki.openstack.org/wiki/Mellanox-Neutron/ |
| Midonet Plug-in | http://www.midokura.com/ |
| ML2 (Modular Layer 2) Plug-in | https://wiki.openstack.org/wiki/Neutron/ML2 |
| NEC OpenFlow Plug-in | https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin |
| Open vSwitch Plug-in | This guide. |
| PLUMgrid | This guide and https://wiki.openstack.org/wiki/PLUMgrid-Neutron |
| Ryu Plug-in | This guide and https://github.com/osrg/ryu/wiki/OpenStack |
| VMware NSX Plug-in | This guide and NSX Product Overview, NSX Product Support |
Table 6.4. Plug-in compatibility with Compute drivers
| Plug-in | Libvirt (KVM/QEMU) | VMware | Hyper-V | Bare-metal | |
|---|---|---|---|---|---|
| Big Switch / Floodlight | Yes | ||||
| Brocade | Yes | ||||
| Cisco | Yes | ||||
| Cloudbase Hyper-V | Yes | ||||
| Linux Bridge | Yes | ||||
| Mellanox | Yes | ||||
| Midonet | Yes | ||||
| ML2 | Yes | Yes | |||
| NEC OpenFlow | Yes | ||||
| Open vSwitch | Yes | ||||
| Plumgrid | Yes | Yes | |||
| Ryu | Yes | ||||
| VMware NSX | Yes | Yes | Yes |
6.1.5.1. Plug-in configurations
6.1.5.1.1. Configure Big Switch, Floodlight REST Proxy plug-in
Procedure 6.2. To use the REST Proxy plug-in with OpenStack Networking
- Edit the
/etc/neutron/neutron.conf file and add this line:
core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2
- Edit the plug-in configuration file, /etc/neutron/plugins/bigswitch/restproxy.ini, and specify a comma-separated list of controller_ip:port pairs:
server = <controller-ip>:<port>
For database configuration, see the Create the OpenStack Networking Database section in Deploying OpenStack: Learning Environments (Manual Setup) at https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/
- Restart neutron-server to apply the new settings:
#service neutron-server restart
6.1.5.1.2. Configure Brocade plug-in
Procedure 6.3. To use the Brocade plug-in with OpenStack Networking
- Install the Brocade-modified Python netconf client (ncclient) library, which is available at https://github.com/brocade/ncclient:
$git clone https://www.github.com/brocade/ncclient
As root, execute:
#cd ncclient;python setup.py install
- Edit the /etc/neutron/neutron.conf file and set the following option:
core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2
- Edit the /etc/neutron/plugins/brocade/brocade.ini configuration file for the Brocade plug-in and specify the admin user name, password, and IP address of the Brocade switch:
[SWITCH]
username = admin
password = password
address  = switch mgmt ip address
ostype   = NOS
- Restart the neutron-server service to apply the new settings:
#service neutron-server restart
6.1.5.1.3. Configure OVS plug-in
Procedure 6.4. To configure OpenStack Networking to use the OVS plug-in
- Edit
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to specify these values:
enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# only required for nodes running agents
local_ip=<data-net-IP-address-of-node>
- If you use the neutron DHCP agent, add these lines to the /etc/neutron/dhcp_agent.ini file:
dnsmasq_config_file=/etc/neutron/dnsmasq/dnsmasq-neutron.conf
- Create /etc/neutron/dnsmasq/dnsmasq-neutron.conf, and add these values to lower the MTU size on instances and prevent packet fragmentation over the GRE tunnel:
dhcp-option-force=26,1400
- Restart to apply the new settings:
#service neutron-server restart
6.1.5.1.4. Configure NSX plug-in
Procedure 6.5. To configure OpenStack Networking to use the NSX plug-in
- Install the NSX plug-in, as follows:
#yum install openstack-neutron-vmware
- Edit /etc/neutron/neutron.conf and set:
core_plugin = vmware
Example neutron.conf file for NSX:
core_plugin = vmware
rabbit_host = 192.168.203.10
allow_overlapping_ips = True
- To configure the NSX controller cluster for the OpenStack Networking Service, locate the [DEFAULT] section in the /etc/neutron/plugins/vmware/nsx.ini file, and add the following entries:
- To establish and configure the connection with the controller cluster you must set some parameters, including NSX API endpoints, access credentials, and settings for HTTP redirects and retries in case of connection failures:
nsx_user = <admin user name>
nsx_password = <password for nsx_user>
req_timeout = <timeout in seconds for NSX requests> # default 30 seconds
http_timeout = <timeout in seconds for single HTTP request> # default 10 seconds
retries = <number of HTTP request retries> # default 2
redirects = <maximum allowed redirects for a HTTP request> # default 3
nsx_controllers = <comma separated list of API endpoints>
To ensure correct operations, the nsx_user user must have administrator credentials on the NSX platform. A controller API endpoint consists of the IP address and port for the controller; if you omit the port, port 443 is used. If multiple API endpoints are specified, it is up to the user to ensure that all these endpoints belong to the same controller cluster. The OpenStack Networking VMware NSX plug-in does not perform this check, and results might be unpredictable. When you specify multiple API endpoints, the plug-in load-balances requests on the various API endpoints.
- The UUID of the NSX Transport Zone that should be used by default when a tenant creates a network. You can get this value from the NSX Manager's Transport Zones page:
default_tz_uuid = <uuid_of_the_transport_zone>
default_l3_gw_service_uuid = <uuid_of_the_gateway_service>
- Restart
neutron-server to apply the new settings:
#service neutron-server restart
Example nsx.ini file:
[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nsx_user=admin
nsx_password=changeme
nsx_controllers=10.127.0.100,10.127.0.200:8888
To debug nsx.ini configuration issues, run this command from the host that runs neutron-server:
#neutron-check-nsx-config <path/to/nsx.ini>
This command tests whether neutron-server can log into all of the NSX Controllers and the SQL server, and whether all UUID values are correct.
6.1.5.1.4.1. Load Balancer-as-a-Service and Firewall-as-a-Service
- The NSX LBaaS and FWaaS plug-ins require the routed-insertion extension, which adds the
router_idattribute to the VIP (Virtual IP address) and firewall resources and binds these services to a logical router. - The community reference implementation of LBaaS only supports a one-arm model, which restricts the VIP to be on the same subnet as the back-end servers. The NSX LBaaS plug-in only supports a two-arm model between north-south traffic, which means that you can create the VIP on only the external (physical) network.
- The community reference implementation of FWaaS applies firewall rules to all logical routers in a tenant, while the NSX FWaaS plug-in applies firewall rules only to one logical router according to the
router_idof the firewall entity.
Procedure 6.6. To configure Load Balancer-as-a-Service and Firewall-as-a-Service with NSX:
- Edit
/etc/neutron/neutron.conf file:
core_plugin = neutron.plugins.vmware.plugin.NsxServicePlugin
# Note: comment out service_plugins. LBaaS & FWaaS is supported by core_plugin NsxServicePlugin
# service_plugins =
- Edit
/etc/neutron/plugins/vmware/nsx.ini file:
In addition to the original NSX configuration, the default_l3_gw_service_uuid is required for the NSX Advanced plug-in, and you must add a vcns section:
[DEFAULT]
nsx_password = admin
nsx_user = admin
nsx_controllers = 10.37.1.137:443
default_l3_gw_service_uuid = aae63e9b-2e4e-4efe-81a1-92cf32e308bf
default_tz_uuid = 2702f27a-869a-49d1-8781-09331a0f6b9e

[vcns]
# VSM management URL
manager_uri = https://10.24.106.219

# VSM admin user name
user = admin

# VSM admin password
password = default

# UUID of a logical switch on NSX which has physical network connectivity (currently using bridge transport type)
external_network = f2c023cf-76e2-4625-869b-d0dabcfcc638

# ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used
# deployment_container_id =

# task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec.
# task_status_check_interval =
6.1.5.1.5. Configure PLUMgrid plug-in
Procedure 6.7. To use the PLUMgrid plug-in with OpenStack Networking
- Edit
/etc/neutron/neutron.conf and set:
core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2
- Edit /etc/neutron/plugins/plumgrid/plumgrid.ini under the [PLUMgridDirector] section, and specify the IP address, port, admin user name, and password of the PLUMgrid Director:
[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"
- Restart neutron-server to apply the new settings:
#service neutron-server restart
6.1.5.1.6. Configure Ryu plug-in
Procedure 6.8. To use the Ryu plug-in with OpenStack Networking
- Install the Ryu plug-in, as follows:
#yum install openstack-neutron-ryu
- Edit /etc/neutron/neutron.conf and set:
core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2
- Edit the /etc/neutron/plugins/ryu/ryu.ini file and update these options in the [ovs] section for the ryu-neutron-agent:
- openflow_rest_api. Defines where Ryu is listening for REST API. Substitute ip-address and port-no based on your Ryu setup.
- ovsdb_interface. Enables Ryu to access the ovsdb-server. Substitute eth0 based on your setup. The IP address is derived from the interface name. If you want to change this value irrespective of the interface name, you can specify ovsdb_ip. If you use a non-default port for ovsdb-server, you can specify ovsdb_port.
- tunnel_interface. Defines which IP address is used for tunneling. If you do not use tunneling, this value is ignored. The IP address is derived from the network interface name.
You can use the same configuration file for many compute nodes by using a network interface name with a different IP address:
openflow_rest_api = <ip-address>:<port-no>
ovsdb_interface = <eth0>
tunnel_interface = <eth0>
- Restart neutron-server to apply the new settings:
#service neutron-server restart
6.1.6. Configure neutron agents
Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent, neutron-metering-agent, or neutron-lbaas-agent.
These agents communicate with the neutron-server process running elsewhere in the data center.
6.1.6.1. Configure data-forwarding nodes
6.1.6.1.1. Node set up: OVS plug-in
If you use the Open vSwitch plug-in, you must install Open vSwitch and the neutron-plugin-openvswitch-agent agent on each data-forwarding node:
Procedure 6.9. To set up each node for the OVS plug-in
- Install the OVS agent package. This action also installs the Open vSwitch software as a dependency:
#yum install openstack-neutron-openvswitch
- On each node that runs the agent, complete these steps:
- Replicate the ovs_neutron_plugin.ini file that you created on the node.
- If you use tunneling, update the ovs_neutron_plugin.ini file for the node with the IP address that is configured on the data network for the node by using the local_ip value.
- Restart Open vSwitch to properly load the kernel module:
#service openvswitch restart
- Restart the agent:
#service neutron-plugin-openvswitch-agent restart
- All nodes that run neutron-plugin-openvswitch-agent must have an OVS br-int bridge. To create the bridge, run:
#ovs-vsctl add-br br-int
6.1.6.1.2. Node set up: NSX plug-in
Procedure 6.10. To set up each node for the NSX plug-in
- Ensure that each data-forwarding node has an IP address on the management network, and an IP address on the "data network" that is used for tunneling data traffic. For full details on configuring your forwarding node, see the NSX Administrator Guide.
- Use the NSX Administrator Guide to add the node as a Hypervisor by using the NSX Manager GUI. Even if your forwarding node has no VMs and is only used for services agents like
neutron-dhcp-agent or neutron-lbaas-agent, it should still be added to NSX as a Hypervisor.
- After following the NSX Administrator Guide, use the page for this Hypervisor in the NSX Manager GUI to confirm that the node is properly connected to the NSX Controller Cluster and that the NSX Controller Cluster can see the br-int integration bridge.
6.1.6.1.3. Node set up: Ryu plug-in
Procedure 6.11. To set up each node for the Ryu plug-in
- Install Ryu:
#pip install ryu
- Install the Ryu agent and Open vSwitch packages:
#yum install openstack-neutron-ryu openvswitch python-openvswitch
- Replicate the ovs_ryu_plugin.ini and neutron.conf files created in the above step on all nodes running neutron-ryu-agent.
- Restart Open vSwitch to properly load the kernel module:
#service openvswitch restart
- Restart the agent:
#service neutron-ryu-agent restart
- All nodes running neutron-ryu-agent also require that an OVS bridge named "br-int" exists on each node. To create the bridge, run:
#ovs-vsctl add-br br-int
6.1.6.2. Configure DHCP agent
Procedure 6.12. To install and configure the DHCP agent
- You must configure the host running the
neutron-dhcp-agent as a "data forwarding node" according to the requirements for your plug-in (see Section 6.1.6, “Configure neutron agents”).
- Install the DHCP agent:
#yum install openstack-neutron
- Finally, update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in in use (see the sub-sections).
6.1.6.2.1. DHCP agent setup: OVS plug-in
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the OVS plug-in:
[DEFAULT]
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
6.1.6.2.2. DHCP agent setup: NSX plug-in
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the NSX plug-in:
[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
6.1.6.2.3. DHCP agent setup: Ryu plug-in
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the Ryu plug-in:
[DEFAULT]
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
6.1.6.3. Configure L3 agent
- NSX plug-in
- Big Switch/Floodlight plug-in, which supports both the open source Floodlight controller and the proprietary Big Switch controller.
Note: Only the proprietary Big Switch controller implements L3 functionality. When using Floodlight as your OpenFlow controller, L3 functionality is not available.
- PLUMgrid plug-in
Do not configure or use neutron-l3-agent if you use one of these plug-ins.
Procedure 6.13. To install the L3 agent for all other plug-ins
- Install the
neutron-l3-agent binary on the network node:
#yum install openstack-neutron
- To uplink the node that runs neutron-l3-agent to the external network, create a bridge named "br-ex" and attach the NIC for the external network to this bridge.
For example, with Open vSwitch and NIC eth1 connected to the external network, run:
#ovs-vsctl add-br br-ex
#ovs-vsctl add-port br-ex eth1
Do not manually configure an IP address on the NIC connected to the external network for the node running neutron-l3-agent. Rather, you must have a range of IP addresses from the external network that can be used by OpenStack Networking for routers that uplink to the external network. This range must be large enough to have an IP address for each router in the deployment, as well as each floating IP.
- The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT. In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using Linux network namespaces to provide isolated forwarding contexts. As a result, the IP addresses of routers are not visible simply by running the ip addr list or ifconfig command on the node. Similarly, you cannot directly ping fixed IPs.
To do either of these things, you must run the command within a particular network namespace for the router. The namespace has the name "qrouter-<UUID of the router>". These example commands run in the router namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:
#ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
#ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip>
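If you do not know the router's UUID, a quick way to find the right namespace (a sketch using standard iproute2 commands) is to list the qrouter namespaces on the network node:
#ip netns | grep qrouter
Then run commands inside the namespace you want, substituting the qrouter-<UUID> name printed above:
#ip netns exec qrouter-<UUID of the router> ip addr list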
6.1.6.4. Configure metering agent
The metering agent resides beside neutron-l3-agent.
Procedure 6.14. To install the metering agent and configure the node
- Install the agent by running:
#yum install openstack-neutron-metering-agent
Package name prior to Icehouse: In releases of neutron prior to Icehouse, this package was named neutron-plugin-metering-agent.
- An OVS-based plug-in such as OVS, NSX, Ryu, NEC, BigSwitch/Floodlight:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
- A plug-in that uses LinuxBridge:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
- To use the reference implementation, you must set:
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
- Set this parameter in the
neutron.conf file on the host that runs neutron-server:
service_plugins = neutron.services.metering.metering_plugin.MeteringPlugin
6.1.6.5. Configure Load-Balancing-as-a-Service (LBaaS)
- Install the agent:
#yum install openstack-neutron
- Enable the HAProxy plug-in using the service_provider parameter in the /etc/neutron/neutron.conf file:
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
- Enable the load balancer plugin using service_plugin in the /etc/neutron/neutron.conf file:
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin
- Enable the HAProxy load balancer in the /etc/neutron/lbaas_agent.ini file:
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
- Select the required driver in the /etc/neutron/lbaas_agent.ini file:
Enable the Open vSwitch LBaaS driver:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Or enable the Linux Bridge LBaaS driver:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
Apply the new settings by restarting the neutron-server and neutron-lbaas-agent services.
Upgrade from Havana to Icehouse: There were changes in LBaaS server-agent communications in Icehouse, so during the Havana to Icehouse transition make sure to upgrade both the server and agent sides before actually using the load balancing service.
- Enable Load Balancing in the Dashboard user interface: change the enable_lb option to True in the /etc/openstack-dashboard/local_settings file:
OPENSTACK_NEUTRON_NETWORK = {'enable_lb': True,
Apply the new settings by restarting the httpd service. You can now view the Load Balancer management options in the dashboard.
6.2. Networking architecture
6.2.1. Overview
The OpenStack Networking service uses the neutron-server daemon to expose the Networking API and enable administration of the configured Networking plug-in. Typically, the plug-in requires access to a database for persistent storage (also similar to other OpenStack services).
Table 6.5. Networking agents
| Agent | Description |
|---|---|
| plug-in agent (neutron-*-agent) | Runs on each hypervisor to perform local vSwitch configuration. The agent that runs depends on the plug-in that you use. Certain plug-ins do not require an agent. |
| dhcp agent (neutron-dhcp-agent) | Provides DHCP services to tenant networks. Required by certain plug-ins. |
| l3 agent (neutron-l3-agent) | Provides L3/NAT forwarding to provide external network access for VMs on tenant networks. Required by certain plug-ins. |
| metering agent (neutron-metering-agent) | Provides L3 traffic metering for tenant networks. |
- Networking relies on the Identity service (keystone) for the authentication and authorization of all API requests.
- Compute (nova) interacts with Networking through calls to its standard API. As part of creating a VM, the
nova-compute service communicates with the Networking API to plug each virtual NIC on the VM into a particular network.
6.2.2. Place services on physical hosts
If you expect VMs to send significant traffic to or from the Internet, a dedicated network gateway host helps avoid CPU contention between the neutron-l3-agent and other OpenStack services that forward packets.
6.2.3. Network connectivity for physical hosts

Table 6.6. General distinct physical data center networks
| Network | Description |
|---|---|
| Management network | Provides internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center. |
| Data network | Provides VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the Networking plug-in that is used. |
| External network | Provides VMs with Internet access in some deployment scenarios. Anyone on the Internet can reach IP addresses on this network. |
| API network | Exposes all OpenStack APIs, including the Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network might be the same as the external network, because it is possible to create an external-network subnet that has allocated IP ranges that use less than the full range of IP addresses in an IP block. |
6.2.4. Tenant and provider networks
Figure 6.2. Tenant and provider networks

- Flat
- All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation takes place.
- Local
- Instances reside on the local compute host and are effectively isolated from any external networks.
- VLAN
- Networking allows users to create multiple provider or tenant networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers and other networking infrastructure on the same layer 2 VLAN.
- VXLAN and GRE
- VXLAN and GRE use network overlays to support private communication between instances. A Networking router is required to enable traffic to traverse outside of the GRE or VXLAN tenant network. A router is also required to connect directly-connected tenant networks with external networks, including the Internet; the router provides the ability to connect to instances directly from an external network using floating IP addresses.
6.2.5. VMware NSX integration
Note the placement of the VMware NSX plug-in and the neutron-server service on the network node. The NSX controller features centrally, with a green line to the network node to indicate the management relationship:
Figure 6.3. VMware NSX overview

6.3. Configure Identity Service for Networking
Procedure 6.15. To configure the Identity Service for use with Networking
Create the
get_id() function
The get_id() function stores the ID of created objects, and removes the need to copy and paste object IDs in later steps:
- Add the following function to your .bashrc file:
function get_id () {
    echo `"$@" | awk '/ id / { print $4 }'`
}
- Source the .bashrc file:
$source .bashrc
Create the Networking service entry
Networking must be available in the Compute service catalog. Create the service:
$NEUTRON_SERVICE_ID=$(get_id keystone service-create --name neutron --type network --description 'OpenStack Networking Service')
Create the Networking service endpoint entry
The way that you create a Networking endpoint entry depends on whether you are using the SQL or the template catalog driver:- If you use the SQL driver, run the following command with the specified region (
$REGION), IP address of the Networking server ($IP), and service ID ($NEUTRON_SERVICE_ID, obtained in the previous step):
$keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID \
  --publicurl "http://$IP:9696/" --adminurl "http://$IP:9696/" --internalurl "http://$IP:9696/"
For example:
$keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
  --publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/"
- If you are using the template driver, specify the following parameters in your Compute catalog template file (default_catalog.templates), along with the region ($REGION) and IP address of the Networking server ($IP):
catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
catalog.$REGION.network.name = Network Service
For example:
catalog.$Region.network.publicURL = http://10.211.55.17:9696
catalog.$Region.network.adminURL = http://10.211.55.17:9696
catalog.$Region.network.internalURL = http://10.211.55.17:9696
catalog.$Region.network.name = Network Service
Create the Networking service user
You must provide admin user credentials that Compute and some internal Networking components can use to access the Networking API. Create a special service tenant and a neutron user within this tenant, and assign an admin role to this user.
- Create the
admin role:
$ADMIN_ROLE=$(get_id keystone role-create --name=admin)
- Create the service tenant:
$SERVICE_TENANT=$(get_id keystone tenant-create --name service --description "Services Tenant")
- Create the neutron user within the service tenant:
$NEUTRON_USER=$(get_id keystone user-create --name=neutron --pass="$NEUTRON_PASSWORD" --email=demo@example.com --tenant-id service)
- Establish the relationship among the tenant, user, and role:
$keystone user-role-add --user_id $NEUTRON_USER --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
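You can optionally verify the registrations before continuing. A brief verification sketch, assuming the names used above; flag spellings vary slightly across keystone client versions:
# Confirm the service and endpoint entries exist:
$ keystone service-list
$ keystone endpoint-list
# Confirm that the neutron user holds the admin role in the service tenant:
$ keystone user-role-list --user neutron --tenant service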
6.3.1. Compute
When you use Networking, do not run the Compute nova-network service (as you would in traditional Compute deployments). Instead, Compute delegates most network-related decisions to Networking. Compute proxies tenant-facing API calls for managing security groups and floating IPs to the Networking APIs. However, operator-facing tools, such as nova-manage, are not proxied and should not be used.
Uninstall nova-network and reboot any physical nodes that have been running nova-network before using them to run Networking. Inadvertently running the nova-network process while using Networking can cause problems, as can stale iptables rules pushed down by a previously running nova-network.
To use Networking with Compute (rather than the traditional nova-network mechanism), you must adjust settings in the nova.conf configuration file.
6.3.2. Networking API and credential configuration
The nova-* services communicate with Networking using the standard API. For this to happen, you must configure the following items in the nova.conf file (used by each nova-compute and nova-api instance).
Table 6.7. nova.conf API and credential settings
| Item | Configuration |
|---|---|
| network_api_class | Modify from the default to nova.network.neutronv2.api.API, to indicate that Networking should be used rather than the traditional nova-network networking model. |
| neutron_url | Update to the hostname/IP and port of the neutron-server instance for this deployment. |
| neutron_auth_strategy | Keep the default keystone value for all production deployments. |
| neutron_admin_tenant_name | Update to the name of the service tenant created in the above section on Identity configuration. |
| neutron_admin_username | Update to the name of the user created in the above section on Identity configuration. |
| neutron_admin_password | Update to the password of the user created in the above section on Identity configuration. |
| neutron_admin_auth_url | Update to the Identity server IP and port. This is the Identity (keystone) admin API server IP and port value, and not the Identity service API IP and port. |
6.3.3. Configure security groups
To proxy security group calls to Networking, set the following configuration options in nova.conf:
Table 6.8. nova.conf security group settings
| Item | Configuration |
|---|---|
| firewall_driver | Update to nova.virt.firewall.NoopFirewallDriver, so that nova-compute does not perform iptables-based filtering itself. |
| security_group_api | Update to neutron, so that all security group requests are proxied to the Network Service. |
6.3.4. Configure metadata
The Networking service allows virtual machines to access metadata hosted by nova-api, even when the requests are made from isolated networks, or from multiple networks that use overlapping IP addresses.
To enable proxying of metadata requests, you must update the following fields in nova.conf.
Table 6.9. nova.conf metadata settings
| Item | Configuration |
|---|---|
| service_neutron_metadata_proxy | Update to true, otherwise nova-api will not properly respond to requests from the neutron-metadata-agent. |
| neutron_metadata_proxy_shared_secret | Update to a string "password" value. You must also configure the same value in the metadata_agent.ini file, to authenticate requests made for metadata. The default value of an empty string in both files will allow metadata to function, but will not be secure if any non-trusted entities have access to the metadata APIs exposed by nova-api. |
As a precaution, even when using neutron_metadata_proxy_shared_secret, it is recommended that you do not expose metadata using the same nova-api instances that are used for tenants. Instead, you should run a dedicated set of nova-api instances for metadata that are available only on your management network. Whether a given nova-api instance exposes metadata APIs is determined by the value of enabled_apis in its nova.conf.
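As a minimal sketch of the matching shared-secret settings, with a placeholder value (SHARED_SECRET is illustrative, not a recommended value):
# nova.conf on the nova-api hosts:
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=SHARED_SECRET
# metadata_agent.ini on the host running neutron-metadata-agent:
metadata_proxy_shared_secret=SHARED_SECRET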
6.3.5. Example nova.conf (for nova-compute and nova-api)
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.1.2:35357/v2.0
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=foo
6.4. Networking scenarios
6.4.1. Open vSwitch
6.4.1.1. Configuration
This example refers to the physical network associated with the external network as physnet1, and the physical network associated with the data network as physnet2, which leads to the following configuration options in ovs_neutron_plugin.ini:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110
integration_bridge = br-int
bridge_mappings = physnet2:br-eth1
6.4.1.2. Scenario 1: one tenant, two networks, one router
This scenario has two private networks (net01 and net02), each with one subnet (net01_subnet01: 192.168.101.0/24; net02_subnet01: 192.168.102.0/24). Both private networks are attached to a router that connects them to the public network (10.64.201.0/24).

Under the service tenant, create the shared router, define the public network, and set it as the default gateway of the router:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron router-create router01
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
$ neutron router-gateway-set router01 public01
Under the demo user tenant, create the private network net01 and corresponding subnet, and connect it to the router01 router. Configure it to use VLAN ID 101 on the physical switch.
$ tenant=$(keystone tenant-list | awk '/demo/ {print $2}')
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, create net02, using VLAN ID 102 on the physical switch:
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router01 net02_subnet01
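Before configuring the hosts, you can optionally verify the topology created above. A brief sketch, assuming the names used in this scenario:
$ neutron net-list
$ neutron subnet-list
# router01 should show one interface per attached subnet:
$ neutron router-port-list router01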
6.4.1.2.1. Scenario 1: Compute host config

Types of network devices
For an ethernet frame to travel from eth0 of virtual machine vm01 to the physical network, it must pass through nine devices inside the host: TAP vnet0, Linux bridge qbrnnn, veth pair (qvbnnn, qvonnn), Open vSwitch bridge br-int, veth pair (int-br-eth1, phy-br-eth1), and, finally, the physical network interface card eth1.
A TAP device, such as vnet0, is how hypervisors such as KVM implement a virtual network interface card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest operating system.
Integration bridge
The br-int Open vSwitch bridge is the integration bridge: all guests running on the compute host connect to this bridge. Networking implements isolation across these guests by configuring the br-int ports.
Physical connectivity bridge
The br-eth1 bridge provides connectivity to the physical network interface card, eth1. It connects to the integration bridge by a veth pair: (int-br-eth1, phy-br-eth1).
VLAN translation
The Open vSwitch agent configures flow rules on br-int and br-eth1 to do VLAN translation. When br-eth1 receives a frame marked with VLAN ID 1 on the port associated with phy-br-eth1, it modifies the VLAN ID in the frame to 101. Similarly, when br-int receives a frame marked with VLAN ID 101 on the port associated with int-br-eth1, it modifies the VLAN ID in the frame to 1.
Security groups: iptables and Linux bridges
Ideally, the TAP device vnet0 would be connected directly to the integration bridge, br-int. Unfortunately, this isn't possible because of how OpenStack security groups are currently implemented. OpenStack uses iptables rules on TAP devices such as vnet0 to implement security groups, and Open vSwitch is not compatible with iptables rules that are applied directly on TAP devices that are connected to an Open vSwitch port.
Therefore, instead of connecting vnet0 to an Open vSwitch bridge, it is connected to a Linux bridge, qbrXXX. This bridge is connected to the integration bridge, br-int, through the (qvbXXX, qvoXXX) veth pair.
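To see these devices on a running compute host, you can inspect the bridges directly. An optional inspection sketch; the qbr/qvb/qvo suffixes are generated per port, so the exact names differ per deployment:
# Linux bridges (qbrXXX) and their TAP/veth members:
$ brctl show
# Open vSwitch bridges (br-int, br-eth1) and their ports:
$ ovs-vsctl show
# All network devices on the host:
$ ip link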
6.4.1.2.2. Scenario 1: Network host config
On the network host, assume that eth0 is connected to the external network and eth1 is connected to the data network, which leads to the following configuration in the ovs_neutron_plugin.ini file:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:101:110
integration_bridge = br-int
bridge_mappings = physnet1:br-ex,physnet2:br-eth1
The following figure shows the network devices on the network host:

As on the compute host, there is an Open vSwitch integration bridge (br-int) and an Open vSwitch bridge connected to the data network (br-eth1); the two are connected by a veth pair, and the neutron-openvswitch-plugin-agent configures the ports on both switches to do VLAN translation.
An additional Open vSwitch bridge, br-ex, connects to the physical interface that is connected to the external network. In this example, that physical interface is eth0.
Although the integration bridge and the external bridge are connected by a veth pair (int-br-ex, phy-br-ex), this example uses layer 3 connectivity to route packets from the internal networks to the public network: no packets traverse that veth pair in this example.
Open vSwitch internal ports
The br-int bridge has four internal ports: tapXXX, qr-YYY, qr-ZZZ, and tapWWW. Each internal port has a separate IP address associated with it. An internal port, qg-VVV, is on the br-ex bridge.
DHCP agent
The tapXXX interface is on net01_subnet01, and the tapWWW interface is on net02_subnet01.
L3 agent (routing)
The qr-YYY interface is on net01_subnet01 and has the IP address 192.168.101.1/24. The qr-ZZZ interface is on net02_subnet01 and has the IP address 192.168.102.1/24. The qg-VVV interface has the IP address 10.64.201.254/24. Because each of these interfaces is visible to the network host operating system, the network host routes the packets across the interfaces, as long as an administrator has enabled IP forwarding.
Overlapping subnets and network namespaces
For example, if the management network is implemented on eth2 and also happens to be on the 192.168.101.0/24 subnet, routing problems occur because the host cannot determine whether to send a packet on this subnet to qr-YYY or eth2. If end users are permitted to create their own logical networks and subnets, you must design the system so that such collisions do not occur.

- qdhcp-aaa: contains the tapXXX interface and the dnsmasq process that listens on that interface to provide DHCP services for net01_subnet01. This allows overlapping IPs between net01_subnet01 and any other subnets on the network host.
- qrouter-bbbb: contains the qr-YYY, qr-ZZZ, and qg-VVV interfaces, and the corresponding routes. This namespace implements router01 in our example.
- qdhcp-ccc: contains the tapWWW interface and the dnsmasq process that listens on that interface, to provide DHCP services for net02_subnet01. This allows overlapping IPs between net02_subnet01 and any other subnets on the network host.
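You can inspect these namespaces directly on the network host. A brief sketch; the aaa/bbbb/ccc identifiers are placeholders for the UUID-based names that ip netns list actually reports:
# List the namespaces created by the DHCP and L3 agents:
$ ip netns list
# Show the interfaces and routes inside the router namespace:
$ ip netns exec qrouter-bbbb ip addr
$ ip netns exec qrouter-bbbb ip route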
6.4.1.3. Scenario 2: two tenants, two networks, two routers

Under the service tenant, define the public network:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
Under the tenantA user tenant, create the tenant router and set its gateway for the public network:
$ tenant=$(keystone tenant-list | awk '/tenantA/ {print $2}')
$ neutron router-create --tenant-id $tenant router01
$ neutron router-gateway-set router01 public01
Then, define the private network net01 using VLAN ID 101 on the physical switch, along with its subnet, and connect it to the router:
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, for tenantB, create a router and another network, using VLAN ID 102 on the physical switch:
$ tenant=$(keystone tenant-list | awk '/tenantB/ {print $2}')
$ neutron router-create --tenant-id $tenant router02
$ neutron router-gateway-set router02 public01
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router02 net02_subnet01
6.4.1.3.1. Scenario 2: Compute host config

6.4.1.3.2. Scenario 2: Network host config


In this scenario, there are four network namespaces (qdhcp-aaa, qrouter-bbbb, qrouter-cccc, and qdhcp-dddd) instead of three. Because there is no connectivity between the two networks, each router is implemented by a separate namespace.
6.4.1.4. Configure Open vSwitch tunneling
Figure 6.4. Example VXLAN tunnel

Procedure 6.16. Example tunnel configuration
- Create a virtual bridge named OVS-BR0 on each participating host:
ovs-vsctl add-br OVS-BR0
- Create a tunnel to link the OVS-BR0 virtual bridges. Run the ovs-vsctl command on HOST1 to create the tunnel and link it to the bridge on HOST2.
GRE tunnel command:
ovs-vsctl add-port OVS-BR0 gre1 -- set Interface gre1 type=gre options:remote_ip=192.168.1.11
VXLAN tunnel command:
ovs-vsctl add-port OVS-BR0 vxlan1 -- set Interface vxlan1 type=vxlan options:remote_ip=192.168.1.11
- Run the ovs-vsctl command on HOST2 to create the tunnel and link it to the bridge on HOST1.
GRE tunnel command:
ovs-vsctl add-port OVS-BR0 gre1 -- set Interface gre1 type=gre options:remote_ip=192.168.1.10
VXLAN tunnel command:
ovs-vsctl add-port OVS-BR0 vxlan1 -- set Interface vxlan1 type=vxlan options:remote_ip=192.168.1.10
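An optional check, not part of the procedure above: inspect the bridge on either host to confirm that the gre1 or vxlan1 port was added under OVS-BR0:
ovs-vsctl show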
6.4.2. Linux Bridge
6.4.2.1. Configuration
This example refers to the physical network associated with the external network as physnet1, and the physical network associated with the data network as physnet2, which leads to the following configuration options in linuxbridge_conf.ini:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet2:100:110
[linux_bridge]
physical_interface_mappings = physnet2:eth1
6.4.2.2. Scenario 1: one tenant, two networks, one router
This scenario has two private networks (net01 and net02), each with one subnet (net01_subnet01: 192.168.101.0/24; net02_subnet01: 192.168.102.0/24). Both private networks are attached to a router that connects them to the public network (10.64.201.0/24).

Under the service tenant, create the shared router, define the public network, and set it as the default gateway of the router:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron router-create router01
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
$ neutron router-gateway-set router01 public01
Under the demo user tenant, create the private network net01 and corresponding subnet, and connect it to the router01 router. Configure it to use VLAN ID 101 on the physical switch.
$ tenant=$(keystone tenant-list | awk '/demo/ {print $2}')
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, create net02, using VLAN ID 102 on the physical switch:
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router01 net02_subnet01
6.4.2.2.1. Scenario 1: Compute host config

Types of network devices
For an ethernet frame to travel from eth0 of virtual machine vm01 to the physical network, it must pass through four devices inside the host: TAP vnet0, Linux bridge brqXXX, VLAN interface eth1.101, and, finally, the physical network interface card eth1.
A TAP device, such as vnet0, is how hypervisors such as KVM implement a virtual network interface card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest operating system.
The VLAN device eth1.101 is associated with VLAN ID 101 and is attached to interface eth1. Packets received from the outside by eth1 with VLAN tag 101 will be passed to device eth1.101, which will then strip the tag. In the other direction, any ethernet frame sent directly to eth1.101 will have VLAN tag 101 added and will be forwarded to eth1 for sending out to the network.
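For illustration only, the following sketch shows how such a VLAN subinterface can be created and inspected by hand; in a real deployment the Linux Bridge agent creates these devices automatically:
# Create a VLAN subinterface of eth1 for VLAN ID 101 (normally done by the agent):
ip link add link eth1 name eth1.101 type vlan id 101
ip link set eth1.101 up
# Show the bridges and their member interfaces:
brctl show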
6.4.2.2.2. Scenario 1: Network host config


6.4.2.3. Scenario 2: two tenants, two networks, two routers

Under the service tenant, define the public network:
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ neutron net-create --tenant-id $tenant public01 \
  --provider:network_type flat \
  --provider:physical_network physnet1 \
  --router:external=True
$ neutron subnet-create --tenant-id $tenant --name public01_subnet01 \
  --gateway 10.64.201.254 public01 10.64.201.0/24 --disable-dhcp
Under the tenantA user tenant, create the tenant router and set its gateway for the public network:
$ tenant=$(keystone tenant-list | awk '/tenantA/ {print $2}')
$ neutron router-create --tenant-id $tenant router01
$ neutron router-gateway-set router01 public01
Then, define the private network net01 using VLAN ID 101 on the physical switch, along with its subnet, and connect it to the router:
$ neutron net-create --tenant-id $tenant net01 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 101
$ neutron subnet-create --tenant-id $tenant --name net01_subnet01 net01 192.168.101.0/24
$ neutron router-interface-add router01 net01_subnet01
Similarly, for tenantB, create a router and another network, using VLAN ID 102 on the physical switch:
$ tenant=$(keystone tenant-list | awk '/tenantB/ {print $2}')
$ neutron router-create --tenant-id $tenant router02
$ neutron router-gateway-set router02 public01
$ neutron net-create --tenant-id $tenant net02 \
  --provider:network_type vlan \
  --provider:physical_network physnet2 \
  --provider:segmentation_id 102
$ neutron subnet-create --tenant-id $tenant --name net02_subnet01 net02 192.168.102.0/24
$ neutron router-interface-add router02 net02_subnet01
6.4.2.3.1. Scenario 2: Compute host config

6.4.2.3.2. Scenario 2: Network host config


In this scenario, there are four network namespaces (qdhcp-aaa, qrouter-bbbb, qrouter-cccc, and qdhcp-dddd) instead of three. Each router is implemented by a separate namespace, since there is no connectivity between the two networks.
6.4.3. ML2
6.4.3.1. ML2 with L2 population mechanism driver


Enable the l2population mechanism driver in ml2_conf.ini:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population
6.4.3.2. Scenario 1: L2 population with Open vSwitch agent
Configure the local_ip and tunnel_types parameters, and enable l2_population, in the ml2_conf.ini file:
[ovs]
local_ip = 192.168.1.10
[agent]
tunnel_types = gre,vxlan
l2_population = True
6.4.3.3. Scenario 2: L2 population with Linux Bridge agent
Enable VXLAN and l2_population for the Linux Bridge agent in ml2_conf.ini:
[vxlan]
enable_vxlan = True
local_ip = 192.168.1.10
l2_population = True
6.4.3.4. Enable security group API
The firewall_driver value in the ml2_conf.ini file does not matter on the server, but firewall_driver must be set to a non-default value in the ML2 configuration to enable the securitygroup extension. To enable the securitygroup API, edit the ml2_conf.ini file:
[securitygroup]
firewall_driver = dummy
Each L2 agent configuration file (such as ovs_neutron_plugin.ini or linuxbridge_conf.ini) should contain the appropriate firewall_driver value for that agent. To disable the securitygroup API, edit the ml2_conf.ini file:
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
Also, each L2 agent configuration file (such as ovs_neutron_plugin.ini or linuxbridge_conf.ini) should contain this value for the firewall_driver parameter for that agent.
6.5. Advanced configuration options
After you install from packages, $NEUTRON_CONF_DIR is /etc/neutron.
6.5.1. OpenStack Networking server with plug-in
To start the Networking server, run:
neutron-server --config-file <neutron config> --config-file <plugin config>
The plug-in that is used is specified by the core_plugin configuration parameter. In some cases a plug-in might have an agent that performs the actual networking.
$> mysql -u root
mysql> update mysql.user set password = password('iamroot') where user = 'root';
mysql> delete from mysql.user where user = '';
Create a database and user account specifically for the plug-in:
mysql> create database <database-name>;
mysql> create user '<user-name>'@'localhost' identified by '<user-name>';
mysql> create user '<user-name>'@'%' identified by '<user-name>';
mysql> grant all on <database-name>.* to '<user-name>'@'%';
neutron-plugin-agent --config-file <neutron config> --config-file <plugin config>
- Ensure that the core plug-in is updated.
- Ensure that the database connection is correctly set.
Table 6.10. Settings
| Parameter | Value |
|---|---|
| Open vSwitch | |
| core_plugin ($NEUTRON_CONF_DIR/neutron.conf) | neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 |
| connection (in the plug-in configuration file, section [database]) | mysql://<username>:<password>@localhost/ovs_neutron?charset=utf8 |
| Plug-in Configuration File | $NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini |
| Agent | neutron-openvswitch-agent |
| Linux Bridge | |
| core_plugin ($NEUTRON_CONF_DIR/neutron.conf) | neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2 |
| connection (in the plug-in configuration file, section [database]) | mysql://<username>:<password>@localhost/neutron_linux_bridge?charset=utf8 |
| Plug-in Configuration File | $NEUTRON_CONF_DIR/plugins/linuxbridge/linuxbridge_conf.ini |
| Agent | neutron-linuxbridge-agent |
6.5.2. DHCP agent
neutron-dhcp-agent --config-file <neutron config> --config-file <dhcp config>
Table 6.11. Basic settings
| Parameter | Value |
|---|---|
| Open vSwitch | |
| interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini) | neutron.agent.linux.interface.OVSInterfaceDriver |
| Linux Bridge | |
| interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini) | neutron.agent.linux.interface.BridgeInterfaceDriver |
6.5.2.1. Namespace
use_namespaces = False
6.5.3. L3 Agent
neutron-l3-agent --config-file <neutron config> --config-file <l3 config>
Table 6.12. Basic settings
| Parameter | Value |
|---|---|
| Open vSwitch | |
| interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini) | neutron.agent.linux.interface.OVSInterfaceDriver |
| external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini) | br-ex |
| Linux Bridge | |
| interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini) | neutron.agent.linux.interface.BridgeInterfaceDriver |
| external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini) | This field must be empty (or the bridge name for the external network). |
- OpenStack Identity authentication:
auth_url="$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v2.0"
For example: http://10.56.51.210:5000/v2.0
- Admin user details:
admin_tenant_name $SERVICE_TENANT_NAME
admin_user $Q_ADMIN_USERNAME
admin_password $SERVICE_PASSWORD
6.5.3.1. Namespace
use_namespaces = False
# If use_namespaces is set to False then the agent can only configure one router.
# This is done by setting the specific router_id.
router_id = 1064ad16-36b7-4c2f-86f0-daa2bcbd6b2a
$ neutron router-create myrouter1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 338d42d7-b22e-42c5-9df6-f3674768fe75 |
| name                  | myrouter1                            |
| status                | ACTIVE                               |
| tenant_id             | 0c236f65baa04e6f9b4236b996555d56     |
+-----------------------+--------------------------------------+
6.5.3.2. Multiple floating IP pools
handle_internal_only_routers = True
gateway_external_network_id = 2118b11c-011e-4fa5-a6f1-2ca34d372c35
external_network_bridge = br-ex
python /opt/stack/neutron/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini
handle_internal_only_routers = False
gateway_external_network_id = e828e54c-850a-4e74-80a8-8b79c6a285d8
external_network_bridge = br-ex-2
6.5.4. L3 Metering Agent
neutron-metering-agent --config-file <neutron config> --config-file <l3 metering config>
Table 6.13. Basic settings
| Parameter | Value |
|---|---|
| Open vSwitch | |
| interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini) | neutron.agent.linux.interface.OVSInterfaceDriver |
| Linux Bridge | |
| interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini) | neutron.agent.linux.interface.BridgeInterfaceDriver |
6.5.4.1. Namespace
If the Linux installation does not support network namespaces, you must disable network namespaces in the L3 metering configuration file (the default value of use_namespaces is True):
use_namespaces = False
6.5.4.2. L3 metering driver
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
6.5.4.3. L3 metering service driver
To enable L3 metering, set the following option in the neutron.conf file on the host that runs neutron-server:
service_plugins = neutron.services.metering.metering_plugin.MeteringPlugin
6.5.5. Limitations
- No equivalent for nova-network --multi_host flag: Nova-network has a model where the L3, NAT, and DHCP processing happen on the compute node itself, rather than a dedicated networking node. OpenStack Networking now supports running multiple l3-agent and dhcp-agents with load being split across those agents, but the tight coupling of that scheduling with the location of the VM is not supported in Icehouse. The Juno release is expected to include an exact replacement for the --multi_host flag in nova-network.
- Linux network namespaces required on nodes running neutron-l3-agent or neutron-dhcp-agent if overlapping IPs are in use: In order to support overlapping IP addresses, the OpenStack Networking DHCP and L3 agents use Linux network namespaces by default. The hosts running these processes must support network namespaces. To support network namespaces, the following are required:
- Linux kernel 2.6.24 or newer (with CONFIG_NET_NS=y in kernel configuration) and
- iproute2 utilities ('ip' command) version 3.1.0 (aka 20111117) or newer
To check whether your host supports namespaces, try running the following as root:
# ip netns add test-ns
# ip netns exec test-ns ifconfig
If you need to disable namespaces, make sure the neutron.conf used by neutron-server has the following setting:
allow_overlapping_ips=False
and that the dhcp_agent.ini and l3_agent.ini have the following setting:
use_namespaces=False
Note: If the host does not support namespaces, then the neutron-l3-agent and neutron-dhcp-agent should be run on different hosts. This is due to the fact that there is no isolation between the IP addresses created by the L3 agent and by the DHCP agent. By manipulating the routing, the user can ensure that these networks have access to one another. If you run both L3 and DHCP services on the same node, you should enable namespaces to avoid conflicts with routes:
use_namespaces=True
- No IPv6 support for L3 agent: The
neutron-l3-agent, used by many plug-ins to implement L3 forwarding, supports only IPv4 forwarding. Currently, there are no errors provided if you configure IPv6 addresses via the API. - ZeroMQ support is experimental: Some agents, including
neutron-dhcp-agent, neutron-openvswitch-agent, and neutron-linuxbridge-agent, use RPC to communicate. ZeroMQ is an available option in the configuration file, but has not been tested and should be considered experimental. In particular, issues might occur with ZeroMQ and the DHCP agent.
- MetaPlugin is experimental: This release includes a MetaPlugin that is intended to support multiple plug-ins at the same time for different API requests, based on the content of those API requests. The core team has not thoroughly reviewed or tested this functionality. Consider this functionality to be experimental until further validation is performed.
6.6. Scalable and highly available DHCP agents
$neutron ext-list -c name -c alias+-----------------+--------------------------+ | alias | name | +-----------------+--------------------------+ | agent_scheduler | Agent Schedulers | | binding | Port Binding | | quotas | Quota management support | | agent | agent | | provider | Provider Network | | router | Neutron L3 Router | | lbaas | LoadBalancing service | | extraroute | Neutron Extra Route | +-----------------+--------------------------+

Table 6.14. Hosts for demo
| Host | Description |
|---|---|
| OpenStack controller host - controlnode | Runs the Networking, Identity, and Compute services that are required to deploy VMs. The node must have at least one network interface that is connected to the Management Network. Note that nova-network should not be running because it is replaced by Neutron. |
| HostA | Runs nova-compute, the Neutron L2 agent and DHCP agent |
| HostB | Same as HostA |
6.6.1. Configuration
Procedure 6.17. controlnode: neutron server
- Neutron configuration file
/etc/neutron/neutron.conf:
[DEFAULT]
core_plugin = neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5
- Update the plug-in configuration file
/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
Procedure 6.18. HostA and Hostb: L2 agent
- Neutron configuration file
/etc/neutron/neutron.conf:
[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA
- Update the plug-in configuration file
/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
- Update the nova configuration file
/etc/nova/nova.conf:
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=servicepassword
neutron_admin_auth_url=http://controlnode:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=servicetenant
neutron_url=http://100.1.1.10:9696/
firewall_driver=nova.virt.firewall.NoopFirewallDriver
Procedure 6.19. HostA and HostB: DHCP agent
- Update the DHCP configuration file
/etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
6.6.2. Commands in agent management and scheduler extensions
export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controlnode:5000/v2.0/
Procedure 6.20. Settings
- To experiment, you need VMs and a neutron network:
$nova list+--------------------------------------+-----------+--------+---------------+ | ID | Name | Status | Networks | +--------------------------------------+-----------+--------+---------------+ | c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=10.0.1.3 | | 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=10.0.1.4 | | c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=10.0.1.5 | +--------------------------------------+-----------+--------+---------------+$neutron net-list+--------------------------------------+------+--------------------------------------+ | id | name | subnets | +--------------------------------------+------+--------------------------------------+ | 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 | +--------------------------------------+------+--------------------------------------+
Procedure 6.21. Manage agents in neutron deployment
- List all agents:
$neutron agent-list+--------------------------------------+--------------------+-------+-------+----------------+ | id | agent_type | host | alive | admin_state_up | +--------------------------------------+--------------------+-------+-------+----------------+ | 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True | | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent | HostA | :-) | True | | ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True | | f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True | +--------------------------------------+--------------------+-------+-------+----------------+The output shows information for four agents. Thealivefield shows:-)if the agent reported its state within the period defined by theagent_down_timeoption in theneutron.conffile. Otherwise thealiveisxxx. - List the DHCP agents that host a specified networkIn some deployments, one DHCP agent is not enough to hold all network data. In addition, you must have a backup for it even when the deployment is small. The same network can be assigned to more than one DHCP agent and one DHCP agent can host more than one network.List DHCP agents that host a specified network:
$neutron dhcp-agent-list-hosting-net net1+--------------------------------------+-------+----------------+-------+ | id | host | admin_state_up | alive | +--------------------------------------+-------+----------------+-------+ | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) | +--------------------------------------+-------+----------------+-------+ - List the networks hosted by a given DHCP agent.This command is to show which networks a given dhcp agent is managing.
$neutron net-list-on-dhcp-agent a0c1c21c-d4f4-4577-9ec7-908f2d48622d+--------------------------------------+------+---------------------------------------------------+ | id | name | subnets | +--------------------------------------+------+---------------------------------------------------+ | 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 10.0.1.0/24 | +--------------------------------------+------+---------------------------------------------------+ - Show agent details.The agent-list command shows details for a specified agent:
$neutron agent-show a0c1c21c-d4f4-4577-9ec7-908f2d48622d+---------------------+----------------------------------------------------------+ | Field | Value | +---------------------+----------------------------------------------------------+ | admin_state_up | True | | agent_type | DHCP agent | | alive | False | | binary | neutron-dhcp-agent | | configurations | { | | | "subnets": 1, | | | "use_namespaces": true, | | | "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq", | | | "networks": 1, | | | "dhcp_lease_time": 120, | | | "ports": 3 | | | } | | created_at | 2013-03-16T01:16:18.000000 | | description | | | heartbeat_timestamp | 2013-03-17T01:37:22.000000 | | host | HostA | | id | 58f4ce07-6789-4bb3-aa42-ed3779db2b03 | | started_at | 2013-03-16T06:48:39.000000 | | topic | dhcp_agent | +---------------------+----------------------------------------------------------+In this output,heartbeat_timestampis the time on the neutron server. You do not need to synchronize all agents to this time for this extension to run correctly.configurationsdescribes the static configuration for the agent or run time data. This agent is a DHCP agent and it hosts one network, one subnet, and three ports.Different types of agents show different details. The following output shows information for a Linux bridge agent:$neutron agent-show ed96b856-ae0f-4d75-bb28-40a47ffd7695+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | admin_state_up | True | | binary | neutron-linuxbridge-agent | | configurations | { | | | "physnet1": "eth0", | | | "devices": "4" | | | } | | created_at | 2013-03-16T01:49:52.000000 | | description | | | disabled | False | | group | agent | | heartbeat_timestamp | 2013-03-16T01:59:45.000000 | | host | HostB | | id | ed96b856-ae0f-4d75-bb28-40a47ffd7695 | | topic | N/A | | started_at | 2013-03-16T06:48:39.000000 | | type | Linux bridge agent | +---------------------+--------------------------------------+The output showsbridge-mappingand the number of virtual network devices on this L2 agent.
Procedure 6.22. Manage assignment of networks to DHCP agent
- Default scheduling.When you create a network with one port, you can schedule it to an active DHCP agent. If many active DHCP agents are running, select one randomly. You can design more sophisticated scheduling algorithms in the same way as
nova-schedulelater on.$neutron net-create net2$neutron subnet-create net2 9.0.1.0/24 --name subnet2$neutron port-create net2$neutron dhcp-agent-list-hosting-net net2+--------------------------------------+-------+----------------+-------+ | id | host | admin_state_up | alive | +--------------------------------------+-------+----------------+-------+ | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) | +--------------------------------------+-------+----------------+-------+It is allocated to DHCP agent on HostA. If you want to validate the behavior through the dnsmasq command, you must create a subnet for the network because the DHCP agent starts thednsmasqservice only if there is a DHCP. - Assign a network to a given DHCP agent.To add another DHCP agent to host the network, run this command:
$neutron dhcp-agent-network-add f28aa126-6edb-4ea5-a81e-8850876bc0a8 net2Added network net2 to dhcp agent$neutron dhcp-agent-list-hosting-net net2+--------------------------------------+-------+----------------+-------+ | id | host | admin_state_up | alive | +--------------------------------------+-------+----------------+-------+ | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) | | f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) | +--------------------------------------+-------+----------------+-------+Both DHCP agents host thenet2network. - Remove a network from a specified DHCP agent.This command is the sibling command for the previous one. Remove
net2from the DHCP agent for HostA:$neutron dhcp-agent-network-remove a0c1c21c-d4f4-4577-9ec7-908f2d48622d net2Removed network net2 to dhcp agent$neutron dhcp-agent-list-hosting-net net2+--------------------------------------+-------+----------------+-------+ | id | host | admin_state_up | alive | +--------------------------------------+-------+----------------+-------+ | f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) | +--------------------------------------+-------+----------------+-------+You can see that only the DHCP agent for HostB is hosting thenet2network.
Procedure 6.23. HA of DHCP agents
Boot a VM on net2, and then fail the agents in turn to see whether the VM can still get the desired IP.
- Boot a VM on net2.
$neutron net-list+--------------------------------------+------+--------------------------------------------------+ | id | name | subnets | +--------------------------------------+------+--------------------------------------------------+ | 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 10.0.1.0/24| | 9b96b14f-71b8-4918-90aa-c5d705606b1a | net2 | 6979b71a-0ae8-448c-aa87-65f68eedcaaa 9.0.1.0/24 | +--------------------------------------+------+--------------------------------------------------+$nova boot --image tty --flavor 1 myserver4 \--nic net-id=9b96b14f-71b8-4918-90aa-c5d705606b1a$nova list+--------------------------------------+-----------+--------+---------------+ | ID | Name | Status | Networks | +--------------------------------------+-----------+--------+---------------+ | c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=10.0.1.3 | | 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=10.0.1.4 | | c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=10.0.1.5 | | f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=9.0.1.2 | +--------------------------------------+-----------+--------+---------------+ - Make sure both DHCP agents hosting 'net2'.Use the previous commands to assign the network to agents.
$neutron dhcp-agent-list-hosting-net net2+--------------------------------------+-------+----------------+-------+ | id | host | admin_state_up | alive | +--------------------------------------+-------+----------------+-------+ | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) | | f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) | +--------------------------------------+-------+----------------+-------+
Procedure 6.24. Test the HA
- Log in to the
myserver4VM, and runudhcpc,dhclientor other DHCP client. - Stop the DHCP agent on HostA. Besides stopping the
neutron-dhcp-agentbinary, you must stop the dnsmasq processes. - Run a DHCP client in VM to see if it can get the wanted IP.
- Stop the DHCP agent on HostB too.
- Run udhcpc in the VM; it cannot get the wanted IP.
- Start the DHCP agent on HostB. The VM gets the wanted IP again.
Procedure 6.25. Disable and remove an agent
- To run the following commands, you must stop the DHCP agent on HostA.
$neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577-9ec7-908f2d48622d$neutron agent-list+--------------------------------------+--------------------+-------+-------+----------------+ | id | agent_type | host | alive | admin_state_up | +--------------------------------------+--------------------+-------+-------+----------------+ | 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True | | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent | HostA | :-) | False | | ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True | | f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True | +--------------------------------------+--------------------+-------+-------+----------------+$neutron agent-delete a0c1c21c-d4f4-4577-9ec7-908f2d48622dDeleted agent: a0c1c21c-d4f4-4577-9ec7-908f2d48622d$neutron agent-list+--------------------------------------+--------------------+-------+-------+----------------+ | id | agent_type | host | alive | admin_state_up | +--------------------------------------+--------------------+-------+-------+----------------+ | 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True | | ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True | | f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True | +--------------------------------------+--------------------+-------+-------+----------------+After deletion, if you restart the DHCP agent, it appears on the agent list again.
6.7. Use Networking
You can start, stop, and restart OpenStack Networking services by using the service command. For example:
# service neutron-server stop
# service neutron-server status
# service neutron-server start
# service neutron-server restart
Log files are in the /var/log/neutron directory.
Configuration files are in the /etc/neutron directory.
- Expose the OpenStack Networking API to cloud tenants, enabling them to build rich network topologies.
- Have the cloud administrator, or an automated administrative tool, create network connectivity on behalf of tenants.
6.7.1. Core Networking API features
OpenStack Networking provides the neutron command-line interface (CLI). The neutron CLI is a wrapper around the Networking API. Every OpenStack Networking API call has a corresponding neutron command.
6.7.1.1. API abstractions
Table 6.15. API abstractions
| Abstraction | Description |
|---|---|
| Network | An isolated L2 network segment (similar to a VLAN) that forms the basis for describing the L2 network topology available in a Networking deployment. |
| Subnet | Associates a block of IP addresses and other network configuration, such as default gateways or dns-servers, with a Networking network. Each subnet represents an IPv4 or IPv6 address block, and each Networking network can have multiple subnets. |
| Port | Represents an attachment port to an L2 Networking network. When a port is created on the network, by default it is allocated an available fixed IP address out of one of the designated subnets for each IP version (if one exists). When the port is destroyed, its allocated addresses return to the pool of available IPs on the subnet. Users of the Networking API can either choose a specific IP address from the block, or let Networking choose the first available IP address. |
Table 6.16. Network attributes
| Attribute | Type | Default value | Description |
|---|---|---|---|
| admin_state_up | bool | True | Administrative state of the network. If specified as False (down), this network does not forward packets. |
| id | uuid-str | Generated | UUID for this network. |
| name | string | None | Human-readable name for this network; is not required to be unique. |
| shared | bool | False | Specifies whether this network resource can be accessed by any tenant. The default policy setting restricts usage of this attribute to administrative users only. |
| status | string | N/A | Indicates whether this network is currently operational. |
| subnets | list(uuid-str) | Empty list | List of subnets associated with this network. |
| tenant_id | uuid-str | N/A | Tenant owner of the network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies. |
Table 6.17. Subnet attributes
| Attribute | Type | Default Value | Description |
|---|---|---|---|
| allocation_pools | list(dict) | Every address in cidr, excluding gateway_ip (if configured). | List of cidr sub-ranges that are available for dynamic allocation to ports. Syntax: [ { "start":"10.0.0.2", "end": "10.0.0.254"} ] |
| cidr | string | N/A | IP range for this subnet, based on the IP version. |
| dns_nameservers | list(string) | Empty list | List of DNS name servers used by hosts in this subnet. |
| enable_dhcp | bool | True | Specifies whether DHCP is enabled for this subnet. |
| gateway_ip | string | First address in cidr | Default gateway used by devices in this subnet. |
| host_routes | list(dict) | Empty list | Routes that should be used by devices with IPs from this subnet (not including local subnet route). |
| id | uuid-string | Generated | UUID representing this subnet. |
| ip_version | int | 4 | IP version. |
| name | string | None | Human-readable name for this subnet (might not be unique). |
| network_id | uuid-string | N/A | Network with which this subnet is associated. |
| tenant_id | uuid-string | N/A | Owner of network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies. |
Table 6.18. Port attributes
| Attribute | Type | Default Value | Description |
|---|---|---|---|
| admin_state_up | bool | true | Administrative state of this port. If specified as False (down), this port does not forward packets. |
| device_id | string | None | Identifies the device using this port (for example, a virtual server's ID). |
| device_owner | string | None | Identifies the entity using this port (for example, a dhcp agent). |
| fixed_ips | list(dict) | Automatically allocated from pool | Specifies IP addresses for this port; associates the port with the subnets containing the listed IP addresses. |
| id | uuid-string | Generated | UUID for this port. |
| mac_address | string | Generated | Mac address to use on this port. |
| name | string | None | Human-readable name for this port (might not be unique). |
| network_id | uuid-string | N/A | Network with which this port is associated. |
| status | string | N/A | Indicates whether the network is currently operational. |
| tenant_id | uuid-string | N/A | Owner of the network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies. |
6.7.1.2. Basic Networking operations
The following table lists example neutron commands that enable you to complete basic network operations; a command sketch follows the table and its note.
Table 6.19. Basic Networking operations
| Operation | Command |
|---|---|
| Creates a network. | |
| Creates a subnet that is associated with net1. | |
| Lists ports for a specified tenant. | |
| Lists ports for a specified tenant and displays the id, fixed_ips, and device_owner columns. | |
| Shows information for a specified port. | |
The device_owner field describes who owns the port. A port whose device_owner begins with:
- network is created by Networking.
- compute is created by Compute.
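The command column was lost in this copy of the table. As a sketch, the corresponding neutron commands typically look like the following, where net1, subnet1, and the port ID are placeholders:
$ neutron net-create net1
$ neutron subnet-create net1 10.0.0.0/24 --name subnet1
$ neutron port-list
$ neutron port-list -c id -c fixed_ips -c device_owner
$ neutron port-show <port-id>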
6.7.1.3. Administrative operations
The cloud administrator can run any neutron command on behalf of tenants by specifying an Identity tenant_id in the command, as follows:
$neutron net-create --tenant-id=tenant-id network-name
For example:
$ neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1
To view all tenant IDs in Identity, run the following command as an Identity service admin user:
$ keystone tenant-list
6.7.1.4. Advanced Networking operations
Table 6.20. Advanced Networking operations
| Operation | Command |
|---|---|
| Creates a network that all tenants can use. | |
| Creates a subnet with a specified gateway IP address. | |
| Creates a subnet that has no gateway IP address. | |
| Creates a subnet with DHCP disabled. | |
| Creates a subnet with a specified set of host routes. | |
| Creates a subnet with a specified set of dns name servers. | |
| Displays all ports and IPs allocated on a network. | |
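The command column did not survive extraction here either. A sketch of commands that perform these operations, with placeholder names and addresses; exact option spellings vary across neutron client versions:
$ neutron net-create --shared public-net
$ neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24
$ neutron subnet-create --no-gateway net1 10.0.0.0/24
$ neutron subnet-create net1 10.0.0.0/24 --disable-dhcp
$ neutron subnet-create net1 40.0.0.0/24 --host-routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2
$ neutron subnet-create net1 40.0.0.0/24 --dns-nameservers list=true 8.8.4.4 8.8.8.8
$ neutron port-list -- --network_id <net-id>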
6.7.2. Use Compute with Networking
6.7.2.1. Basic Compute and Networking operations
The following table lists example neutron and nova commands that enable you to complete basic VM networking operations; a command sketch follows the notes after the table.
Table 6.21. Basic Compute and Networking operations
| Action | Command |
|---|---|
| Checks available networks. | |
| Boots a VM with a single NIC on a selected Networking network. | |
| Searches for ports with a device_id that matches the Compute instance UUID. See Create and delete VMs. | |
| Searches for ports, but shows only the mac_address of the port. | |
| Temporarily disables a port from sending traffic. | |
The device_id can also be a logical router ID.
- When you boot a Compute VM, a port on the network that corresponds to the VM NIC is automatically created and associated with the default security group. You can configure security group rules to enable users to access the VM.
- When you delete a Compute VM, the underlying Networking port is automatically deleted.
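A sketch of these operations with placeholder image, flavor, and UUID values; adjust to your environment:
$ neutron net-list
$ nova boot --image <image> --flavor <flavor> --nic net-id=<net-id> <vm-name>
$ neutron port-list -- --device_id=<vm-id>
$ neutron port-list --field mac_address -- --device_id=<vm-id>
$ neutron port-update <port-id> --admin_state_up False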
6.7.2.2. Advanced VM creation operations
The following table lists example nova and neutron commands that enable you to complete advanced VM creation operations; a command sketch follows the table.
Table 6.22. Advanced VM creation operations
| Operation | Command |
|---|---|
| Boots a VM with multiple NICs. | |
| Boots a VM with a specific IP address. First, create a Networking port with a specific IP address. Then, boot a VM specifying a port-id rather than a net-id. | |
| If only one network is available to the tenant making the request, this command boots a VM that connects to that network. If more than one network is available, the --nic option must be specified; otherwise, this command returns an error. | |
You can also request a specific fixed IPv4 address with the v4-fixed-ip parameter of the --nic option for the nova command.
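A sketch of the advanced creation commands, with placeholder identifiers:
# Boot a VM with two NICs on two different networks:
$ nova boot --image <image> --flavor <flavor> --nic net-id=<net1-id> --nic net-id=<net2-id> <vm-name>
# Boot a VM with a specific IP address: create the port first, then boot against the port:
$ neutron port-create --fixed-ip subnet_id=<subnet-id>,ip_address=10.0.0.10 <net-id>
$ nova boot --image <image> --flavor <flavor> --nic port-id=<port-id> <vm-name>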
6.7.2.3. Enable ping and SSH on VMs (security groups)
- If your plug-in implements Networking security groups, you can configure security group rules directly by using neutron security-group-rule-create. This example enables ping and ssh access to your VMs:
$ neutron security-group-rule-create --protocol icmp \
  --direction ingress default
$ neutron security-group-rule-create --protocol tcp --port-range-min 22 \
  --port-range-max 22 --direction ingress default
- If your plug-in does not implement Networking security groups, you can configure security group rules by using the nova secgroup-add-rule or euca-authorize command. These nova commands enable ping and ssh access to your VMs:
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
You must also set security_group_api = neutron in the nova.conf file. After you set this option, all Compute security group commands are proxied to Networking.
6.8. Advanced features through API extensions
6.8.1. Provider networks
6.8.1.1. Terminology
Table 6.23. Provider extension terminology
| Term | Description |
|---|---|
| virtual network | A Networking L2 network (identified by a UUID and optional name) whose ports can be attached as vNICs to Compute instances and to various Networking agents. The Open vSwitch and Linux Bridge plug-ins each support several different mechanisms to realize virtual networks. |
| physical network | A network connecting virtualization hosts (such as compute nodes) with each other and with other network resources. Each physical network might support multiple virtual networks. The provider extension and the plug-in configurations identify physical networks using simple string names. |
| tenant network | A virtual network that a tenant or an administrator creates. The physical details of the network are not exposed to the tenant. |
| provider network | A virtual network administratively created to map to a specific network in the data center, typically to enable direct access to non-OpenStack resources on that network. Tenants can be given access to provider networks. |
| VLAN network | A virtual network implemented as packets on a specific physical network containing IEEE 802.1Q headers with a specific VID field value. VLAN networks sharing the same physical network are isolated from each other at L2, and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094. |
| flat network | A virtual network implemented as packets on a specific physical network containing no IEEE 802.1Q header. Each physical network can realize at most one flat network. |
| local network | A virtual network that allows communication within each host, but not across a network. Local networks are intended mainly for single-node test scenarios, but can have other uses. |
| GRE network | A virtual network implemented as network packets encapsulated using GRE. GRE networks are also referred to as tunnels. GRE tunnel packets are routed by the IP routing table for the host, so GRE networks are not associated by Networking with specific physical networks. |
| Virtual Extensible LAN (VXLAN) network | VXLAN is a proposed encapsulation protocol for running an overlay network on existing Layer 3 infrastructure. An overlay network is a virtual network that is built on top of existing network Layer 2 and Layer 3 technologies to support elastic compute architectures. |
6.8.1.2. Provider attributes
Table 6.24. Provider network attributes
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| provider:network_type | String | N/A | The physical mechanism by which the virtual network is implemented. Possible values are flat, vlan, local, and gre, corresponding to flat networks, VLAN networks, local networks, and GRE networks as defined above. All types of provider networks can be created by administrators, while tenant networks can be implemented as vlan, gre, or local network types depending on plug-in configuration. |
| provider:physical_network | String | If a physical network named "default" has been configured, and if provider:network_type is flat or vlan, then "default" is used. | The name of the physical network over which the virtual network is implemented for flat and VLAN networks. Not applicable to the local or gre network types. |
| provider:segmentation_id | Integer | N/A | For VLAN networks, the VLAN VID on the physical network that realizes the virtual network. Valid VLAN VIDs are 1 through 4094. For GRE networks, the tunnel ID. Valid tunnel IDs are any 32 bit unsigned integer. Not applicable to the flat or local network types. |
To view or set the provider extended attributes, a client must be authorized for the extension:provider_network:view and extension:provider_network:set actions in the Networking policy configuration. The default Networking configuration authorizes both actions for users with the admin role. An authorized client or an administrative user can view and set the provider extended attributes through Networking API calls. See Section 6.10, “Authentication and authorization” for details on policy configuration.
6.8.1.3. Provider extension API operations
- Shows all attributes of a network, including provider attributes:
$ neutron net-show <name or net-id>
- Creates a local provider network:
$ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type local
- Creates a flat provider network. When you create flat networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details:
$ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type flat --provider:physical_network <phys-net-name>
- Creates a VLAN provider network. When you create VLAN networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details on configuring network_vlan_ranges to identify all physical networks. When you create VLAN networks, <VID> can fall either within or outside any configured ranges of VLAN IDs from which tenant networks are allocated:
$ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network <phys-net-name> --provider:segmentation_id <VID>
- Creates a GRE provider network. When you create GRE networks, <tunnel-id> can be either inside or outside any tunnel ID ranges from which tenant networks are allocated. After you create provider networks, you can allocate subnets, which you can use in the same way as other virtual networks, subject to authorization policy based on the specified <tenant_id>:
$ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type gre --provider:segmentation_id <tunnel-id>
6.8.2. L3 routing and NAT
6.8.2.1. L3 API abstractions
Table 6.25. Router
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the router. |
| name | String | None | Human-readable name for the router. Might not be unique. |
| admin_state_up | Bool | True | The administrative state of the router. If false (down), the router does not forward packets. |
| status | String | N/A | Indicates whether the router is currently operational. |
| tenant_id | uuid-str | N/A | Owner of the router. Only admin users can specify a tenant_id other than its own. |
| external_gateway_info | dict containing a 'network_id' key-value pair | Null | External network that this router connects to for gateway services (for example, NAT). |
Table 6.26. Floating IP
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the floating IP. |
| floating_ip_address | string (IP address) | allocated by Networking | The external network IP address available to be mapped to an internal IP address. |
| floating_network_id | uuid-str | N/A | The network indicating the set of subnets from which the floating IP should be allocated. |
| router_id | uuid-str | N/A | Read-only value indicating the router that connects the external network to the associated internal port, if a port is associated. |
| port_id | uuid-str | Null | Indicates the internal Networking port associated with the external floating IP. |
| fixed_ip_address | string (IP address) | Null | Indicates the IP address on the internal port that is mapped to by the floating IP (since a Networking port might have more than one IP address). |
| tenant_id | uuid-str | N/A | Owner of the Floating IP. Only admin users can specify a tenant_id other than its own. |
6.8.2.2. Basic L3 operations
Table 6.27. Basic L3 operations
| Operation | Command |
|---|---|
| Creates external networks. | |
| Lists external networks. | |
| Creates an internal-only router that connects to multiple L2 networks privately. | |
| Connects a router to an external network, which enables that router to act as a NAT gateway for external connectivity. The router obtains an interface with the gateway_ip address of the subnet, and this interface is attached to a port on the L2 Networking network associated with the subnet. The router also gets a gateway interface to the specified external network. This provides SNAT connectivity to the external network as well as support for floating IPs allocated on that external network. Commonly, an external network maps to a network in the provider. | |
| Lists routers. | |
| Shows information for a specified router. | |
| Shows all internal interfaces for a router. | |
| Identifies the port-id that represents the VM NIC to which the floating IP should map. This port must be on a Networking subnet that is attached to a router uplinked to the external network used to create the floating IP. Conceptually, this is because the router must be able to perform the Destination NAT (DNAT) rewriting of packets from the floating IP address (chosen from a subnet on the external network) to the internal fixed IP (chosen from a private subnet that is behind the router). | |
| Creates a floating IP address and associates it with a port. | |
| Creates a floating IP address and associates it with a port, in a single step. | |
| Lists floating IPs. | |
| Finds the floating IP for a specified VM port. | |
| Disassociates a floating IP address. | |
| Deletes the floating IP address. | |
| Clears the gateway. | |
| Removes the interfaces from the router. | |
| Deletes the router. | |
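For reference, the operations in Table 6.27 map onto neutron CLI invocations along these lines (a sketch assuming the python-neutronclient syntax of this release; names such as public and router1, and the example CIDR, are placeholders):

    # Create an external network and a subnet on it
    $ neutron net-create public --router:external True
    $ neutron subnet-create public 172.16.1.0/24
    # List external networks
    $ neutron net-list -- --router:external True
    # Create a router and attach internal subnets to it
    $ neutron router-create router1
    $ neutron router-interface-add router1 <subnet-uuid>
    # Set and clear the external gateway
    $ neutron router-gateway-set router1 <ext-net-uuid>
    $ neutron router-gateway-clear router1
    # Inspect routers and their internal interfaces
    $ neutron router-list
    $ neutron router-show router1
    $ neutron router-port-list router1
    # Identify the port that represents a VM NIC
    $ neutron port-list -- --device_id <instance-uuid>
    # Create, associate, list, disassociate, and delete floating IPs
    $ neutron floatingip-create <ext-net-uuid>
    $ neutron floatingip-associate <floatingip-id> <internal-port-id>
    $ neutron floatingip-create --port_id <internal-port-id> <ext-net-uuid>
    $ neutron floatingip-list
    $ neutron floatingip-list -- --port_id <internal-port-id>
    $ neutron floatingip-disassociate <floatingip-id>
    $ neutron floatingip-delete <floatingip-id>
    # Remove router interfaces and delete the router
    $ neutron router-interface-delete router1 <subnet-uuid>
    $ neutron router-delete router1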
6.8.3. Security groups
To use Networking security groups, edit the /etc/nova/nova.conf file and set the security_group_api=neutron option on every node that runs nova-compute and nova-api. After you make this change, restart nova-api and nova-compute to pick up this change. Then, you can use both the Compute and OpenStack Network security group APIs at the same time.
- To use the Compute security group API with Networking, the Networking plug-in must implement the security group API. The following plug-ins currently implement this: ML2, Open vSwitch, Linux Bridge, NEC, Ryu, and VMware NSX.
- You must configure the correct firewall driver in the securitygroup section of the plug-in/agent configuration file. Some plug-ins and agents, such as the Linux Bridge agent and the Open vSwitch agent, use the no-operation driver as the default, which results in non-working security groups; see the configuration sketch after this list.
- When using the security group API through Compute, security groups are applied to all ports on an instance. The reason for this is that Compute security group APIs are instance-based rather than port-based, as they are in Networking.
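A minimal configuration sketch, assuming the Open vSwitch agent and its hybrid iptables firewall driver (adjust the driver class to match your plug-in):

    # /etc/nova/nova.conf on nodes running nova-compute and nova-api
    security_group_api = neutron

    # securitygroup section of the plug-in/agent configuration file,
    # for example the Open vSwitch agent configuration
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver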
6.8.3.1. Security group API abstractions
Table 6.28. Security group attributes
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the security group. |
| name | String | None | Human-readable name for the security group. Might not be unique. Cannot be named default as that is automatically created for a tenant. |
| description | String | None | Human-readable description of a security group. |
| tenant_id | uuid-str | N/A | Owner of the security group. Only admin users can specify a tenant_id other than their own. |
Table 6.29. Security group rules
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the security group rule. |
| security_group_id | uuid-str or Integer | allocated by Networking | The security group with which to associate the rule. |
| direction | String | N/A | The direction in which the traffic is allowed (ingress/egress) from a VM. |
| protocol | String | None | IP Protocol (icmp, tcp, udp, and so on). |
| port_range_min | Integer | None | Port at start of range |
| port_range_max | Integer | None | Port at end of range |
| ethertype | String | None | ethertype in L2 packet (IPv4, IPv6, and so on) |
| remote_ip_prefix | string (IP cidr) | None | CIDR for address range |
| remote_group_id | uuid-str or Integer | allocated by Networking or Compute | Source security group to apply to rule. |
| tenant_id | uuid-str | N/A | Owner of the security group rule. Only admin users can specify a tenant_id other than its own. |
6.8.3.2. Basic security group operations
Table 6.30. Basic security group operations
| Operation | Command |
|---|---|
| Creates a security group for our web servers. | |
| Lists security groups. | |
| Creates a security group rule to allow port 80 ingress. | |
| Lists security group rules. | |
| Deletes a security group rule. | |
| Deletes a security group. | |
| Creates a port and associates two security groups. | |
| Removes security groups from a port. | |
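For reference, the operations in Table 6.30 correspond to CLI calls along these lines (a sketch; the group name webservers and the placeholders in angle brackets are illustrative):

    $ neutron security-group-create webservers --description "security group for webservers"
    $ neutron security-group-list
    $ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 <security-group-uuid>
    $ neutron security-group-rule-list
    $ neutron security-group-rule-delete <security-group-rule-uuid>
    $ neutron security-group-delete <security-group-uuid>
    $ neutron port-create --security-group <security-group-id1> --security-group <security-group-id2> <network-id>
    $ neutron port-update --no-security-groups <port-id>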
6.8.4. Basic Load-Balancer-as-a-Service operations
- Creates a load balancer pool by using a specific provider. --provider is an optional argument. If not used, the pool is created with the default provider for the LBaaS service. You should configure the default provider in the [service_providers] section of the neutron.conf file. If no default provider is specified for LBaaS, the --provider option is required for pool creation.
  $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid> --provider <provider_name>
- Associates two web servers with the pool.
  $ neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool
  $ neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool
- Creates a health monitor that checks to make sure our instances are still running on the specified protocol-port.
  $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
- Associates a health monitor with the pool.
  $ neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool
- Creates a virtual IP (VIP) address that, when accessed through the load balancer, directs the requests to one of the pool members. A brief verification sketch follows this procedure.
  $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool
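To verify the resulting configuration, the LBaaS objects created above can be listed (a brief usage sketch):

    $ neutron lb-pool-list
    $ neutron lb-member-list
    $ neutron lb-healthmonitor-list
    $ neutron lb-vip-list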
6.8.5. Firewall-as-a-Service
6.8.5.1. Firewall-as-a-Service API abstractions
Table 6.31. Firewall rules
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the firewall rule. |
| tenant_id | uuid-str | N/A | Owner of the firewall rule. Only admin users can specify a tenant_id other than its own. |
| name | String | None | Human readable name for the firewall rule (255 characters limit). |
| description | String | None | Human readable description for the firewall rule (1024 characters limit). |
| firewall_policy_id | uuid-str or None | allocated by Networking | This is a read-only attribute that gets populated with the uuid of the firewall policy when this firewall rule is associated with a firewall policy. A firewall rule can be associated with only one firewall policy at a time. However, the association can be changed to a different firewall policy. |
| shared | Boolean | False | When set to True makes this firewall rule visible to tenants other than its owner, and it can be used in firewall policies not owned by its tenant. |
| protocol | String | None | IP Protocol (icmp, tcp, udp, None). |
| ip_version | Integer or String | 4 | IP Version (4, 6). |
| source_ip_address | String (IP address or CIDR) | None | Source IP address or CIDR. |
| destination_ip_address | String (IP address or CIDR) | None | Destination IP address or CIDR. |
| source_port | Integer or String (either as a single port number or in the format of a ':' separated range) | None | Source port number or a range. |
| destination_port | Integer or String (either as a single port number or in the format of a ':' separated range) | None | Destination port number or a range. |
| position | Integer | None | This is a read-only attribute that gets assigned to this rule when the rule is associated with a firewall policy. It indicates the position of this rule in that firewall policy. |
| action | String | deny | Action to be performed on the traffic matching the rule (allow, deny). |
| enabled | Boolean | True | When set to False, disables this rule in the firewall policy. Facilitates selectively turning off rules without having to disassociate the rule from the firewall policy. |
Table 6.32. Firewall policies
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the firewall policy. |
| tenant_id | uuid-str | N/A | Owner of the firewall policy. Only admin users can specify a tenant_id other than their own. |
| name | String | None | Human readable name for the firewall policy (255 characters limit). |
| description | String | None | Human readable description for the firewall policy (1024 characters limit). |
| shared | Boolean | False | When set to True makes this firewall policy visible to tenants other than its owner, and can be used to associate with firewalls not owned by its tenant. |
| firewall_rules | List of uuid-str or None | None | This is an ordered list of firewall rule uuids. The firewall applies the rules in the order in which they appear in this list. |
| audited | Boolean | False | When set to True by the policy owner indicates that the firewall policy has been audited. This attribute is meant to aid in the firewall policy audit workflows. Each time the firewall policy or the associated firewall rules are changed, this attribute is set to False and must be explicitly set to True through an update operation. |
Table 6.33. Firewalls
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the firewall. |
| tenant_id | uuid-str | N/A | Owner of the firewall. Only admin users can specify a tenant_id other than its own. |
| name | String | None | Human readable name for the firewall (255 characters limit). |
| description | String | None | Human readable description for the firewall (1024 characters limit). |
| admin_state_up | Boolean | True | The administrative state of the firewall. If False (down), the firewall does not forward any packets. |
| status | String | N/A | Indicates whether the firewall is currently operational. Possible values include ACTIVE, DOWN, PENDING_CREATE, PENDING_UPDATE, PENDING_DELETE, and ERROR. |
| firewall_policy_id | uuid-str or None | None | The firewall policy uuid that this firewall is associated with. This firewall implements the rules contained in the firewall policy represented by this uuid. |
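Although this section covers only the API abstractions, the same objects are typically driven from the CLI along these lines (a sketch; rule, policy, and firewall names are placeholders, and option spellings should be checked against your client version):

    $ neutron firewall-rule-create --protocol tcp --destination-port 80 --action allow
    $ neutron firewall-policy-create --firewall-rules "<rule-id-1> <rule-id-2>" mypolicy
    $ neutron firewall-create <policy-id-or-name> --name myfirewall
    $ neutron firewall-list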
6.8.6. Plug-in specific extensions
6.8.6.1. VMware NSX extensions
6.8.6.1.1. VMware NSX QoS extension
The VMware NSX QoS extension rate-limits network ports to guarantee a specific amount of bandwidth for each port. By default, this extension is accessible only to tenants with the admin role, but access is configurable through the policy.json file. To use this extension, create a queue and specify the min/max bandwidth rates (kbps) and optionally set the QoS Marking and DSCP value (if your network fabric uses these values to make forwarding decisions). Once created, you can associate a queue with a network. Then, when ports are created on that network, they are automatically created and associated with the specific queue size that was associated with the network. Because one queue size for every port on a network might not be optimal, a scaling factor from the nova flavor 'rxtx_factor' is passed in from Compute when creating the port to scale the queue.
6.8.6.1.1.1. VMware NSX QoS API abstractions
Table 6.34. VMware NSX QoS attributes
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the QoS queue. |
| default | Boolean | False by default | If True, ports are created with this queue size unless the network port is created or associated with a queue at port creation time. |
| name | String | None | Name for QoS queue. |
| min | Integer | 0 | Minimum Bandwidth Rate (kbps). |
| max | Integer | N/A | Maximum Bandwidth Rate (kbps). |
| qos_marking | String | untrusted by default | Whether QoS marking should be trusted or untrusted. |
| dscp | Integer | 0 | DSCP Marking value. |
| tenant_id | uuid-str | N/A | The owner of the QoS queue. |
6.8.6.1.1.2. Basic VMware NSX QoS operations
Table 6.35. Basic VMware NSX QoS operations
| Operation | Command |
|---|---|
| Creates a QoS queue (admin-only). | |
| Associates a queue with a network. | |
| Creates a default system queue. | |
| Lists QoS queues. | |
| Deletes a QoS queue. | |
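For reference, these operations are typically issued as follows (a sketch; the queue names and bandwidth values are placeholders, and the queue commands are provided by the NSX plug-in):

    $ neutron queue-create --min 10 --max 1000 myqueue
    $ neutron net-create mynetwork --queue_id <queue-id>
    $ neutron queue-create --default True --min 10 --max 2000 default
    $ neutron queue-list
    $ neutron queue-delete <queue-id-or-name>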
6.8.6.1.2. VMware NSX provider networks extension
The NSX plug-in can back a single OpenStack network with a chain of NSX logical switches. You can specify the maximum number of ports on each logical switch in this chain with the max_lp_per_bridged_ls parameter, which has a default value of 5,000.
Table 6.36. Recommended values for max_lp_per_bridged_ls
| NSX version | Recommended Value |
| 2.x | 64 |
| 3.0.x | 5,000 |
| 3.1.x | 5,000 |
| 3.2.x | 10,000 |
6.8.6.1.3. VMware NSX L3 extension
Create an L3 gateway service in the NSX Manager, and then in /etc/neutron/plugins/vmware/nsx.ini set default_l3_gw_service_uuid to this value. By default, routers are mapped to this gateway service.
6.8.6.1.3.1. VMware NSX L3 extension operations
$ neutron net-create public --router:external=True --provider:network_type l3_ext \
  --provider:physical_network <L3-Gateway-Service-UUID>
$ neutron net-create public --router:external=True --provider:network_type l3_ext \
  --provider:physical_network <L3-Gateway-Service-UUID> --provider:segmentation_id <VLAN_ID>
6.8.6.1.4. Operational status synchronization in the VMware NSX plug-in
Because the operational status is retrieved asynchronously, performance for GET operations is consistently improved.
Table 6.37. Configuration options for tuning operational status synchronization in the NSX plug-in
| Option name | Group | Default value | Type and constraints | Notes |
|---|---|---|---|---|
| state_sync_interval | nsx_sync | 120 seconds | Integer; no constraint. | Interval in seconds between two runs of the synchronization task. If the synchronization task takes more than state_sync_interval seconds to execute, a new instance of the task is started as soon as the other is completed. Setting the value for this option to 0 disables the synchronization task. |
| max_random_sync_delay | nsx_sync | 0 seconds | Integer. Must not exceed min_sync_req_delay. | When different from zero, a random delay between 0 and max_random_sync_delay is added before processing the next chunk. |
| min_sync_req_delay | nsx_sync | 10 seconds | Integer. Must not exceed state_sync_interval. | The value of this option can be tuned according to the observed load on the NSX controllers. Lower values result in faster synchronization, but might increase the load on the controller cluster. |
| min_chunk_size | nsx_sync | 500 resources | Integer; no constraint. | Minimum number of resources to retrieve from the back end for each synchronization chunk. The expected number of synchronization chunks is given by the ratio between state_sync_interval and min_sync_req_delay. The size of a chunk might increase if the total number of resources is such that more than min_chunk_size resources must be fetched in one chunk with the current number of chunks. |
| always_read_status | nsx_sync | False | Boolean; no constraint. | When this option is enabled, the operational status is always retrieved from the NSX back end on every GET request. In this case it is advisable to disable the synchronization task. |
When you run multiple Networking server instances, set the state_sync_interval configuration option to a non-zero value exclusively on a node designated for back-end status synchronization.
The fields=status parameter in Networking API requests always triggers an explicit query to the NSX back end, even when you enable asynchronous state synchronization. For example, GET /v2.0/networks/<net-id>?fields=status&fields=name.
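For example, the defaults listed in Table 6.37 correspond to an [nsx_sync] section along these lines in /etc/neutron/plugins/vmware/nsx.ini (values shown only to illustrate the option names):

    [nsx_sync]
    state_sync_interval = 120
    max_random_sync_delay = 0
    min_sync_req_delay = 10
    min_chunk_size = 500
    always_read_status = False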
6.8.6.2. Big Switch plug-in extensions
6.8.6.2.1. Big Switch router rules
6.8.6.2.1.1. Router rule attributes
Table 6.38. Big Switch Router rule attributes
| Attribute name | Required | Input Type | Description |
|---|---|---|---|
| source | Yes | A valid CIDR or one of the keywords 'any' or 'external' | The network that a packet's source IP must match for the rule to be applied |
| destination | Yes | A valid CIDR or one of the keywords 'any' or 'external' | The network that a packet's destination IP must match for the rule to be applied |
| action | Yes | 'permit' or 'deny' | Determines whether or not the matched packets are allowed to cross the router |
| nexthop | No | A plus-separated (+) list of next-hop IP addresses. For example, 1.1.1.1+1.1.1.2. | Overrides the default virtual router used to handle traffic for packets that match the rule |
6.8.6.2.1.2. Order of rule processing
6.8.6.2.1.3. Big Switch router rules operations
$ neutron router-update Router-UUID --router_rules type=dict list=true \
  source=any,destination=any,action=permit \
  source=external,destination=10.10.10.0/24,action=deny
$ neutron router-update Router-UUID --router_rules type=dict list=true \
  source=any,destination=any,action=permit \
  source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
$ neutron router-update Router-UUID --router_rules type=dict list=true \
  source=any,destination=any,action=permit \
  source=10.10.10.0/24,destination=10.20.20.20/24,action=deny
6.8.7. L3 metering
6.8.7.1. L3 metering API abstractions
Table 6.39. Label
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the metering label. |
| name | String | None | Human-readable name for the metering label. Might not be unique. |
| description | String | None | The optional description for the metering label. |
| tenant_id | uuid-str | N/A | Owner of the metering label. |
Table 6.40. Rules
| Attribute name | Type | Default Value | Description |
|---|---|---|---|
| id | uuid-str | generated | UUID for the metering rule. |
| direction | String (either ingress or egress) | ingress | The direction in which the metering rule is applied, either ingress or egress. |
| metering_label_id | uuid-str | N/A | The metering label ID to associate with this metering rule. |
| excluded | Boolean | False | Specify whether the remote_ip_prefix is excluded from the traffic counters of the metering label (for example, to not count the traffic of a specific IP address of the range). |
| remote_ip_prefix | String (CIDR) | N/A | Indicates the remote IP prefix to be associated with this metering rule. |
6.8.7.2. Basic L3 metering operations
Table 6.41. Basic L3 metering operations
| Operation | Command |
|---|---|
| Creates a metering label. | |
| Lists metering labels. | |
| Shows information for a specified label. | |
| Deletes a metering label. | |
| Creates a metering rule. | |
| Lists all metering label rules. | |
| Shows information for a specified label rule. | |
| Deletes a metering label rule. | |
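For reference, the metering operations map to CLI calls along these lines (a sketch; label1 and the placeholders in angle brackets are illustrative):

    $ neutron meter-label-create label1 --description "description of label1"
    $ neutron meter-label-list
    $ neutron meter-label-show <label-uuid>
    $ neutron meter-label-delete <label-uuid>
    $ neutron meter-label-rule-create <label-uuid> <cidr> --direction <ingress|egress> --excluded
    $ neutron meter-label-rule-list
    $ neutron meter-label-rule-show <rule-uuid>
    $ neutron meter-label-rule-delete <rule-uuid>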
6.9. Advanced operational features
6.9.1. Logging settings
You can provide logging settings in neutron.conf or as command-line options. Command-line options override the ones set in neutron.conf.
- Provide logging settings in a logging configuration file. See the Python logging how-to to learn more about logging.
- Provide logging settings in neutron.conf:

      [DEFAULT]
      # Default log level is WARNING
      # Show debugging output in logs (sets DEBUG log level output)
      # debug = False
      # Show more verbose log output (sets INFO log level output) if debug is False
      # verbose = False
      # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
      # log_date_format = %Y-%m-%d %H:%M:%S
      # use_syslog = False
      # syslog_log_facility = LOG_USER
      # if use_syslog is False, we can set log_file and log_dir.
      # if use_syslog is False and we do not set log_file,
      # the log will be printed to stdout.
      # log_file =
      # log_dir =
6.9.2. Notifications
6.9.2.1. Notification options
To set up notifications, edit the notification options in neutron.conf:

    # ============ Notification System Options =====================
    # Notifications can be sent when network/subnet/port are created, updated or deleted.
    # There are three methods of sending notifications: logging (via the
    # log_file directive), rpc (via a message queue) and
    # noop (no notifications sent, the default)
    # Notification_driver can be defined multiple times
    # Do nothing driver
    # notification_driver = neutron.openstack.common.notifier.no_op_notifier
    # Logging driver
    # notification_driver = neutron.openstack.common.notifier.log_notifier
    # RPC driver
    notification_driver = neutron.openstack.common.notifier.rpc_notifier
    # default_notification_level is used to form actual topic names or to set logging level
    # default_notification_level = INFO
    # default_publisher_id is a part of the notification payload
    # host = myhost.com
    # default_publisher_id = $host
    # Defined in rpc_notifier for rpc way, can be comma-separated values.
    # The actual topic names will be %s.%(default_notification_level)s
    notification_topics = notifications
6.9.2.2. Setting cases
6.9.2.2.1. Logging and RPC
These settings send notifications through both logging and RPC; configure them in neutron.conf:

    # ============ Notification System Options =====================
    # Notifications can be sent when network/subnet/port are created, updated or deleted.
    # There are three methods of sending notifications: logging (via the
    # log_file directive), rpc (via a message queue) and
    # noop (no notifications sent, the default)
    # Notification_driver can be defined multiple times
    # Do nothing driver
    # notification_driver = neutron.openstack.common.notifier.no_op_notifier
    # Logging driver
    notification_driver = neutron.openstack.common.notifier.log_notifier
    # RPC driver
    notification_driver = neutron.openstack.common.notifier.rpc_notifier
    # default_notification_level is used to form actual topic names or to set logging level
    default_notification_level = INFO
    # default_publisher_id is a part of the notification payload
    # host = myhost.com
    # default_publisher_id = $host
    # Defined in rpc_notifier for rpc way, can be comma-separated values.
    # The actual topic names will be %s.%(default_notification_level)s
    notification_topics = notifications
6.9.2.2.2. Multiple RPC topics
These settings send notifications to multiple RPC topics; configure them in neutron.conf:

    # ============ Notification System Options =====================
    # Notifications can be sent when network/subnet/port are created, updated or deleted.
    # There are three methods of sending notifications: logging (via the
    # log_file directive), rpc (via a message queue) and
    # noop (no notifications sent, the default)
    # Notification_driver can be defined multiple times
    # Do nothing driver
    # notification_driver = neutron.openstack.common.notifier.no_op_notifier
    # Logging driver
    # notification_driver = neutron.openstack.common.notifier.log_notifier
    # RPC driver
    notification_driver = neutron.openstack.common.notifier.rabbit_notifier
    # default_notification_level is used to form actual topic names or to set logging level
    default_notification_level = INFO
    # default_publisher_id is a part of the notification payload
    # host = myhost.com
    # default_publisher_id = $host
    # Defined in rpc_notifier for rpc way, can be comma-separated values.
    # The actual topic names will be %s.%(default_notification_level)s
    notification_topics = notifications_one,notifications_two
6.10. Authentication and authorization
When a user submits a request to the Networking service, the user must provide an authentication token in the X-Auth-Token request header. Users obtain this token by authenticating with the Identity Service endpoint. When the Identity Service is enabled, it is not mandatory to specify the tenant ID for resources in create requests because the tenant ID is derived from the authentication token.
- Operation-based policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes;
- Resource-based policies specify whether access to a specific resource is granted or not according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in Networking might vary from deployment to deployment.
The policy engine reads entries from the policy.json file. Entries can be updated while the system is running, and no service restart is required. Every time the policy file is updated, the policies are automatically reloaded. Currently, the only way to update such policies is to edit the policy file. In this section, the terms policy and rule refer to objects that are specified in the same way in the policy file. There are no syntax differences between a rule and a policy. A policy is something that is matched directly by the Networking policy engine. A rule is an element in a policy, which is evaluated. For instance, in create_subnet: [["admin_or_network_owner"]], create_subnet is a policy, and admin_or_network_owner is a rule.
The create_subnet policy is triggered every time a POST /v2.0/subnets request is sent to the Networking server; on the other hand, create_network:shared is triggered every time the shared attribute is explicitly specified (and set to a value different from its default) in a POST /v2.0/networks request. It is also worth mentioning that policies can also be related to specific API extensions; for instance, extension:provider_network:set is triggered if the attributes defined by the Provider Network extensions are specified in an API request.
- Role-based rules evaluate successfully if the user who submits the request has the specified role. For instance, "role:admin" is successful if the user who submits the request is an administrator.
- Field-based rules evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance, "field:networks:shared=True" is successful if the shared attribute of the network resource is set to true.
- Generic rules compare an attribute in the resource with an attribute extracted from the user's security credentials and evaluate successfully if the comparison is successful. For instance, "tenant_id:%(tenant_id)s" is successful if the tenant identifier in the resource is equal to the tenant identifier of the user submitting the request.
This extract is from the default policy.json file:
{
"admin_or_owner": [1
[
"role:admin"
],
[
"tenant_id:%(tenant_id)s"
]
],
"admin_or_network_owner": [
[
"role:admin"
],
[
"tenant_id:%(network_tenant_id)s"
]
],
"admin_only": [
[
"role:admin"
]
],
"regular_user": [],
"shared": [
[
"field:networks:shared=True"
]
],
"default": [
[
"rule:admin_or_owner" 2
]
],
"create_subnet": [
[
"rule:admin_or_network_owner"
]
],
"get_subnet": [
[
"rule:admin_or_owner"
],
[
"rule:shared"
]
],
"update_subnet": [
[
"rule:admin_or_network_owner"
]
],
"delete_subnet": [
[
"rule:admin_or_network_owner"
]
],
"create_network": [],
"get_network": [
[
"rule:admin_or_owner"
],
[ 3
"rule:shared"
]
],
"create_network:shared": [
[
"rule:admin_only"
]
], 4
"update_network": [
[
"rule:admin_or_owner"
]
],
"delete_network": [
[
"rule:admin_or_owner"
]
],
"create_port": [],
"create_port:mac_address": [
[
"rule:admin_or_network_owner"
]
],
"create_port:fixed_ips": [
[ 5
"rule:admin_or_network_owner"
]
],
"get_port": [
[
"rule:admin_or_owner"
]
],
"update_port": [
[
"rule:admin_or_owner"
]
],
"delete_port": [
[
"rule:admin_or_owner"
]
]
}
- 1
- A rule that evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (tenant identifier is equal).
- 2
- The default policy that is always evaluated if an API operation does not match any of the policies in policy.json.
- This policy evaluates successfully if either admin_or_owner, or shared evaluates successfully.
- 4
- This policy restricts the ability to manipulate the shared attribute for a network to administrators only.
- 5
- This policy restricts the ability to manipulate the mac_address attribute for a port only to administrators and the owner of the network where the port is attached.
{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}

6.11. High availability
In general, you can run neutron-server and neutron-dhcp-agent in an active/active fashion. You can run the neutron-l3-agent service as active/passive, which avoids IP conflicts with respect to gateway IP addresses.
6.11.1. Networking high availability with Pacemaker
- neutron-server: https://github.com/madkiss/openstack-resource-agents
- neutron-dhcp-agent: https://github.com/madkiss/openstack-resource-agents
- neutron-l3-agent: https://github.com/madkiss/openstack-resource-agents
6.12. Plug-in pagination and sorting support
Table 6.42. Plug-ins that support native pagination and sorting
| Plug-in | Support Native Pagination | Support Native Sorting |
|---|---|---|
| ML2 | True | True |
| Open vSwitch | True | True |
| Linux Bridge | True | True |
Revision History
| Revision | Date |
|---|---|
| 5.0.0-26 | Thu Oct 29 2015 |
| 5.0.0-25 | Mon December 8 2014 |
| 5.0.0-23 | Wed November 12 2014 |
| 5.0.0-22 | Wed October 22 2014 |
| 5.0.0-21 | Tue October 7 2014 |
| 5.0.0-19 | Thu August 28 2014 |
| 5.0.0-18 | Tue August 12 2014 |
| 5.0.0-17 | Fri August 8 2014 |
| 5.0.0-16 | Wed August 6 2014 |
| 5.0.0-15 | Tue August 5 2014 |
| 5.0.0-13 | Mon July 7 2014 |
| 5.0.0-11 | Wed July 2 2014 |
| 5.0.0-10 | Fri June 27 2014 |
| 5.0.0-9 | Thu June 26 2014 |
| 5.0.0-3 | Wed May 28 2014 |
| 5.0.0-2 | Wed May 28 2014 |
| 5.0.0-1 | Tue May 27 2014 |
| 5-20140416 | Wed Apr 16 2014 |