Chapter 4. Encrypting and validating OpenStack services

You can use barbican to encrypt and validate several Red Hat OpenStack Platform services, such as Block Storage (cinder) encryption keys, Block Storage volume images, Object Storage (swift) objects, and Image Service (glance) images.

Important

Nova formats encrypted volumes during their first use if they are unencrypted. The resulting block device is then presented to the Compute node.

Guidelines for containerized services

  • Do not update any configuration file you might find on the physical node’s host operating system, for example, /etc/cinder/cinder.conf. The containerized service does not reference this file.
  • Do not update the configuration file running within the container. Changes are lost once you restart the container.

    Instead, if you must change containerized services, update the configuration file in /var/lib/config-data/puppet-generated/, which is used to generate the container.

    For example:

    • keystone: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
    • cinder: /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf
    • nova: /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf

    Changes are applied after you restart the container.
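    For example, a minimal sketch of changing one containerized cinder option and restarting its service. The container name cinder_volume is an assumption; confirm the name with podman ps (or docker ps on older releases):

    $ sudo crudini --set /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf DEFAULT debug True
    # cinder_volume is an assumed container name; confirm it with: sudo podman ps
    $ sudo podman restart cinder_volume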

4.1. Encrypting Object Storage (swift) at-rest objects

By default, objects uploaded to Object Storage (swift) are stored unencrypted. Because of this, it is possible to access objects directly from the file system. This can present a security risk if disks are not properly erased before they are discarded. When you have barbican enabled, the Object Storage service (swift) can transparently encrypt and decrypt your stored (at-rest) objects. At-rest encryption is distinct from in-transit encryption in that it refers to the objects being encrypted while being stored on disk.

Swift performs these encryption tasks transparently, with the objects being automatically encrypted when uploaded to swift, then automatically decrypted when served to a user. This encryption and decryption is done using the same (symmetric) key, which is stored in barbican.
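Because the encryption is transparent, the normal object workflow does not change. The following sketch uses hypothetical container and object names; the object is encrypted on the object server disks but is returned decrypted:

    $ openstack container create secure-container
    # secure-container and example.txt are hypothetical names
    $ openstack object create secure-container example.txt
    $ openstack object save secure-container example.txt --file example-downloaded.txt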

Note

You cannot disable encryption after you have enabled encryption and added data to the swift cluster, because the data is now stored in an encrypted state. Consequently, the data will not be readable if encryption is disabled, until you re-enable encryption with the same key.

Prerequisites

  • OpenStack Key Manager is installed and enabled

Procedure

  1. Include the SwiftEncryptionEnabled: True parameter in your environment file, then re-run openstack overcloud deploy using /home/stack/overcloud_deploy.sh. See the sketch after this procedure for an example environment file.
  2. Confirm that swift is configured to use at-rest encryption:

    $ crudini --get /var/lib/config-data/puppet-generated/swift/etc/swift/proxy-server.conf pipeline:main pipeline
    
    pipeline = catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes kms_keymaster encryption proxy-logging proxy-server

    The result should include an entry for encryption.
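The following is a minimal sketch of the environment file and redeployment referenced in step 1. The path /home/stack/templates/swift-encryption.yaml is an assumption; use whichever environment file your /home/stack/overcloud_deploy.sh script already passes with -e:

    $ cat /home/stack/templates/swift-encryption.yaml
    parameter_defaults:
      SwiftEncryptionEnabled: True

    # Ensure overcloud_deploy.sh passes this file with -e, then re-run it:
    $ cd /home/stack && ./overcloud_deploy.sh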

4.2. Encrypting Block Storage (cinder) volumes

You can use barbican to manage your Block Storage (cinder) encryption keys. This configuration uses LUKS to encrypt the disks attached to your instances, including boot disks. Key management is transparent to the user; when you create a new volume using luks as the encryption type, cinder generates a symmetric key secret for the volume and stores it in barbican. When booting the instance (or attaching an encrypted volume), nova retrieves the key from barbican and stores the secret locally as a Libvirt secret on the Compute node.
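After an encrypted volume is attached, you can confirm that the key reached the Compute node by listing the libvirt secrets. This is a sketch that assumes a podman-based deployment with a libvirt container named nova_libvirt; confirm the container name on your nodes:

    # nova_libvirt is an assumed container name; confirm it with: sudo podman ps
    $ sudo podman exec -it nova_libvirt virsh secret-list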

Procedure

  1. On nodes running the cinder-volume and nova-compute services, confirm that nova and cinder are both configured to use barbican for key management:

    $ crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf key_manager backend
    castellan.key_manager.barbican_key_manager.BarbicanKeyManager
    
    $ crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf key_manager backend
    castellan.key_manager.barbican_key_manager.BarbicanKeyManager
  2. Create a volume type that uses encryption. When you create new volumes, you can model them on the settings you define here:

    $ openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LuksEncryptor-Template-256
    +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Field       | Value                                                                                                                                                                              |
    +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | description | None                                                                                                                                                                               |
    | encryption  | cipher='aes-xts-plain64', control_location='front-end', encryption_id='9df604d0-8584-4ce8-b450-e13e6316c4d3', key_size='256', provider='nova.volume.encryptors.luks.LuksEncryptor' |
    | id          | 78898a82-8f4c-44b2-a460-40a5da9e4d59                                                                                                                                               |
    | is_public   | True                                                                                                                                                                               |
    | name        | LuksEncryptor-Template-256                                                                                                                                                         |
    +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  3. Create a new volume and specify that it uses the LuksEncryptor-Template-256 settings:

    $ openstack volume create --size 1 --type LuksEncryptor-Template-256 'Encrypted-Test-Volume'
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | attachments         | []                                   |
    | availability_zone   | nova                                 |
    | bootable            | false                                |
    | consistencygroup_id | None                                 |
    | created_at          | 2018-01-22T00:19:06.000000           |
    | description         | None                                 |
    | encrypted           | True                                 |
    | id                  | a361fd0b-882a-46cc-a669-c633630b5c93 |
    | migration_status    | None                                 |
    | multiattach         | False                                |
    | name                | Encrypted-Test-Volume                |
    | properties          |                                      |
    | replication_status  | None                                 |
    | size                | 1                                    |
    | snapshot_id         | None                                 |
    | source_volid        | None                                 |
    | status              | creating                             |
    | type                | LuksEncryptor-Template-256           |
    | updated_at          | None                                 |
    | user_id             | 0e73cb3111614365a144e7f8f1a972af     |
    +---------------------+--------------------------------------+

    The resulting secret is automatically uploaded to the barbican back end.

    Note

    Ensure that the user creating the encrypted volume has the creator barbican role on the project. For more information, see the Grant user access to the creator role section.

  4. Use barbican to confirm that the disk encryption key is present. In this example, the timestamp matches the LUKS volume creation time:

    $ openstack secret list
    +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+
    | Secret href                                                                        | Name | Created                   | Status | Content types                             | Algorithm | Bit length | Secret type | Mode | Expiration |
    +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+
    | https://192.168.123.169:9311/v1/secrets/24845e6d-64a5-4071-ba99-0fdd1046172e | None | 2018-01-22T02:23:15+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes       |        256 | symmetric   | None | None       |
    +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+
  5. Attach the new volume to an existing instance. For example:

    $ openstack server add volume testInstance Encrypted-Test-Volume

    The volume is then presented to the guest operating system and can be mounted using the built-in tools.
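    For example, a minimal sketch from inside the guest, assuming the volume appears as /dev/vdb; the device name depends on the hypervisor and on any other attached volumes. Because encryption is handled on the front end, the guest sees an ordinary block device:

    # lsblk            (confirm the device name; /dev/vdb is an assumption)
    # mkfs.ext4 /dev/vdb
    # mount /dev/vdb /mnt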

4.2.1. Migrating Block Storage volumes to OpenStack Key Manager

If you previously used ConfKeyManager to manage disk encryption keys, you can migrate the volumes to OpenStack Key Manager by scanning the databases for encryption_key_id entries within scope for migration to barbican. Each entry gets a new barbican key ID and the existing ConfKeyManager secret is retained.

Note
  • Previously, you could reassign ownership for volumes encrypted using ConfKeyManager. This is not possible for volumes that have their keys managed by barbican.
  • Activating barbican will not break your existing keymgr volumes.

Prerequisites

Before you migrate, review the following differences between Barbican-managed encrypted volumes and volumes that use ConfKeyManager:

  • You cannot transfer ownership of encrypted volumes, because it is not currently possible to transfer ownership of the barbican secret.
  • Barbican is more restrictive about who is allowed to read and delete secrets, which can affect some cinder volume operations. For example, a user cannot attach, detach, or delete a different user’s volumes.

Procedure

  1. Deploy the barbican service.
  2. Add the creator role to the cinder service. For example:

    # openstack role create creator
    # openstack role add --user cinder creator --project service
  3. Restart the cinder-volume and cinder-backup services. The cinder-volume and cinder-backup services automatically begin the migration process. You can check the log files to view status information about the migration:

    • cinder-volume - migrates keys stored in cinder’s Volumes and Snapshots tables.
    • cinder-backup - migrates keys in the Backups table.
  4. Monitor the logs for the message indicating migration has finished and check that no more volumes are using the ConfKeyManager all-zeros encryption key ID.
  5. Remove the fixed_key option from cinder.conf and nova.conf. You must determine which nodes have this setting configured.
  6. Remove the creator role from the cinder service.
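    For example, a sketch of removing the role that you granted in step 2:

    # openstack role remove --user cinder --project service creator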

Verification

  • After you start the process, one of these entries appears in the log files. This indicates whether the migration started correctly, or it identifies the issue it encountered:

    • Not migrating encryption keys because the ConfKeyManager is still in use.
    • Not migrating encryption keys because the ConfKeyManager's fixed_key is not in use.
    • Not migrating encryption keys because migration to the 'XXX' key_manager backend is not supported. - This message is unlikely to appear; it is a safety check in case the code encounters a key manager back end other than barbican, because the code supports only one migration scenario: from ConfKeyManager to barbican.
    • Not migrating encryption keys because there are no volumes associated with this host. - This can occur when cinder-volume is running on multiple hosts, and a particular host has no volumes associated with it. This arises because every host is responsible for handling its own volumes.
    • Starting migration of ConfKeyManager keys.
    • Migrating volume <UUID> encryption key to Barbican - During migration, all of the host’s volumes are examined, and if a volume is still using the ConfKeyManager’s key ID (identified by the fact that it’s all zeros (00000000-0000-0000-0000-000000000000)), then this message appears.

      • For cinder-backup, this message uses slightly different capitalization: Migrating Volume [...] or Migrating Backup [...]
  • After each host examines all of its volumes, the host displays a summary status message:

    `No volumes are using the ConfKeyManager's encryption_key_id.`
    `No backups are known to be using the ConfKeyManager's encryption_key_id.`
  • You may also see the following entries:

    • There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID.
    • There are still %d backup(s) using the ConfKeyManager’s all-zeros encryption key ID.

      Both of these messages can appear in the cinder-volume and cinder-backup logs. Although each service handles the migration of only its own entries, it is aware of the other’s status. As a result, cinder-volume knows whether cinder-backup still has backups to migrate, and cinder-backup knows whether the cinder-volume service has volumes to migrate.

Although each host migrates only its own volumes, the summary message is based on a global assessment of whether any volume still requires migration. This allows you to confirm that migration is complete for all volumes.
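A quick way to scan for these messages is to search the service logs. This sketch assumes the default containerized log locations; in particular, the cinder-backup log path is an assumption, so adjust it to your deployment:

    $ sudo grep -iE "migrat|confkeymanager" /var/log/containers/cinder/cinder-volume.log
    # the cinder-backup log path below is an assumption
    $ sudo grep -iE "migrat|confkeymanager" /var/log/containers/cinder/cinder-backup.log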

Cleanup

After migrating your key IDs into barbican, the fixed key remains in the configuration files. This can present a security concern to some users, because the fixed_key value is not encrypted in the .conf files.

To address this, you can manually remove the fixed_key values from your nova and cinder configurations. However, first complete testing and review the output of the log file before you proceed, because disks that are still dependent on this value are not accessible.

Important

The encryption_key_id was only recently added to the Backup table, as part of the Queens release. As a result, pre-existing backups of encrypted volumes are likely to exist. The all-zeros encryption_key_id is stored on the backup itself, but it does not appear in the Backup database. As such, the migration process cannot know for certain whether a backup of an encrypted volume still relies on the all-zeros ConfKeyManager key ID.

  1. Review the existing fixed_key values. The values must match for both services.

    crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key
    crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key
    Important

    Make a backup of the existing fixed_key values. This allows you to restore the value if something goes wrong, or if you need to restore a backup that uses the old encryption key.

  2. Delete the fixed_key values:

    crudini --del /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key
    crudini --del /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key

Troubleshooting

The barbican secret can only be created when the requestor has the creator role. This means that the cinder service itself requires the creator role, otherwise a log sequence similar to this will occur:

  1. Starting migration of ConfKeyManager keys.
  2. Migrating volume <UUID> encryption key to Barbican
  3. Error migrating encryption key: Forbidden: Secret creation attempt not allowed - please review your user/project privileges
  4. There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID.

The key message is the third one: Secret creation attempt not allowed. To fix the problem, update the cinder account’s privileges:

  1. Run openstack role add --project service --user cinder creator
  2. Restart the cinder-volume and cinder-backup services.

As a result, the next attempt at migration should succeed.

4.3. Validating Block Storage (cinder) volume images

The Block Storage service (cinder) automatically validates the signature of any downloaded, signed image when you create a volume from that image. The signature is validated before the image is written to the volume. To improve performance, you can use the Block Storage Image-Volume cache to store validated images for creating new volumes.
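For example, validation runs when you create a volume from a signed image. The following is a minimal sketch that assumes a signed image named cirros_0_4_0_signed, such as the one created in Section 4.4, “Signing Image Service (glance) images”; the volume name is arbitrary:

    $ openstack volume create --size 1 --image cirros_0_4_0_signed verified-test-volume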

Note

Cinder image signature validation is not supported with Red Hat Ceph Storage or RBD volumes.

Procedure

  1. Log in to a Controller node.
  2. Choose one of the following options:

    • View cinder’s image validation activities in the Volume log, /var/log/containers/cinder/cinder-volume.log.

      For example, you can expect the following entry when you create a volume from a signed image:

      2018-05-24 12:48:35.256 1 INFO cinder.image.image_utils [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27
    • Use the openstack volume list and cinder show commands:

      1. Use the openstack volume list command to locate the volume ID.
      2. Run the cinder show command on a compute node:

        cinder show <VOLUME_ID>
  3. Locate the volume_image_metadata section with the line signature_verified : True.

    $ cinder show d0db26bb-449d-4111-a59a-6fbb080bb483
    +--------------------------------+-------------------------------------------------+
    | Property                       | Value                                           |
    +--------------------------------+-------------------------------------------------+
    | attached_servers               | []                                              |
    | attachment_ids                 | []                                              |
    | availability_zone              | nova                                            |
    | bootable                       | true                                            |
    | consistencygroup_id            | None                                            |
    | created_at                     | 2018-10-12T19:04:41.000000                      |
    | description                    | None                                            |
    | encrypted                      | True                                            |
    | id                             | d0db26bb-449d-4111-a59a-6fbb080bb483            |
    | metadata                       |                                                 |
    | migration_status               | None                                            |
    | multiattach                    | False                                           |
    | name                           | None                                            |
    | os-vol-host-attr:host          | centstack.localdomain@nfs#nfs                   |
    | os-vol-mig-status-attr:migstat | None                                            |
    | os-vol-mig-status-attr:name_id | None                                            |
    | os-vol-tenant-attr:tenant_id   | 1a081dd2505547f5a8bb1a230f2295f4                |
    | replication_status             | None                                            |
    | size                           | 1                                               |
    | snapshot_id                    | None                                            |
    | source_volid                   | None                                            |
    | status                         | available                                       |
    | updated_at                     | 2018-10-12T19:05:13.000000                      |
    | user_id                        | ad9fe430b3a6416f908c79e4de3bfa98                |
    | volume_image_metadata          | checksum : f8ab98ff5e73ebab884d80c9dc9c7290     |
    |                                | container_format : bare                         |
    |                                | disk_format : qcow2                             |
    |                                | image_id : 154d4d4b-12bf-41dc-b7c4-35e5a6a3482a |
    |                                | image_name : cirros-0.3.5-x86_64-disk           |
    |                                | min_disk : 0                                    |
    |                                | min_ram : 0                                     |
    |                                | signature_verified : True                       |
    |                                | size : 13267968                                 |
    | volume_type                    | nfs                                             |
    +--------------------------------+-------------------------------------------------+
Note

Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then you must manually download the image from glance, sign the image, and then re-upload the image. This is true whether the snapshot is from an instance created with signed images, or an instance booted from a volume created from a signed image.

Note

A volume can be uploaded as an Image service (glance) image. If the original volume was bootable, the image can be used to create a bootable volume in the Block Storage service (cinder). If you have configured the Block Storage service to check for signed images then you must manually download the image from glance, compute the image signature and update all appropriate image signature properties before using the image. For more information, see Section 4.5, “Validating snapshots”.

4.3.1. Automatic deletion of volume image encryption key

The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image.

Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key.

The Block Storage service automatically adds two properties to a volume image:

  • cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image.
  • cinder_encryption_key_deletion_policy - The policy that determines whether the Image service instructs the Key Management service to delete the key associated with this image.
Important

The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values.

When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy property to on_image_deletion. When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy equals on_image_deletion.
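You can inspect both properties on an uploaded volume image. A sketch, using a placeholder image ID:

    # <image_id> is a placeholder for the ID of the uploaded volume image
    $ openstack image show -c properties <image_id>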

Important

Red Hat does not recommend manual manipulation of the cinder_encryption_key_id or cinder_encryption_key_deletion_policy properties. If you use the encryption key that is identified by the value of cinder_encryption_key_id for any other purpose, you risk data loss.

4.4. Signing Image Service (glance) images

When you configure the Image Service (glance) to verify that an uploaded image has not been tampered with, you must sign images before you can start an instance using those images. Use the openssl command to sign an image with a key that is stored in barbican, then upload the image to glance with the accompanying signing information. As a result, the image’s signature is verified before each use, with the instance build process failing if the signature does not match.

Prerequisites

  • OpenStack Key Manager is installed and enabled

Procedure

  1. In your environment file, enable image verification with the VerifyGlanceSignatures: True setting. You must re-run the openstack overcloud deploy command for this setting to take effect.
  2. To verify that glance image validation is enabled, run the following command on an overcloud Compute node:

    $ sudo crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf glance verify_glance_signatures
    Note

    If you use Ceph as the back end for the Image and Compute services, a CoW clone is created. Therefore, image signature verification cannot be performed.

  3. Confirm that glance is configured to use barbican:

    $ sudo crudini --get /var/lib/config-data/puppet-generated/glance_api/etc/glance/glance-api.conf key_manager backend
    castellan.key_manager.barbican_key_manager.BarbicanKeyManager
  4. Generate a private key and a signing certificate:

    openssl genrsa -out private_key.pem 1024
    openssl rsa -pubout -in private_key.pem -out public_key.pem
    openssl req -new -key private_key.pem -out cert_request.csr
    openssl x509 -req -days 14 -in cert_request.csr -signkey private_key.pem -out x509_signing_cert.crt
  5. Add the certificate to the barbican secret store:

    $ source ~/overcloudrc
    $ openstack secret store --name signing-cert --algorithm RSA --secret-type certificate --payload-content-type "application/octet-stream" --payload-content-encoding base64  --payload "$(base64 x509_signing_cert.crt)" -c 'Secret href' -f value
    https://192.168.123.170:9311/v1/secrets/5df14c2b-f221-4a02-948e-48a61edd3f5b
    Note

    Record the resulting UUID for use in a later step. In this example, the certificate’s UUID is 5df14c2b-f221-4a02-948e-48a61edd3f5b.

  6. Use private_key.pem to sign the image and generate the .signature file (a sketch for verifying the signature locally with public_key.pem follows this procedure). For example:

    $ openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out cirros-0.4.0.signature cirros-0.4.0-x86_64-disk.img
  7. Convert the resulting .signature file into base64 format:

    $ base64 -w 0 cirros-0.4.0.signature  > cirros-0.4.0.signature.b64
  8. Load the base64 value into a variable to use it in the subsequent command:

    $ cirros_signature_b64=$(cat cirros-0.4.0.signature.b64)
  9. Upload the signed image to glance. For img_signature_certificate_uuid, you must specify the UUID of the signing key you previously uploaded to barbican:

    $ openstack image create \
    --container-format bare --disk-format qcow2 \
    --property img_signature="$cirros_signature_b64" \
    --property img_signature_certificate_uuid="5df14c2b-f221-4a02-948e-48a61edd3f5b" \
    --property img_signature_hash_method="SHA-256" \
    --property img_signature_key_type="RSA-PSS" cirros_0_4_0_signed \
    --file cirros-0.4.0-x86_64-disk.img
    +--------------------------------+----------------------------------------------------------------------------------+
    | Property                       | Value                                                                            |
    +--------------------------------+----------------------------------------------------------------------------------+
    | checksum                       | None                                                                             |
    | container_format               | bare                                                                             |
    | created_at                     | 2018-01-23T05:37:31Z                                                             |
    | disk_format                    | qcow2                                                                            |
    | id                             | d3396fa0-2ea2-4832-8a77-d36fa3f2ab27                                             |
    | img_signature                  | lcI7nGgoKxnCyOcsJ4abbEZEpzXByFPIgiPeiT+Otjz0yvW00KNN3fI0AA6tn9EXrp7fb2xBDE4UaO3v |
    |                                | IFquV/s3mU4LcCiGdBAl3pGsMlmZZIQFVNcUPOaayS1kQYKY7kxYmU9iq/AZYyPw37KQI52smC/zoO54 |
    |                                | zZ+JpnfwIsM=                                                                     |
    | img_signature_certificate_uuid | ba3641c2-6a3d-445a-8543-851a68110eab                                             |
    | img_signature_hash_method      | SHA-256                                                                          |
    | img_signature_key_type         | RSA-PSS                                                                          |
    | min_disk                       | 0                                                                                |
    | min_ram                        | 0                                                                                |
    | name                           | cirros_0_4_0_signed                                                              |
    | owner                          | 9f812310df904e6ea01e1bacb84c9f1a                                                 |
    | protected                      | False                                                                            |
    | size                           | None                                                                             |
    | status                         | queued                                                                           |
    | tags                           | []                                                                               |
    | updated_at                     | 2018-01-23T05:37:31Z                                                             |
    | virtual_size                   | None                                                                             |
    | visibility                     | shared                                                                           |
    +--------------------------------+----------------------------------------------------------------------------------+
  10. You can view glance’s image validation activities in the Compute log: /var/log/containers/nova/nova-compute.log. For example, you can expect the following entry when the instance is booted:

    2018-05-24 12:48:35.256 1 INFO nova.image.glance [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27
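To check a signature locally before you upload the image, you can verify it against the public key generated in step 4. This sketch uses the files from steps 4 and 6, and prints Verified OK when the signature matches:

    $ openssl dgst -sha256 -verify public_key.pem -sigopt rsa_padding_mode:pss -signature cirros-0.4.0.signature cirros-0.4.0-x86_64-disk.img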

4.5. Validating snapshots

Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then snapshots must be signed, even if they were created from an instance with a signed image.

Procedure

  1. Download the snapshot from glance:

    openstack image save --file <local-file-name> <image-name>
  2. Generate a signature to validate the snapshot. This is the same process that you use when you generate a signature to validate any image. For more information, see Section 4.4, “Signing Image Service (glance) images”.
  3. Update the image properties:

      openstack image set \
        --property img_signature="$cirros_signature_b64" \
        --property img_signature_certificate_uuid="5df14c2b-f221-4a02-948e-48a61edd3f5b" \
        --property img_signature_hash_method="SHA-256" \
        --property img_signature_key_type="RSA-PSS" \
        <image_id_of_the_snapshot>
  4. Optional: Remove the downloaded glance image from the filesystem:

    rm <local-file-name>