Chapter 4. Encrypting and validating OpenStack services
You can use barbican to encrypt and validate several Red Hat OpenStack Platform services, such as Block Storage (cinder) encryption keys, Block Storage volume images, Object Storage (swift) objects, and Image Service (glance) images.
Nova formats encrypted volumes during their first use if they are unencrypted. The resulting block device is then presented to the Compute node.
Guidelines for containerized services
- Do not update any configuration file you might find on the physical node's host operating system, for example, /etc/cinder/cinder.conf. The containerized service does not reference this file.
- Do not update the configuration file running within the container. Changes are lost once you restart the container.
- Instead, if you must change containerized services, update the configuration file in /var/lib/config-data/puppet-generated/, which is used to generate the container (see the example after this list). For example:
  - keystone: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
  - cinder: /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf
  - nova: /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf

  Changes are applied after you restart the container.
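For example, a typical edit-and-restart cycle for a cinder option looks like the following sketch. The container name and the restart command are assumptions; containerized deployments commonly run the services with podman, but the container names or systemd units can differ in your deployment:

$ sudo crudini --set /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf DEFAULT debug True
$ sudo podman restart cinder_volume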
4.1. Encrypting Object Storage (swift) at-rest objects
By default, objects uploaded to Object Storage (swift) are stored unencrypted. Because of this, it is possible to access objects directly from the file system. This can present a security risk if disks are not properly erased before they are discarded. When you have barbican enabled, the Object Storage service (swift) can transparently encrypt and decrypt your stored (at-rest) objects. At-rest encryption is distinct from in-transit encryption in that it refers to the objects being encrypted while being stored on disk.
Swift performs these encryption tasks transparently, with the objects being automatically encrypted when uploaded to swift, then automatically decrypted when served to a user. This encryption and decryption is done using the same (symmetric) key, which is stored in barbican.
You cannot disable encryption after you have enabled encryption and added data to the swift cluster, because the data is now stored in an encrypted state. Consequently, the data will not be readable if encryption is disabled, until you re-enable encryption with the same key.
Prerequisites
- OpenStack Key Manager is installed and enabled
Procedure
- Include the SwiftEncryptionEnabled: True parameter in your environment file, then re-run openstack overcloud deploy using /home/stack/overcloud_deploy.sh. An example environment file is shown after this procedure.
- Confirm that swift is configured to use at-rest encryption:

$ crudini --get /var/lib/config-data/puppet-generated/swift/etc/swift/proxy-server.conf pipeline-main pipeline

pipeline = catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes kms_keymaster encryption proxy-logging proxy-server

The result should include an entry for encryption.
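For reference, a minimal environment file that sets this parameter might look like the following sketch. The file path is an assumption; the file must be included in the deploy command, for example with an -e entry in /home/stack/overcloud_deploy.sh:

$ cat /home/stack/templates/swift-encryption.yaml
parameter_defaults:
  SwiftEncryptionEnabled: True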
4.2. Encrypting Block Storage (cinder) volumes
You can use barbican to manage your Block Storage (cinder) encryption keys. This configuration uses LUKS to encrypt the disks attached to your instances, including boot disks. Key management is transparent to the user; when you create a new volume using luks
as the encryption type, cinder generates a symmetric key secret for the volume and stores it in barbican. When booting the instance (or attaching an encrypted volume), nova retrieves the key from barbican and stores the secret locally as a Libvirt secret on the Compute node.
Procedure
On nodes running the cinder-volume and nova-compute services, confirm that nova and cinder are both configured to use barbican for key management:

$ crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf key_manager backend
castellan.key_manager.barbican_key_manager.BarbicanKeyManager

$ crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf key_manager backend
castellan.key_manager.barbican_key_manager.BarbicanKeyManager
Create a volume template that uses encryption. When you create new volumes they can be modeled off the settings you define here:
$ openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LuksEncryptor-Template-256
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field       | Value                                                                                                                                                                         |
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| description | None                                                                                                                                                                          |
| encryption  | cipher='aes-xts-plain64', control_location='front-end', encryption_id='9df604d0-8584-4ce8-b450-e13e6316c4d3', key_size='256', provider='nova.volume.encryptors.luks.LuksEncryptor' |
| id          | 78898a82-8f4c-44b2-a460-40a5da9e4d59                                                                                                                                          |
| is_public   | True                                                                                                                                                                          |
| name        | LuksEncryptor-Template-256                                                                                                                                                    |
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Create a new volume and specify that it uses the LuksEncryptor-Template-256 settings:

$ openstack volume create --size 1 --type LuksEncryptor-Template-256 'Encrypted-Test-Volume'
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2018-01-22T00:19:06.000000           |
| description         | None                                 |
| encrypted           | True                                 |
| id                  | a361fd0b-882a-46cc-a669-c633630b5c93 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | Encrypted-Test-Volume                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | LuksEncryptor-Template-256           |
| updated_at          | None                                 |
| user_id             | 0e73cb3111614365a144e7f8f1a972af     |
+---------------------+--------------------------------------+
The resulting secret is automatically uploaded to the barbican back end.
Note: Ensure that the user creating the encrypted volume has the creator barbican role on the project. For more information, see the Grant user access to the creator role section; an example grant is also shown after this procedure.

Obtain the barbican secret UUID. This value is displayed in the encryption_key_id field:

$ cinder --os-volume-api-version 3.64 volume show Encrypted-Test-Volume
+------------------------------+-------------------------------------+
|Property                      |Value                                |
+------------------------------+-------------------------------------+
|attached_servers              |[]                                   |
|attachment_ids                |[]                                   |
|availability_zone             |nova                                 |
|bootable                      |false                                |
|cluster_name                  |None                                 |
|consistencygroup_id           |None                                 |
|created_at                    |2022-07-28T17:35:26.000000           |
|description                   |None                                 |
|encrypted                     |True                                 |
|encryption_key_id             |0944b8a8-de09-4413-b2ed-38f6c4591dd4 |
|group_id                      |None                                 |
|id                            |a0b51b97-0392-460a-abfa-093022a120f3 |
|metadata                      |                                     |
|migration_status              |None                                 |
|multiattach                   |False                                |
|name                          |vol                                  |
|os-vol-host-attr:host         |hostgroup@tripleo_iscsi#tripleo_iscsi|
|os-vol-mig-status-attr:migstat|None                                 |
|os-vol-mig-status-attr:name_id|None                                 |
|os-vol-tenant-attr:tenant_id  |a2071ece39b3440aa82395ff7707996f     |
|provider_id                   |None                                 |
|replication_status            |None                                 |
|service_uuid                  |471f0805-072e-4256-b447-c7dd10ceb807 |
|shared_targets                |False                                |
|size                          |1                                    |
|snapshot_id                   |None                                 |
|source_volid                  |None                                 |
|status                        |available                            |
|updated_at                    |2022-07-28T17:35:26.000000           |
|user_id                       |ba311b5c2b8e438c951d1137333669d4     |
|volume_type                   |LUKS                                 |
|volume_type_id                |cc188ace-f73d-4af5-bf5a-d70ccc5a401c |
+------------------------------+-------------------------------------+
Note: You must use the --os-volume-api-version 3.64 parameter with the Cinder CLI to display the encryption_key_id value. There is no equivalent OpenStack CLI command.

Use barbican to confirm that the disk encryption key is present. In this example, the timestamp matches the LUKS volume creation time:
$ openstack secret list
+-------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+
| Secret href                                                                   | Name | Created                   | Status | Content types                             | Algorithm | Bit length | Secret type | Mode | Expiration |
+-------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+
| https://192.168.123.169:9311/v1/secrets/0944b8a8-de09-4413-b2ed-38f6c4591dd4 | None | 2018-01-22T02:23:15+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes       | 256        | symmetric   | None | None       |
+-------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+
Attach the new volume to an existing instance. For example:
$ openstack server add volume testInstance Encrypted-Test-Volume
The volume is then presented to the guest operating system and can be mounted using the built-in tools.
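If the user who creates encrypted volumes does not yet have the creator role, a grant similar to the following is usually sufficient; the user and project names are placeholders, and the second command simply confirms the assignment:

$ openstack role add --user <user> --project <project> creator
$ openstack role assignment list --user <user> --project <project> --names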
4.2.1. Migrating Block Storage volumes to OpenStack Key Manager
If you previously used ConfKeyManager to manage disk encryption keys, you can migrate the volumes to OpenStack Key Manager by scanning the databases for encryption_key_id entries within scope for migration to barbican. Each entry gets a new barbican key ID and the existing ConfKeyManager secret is retained.
- Previously, you could reassign ownership for volumes encrypted using ConfKeyManager. This is not possible for volumes that have their keys managed by barbican.
- Activating barbican will not break your existing keymgr volumes.
Prerequisites
Before you migrate, review the following differences between Barbican-managed encrypted volumes and volumes that use ConfKeyManager:
- You cannot transfer ownership of encrypted volumes, because it is not currently possible to transfer ownership of the barbican secret.
- Barbican is more restrictive about who is allowed to read and delete secrets, which can affect some cinder volume operations. For example, a user cannot attach, detach, or delete a different user’s volumes.
Procedure
- Deploy the barbican service.
Add the creator role to the cinder service. For example:

# openstack role create creator
# openstack role add --user cinder creator --project service
Restart the cinder-volume and cinder-backup services. The cinder-volume and cinder-backup services automatically begin the migration process. You can check the log files to view status information about the migration (see the example after this procedure):
  - cinder-volume - migrates keys stored in cinder's Volumes and Snapshots tables.
  - cinder-backup - migrates keys in the Backups table.
- Monitor the logs for the message indicating migration has finished and check that no more volumes are using the ConfKeyManager all-zeros encryption key ID.
- Remove the fixed_key option from cinder.conf and nova.conf. You must determine which nodes have this setting configured.
- Remove the creator role from the cinder service.
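For example, on a containerized deployment you might restart the services and then follow the migration messages in the service logs. The container names below are assumptions and can differ between deployments; the cinder-backup log path is assumed to follow the same pattern as the cinder-volume log path used elsewhere in this chapter:

$ sudo podman restart cinder_volume cinder_backup
$ sudo grep -i "migrat" /var/log/containers/cinder/cinder-volume.log
$ sudo grep -i "migrat" /var/log/containers/cinder/cinder-backup.log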
Verification
After you start the process, one of these entries appears in the log files. This indicates whether the migration started correctly, or it identifies the issue it encountered:
- Not migrating encryption keys because the ConfKeyManager is still in use.
- Not migrating encryption keys because the ConfKeyManager's fixed_key is not in use.
- Not migrating encryption keys because migration to the 'XXX' key_manager backend is not supported. - This message is unlikely to appear; it is a safety check in case the code ever encounters a Key Manager back end other than barbican, because the code only supports one migration scenario: from ConfKeyManager to barbican.
- Not migrating encryption keys because there are no volumes associated with this host. - This can occur when cinder-volume is running on multiple hosts and a particular host has no volumes associated with it, because every host is responsible for handling its own volumes.
- Starting migration of ConfKeyManager keys.
- Migrating volume <UUID> encryption key to Barbican - During migration, all of the host's volumes are examined, and if a volume is still using the ConfKeyManager's key ID (identified by the fact that it is all zeros, 00000000-0000-0000-0000-000000000000), this message appears. For cinder-backup, this message uses slightly different capitalization: Migrating Volume [...] or Migrating Backup [...]
- After each host examines all of its volumes, the host displays a summary status message:

  `No volumes are using the ConfKeyManager's encryption_key_id.`
  `No backups are known to be using the ConfKeyManager's encryption_key_id.`
You may also see the following entries:
- There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID.
- There are still %d backup(s) using the ConfKeyManager's all-zeros encryption key ID.

Both of these messages can appear in the cinder-volume and cinder-backup logs. Although each service only handles the migration of its own entries, each service is aware of the other's status. As a result, cinder-volume knows whether cinder-backup still has backups to migrate, and cinder-backup knows whether the cinder-volume service has volumes to migrate.
- Although each host migrates only its own volumes, the summary message is based on a global assessment of whether any volume still requires migration. This allows you to confirm that migration for all volumes is complete.
Cleanup
After migrating your key IDs into barbican, the fixed key remains in the configuration files. This can present a security concern to some users, because the fixed_key
value is not encrypted in the .conf
files.
To address this, you can manually remove the fixed_key values from your nova and cinder configurations. However, first complete testing and review the output of the log file before you proceed, because disks that still depend on this value become inaccessible after it is removed.
The encryption_key_id
was only recently added to the Backup
table, as part of the Queens release. As a result, pre-existing backups of encrypted volumes are likely to exist. The all-zeros encryption_key_id
is stored on the backup itself, but it does not appear in the Backup
database. As such, it is impossible for the migration process to know for certain whether a backup of an encrypted volume exists that still relies on the all-zeros ConfKeyMgr
key ID.
Review the existing fixed_key values. The values must match for both services:

crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key
crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key

Important: Make a backup of the existing fixed_key values. This allows you to restore the value if something goes wrong, or if you need to restore a backup that uses the old encryption key.

Delete the fixed_key values:

crudini --del /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key
crudini --del /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key
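Before running the delete commands, you can make the backup recommended in the Important note by redirecting the crudini output to files; the backup file locations are arbitrary examples:

$ sudo crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key > ~/cinder_fixed_key.bak
$ sudo crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key > ~/nova_fixed_key.bak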
Troubleshooting
The barbican secret can only be created when the requestor has the creator role. This means that the cinder service itself requires the creator role; otherwise, a log sequence similar to the following occurs:
- Starting migration of ConfKeyManager keys.
- Migrating volume <UUID> encryption key to Barbican
- Error migrating encryption key: Forbidden: Secret creation attempt not allowed - please review your user/project privileges
- There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID.
The key message is the third one: Secret creation attempt not allowed.
To fix the problem, update the cinder
account’s privileges:
- Run openstack role add --project service --user cinder creator
- Restart the cinder-volume and cinder-backup services.
As a result, the next attempt at migration should succeed.
4.3. Validating Block Storage (cinder) volume images
The Block Storage service (cinder) automatically validates the signature of any downloaded, signed image when you create a volume from that image. The signature is validated before the image is written to the volume. To improve performance, you can use the Block Storage Image-Volume cache to store validated images for creating new volumes.
Cinder image signature validation is not supported with Red Hat Ceph Storage or RBD volumes.
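For example, signature validation runs as part of an ordinary volume-from-image creation such as the following; the image and volume names are placeholders:

$ openstack volume create --image <signed-image-name-or-id> --size 1 <volume-name>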
Procedure
- Log in to a Controller node.
Choose one of the following options:
- View cinder's image validation activities in the Volume log, /var/log/containers/cinder/cinder-volume.log. For example, you can expect the following entry when the instance is booted:
2018-05-24 12:48:35.256 1 INFO cinder.image.image_utils [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27
- Use the openstack volume list and cinder volume show commands:
  - Use the openstack volume list command to locate the volume ID.
  - Run the cinder volume show command on a compute node:

cinder volume show <VOLUME_ID>

  - Locate the volume_image_metadata section with the line signature_verified : True.

$ cinder show d0db26bb-449d-4111-a59a-6fbb080bb483
+--------------------------------+-------------------------------------------------+
| Property                       | Value                                           |
+--------------------------------+-------------------------------------------------+
| attached_servers               | []                                              |
| attachment_ids                 | []                                              |
| availability_zone              | nova                                            |
| bootable                       | true                                            |
| consistencygroup_id            | None                                            |
| created_at                     | 2018-10-12T19:04:41.000000                      |
| description                    | None                                            |
| encrypted                      | True                                            |
| id                             | d0db26bb-449d-4111-a59a-6fbb080bb483            |
| metadata                       |                                                 |
| migration_status               | None                                            |
| multiattach                    | False                                           |
| name                           | None                                            |
| os-vol-host-attr:host          | centstack.localdomain@nfs#nfs                   |
| os-vol-mig-status-attr:migstat | None                                            |
| os-vol-mig-status-attr:name_id | None                                            |
| os-vol-tenant-attr:tenant_id   | 1a081dd2505547f5a8bb1a230f2295f4                |
| replication_status             | None                                            |
| size                           | 1                                               |
| snapshot_id                    | None                                            |
| source_volid                   | None                                            |
| status                         | available                                       |
| updated_at                     | 2018-10-12T19:05:13.000000                      |
| user_id                        | ad9fe430b3a6416f908c79e4de3bfa98                |
| volume_image_metadata          | checksum : f8ab98ff5e73ebab884d80c9dc9c7290     |
|                                | container_format : bare                         |
|                                | disk_format : qcow2                             |
|                                | image_id : 154d4d4b-12bf-41dc-b7c4-35e5a6a3482a |
|                                | image_name : cirros-0.3.5-x86_64-disk           |
|                                | min_disk : 0                                    |
|                                | min_ram : 0                                     |
|                                | signature_verified : False                      |
|                                | size : 13267968                                 |
| volume_type                    | nfs                                             |
+--------------------------------+-------------------------------------------------+
Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then you must manually download the image from glance, sign the image, and then re-upload the image. This is true whether the snapshot is from an instance created with signed images, or an instance booted from a volume created from a signed image.
A volume can be uploaded as an Image service (glance) image. If the original volume was bootable, the image can be used to create a bootable volume in the Block Storage service (cinder). If you have configured the Block Storage service to check for signed images, then you must manually download the image from glance, compute the image signature, and update all appropriate image signature properties before using the image. For more information, see Section 4.5, “Validating snapshots”.
4.3.1. Automatic deletion of volume image encryption key
The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image.
Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key.
The Block Storage service automatically adds two properties to a volume image:
- cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image.
- cinder_encryption_key_deletion_policy - The policy that tells the Image service to tell the Key Management service whether to delete the key associated with this image.
The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values.
When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy
property to on_image_deletion
. When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy
equals on_image_deletion
.
Red Hat does not recommend manual manipulation of the cinder_encryption_key_id
or cinder_encryption_key_deletion_policy
properties. If you use the encryption key that is identified by the value of cinder_encryption_key_id
for any other purpose, you risk data loss.
4.4. Signing Image Service (glance) images
When you configure the Image Service (glance) to verify that an uploaded image has not been tampered with, you must sign images before you can start an instance using those images. Use the openssl
command to sign an image with a key that is stored in barbican, then upload the image to glance with the accompanying signing information. As a result, the image’s signature is verified before each use, with the instance build process failing if the signature does not match.
Prerequisites
- OpenStack Key Manager is installed and enabled
Procedure
- In your environment file, enable image verification with the VerifyGlanceSignatures: True setting. You must re-run the openstack overcloud deploy command for this setting to take effect.
- To verify that glance image validation is enabled, run the following command on an overcloud Compute node:
$ sudo crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf glance verify_glance_signatures
Note: If you use Ceph as the back end for the Image and Compute services, a CoW clone is created. Therefore, image signing verification cannot be performed.
Confirm that glance is configured to use barbican:
$ sudo crudini --get /var/lib/config-data/puppet-generated/glance_api/etc/glance/glance-api.conf key_manager backend
castellan.key_manager.barbican_key_manager.BarbicanKeyManager
Generate a certificate:
openssl genrsa -out private_key.pem 1024
openssl rsa -pubout -in private_key.pem -out public_key.pem
openssl req -new -key private_key.pem -out cert_request.csr
openssl x509 -req -days 14 -in cert_request.csr -signkey private_key.pem -out x509_signing_cert.crt
Add the certificate to the barbican secret store:
$ source ~/overcloudrc
$ openstack secret store --name signing-cert --algorithm RSA --secret-type certificate --payload-content-type "application/octet-stream" --payload-content-encoding base64 --payload "$(base64 x509_signing_cert.crt)" -c 'Secret href' -f value
https://192.168.123.170:9311/v1/secrets/5df14c2b-f221-4a02-948e-48a61edd3f5b
Note: Record the resulting UUID for use in a later step. In this example, the certificate's UUID is 5df14c2b-f221-4a02-948e-48a61edd3f5b.

Use private_key.pem to sign the image and generate the .signature file. For example:

$ openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out cirros-0.4.0.signature cirros-0.4.0-x86_64-disk.img
Convert the resulting .signature file into base64 format:

$ base64 -w 0 cirros-0.4.0.signature > cirros-0.4.0.signature.b64
Load the base64 value into a variable to use it in the subsequent command:
$ cirros_signature_b64=$(cat cirros-0.4.0.signature.b64)
Upload the signed image to glance. For img_signature_certificate_uuid, you must specify the UUID of the signing key you previously uploaded to barbican (a quick check of the uploaded properties is shown after this procedure):

openstack image create \
  --container-format bare --disk-format qcow2 \
  --property img_signature="$cirros_signature_b64" \
  --property img_signature_certificate_uuid="5df14c2b-f221-4a02-948e-48a61edd3f5b" \
  --property img_signature_hash_method="SHA-256" \
  --property img_signature_key_type="RSA-PSS" cirros_0_4_0_signed \
  --file cirros-0.4.0-x86_64-disk.img
+--------------------------------+----------------------------------------------------------------------------------+
| Property                       | Value                                                                            |
+--------------------------------+----------------------------------------------------------------------------------+
| checksum                       | None                                                                             |
| container_format               | bare                                                                             |
| created_at                     | 2018-01-23T05:37:31Z                                                             |
| disk_format                    | qcow2                                                                            |
| id                             | d3396fa0-2ea2-4832-8a77-d36fa3f2ab27                                             |
| img_signature                  | lcI7nGgoKxnCyOcsJ4abbEZEpzXByFPIgiPeiT+Otjz0yvW00KNN3fI0AA6tn9EXrp7fb2xBDE4UaO3v |
|                                | IFquV/s3mU4LcCiGdBAl3pGsMlmZZIQFVNcUPOaayS1kQYKY7kxYmU9iq/AZYyPw37KQI52smC/zoO54 |
|                                | zZ+JpnfwIsM=                                                                     |
| img_signature_certificate_uuid | ba3641c2-6a3d-445a-8543-851a68110eab                                             |
| img_signature_hash_method      | SHA-256                                                                          |
| img_signature_key_type         | RSA-PSS                                                                          |
| min_disk                       | 0                                                                                |
| min_ram                        | 0                                                                                |
| name                           | cirros_0_4_0_signed                                                              |
| owner                          | 9f812310df904e6ea01e1bacb84c9f1a                                                 |
| protected                      | False                                                                            |
| size                           | None                                                                             |
| status                         | queued                                                                           |
| tags                           | []                                                                               |
| updated_at                     | 2018-01-23T05:37:31Z                                                             |
| virtual_size                   | None                                                                             |
| visibility                     | shared                                                                           |
+--------------------------------+----------------------------------------------------------------------------------+
You can view glance's image validation activities in the Compute log, /var/log/containers/nova/nova-compute.log. For example, you can expect the following entry when the instance is booted:

2018-05-24 12:48:35.256 1 INFO nova.image.glance [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27
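To quickly confirm that the signature properties were stored with the uploaded image, you can inspect the image properties; the image name matches the example above:

$ openstack image show cirros_0_4_0_signed -c properties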
4.5. Validating snapshots
Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then snapshots must be signed, even if they were created from an instance with a signed image.
Procedure
- Download the snapshot from glance:
openstack image save --file <local-file-name> <image-name>
- Generate the signature to validate the snapshot, as shown in the sketch after this procedure. This is the same process you use when you generate a signature to validate any image. For more information, see Section 4.4, “Signing Image Service (glance) images”.
Update the image properties:
openstack image set \
  --property img_signature="$cirros_signature_b64" \
  --property img_signature_certificate_uuid="5df14c2b-f221-4a02-948e-48a61edd3f5b" \
  --property img_signature_hash_method="SHA-256" \
  --property img_signature_key_type="RSA-PSS" \
  <image_id_of_the_snapshot>
Optional: Remove the downloaded glance image from the filesystem:
rm <local-file-name>
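A minimal sketch of the signing step referenced above, reusing the private_key.pem and the same commands from Section 4.4; the local file names are placeholders:

$ openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out <local-file-name>.signature <local-file-name>
$ base64 -w 0 <local-file-name>.signature > <local-file-name>.signature.b64
$ cirros_signature_b64=$(cat <local-file-name>.signature.b64)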