Why does an instance show lower performance with a raw image than with a qcow2 image when using Ceph as the backend?
Issue
- Starting an instance with a boot volume created from a qcow2 image, the instance performs well on the volume (a dd with conv=fdatasync runs at 500 MB/s). On the other hand, an instance with a boot volume created from a raw image is slower (210 MB/s).
- Created a volume from a qcow2 image and searched for it on Ceph:
rbd -p volumes ls -l|grep 88c88720-8963-4ecb-8dc3-6557681b98f1
volume-88c88720-8963-4ecb-8dc3-6557681b98f1 20480M 2
- Created a volume from a raw image and searched for it on Ceph:
rbd -p volumes ls -l|grep c9e89560-41bb-426b-9b8e-a06e9bcbb0e4
volume-c9e89560-41bb-426b-9b8e-a06e9bcbb0e4 20480M images/37dc9447-11c4-4b58-99f8-e50305a02b06@snap 2
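To confirm the relationship directly, rbd info can be checked for a parent field on each volume; based on the listings above, the raw-based volume should report the image snapshot as its parent, while the qcow2-based volume should report none:
rbd info volumes/volume-c9e89560-41bb-426b-9b8e-a06e9bcbb0e4 | grep parent
# expected: parent: images/37dc9447-11c4-4b58-99f8-e50305a02b06@snap
rbd info volumes/volume-88c88720-8963-4ecb-8dc3-6557681b98f1 | grep parent
# expected: no output, since the qcow2-based volume has no parent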
- Started an instance from the qcow2 volume, connected to it, and the following is the dd command output:
[root@fromqcow2 ~]# dd if=/dev/zero of=lola1 bs=1024k count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.15188 s, 499 MB/s
- Started an instance from the raw volume, connected to it, and the following is the dd command output:
[root@fromraw ~]# dd if=/dev/zero of=lola1 bs=1024k count=1024 conv=fdatasync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.46357 s, 241 MB/s
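As a cross-check of the dd numbers, note that dd from /dev/zero with conv=fdatasync mostly measures buffered sequential writes; a direct-I/O run with fio (a sketch, assuming fio is installed in the guests; the filename and size are arbitrary) avoids guest page-cache effects and can be repeated in both instances:
fio --name=seqwrite --filename=lola1 --rw=write --bs=1M --size=1G --ioengine=libaio --direct=1 --end_fsync=1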
- RBD children related to the raw image:
[root@overcloud-controller-0 ~]# rbd children images/37dc9447-11c4-4b58-99f8-e50305a02b06@snap
volumes/volume-3a9573d7-1df5-4cbe-914c-48275a6e530c
volumes/volume-c9e89560-41bb-426b-9b8e-a06e9bcbb0e4
- RBD children related to the qcow2 image (no children are listed):
[root@overcloud-controller-0 ~]# rbd children images/5a4bcb21-852d-4e8a-8bc4-eba242b8a9ef@snap
- The volume created from the raw image appears to be a child (clone) of the image, while the volume created from the qcow2 image appears to be a flattened, standalone volume. Is this the cause of the performance difference?
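If the clone relationship is the suspected cause, one way to test it is to flatten the raw-based volume so it no longer references the image snapshot, then rerun the dd test; a sketch (flattening copies all backing data into the volume, so it takes time and consumes space):
rbd flatten volumes/volume-c9e89560-41bb-426b-9b8e-a06e9bcbb0e4
rbd info volumes/volume-c9e89560-41bb-426b-9b8e-a06e9bcbb0e4 | grep parent
# after flattening, no parent line should be printed and the volume should
# disappear from: rbd children images/37dc9447-11c4-4b58-99f8-e50305a02b06@snap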
- rbd bench results with manually created RBD volumes did not show the two-fold performance difference observed with the dd command inside the instances.
Case 1: No parent/child relationship
$rbd create volumes/data-disk1 -s 204800 --image-format 2
$rbd bench-write volumes/data-disk1 --io-size 4096 --io-threads 16 --io-total 10000000000 --io-pattern seq
Case 2: Parent/Child relationship
$rbd create volumes/data-disk2 -s 204800 --image-format 2
$rbd snap create volumes/data-disk2@snap
$rbd snap protect volumes/data-disk2@snap
$rbd clone volumes/data-disk2@snap volumes/data-disk3
$rbd -p volumes ls -l
NAME            SIZE PARENT                  FMT PROT LOCK
data-disk2      200G                         2
data-disk2@snap 200G                         2   yes
data-disk3      200G volumes/data-disk2@snap 2
$rbd info volumes/data-disk3
rbd image 'data-disk3':
size 200 GB in 51200 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.16768a12eb141f2
format: 2
features: layering
flags:
parent: volumes/data-disk2@snap
overlap: 200 GB
$rbd bench-write volumes/data-disk3 --io-size 4096 --io-threads 16 --io-total 10000000000 --io-pattern seq
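One caveat with this synthetic test: data-disk2 was never written before the snapshot was taken, so the parent snapshot contains no objects and writes to the clone have almost nothing to copy up, which may be why the two-fold gap did not reproduce. A sketch that populates the parent before cloning (the image names data-disk4/data-disk5 are made up for illustration), so that first writes to the clone trigger real copy-on-write:
$rbd create volumes/data-disk4 -s 204800 --image-format 2
# fill part of the parent so its snapshot actually contains objects
$rbd bench-write volumes/data-disk4 --io-size 4194304 --io-threads 16 --io-total 10000000000 --io-pattern seq
$rbd snap create volumes/data-disk4@snap
$rbd snap protect volumes/data-disk4@snap
$rbd clone volumes/data-disk4@snap volumes/data-disk5
$rbd bench-write volumes/data-disk5 --io-size 4096 --io-threads 16 --io-total 10000000000 --io-pattern seq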
Environment
- Red Hat OpenStack Platform 8.0