Running "rados bench" on a SAS pool and a SATA pool shows higher performance for the SATA pool; why?

Issue

  • Running "rados bench" against two pools, one backed by SAS disks and one backed by SATA disks, shows better performance for the SATA-backed pool.

  • The setup consists of four OSD nodes with two SAS disks and two SATA disks each.

  • All networking is 10 Gb Ethernet.

  • Two CRUSH rulesets, one for the SATA disks, and one for the SAS disks.

  • Two pools, one based on the SAS ruleset, and the other based on the SATA ruleset.

  • From a "rados bench", the pool on the SATA ruleset seems faster.

  • On the pool based on the SAS disks:

# rados bench -p <pool_sas> 10 write
 Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      15       146       131   523.875       524  0.110662  0.116181
     2      16       279       263   525.904       528  0.156652  0.117451
     3      16       418       402   535.914       556  0.116001  0.117155
     4      16       559       543   542.916       564  0.129779  0.116175
     5      16       693       677   541.521       536  0.123408  0.116826
     6      16       834       818   545.256       564  0.115398  0.116071
     7      16       981       965   551.354       588  0.128176  0.115259
     8      16      1114      1098   548.928       532 0.0536508  0.115637
     9      16      1255      1239   550.596       564  0.118188  0.115488
    10      16      1402      1386   554.329       588  0.124998  0.114782
 Total time run:         10.149064
Total writes made:      1403
Write size:             4194304
Bandwidth (MB/sec):     552.957

Stddev Bandwidth:       168.646
Max bandwidth (MB/sec): 588
Min bandwidth (MB/sec): 0
Average Latency:        0.115462
Stddev Latency:         0.0259277
Max latency:            0.329887
Min latency:            0.0396323
  • On the pool based on the SATA disks:

# rados bench -p <pool_sata> 10 write
 Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16       244       228   911.257       912 0.0759171 0.0680092
     2      16       476       460   919.567       928 0.0787337 0.0684072
     3      16       700       684    911.56       896  0.069912 0.0695247
     4      16       923       907   906.646       892 0.0628496 0.0699802
     5      16      1143      1127   901.293       880 0.0827047 0.0703452
     6      16      1368      1352   901.057       900  0.100978 0.0706611
     7      16      1563      1547   883.755       780 0.0679113 0.0719268
     8      16      1765      1749   874.275       808   0.10753 0.0728132
     9      16      1994      1978   878.897       916 0.0504995 0.0725023
    10      16      2216      2200   879.799       888 0.0722877  0.072449
Total time run:         10.061829
Total writes made:      2217
Write size:             4194304
Bandwidth (MB/sec):     881.351

Stddev Bandwidth:       269.197
Max bandwidth (MB/sec): 928
Min bandwidth (MB/sec): 0
Average Latency:        0.0725998
Stddev Latency:         0.0199631
Max latency:            0.180094
Min latency:            0.0280216
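For reference, the two rulesets described above typically look like the following in the decompiled CRUSH map. This is a sketch only: the `sas` and `sata` root bucket names and the rule IDs are assumptions for illustration, not values taken from this article.

```
rule sas {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take sas
        step chooseleaf firstn 0 type host
        step emit
}

rule sata {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take sata
        step chooseleaf firstn 0 type host
        step emit
}
```

Each pool is then bound to its rule with the `crush_ruleset` pool option used in the releases listed under Environment, e.g. `ceph osd pool set <pool_sas> crush_ruleset 1`.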
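The summary lines in each run are easy to sanity-check: rados bench writes 4 MiB objects (4194304 bytes), so the reported bandwidth is simply total writes made, times 4, divided by total run time. Recomputing from the figures above (a quick check added here, not part of the original article):

```shell
# Bandwidth (MB/sec) = total writes made x 4 (object size in MiB) / total run time
awk 'BEGIN { printf "%.3f\n", 1403 * 4 / 10.149064 }'   # SAS pool  -> 552.957
awk 'BEGIN { printf "%.3f\n", 2217 * 4 / 10.061829 }'   # SATA pool -> 881.351
```

Both values match the "Bandwidth (MB/sec)" lines reported by rados bench. The SATA pool thus sustains roughly 1.6 times the bandwidth of the SAS pool (881 vs. 553 MB/s) at well under its average latency (0.073 s vs. 0.115 s), which is the unexpected result this article examines.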

Environment

  • Red Hat Ceph Storage 1.2.3

  • Red Hat Ceph Storage 1.3
