Terrible performance of RAID10 on 256GB Dell PE820 KVM host
    I have a 4-way Dell PE820 with 256GB of memory that primarily runs CPU-intensive KVM guests. The system has a PERC H710 controller with a battery backup unit attached. I recently added 8 Seagate ST600MM0026 600GB SAS drives and created the following hardware RAID10 array:

    • 64KB stripe size
    • write-through enabled
    • disk cache enabled
    • read-ahead enabled

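As a sanity check on the geometry (my assumption: 8 drives in RAID10 form 4 mirrored pairs, so only half the raw capacity is usable), the expected size lines up with what parted reports below:

```shell
# RAID10 capacity check: 8 x 600 GB drives as 4 mirrored pairs.
DRIVES=8
DRIVE_GB=600
USABLE_GB=$((DRIVES * DRIVE_GB / 2))   # mirroring halves the raw capacity
echo "usable: ${USABLE_GB} GB"         # ~2400 GB, matching parted's 2398GB
```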
    The RAID synced without issue, and the RHEL 6.5 (2.6.32-431.11.2.el6.x86_64) host sees the device. I created a partition with parted:

    Model: DELL PERC H710 (scsi)
    Disk /dev/sde: 2398GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    
    Number  Start   End     Size    File system  Name  Flags
     1      1049kB  2398GB  2398GB  ext4         data
    

    and formatted it as ext4:

    mkfs.ext4 -b 4096 -E stride=16,stripe-width=64 /dev/sde1
    

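For what it's worth, here is how I arrived at the stride and stripe-width numbers, assuming the 64KB controller stripe above and 4 data spindles in the 8-drive RAID10:

```shell
# Derive ext4 geometry hints from the array layout (assumed: 64 KB stripe,
# 8-drive RAID10 => 4 striped data members, 4 KB ext4 blocks).
STRIPE_KB=64
BLOCK_KB=4
DATA_DISKS=4
STRIDE=$((STRIPE_KB / BLOCK_KB))        # filesystem blocks per stripe chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))   # blocks per full stripe across the array
echo "stride=${STRIDE} stripe-width=${STRIPE_WIDTH}"
```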
    I then mounted it without any special flags (mount /dev/sde1 /mnt/new).

    The problem is that just about any I/O I try is brutally slow or simply hangs. For example, I can't even get bonnie++ to finish one test; it hangs at the 'Writing intelligently' step. Watching iostat and vmstat shows virtually no activity on the device, or on any other device on the system for that matter.

    I installed tuned and enabled the enterprise-storage profile, but that did not seem to help.

    If I run a crude test like 'badblocks /dev/sde' and watch iostat, I see reasonable values like:

    Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
    sde            8134.50   1041216.00         0.00   10412160          0
    
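If I'm reading iostat correctly (Blk_read/s counts 512-byte sectors), that badblocks run works out to roughly 500 MB/s of sequential reads, which seems healthy:

```shell
# Convert iostat's Blk_read/s (512-byte sectors) to MB/s for the run above.
SECTORS_PER_S=1041216
MB_PER_S=$((SECTORS_PER_S * 512 / 1024 / 1024))
echo "~${MB_PER_S} MB/s sequential read"   # ~508 MB/s
```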

    If I run a simple dd test like 'dd if=/dev/zero of=bogusfile count=5M', I get:

    5242880+0 records in
    5242880+0 records out
    2684354560 bytes (2.7 GB) copied, 27.4337 s, 97.8 MB/s
    

    and I see values like this from iostat:

    Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
    sde             914.20         0.80    365319.20          8    3653192
    
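Since dd's reported 97.8 MB/s includes page-cache buffering, a variant with an explicit block size and a final flush (conv=fdatasync) would show the rate actually sustained by the disk; this is a sketch I have not run on the array, and the file name is arbitrary:

```shell
# Streaming-write test with an explicit block size and a final fdatasync,
# so dd's reported rate includes flushing cached data to disk.
# (Intended to be run from inside the mounted filesystem, e.g. /mnt/new.)
dd if=/dev/zero of=ddtest.bin bs=1M count=128 conv=fdatasync
```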

    If I then attempt to copy this bogus file with 'time cp bogusfile bogusfile2', it takes a very long time:

    # time cp bogusfile bogusfile2
    
    real    2m37.567s
    user    0m0.032s
    sys 2m36.510s
    
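Working out the effective rate of that copy (using the 2,684,354,560-byte file size reported by dd and the ~158s wall time, which is almost entirely sys time), it comes to only about 16 MB/s:

```shell
# Effective throughput of the cp above: 2.7 GB in ~2m38s (rounded) of wall time.
BYTES=2684354560
SECS=158
MB_PER_S=$((BYTES / SECS / 1024 / 1024))
echo "~${MB_PER_S} MB/s effective copy rate"   # ~16 MB/s
```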

    Any suggestions on what I have set up incorrectly? Thank you.
