Red Hat multipath vs. EMC PowerPath

Hi group,

I'd like to hear your opinions on multipath and PowerPath. I know some of you use multipath and others use PowerPath, and I'm interested in both sides.

What are the advantages and disadvantages of each?

Responses

We opted to go with native multipathing in our environment. Previously, we'd used third-party multipathing for array management (PowerPath, VxDMP, etc.). However, in a multi-vendor storage environment, that meant a non-trivial amount of additional documentation and training requirements. Given the pressure to reduce training overhead (so that cheaper help could be hired), it came down to a choice between native and something like VxDMP. Political issues sorta killed VxDMP (even though we had unlimited licensing for it). We're not typically doing the kinds of workloads that might make third-party multipathers attractive, and EMC's (and NetApp's) support for ALUA and general improvements in multipathd make it a good overall solution. So, for us, native seems to be the best course to follow.

 

On the plus side, it saves us the licensing costs of PowerPath and similar add-ons.

In our environment, almost all of our storage is EMC product (DMX, CLARiiON, ...), so we chose PowerPath. If we also had multi-vendor storage, Red Hat multipath might be used.

I think the "blog'o thnet" website serves as a good reference if you're interested. The site is very cool! I have referred to it as well.
http://blog.thilelli.net/post/2009/02/09/Comparison%3A-EMC-PowerPath-vs-GNU/Linux-dm-multipath

One thing you'll want to bear in mind with that URL is that it's three years old. Some of its statements about multipathd's support for EMC devices are a bit dated as a result. Even some of its statements about PowerPath seem a bit "odd".

Hi,

 

EMC PowerPath supports a limited number of non-EMC storage systems, and does NOT interoperate easily with a non-PowerPath multipath manager.  Therefore, to use PowerPath all of your multipath-capable devices need to be supported under PowerPath.  This could include multipath-capable boot disks controlled by a host-based RAID adapter. 

 

We are a large EMC shop that is becoming more multi-vendor in the storage area.  Several years ago, we specifically asked EMC to support certain third-party storage, and EMC declined, forcing us to use a non-PowerPath solution.

 

Three-plus years ago, DM-Multipath was much less mature than EMC PowerPath and other proprietary multipath managers, and a comparison between DM-Multipath and EMC PowerPath would have looked very different.  In the years since, DM-Multipath (and other native facilities of the Linux IO stack) has evolved more rapidly than EMC PowerPath.

 

With the current RHEL 6.x request-based DM-Multipath, there are some capabilities in DM-Multipath that exceed EMC PowerPath, and there are EMC PowerPath capabilities that still exceed DM-Multipath.  In addition, there is a non-trivial cost difference.  On the DM-Multipath side, some of the "capabilities" are really a lack of restrictions that exist in PowerPath, and some of them cannot be fully exploited (yet) due to restrictions elsewhere in the IO stack.  But as the hardware and Linux platform advance (such as the new dual-socket Intel Xeon E5-26xx "Sandy Bridge-EP" / Romley-EP servers), EMC PowerPath seems to be trailing in its ability to fully exploit the server capabilities.

 

One side note.  EMC PowerPath is an additional package that needs to be installed, and as such is independently versioned.  In many cases, you can install the most recent version of EMC PowerPath on an older version of Linux, potentially improving its IO stack.  With DM-Multipath, most customers use the "in box" version for better supportability.  This tends to tie DM-Multipath to the version of the Linux distribution, and the overall compatibility risk when jumping revisions can be greater.  For example, for a RHEL 5.x user to get request-based multipath functionality, most need to upgrade to RHEL 6.x, which can introduce compatibility issues beyond DM-Multipath.  Few users are back-porting request-based DM-Multipath onto RHEL 5.x.

 

Linux DM-Multipath sets up fairly easily, but the "defaults" can yield under-utilization in high-throughput environments.  Many storage vendors offer multipath.conf entries for their storage, but the challenge is that there are often several multipath.conf entries listed in different manuals ... and they do NOT agree (and most are "safe", but grossly sub-optimal).
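To make that concrete, here is the general shape of a device stanza.  This one is only an illustration I've sketched for a CLARiiON/VNX-class array (SCSI vendor string "DGC"), written with RHEL 6-era multipath.conf keywords; the specific values are assumptions on my part, not a vendor-blessed entry, so reconcile anything like this against your array vendor's current host connectivity guide and the running configuration shown by multipathd -k"show config" before putting it in /etc/multipath.conf:

    devices {
        device {
            vendor                "DGC"
            product               ".*"
            path_grouping_policy  group_by_prio
            prio                  alua
            path_checker          emc_clariion
            failback              immediate
            no_path_retry         60
            rr_min_io_rq          1     # request-based multipath (RHEL 6); RHEL 5 uses rr_min_io
        }
    }

The point is less about these particular values than that each of them (path_grouping_policy, rr_min_io_rq, no_path_retry, and so on) shifts the throughput and failover behaviour, and those are exactly the knobs on which the different vendor manuals disagree.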

 

For large configurations, throughput-biased configurations, or configurations with more than two active paths per LUN, DM-Multipath is somewhat "do-it-yourself".  There is not a lot of published "best practice" information regarding DM-Multipath, because "your mileage may vary" depending on the characteristics of the storage, host system, SAN topology, Linux IO stack, file system manager, and the application itself.  The various vendors often focus myopically on their individual product and provide little assistance in configuring the other elements of the IO stack to exploit their product, or hints on how to configure their product to exploit other elements of the IO stack.

 

Therefore, if you want very high performance or efficiency/utilization levels, you will need to take on some systems engineering work to determine, empirically, the combination of configuration settings that works "well" for the whole IO stack.  Alternatively, you can live with a lower performance or efficiency level, which is often good enough.

 

On a server with two fibre channel controllers, if you take the "defaults" you will likely hit a performance ceiling of about 1.05x the throughput of one fibre channel controller (about 850 MB/sec on 2 x 8 Gbit FC, given roughly 800 MB/sec of usable bandwidth per 8 Gbit FC port).  If you research the Linux best practices, file system best practices, and FC controller best practices, you can get to about 1.56x the throughput of a single controller (78% utilization of the two ports, or ~1250 MB/sec).  You may need to create your own tools and scripts to monitor these performance levels, since most tools are NOT multipath-aware and will not aggregate the IO statistics for all the paths going to the same LUN.  (BTW, the performance statistics on the multipath pseudo-disks are inaccurate and not on the roadmap to be fixed.)
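As a sketch of the kind of script I mean (illustration only, not a supported tool; it assumes the RHEL 6-era sysfs layout where each dm-N device exposes dm/name and slaves/, and the 512-byte-sector counters in /proc/diskstats), you can sum the cumulative counters of each map's slave paths yourself:

    #!/bin/bash
    # Rough sketch: aggregate cumulative IO counters per multipath map by
    # summing the /proc/diskstats counters of its slave (sdX) paths.
    # Assumptions: RHEL 6-era sysfs (dm-N/dm/name and dm-N/slaves/ exist) and
    # 512-byte sectors (field 6 = sectors read, field 10 = sectors written).
    for map in /sys/block/dm-*; do
        name=$(cat "$map/dm/name" 2>/dev/null) || continue
        [ -d "$map/slaves" ] || continue
        read_mb=0; write_mb=0
        for slave in "$map"/slaves/*; do
            dev=$(basename "$slave")
            stats=$(awk -v d="$dev" '$3 == d { print $6, $10 }' /proc/diskstats)
            set -- $stats
            read_mb=$((read_mb + ${1:-0} / 2048))      # sectors -> MiB
            write_mb=$((write_mb + ${2:-0} / 2048))
        done
        printf '%-25s read %8d MiB  written %8d MiB (cumulative)\n' "$name" "$read_mb" "$write_mb"
    done

Sample it twice over an interval and difference the counters if you want rates rather than cumulative totals; non-multipath dm devices (LVM volumes, for example) will show up too, so filter as needed.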

 

With additional research and proper crafting of the configuration parameters, you can achieve 98% utilization across FOUR FC controllers (over 3130 MB/sec on 4 x 8 Gbit FC, and over 5800 MB/sec in full-duplex mode) ... assuming the storage can keep up.

 

At this performance level, you would likely be running faster than EMC PowerPath.

 

Not all customers are throughput-oriented (vs. IOPS) or need this level of performance.  Many customers cannot justify the staff effort to improve the utilization and efficiency.  Unfortunately, the storage vendors are not providing much guidance beyond ~1250 MB/sec, and their defaults unknowingly deliver ~850 MB/sec.

 

If you are interested in high throughput and large IO (IO larger than 512 KB), you are much better off with the request-based DM-Multipath included in RHEL 6.x.  You CAN do large IO under RHEL 5.x using the BIO-based DM-Multipath, but overall throughput is much more of a compromise and depends on the coalescing prowess of the block layer.  Under RHEL 6.1, large IO is much more straightforward (but still tricky).
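For reference, a quick sketch along these lines (the map name "mpathb" is just a placeholder, and the sysfs attribute names are the RHEL 6-era ones) shows the per-map and per-path IO size limits, since the effective maximum on the multipath device is bounded by the smallest limit among its slave paths:

    # Placeholder map name -- substitute one of your own from `multipath -ll`.
    MAP=mpathb
    for dm in /sys/block/dm-*; do
        [ "$(cat "$dm/dm/name" 2>/dev/null)" = "$MAP" ] || continue
        echo "=== $MAP ($(basename "$dm")) ==="
        echo "  map: max_sectors_kb=$(cat "$dm/queue/max_sectors_kb")  (hw limit $(cat "$dm/queue/max_hw_sectors_kb"))"
        for slave in "$dm"/slaves/*; do
            dev=$(basename "$slave")
            echo "  $dev: max_sectors_kb=$(cat /sys/block/"$dev"/queue/max_sectors_kb)  (hw limit $(cat /sys/block/"$dev"/queue/max_hw_sectors_kb))"
        done
    done

The stock max_sectors_kb is 512 on these kernels, which is why IO larger than 512 KB gets split before it ever reaches the array; raising it toward max_hw_sectors_kb on the slave paths (and the dm device) is one of the knobs that has to be kept consistent up and down the stack.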

 

Having been forced away from EMC PowerPath due to its non-support for non-EMC storage, we have been very satisfied with DM-Multipath ... operating with our crafted configuration parameters up and down the IO stack.  We also use a sophisticated third-party file system manager (IBM GPFS) with a 4 MB blocksize today, going to an 8 MB blocksize in the near future.  I can't comment specifically about other file system managers, but it is important that "the left hand knows what the right hand is doing": the FC driver, Linux block layer, multipath, and file system all need to cooperate.  If you are hosting a database, then you add yet another layer.

 

Using an automotive analogy, request-based DM-Multipath under RHEL 6.1 is a competent element of the storage "drive train".  However, as with automotive drive trains, all the proper "gear ratios" need to be chosen based on the capabilities of the sub-components.  The end-to-end gear ratio is an arithmetic function of all the intermediate components, with multiple combinations yielding similar end results; there can be more than one "right" combination.

 

The gear ratios and number of transmission speeds for a "car" will be much different from those of a tractor-trailer truck.  The "out of the box" experience of RHEL Linux is closer to a mid-range "car" than a tractor-trailer truck in this analogy.  Putting a high power truck engine into a "car" drive train will be sub-optimal without changing some gear ratios.  Correspondingly, putting a car engine into a "truck" drive train will also be sub-optimal without changing some gear ratios.

 

PowerPath and Linux DM-Multipath are components in the disk "drive train", and they both have their own view of their internal "gear ratios" and what they expect the engine and the rest of the drive train to look like.  EMC PowerPath's "gear ratios" are very appropriate for many combinations of engines and drive trains, but are relatively unchangeable.  The Linux native facilities can be more flexible, but may have to be explicitly configured to operate at stellar levels.  If you take the care to properly craft all the gear ratios, request-based DM-Multipath can scale to higher levels than EMC PowerPath.

 

FYI, our typical petabyte-class clusters running DM-Multipath are managing ~8 paths per LUN, with 2500-4000 total paths ... because we can scale performance across multiple paths.  This level of path management requires scripting to reduce the complexity and perform the optimizations in a consistent, repeatable way.  At this scale, even PowerPath would need scripting.
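Even the basic bookkeeping gets scripted at that scale.  A trivial sketch of the sort of sanity check we run (it assumes the RHEL 6-style `multipath -ll` output, where each map's header line starts in column one and each path line carries an sdX device name; adjust the patterns for other releases):

    multipath -ll | awk '
        /^[^ |`]/ && !/^size=/ { map = $1; luns++ }   # header line of the next map
        / sd[a-z]+ /           { paths[map]++; total++ }
        END {
            for (m in paths) printf "%-25s %d paths\n", m, paths[m]
            printf "TOTAL: %d paths across %d LUNs\n", total, luns
        }'

That quickly flags any LUN presenting fewer paths than expected (~8 in our case).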

 

I hope that this was useful.

 

Regards,

 

Dave B.

Wow, Dave! Thanks for an incredibly thorough and helpful reply.