Recommendations for FCP storage pools

Hello.

Do you know of any Red Hat recommendations on how to configure LUNs for an RHEV 3.1 FCP storage domain? The LUNs will be served over the SAN from an enterprise-class array. Which layout should I choose:

  1. one big LUN for all VMs and their data virtual disks
  2. many equally sized LUNs making up the FCP storage pool, providing space for all virtual disks
  3. dedicated LUNs for every VM.

I've spent quite a lot of time searching for relevant information but, unfortunately, have found nothing useful.

 

Thanks in advance for your help

Marek

Responses

I'm not sure what Red Hat has for best practices, but here's what we ended up trying. In one customer RHEV 2.2 installation back in 2011, we did one big LUN with all the VMs inside. It was iSCSI, not FCP, but the same issue applies. The challenge we ran into was that all VMs are important, but some are *really* important. For the really important ones, we wanted the SAN to take regular snapshots in addition to the normal OS-level backups.

So when we migrated to 3.0 a few months ago, we built a whole new data center and exported/imported everything.  This time, we set up smaller LUNs, with each LUN holding a few VMs.  That way, we could adjust the SAN snapshot frequency for the various classes of VMs.

This represents a middle ground between one massive monolithic LUN with everything and lots of little LUNs for each VM. 
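
To make the grouping concrete — and to be clear, the tier names, snapshot intervals, VM names, and LUN names below are invented purely for illustration, not what we actually deployed — the kind of mapping we keep in mind looks roughly like this:

```python
# Illustration only: tier names, snapshot intervals, VM names, and LUN names
# are all hypothetical, not taken from a real deployment.
SNAPSHOT_TIERS = {
    "critical": {"snapshot_every_hours": 1,  "luns": ["lun_crit_01", "lun_crit_02"]},
    "standard": {"snapshot_every_hours": 24, "luns": ["lun_std_01", "lun_std_02"]},
    "scratch":  {"snapshot_every_hours": 0,  "luns": ["lun_scratch_01"]},  # no SAN snapshots
}

VM_TIER = {
    "erp-db": "critical",
    "mail01": "critical",
    "web01": "standard",
    "web02": "standard",
    "buildbox": "scratch",
}

def luns_for_vm(vm_name):
    """Return the candidate LUNs and snapshot interval for the VM's tier."""
    tier = SNAPSHOT_TIERS[VM_TIER[vm_name]]
    return tier["luns"], tier["snapshot_every_hours"]

for vm in sorted(VM_TIER):
    luns, hours = luns_for_vm(vm)
    print(f"{vm:10s} -> tier={VM_TIER[vm]:8s} luns={luns} snapshot_every={hours}h")
```

The point is just that once VMs are sorted into protection classes, each class maps to its own small group of LUNs, and the SAN snapshot schedule follows the class rather than the whole data center.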

- Greg

I have a similar experience to Greg's. In 2.2 we had one huge SD, while in 3.x we use a single 2 TB iSCSI volume for each SD. We're not yet doing snapshots on the storage array (an IBM Storwize V7000U), but some volumes are replicated between sites for disaster recovery purposes and some are not, and this is clearly stated in the SD name.

The SDs are thin-provisioned on the storage side, so unused space in a 2 TB volume doesn't waste any physical storage and there's no pressure to fill it up.

 

  -jf

There probably isn't going to be a lot written on this topic, since it's a general storage-layout question rather than a virtualization-specific problem.

The best layout for your environment is going to be determined by how you intend to use the VM disks, quality of service, application workloads, etc.  Depending on these factors, any or all of your three suggestions could be the right way to present LUNs.

My suggestion is to profile the various workloads to determine the overall I/O load for a group of VMs.  Once you have these I/O requirements, you can make decisions about which VMs can live together.  Also, if you are using Direct LUNs in addition to the Data Domains, you will run into the same choices there, although the use case for Direct LUNs would usually follow your "normal" sizing and presentation guidelines.  For example, an Oracle DB server with Direct LUNs would not present much more I/O load to a Data Domain than other workloads; however, keeping that VM's storage "locally" in a Data Domain would probably change the underlying LUN presentation.
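
As a crude starting point for that profiling, you can sample the kernel's block-device counters on the hypervisor over an interval and turn the deltas into IOPS. This is only a sketch: the device names are placeholders, and it assumes you already know which dm-*/sd* device backs which VM disk.

```python
#!/usr/bin/env python3
"""Rough per-device IOPS sampler: read the kernel block stats twice and
report completed reads + writes per second over the interval."""
import time

# Placeholder names: substitute the dm-*/sd* devices that back your VM disks.
DEVICES = ["dm-3", "dm-7"]
INTERVAL = 10  # seconds

def completed_ios(dev):
    # /sys/block/<dev>/stat: field 0 = reads completed, field 4 = writes completed
    with open(f"/sys/block/{dev}/stat") as f:
        fields = f.read().split()
    return int(fields[0]) + int(fields[4])

before = {d: completed_ios(d) for d in DEVICES}
time.sleep(INTERVAL)
after = {d: completed_ios(d) for d in DEVICES}

for dev in DEVICES:
    print(f"{dev}: ~{(after[dev] - before[dev]) / INTERVAL:.1f} IOPS over the last {INTERVAL}s")
```

Run it during a representative busy window, not just at idle, and repeat per VM disk so you can sum the numbers per candidate LUN.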

Another example is desktop versus server virtualization, where a Windows desktop can produce a constant 10 IOPS even at idle. Scale that up to 30 desktops and you need to support 300 IOPS on the storage fabric; at 300 desktops you need 3,000 IOPS at idle.
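
The arithmetic itself is trivial; the only figure taken from above is the ~10 IOPS per idle desktop, and you'd substitute numbers from your own profiling:

```python
# Back-of-the-envelope: aggregate idle IOPS for a pool of desktops.
# 10 IOPS per idle desktop is the figure quoted above; measure your own workloads.
IOPS_PER_IDLE_DESKTOP = 10

for desktops in (30, 300):
    print(f"{desktops} desktops -> {desktops * IOPS_PER_IDLE_DESKTOP} IOPS at idle")
# 30 desktops -> 300 IOPS at idle
# 300 desktops -> 3000 IOPS at idle
```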

tl;dr: there aren't any blanket recommendations because it's a complex, site-specific problem that you would solve the same way as any other storage design. Profile the I/O, then size and spread appropriately.
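
For the "spread" part, once you have per-VM IOPS figures, even a greedy placement gets you most of the way. Everything below (VM names, IOPS numbers, per-LUN budgets) is made up purely to show the idea, not a recommendation:

```python
# Greedy spread: place the busiest VMs first, each onto the LUN with the most
# remaining IOPS headroom. All numbers and names are invented for illustration.
vm_iops = {"erp-db": 800, "mail01": 400, "web01": 150, "web02": 150, "buildbox": 50}
lun_budget = {"lun01": 1000, "lun02": 1000}   # assumed per-LUN IOPS budgets

headroom = dict(lun_budget)
placement = {lun: [] for lun in lun_budget}

for vm, iops in sorted(vm_iops.items(), key=lambda kv: kv[1], reverse=True):
    lun = max(headroom, key=headroom.get)      # LUN with the most headroom left
    if headroom[lun] < iops:
        print(f"warning: {vm} ({iops} IOPS) overcommits {lun}")
    placement[lun].append(vm)
    headroom[lun] -= iops

for lun, vms in placement.items():
    print(f"{lun}: {vms}  (headroom left: {headroom[lun]} IOPS)")
```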

-Matt