In-guest iSCSI redundancy

I plan on virtualizing NFS 4 and Samba file servers and am unsure of the best practice for connecting the underlying storage through iSCSI.
All the virtual guests are installed on multipathed iSCSI storage with redundant links and have bridged bonded interfaces for the file-server connections.
At present live migrations work excellently, and it would be preferred if that could still be the case with the file-server guests.

It seems I have two options for a redundant iSCSI connection to the guests.

The first option would be to use two different physical NICs with bridges on the hypervisor, and through those set up a multipathed iSCSI disk configuration on the virtualized file-server guest itself.
I do not know whether it is possible to use multipath on a guest, or whether failover would work through the virtualized network configuration.
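
For reference, I imagine option one would look roughly like this inside the guest, with one iSCSI session per virtual NIC and device-mapper-multipath on top (the portal addresses are only placeholders, and I have not tested this):

    # Inside the guest: log in to the target through both portals,
    # one reachable over each virtual NIC (addresses are examples)
    iscsiadm -m discovery -t sendtargets -p 10.0.1.10
    iscsiadm -m discovery -t sendtargets -p 10.0.2.10
    iscsiadm -m node --login

    # Let device-mapper-multipath combine the two paths into one device
    service multipathd start
    multipath -ll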

The second option would be to instead create a dedicated bridged bond on the hypervisor. This way the guest gets only one NIC for the iSCSI connection, and the redundancy is handled by the bond on the hypervisor instead of by multipath on the guest itself.
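
On the hypervisor side, I picture option two as the usual RHEL 6 bonded-bridge setup, something along these lines (the interface names and bonding mode are only examples):

    # /etc/sysconfig/network-scripts/ifcfg-bond1
    DEVICE=bond1
    BONDING_OPTS="mode=active-backup miimon=100"
    BRIDGE=br-iscsi
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth2  (and the same for eth3)
    DEVICE=eth2
    MASTER=bond1
    SLAVE=yes
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-br-iscsi
    DEVICE=br-iscsi
    TYPE=Bridge
    ONBOOT=yes

The guest would then get a single virtual NIC attached to br-iscsi.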

Does anyone know which would be the preferred option, or whether there are better alternatives?

Responses

Both are valid configurations.

I have never done this, but I think it would depend on how you're doing your network redundancy.

For example, if you have two logical network paths (i.e. the iSCSI target has redundant IPs in different subnets):

+-----------------+                     
|                 |  ISCSI SAN          
+--+-----------+--+                     
   |           |                        
+--+--+     +--+--+                     
|     |     |     |  Network switches   
+--+--+     +--+--+                     
   |           |                        
+--+--+     +--+--+                     
|     |     |     |  Hypervisor physical
+--+--+     +--+--+                     
   |           |                        
+--+--+     +--+--+                     
|     |     |     |  Hypervisor bridges  
+--+--+     +--+--+                     
   |           |                        
+--+-----------+--+                     
|                 |  Virtual machine    
+-----------------+                     

then you'd need to multipath inside the guest.
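
Inside the guest that would just be ordinary device-mapper-multipath; a minimal /etc/multipath.conf sketch might look something like this (the values are illustrative, not a recommendation for any particular array):

    # /etc/multipath.conf inside the guest
    defaults {
        user_friendly_names  yes
        path_grouping_policy multibus
        failback             immediate
        no_path_retry        queue
    }

multipath -ll should then show a single map with one path per iSCSI session.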

However, if you've got one logical path to the target (i.e. the iSCSI target has one IP):

+-----+                     
|     |  ISCSI SAN          
+--+--+                     
   |                        
+--+--+                     
|     |  Network switch   
+-+-+-+                     
  | |                      
+-+-+-+                     
|     |  Hypervisor bond
+--+--+                     
   |                        
+--+--+                     
|     |  Hypervisor bridge  
+--+--+                     
   |                        
+--+--+                     
|     |  Virtual machine    
+-----+                     

Then you'd use the hypervisor bond for redundancy.

However, this is all just theory. Perhaps others can contribute some actual hands-on experience with setting such things up.

I'm not 100% clear on what you're trying to achieve. In general, a guest's disk I/O is done through the hypervisor rather than directly by the guest. The primary reasons for doing iSCSI inside the guests tend to revolve around implementing clustering solutions at the guest level (rather than leveraging your hypervisor's HA capabilities), or otherwise trying to more closely model physical-style configurations.

What's the actual goal you're trying to achieve?

Right now I have physical NFS and Samba file servers that use mounted multipathed LUNs from a SAN for their storage. The rest of my servers are virtual guests, also on multipathed LUNs from the SAN (works great). I wish to have the file servers as virtual guests as well, so I can become less dependent on the physical servers themselves. The goal is a fully virtual server environment.

My question is what the best way would be to get the existing and new multipathed LUNs mounted in the virtual file server, considering redundancy and performance. I would also like to keep nice features like live migration working for the file-server guests.

Just so I'm understanding your request, you're only worried about storage connection-redundancy from your VM to the backing datastore (SAN?), correct? You're not talking about share connectivity from the clients of your VM's CIFS/NFS service?

In general, barring other limitations, I would tend to put all of a VM's storage into vDisks and let the underlying hypervisor handle the path optimization and redundancy to the storage. That said, depending on your hypervisor (are you using RHEV, vSphere, Hyper-V or something else?), the path from physical to virtual is somewhat limited.

The most obvious path is to back up the SAN LUNs your file-server hosts are using, build a VM-equivalent host, present sufficient vDisks to it to contain your shared data, and restore your SAN LUNs' backups to the VMs.

Depending on your hypervisor, you may be able to present your SAN LUN directly to your VM via a raw device mapping. The downside to this method is that it frequently impacts your VM's HA protections negatively.

OK, I think that attaching the multipathed LUNs to the guest directly through XML definitions, just like the guest's own system disk, and using the hypervisor's own redundant iSCSI paths would be the preferred solution then.
I use RHEL 6 as the hypervisor, by the way.
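
If I understand it correctly, on RHEL 6 (libvirt/KVM) a multipathed LUN could then be handed to a running guest with virsh, roughly like this (the guest name and multipath device are placeholders):

    # On the hypervisor: attach a multipathed LUN to a running guest
    virsh attach-disk fileserver01 /dev/mapper/mpathb vdb --persistent

    # And remove it again later without rebooting the guest
    virsh detach-disk fileserver01 vdb --persistent

For live migration to keep working, the same /dev/mapper device would presumably have to be visible under the same name on every host the guest can migrate to.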

There are some questions about this setup that come to mind, the first being the need to add or remove disks (LUNs) on the fly without rebooting the file-server guest in question.
It would be less than ideal if I had to shut down the server because I resized an existing LUN that was assigned as storage.

I guess I need to set up a proper lab to test both solutions, comparing read/write performance and weighing the upsides and downsides, taking flexibility and redundancy into account as well.

I got some interesting suggestions to my question; thank you all for the help. :-)

Adding and removing disks won't really require anything special - just a rescan of your HBAs.
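
For in-guest iSCSI sessions that rescan would be something along these lines (the SCSI host number is just an example):

    # Rescan existing in-guest iSCSI sessions for new or resized LUNs
    iscsiadm -m session --rescan

    # For emulated or virtio-scsi controllers, trigger a SCSI bus rescan
    echo "- - -" > /sys/class/scsi_host/host0/scan

Disks attached through the hypervisor as virtio-blk devices generally show up in the guest on their own.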

Changing device geometry (resizing LUNs) will require some additional steps. Basically, you'll need to offline the filesystem and, if using LVM, offline the volume. Once you've fully offlined the device node(s), you can use blockdev (or other utilities) to rescan the actual geometry. Once the new geometry is seen, you can repartition so that the new blocks are visible (either by extending the existing partitions or by adding partitions to the existing partition table). Once that's done, you can grow your PVs, VGs and LVs, then your filesystem(s). After everything has grown, you can re-online everything.

This avoids the reboot, but it won't avoid an outage of the file service.
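
As a rough sketch of that sequence (the device, VG, LV and mount-point names are placeholders, and it assumes ext4 on LVM):

    umount /srv/export                        # offline the filesystem
    vgchange -an vg_data                      # offline the volume group

    echo 1 > /sys/block/sdb/device/rescan     # pick up the new LUN geometry
    blockdev --getsize64 /dev/sdb             # confirm the kernel sees the new size
    # repartition with fdisk/parted so the new blocks are covered, then:
    blockdev --rereadpt /dev/sdb

    vgchange -ay vg_data                      # bring the volume group back
    pvresize /dev/sdb1                        # grow the PV into the new space
    lvextend -l +100%FREE /dev/vg_data/lv_export
    resize2fs /dev/vg_data/lv_export          # grow the filesystem
    mount /srv/export                         # back online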
