In-guest iSCSI redundancy
I plan to virtualize NFSv4 and Samba file servers and am unsure of best practice for connecting the underlying storage over iSCSI.
All the virtual guests are installed on multipathed iSCSI storage with redundant links, and have bridged, bonded interfaces for the file-server connections.
At present live migrations work well, and it would be preferred if that could remain the case with the file-server guests.
It seems I have two options for a redundant iSCSI connection to the guests.
The first option would be to use two different physical NICs with bridges on the hypervisor and, through those, set up a multipathed iSCSI disk configuration on the virtualized file-server guest itself.
I do not know whether multipath is possible on a guest, or whether failover would work through the virtualized network configuration.
The second option would instead be to create a dedicated bridged bond on the hypervisor. This way the guest gets only one NIC for the iSCSI connection, and the redundancy is handled by the bond on the hypervisor instead of multipath on the guest itself.
Does anyone know which would be the preferred option or if there are other better alternatives?
Responses
Both are valid configurations.
I have never done this, but I think it would depend on how you're doing your network redundancy.
For example, if you have two logical network paths (i.e. the iSCSI target has redundant IPs in different subnets):
+-----------------+
| | ISCSI SAN
+--+-----------+--+
| |
+--+--+ +--+--+
| | | | Network switches
+--+--+ +--+--+
| |
+--+--+ +--+--+
| | | | Hypervisor physical
+--+--+ +--+--+
| |
+--+--+ +--+--+
| | | | Hypervisor bridges
+--+--+ +--+--+
| |
+--+-----------+--+
| | Virtual machine
+-----------------+
then you'd need to multipath inside the guest.
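For the multipath-inside-the-guest case, a minimal sketch of what that would look like on a RHEL-style guest using the software iSCSI initiator and dm-multipath (the portal IPs and IQN below are placeholders, not anything from your setup):

```shell
# Discover and log in to the same target over both portals, one per subnet.
# 10.0.1.10 / 10.0.2.10 and the IQN are hypothetical values.
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
iscsiadm -m discovery -t sendtargets -p 10.0.2.10
iscsiadm -m node -T iqn.2012-01.com.example:storage.lun1 -p 10.0.1.10 --login
iscsiadm -m node -T iqn.2012-01.com.example:storage.lun1 -p 10.0.2.10 --login

# Enable a default multipath configuration and start multipathd,
# which coalesces both paths under a single /dev/mapper device.
mpathconf --enable --with_multipathd y

# Verify that both paths show up as active under one dm device.
multipath -ll
```

Whether path failover actually works then depends on the virtual NICs and bridges passing the link failure through cleanly; path checkers in multipathd (e.g. tur) should still detect a dead path by I/O timeout even if the virtual link stays "up".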
However, if you've got only one logical path to the target (i.e. the iSCSI target has one IP):
+-----+
| | ISCSI SAN
+--+--+
|
+--+--+
| | Network switch
+-+-+-+
| |
+-+-+-+
| | Hypervisor bond
+--+--+
|
+--+--+
| | Hypervisor bridge
+--+--+
|
+--+--+
| | Virtual machine
+-----+
Then you'd use the hypervisor bond for redundancy.
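As a sketch of that second layout on a RHEL-style hypervisor: an active-backup bond over the two physical NICs, with the guest-facing bridge stacked on top. Interface and bridge names (bond1, eth2/eth3, briscsi) are placeholders:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BONDING_OPTS="mode=active-backup miimon=100"
BRIDGE=briscsi
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth2  (repeat equivalently for eth3)
DEVICE=eth2
MASTER=bond1
SLAVE=yes
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-briscsi
DEVICE=briscsi
TYPE=Bridge
ONBOOT=yes
```

The guest then gets a single virtual NIC attached to briscsi, and a NIC failure on the hypervisor is handled entirely below the guest.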
However, this is all just theory. Perhaps others could contribute valuable actual experience setting such things up.
I'm not 100% clear on what you're trying to achieve. In general, a guest's disk I/O is done through the hypervisor rather than directly by the guest. Primary reasons for doing iSCSI to the guests tend to revolve around implementing clustering solutions at the guest level (rather than leveraging your hypervisor's HA capabilities) or otherwise trying to more closely model physical-style configurations.
What's the actual goal you're trying to achieve?
Just so I'm understanding your request, you're only worried about storage connection-redundancy from your VM to the backing datastore (SAN?), correct? You're not talking about share connectivity from the clients of your VM's CIFS/NFS service?
In general, barring other limitations, I would tend to put all of a VM's storage into vDisks and let the underlying hypervisor handle the path optimization and redundancy to the storage. That said, depending on your hypervisor (are you using RHEV, vSphere, Hyper-V, or something else?), the path from physical to virtual is somewhat limited.
The most obvious path is to back up the SAN LUNs your fileserver hosts are using, build a VM equivalent of each host, present enough vDisks to it to contain your shared data, and restore the SAN LUN backups to the VMs.
Depending on your hypervisor, you may be able to present your SAN LUN directly to your VM via a raw device mapping. The downside to this method is that it frequently impacts your VM's HA protections negatively.
Adding and removing disks won't really require anything special - just a rescan of your HBAs.
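For a software iSCSI initiator, that rescan is a one-liner; the specific host number below is a placeholder:

```shell
# Rescan every active iSCSI session for new or removed LUNs.
iscsiadm -m session --rescan

# Or rescan one specific SCSI host (host3 is hypothetical):
echo "- - -" > /sys/class/scsi_host/host3/scan
```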
Changing device geometry (resizing LUNs) will require some additional steps. Basically, you'll need to offline the filesystem and, if using LVM, offline the volume. Once you've fully offlined the device node(s), you can use blockdev (or other utilities) to rescan the actual geometry. Once the new geometry is seen, you can repartition so that the new blocks are visible (either by extending the existing partitions or adding partitions to the existing partition table). Once that's done, you can grow your PVs, VGs, and LVs, and then your filesystem(s). After everything has grown, you can re-online everything.
This avoids the reboot, but won't avoid an outage of the fileservice.
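A sketch of that grow sequence, assuming an unpartitioned LVM PV directly on a multipathed LUN (the mount point, mpath name, VG, and LV names are all placeholders; a partitioned disk would need the repartitioning step described above instead of a plain pvresize):

```shell
umount /share                              # offline the filesystem
vgchange -an datavg                        # offline the LVM volume group
iscsiadm -m session --rescan               # pick up the new LUN size
multipathd -k'resize map mpatha'           # propagate the size to the mpath map
blockdev --getsize64 /dev/mapper/mpatha    # confirm the new geometry is seen
vgchange -ay datavg                        # re-online the volume group
pvresize /dev/mapper/mpatha                # grow the PV to the new LUN size
lvextend -l +100%FREE /dev/datavg/sharelv  # grow the LV into the new space
mount /share                               # re-online the filesystem
resize2fs /dev/datavg/sharelv              # grow an ext filesystem online
```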
