RHEV environment with Hypervisor(s) on non-routable subnet


We are considering implementing our RHEV environment using Hypervisors that are on a completely segregated network segment (for security, etc...)  This will actually be a physical separation (not just logical). 

Has anyone used a dual-homed RHEV Manager configuration?  (one public, one private)

I have attached a rudimentary image that I have envisioned as a fairly optimal configuration.  Note the directional arrows (LibreOffice Draw to the rescue!)

Thanks for your input!


Responses

We run something like that on IBM HS22 blades with 2x1GbE and 2x10GbE.

The "rhevm" network is a private network, not routed anywhere. Only hosts accessible on that network is the hypervisors and the rhevm host. This is the two 1GbE links in active/passive bonding mode.

On the 10GbE active/passive bonds we run a "frontend" VLAN for the hypervisors to access basic services (NTP, DNS, incoming SSH, etc., and also iSCSI). All the VM VLANs run on this bond as well.

The only pain we experience with this is that there seems to be no way to configure a default gateway on the frontend network of the hypervisors through the rhevm webui, so we have to fix that manually in the config files.
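
For reference, the manual fix amounts to something like the following (a sketch only; the bond name, VLAN ID and addresses are made up, adjust to your own layout): drop the GATEWAY line from the rhevm-side config and set it on the frontend VLAN interface instead.

    # /etc/sysconfig/network-scripts/ifcfg-bond1.100  (frontend VLAN on the 10GbE bond)
    DEVICE=bond1.100
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    GATEWAY=192.0.2.1     # default gateway lives on the frontend network, not on rhevm

    # ...and make sure the rhevm-side config (ifcfg-rhevm / ifcfg-bond0) has no GATEWAY= line,
    # then restart networking (or ifdown/ifup the affected interfaces).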

Oh, it's also a pain that we didn't configure the rhevm network to be VLAN tagged. We want to run a separate backup VLAN on this bond, but can't do that while VLAN tagging isn't enabled for the rhevm VLAN, and fixing this requires downtime for the full installation.

Thanks Jan!

Just to add to this: using Squid to proxy client access to the SPICE console requires 3.2, which is going to be released soon.
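
Just as an illustration (not the official configuration; the subnet and port range below are assumptions for the example), the Squid side of it essentially only needs to allow CONNECT to the hypervisors' display ports:

    # squid.conf fragment (sketch; adjust the subnet and port range to your environment)
    acl spice_hosts dst 192.168.100.0/24    # hypervisor display/management subnet
    acl spice_ports port 5900-6400          # assumed display port range
    acl CONNECT method CONNECT
    http_access allow CONNECT spice_hosts spice_ports
    http_access deny CONNECT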

> The only pain we experience with this is that there seems to be no way to configure a default gateway on the frontend network of the hypervisors through the rhevm webui, so we have to fix that manually in the config files.

As you have already done, this requires manual configuration on the hypervisors to move the gateway to an interface other than rhevm. For a future RHEV version, we are looking at http://www.ovirt.org/Features/Multiple_Gateways

> Oh, it's also a pain that we didn't configure the rhevm network to be VLAN tagged. We want to run a separate backup VLAN on this bond, but can't do that while VLAN tagging isn't enabled for the rhevm VLAN, and fixing this requires downtime for the full installation.

3.1 supports mixing tagged and untagged VLANs provided "rhevm" is configured as a non-VM network initially. If it is configured as a non-VM network, you can try adding the backup VLAN on top of it without disruption.

OTOH, it's a chicken-and-egg problem to move rhevm to a tagged network before adding another tagged backup VLAN. If you first change the VLAN tag on the switch without rhevm being configured with that VLAN ID, networking will break. If you first configure rhevm on the hypervisor with the VLAN tag without configuring the switch with that VLAN ID, networking will also break. That is why it requires downtime.

> OTOH, it's a chicken-and-egg problem to move rhevm to a tagged network

 

It seems to me that such changes should be solvable by a rolling procedure, instead of full system downtime. Allow us to do the change in rhevm, but don't have rhevm roll out the change to active hosts. That way we could fix it host by host.

We have the same situation with configuring the MTU. We need jumbo frames on the "frontend" VLAN the hypervisors use for accessing iSCSI. This wasn't supported via the web interface in RHEV 3.0, so we've always fixed it by editing the ifcfg files manually. When 3.1 came with support for setting the MTU through the webui, we weren't able to use it since it requires full system downtime. Therefore we still have to fix this manually on every hypervisor installation/re-installation. (We have actually received a procedure from support for fixing this directly in the engine DB, but haven't dared to run it yet; I'm hoping a release will come soon that lets us do it in a supported way.)
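
(The manual MTU fix itself is nothing exotic, roughly the following, with assumed interface names: set the MTU on the VLAN interface and on the underlying bond, whose slaves normally inherit it, then bounce the interfaces.)

    # /etc/sysconfig/network-scripts/ifcfg-bond1 and ifcfg-bond1.100: add
    MTU=9000
    # then bring the interfaces down and up again:
    ifdown bond1.100; ifdown bond1; ifup bond1; ifup bond1.100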

 

  -jf

> It seems to me that such changes should be solvable by a rolling procedure, instead of full system downtime. Allow us to do the change in rhevm, but don't have rhevm roll out the change to active hosts. That way we could fix it host by host.

This, as well as changing the MTU on the fly, is partly possible in 3.1 via a DB hack. In 3.2, changing the VLAN tag and setting the MTU can be done from the RHEV-M GUI.

In 3.1: Hack the DB to change the VLAN tag or MTU for the logical network under the DC (contact support).

In 3.2: Edit the logical network under the DC and change the MTU or VLAN tag.

Both 3.1 and 3.2: This change will not be rolled out to any hypervisor automatically. Instead, if you go to Hosts -> Network Interfaces -> Setup Host Networks, you will see an icon next to this logical network saying "Not synchronized". That means the host is not in sync with the configuration set up for this logical network, e.g. the logical network is configured with MTU 9000 but the host has its interface configured with MTU 1500. When you edit the logical network under the Hosts tab for that hypervisor, you will see an option saying "Sync Network". When you check that and save, the configuration will be synced to the host. This needs to be done one by one for each host.

Thanks! Great to get confirmation that this will be fixed in v3.2.

 

How far off is 3.2? I was expecting it this spring/summer, and thought I saw a v4 on roadmaps for this fall. Guess we'll know more in a week :-)

Hi Guys,

 

We also planned to do this kind of networking, so I ran some tests. We decided to go with RHEL hosts instead of RHEV-H for several reasons. These are the problems I found during the tests with RHEL hosts:

  • as Jan-Frode said, you cannot set up the default GW through the GUI (this is not really a big problem for us, since we are using RHEL hosts and the gateway keeps the installation setting coming from the kickstart)
  • adding the RHEL host to RHEV does not work out of the box. We tried two cases:
    • one public IP and the rhevm IP configured on the host when you add it in RHEV-M. The default GW is on the public network so the host can reach our infrastructure (DNS, yum repo, etc.): when you add the RHEL host in the RHEV-M GUI, after the additional packages are installed it starts to configure the networking and one of the SSH connections to the host fails -> so the adding procedure fails in the GUI. Then you start the adding procedure again (the network is already configured by RHEV from the first run) and it gets added without problems (ugly)
    • only the rhevm IP is assigned on the RHEL host (as that network is segregated, the host does not have access to the core infrastructure (DNS, yum)): adding the host fails as it can not install the necessary packages AND can not resolve the rhevm hostname. Solution: add the necessary packages during the installation of the host and create a hosts entry for the rhevm machine so it can resolve it without DNS (a kickstart sketch follows below this list)
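
For what it's worth, the workaround for the second case boils down to a couple of kickstart additions like the following (a sketch; the hostname, IP and exact package list are assumptions for illustration):

    # %packages sketch: pre-install what the host-add step would otherwise pull from a repo
    %packages
    vdsm
    vdsm-cli
    %end

    # %post sketch: make the RHEV-M hostname resolvable without DNS
    %post
    echo "192.168.100.5  rhevm.example.com rhevm" >> /etc/hosts
    %end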

 

To be honest, we are still looking for the perfect solution for a network setup that separates the rhevm traffic from the data traffic (including display). Last time we tested it in the following way:

  • each RHEL host has two bonds (active/passive)
  • the first bond uses VLAN tagging. One of the VLAN-tagged networks is used as the display network (so the host has an IP from this range) and the others are used for VM data.
  • the second bond is the rhevm network with the gateway

Unfortunately, this config also does not work by default, as the host can not be reached on the IP of the display network (I did not investigate too far, but I guess the SYN/ACK from the host is going out through the rhevm bond instead of the "public" bond, so the TCP session does not get established).
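
(A quick way to confirm that theory on the host, with an assumed client address and assuming the rhevm bond is bond1: check which interface the kernel picks for the return path, and watch whether the replies leave via the rhevm bond.)

    ip route get 10.0.0.50               # assumed client IP; if this points at the rhevm bond, replies are asymmetric
    tcpdump -ni bond1 host 10.0.0.50     # replies showing up on the rhevm bond confirm it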

 

We would be really interested in what the recommended way of doing this separation is.

 

I will continue to work on this next week and will post the outcome here...

 

Regards,

Balazs

3.2 released a couple of days back.

> Unfortunately, this config also does not work by default, as the host can not be reached on the IP of the display network (I did not investigate too far, but I guess the SYN/ACK from the host is going out through the rhevm bond instead of the "public" bond, so the TCP session does not get established).

Looks like you are hitting the same problem, i.e. the default gateway is configured via rhevm, but for a successful connection to the display network, return packets should be routed through a gateway on the display network.

The current workaround is to configure a gateway from the display network as the default gateway. This cannot be done via the GUI; you need to edit /etc/sysconfig/network, add GATEWAY=<ip> and run persist on each hypervisor. Then configure rhevm without a gateway.
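
(On each hypervisor the workaround looks roughly like this; the gateway address is just a placeholder:)

    # set the default gateway to one on the display network
    echo "GATEWAY=192.0.2.1" >> /etc/sysconfig/network
    # on RHEV-H, also keep the change across reboots:
    persist /etc/sysconfig/network
    service network restart
    # ...and leave the gateway field empty when configuring the rhevm network in RHEV-M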

A long-term solution is going to be implemented in a future version. You can see the oVirt write-up about it at http://www.ovirt.org/Features/Multiple_Gateways
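
(If it helps to picture it, that feature amounts to source-policy routing per logical network, roughly what you could do by hand today; the table name, device and addresses below are made up for the sketch:)

    # give the display network its own routing table and default gateway
    echo "200 display" >> /etc/iproute2/rt_tables
    ip route add default via 10.10.10.1 dev bond0.200 table display
    ip rule add from 10.10.10.0/24 table display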

Have you opened a case with Red Hat support for the other two issues?

Hi Sadique,

 

Thanks for the fast reply.

This means that currently there is no way of separating the rhevm traffic and the data/display traffic without manually editing file(s) on the hypervisors. :(

I have not opened cases yet because I wanted to finish the investigation first. I will also open an RFE to help get this Multiple_Gateways feature into RHEV, but I guess it will still take some months as it is not even implemented upstream :(

 

I recommend you open cases for the first two issues.

It is also good to open an RFE for the multiple gateways feature, as it helps to give the RFE more priority when Product Management weighs it for inclusion in a future version.

Thank you all for your responses and input.  I'm actually at Summit and I hope to run into some of the RHEV gurus.  I hope to have assimilated all the data you have provided so I can have an intelligent conversation and see if there is a workaround.  The part I did not mention (at the time it did not seem quite as relevant): a big reason we would like the separation is that we would like to have the hypervisor management traffic on a separate InfiniBand network that is not an extension of our core Ethernet environment (hence the Squid proxy).  I again thank you folks, as you brought up some very good and relevant points that require serious consideration.