Temporarily control a RHEV-H 2.2 host with RHEV-M 3.1?

If there's a way to make this work, I can save multiple 40+ mile car round trips.

I have a small RHEV 2.2 data center with 2 hosts.  I am **migrating** (not upgrading) to RHEV 3.1.  I have a brand new, freshly baked RHEV-M 3.1 server with an uninitialized data center.  Of course, to make my new 3.1 data center come alive, I need a host.  My only choices are the two RHEV-H 2.2 hosts. 

I have enough capacity to run my VMs on one 2.2 host, freeing the other host to join my RHEV 3.1 installation.  So my migration strategy is: join one host to RHEV 3.1; build up a data center, networks, storage domain, and cluster; export/import my virtual machines; and then bring in the remaining host and retire 2.2. 

I know I can boot my host from a 3.1 CD/DVD, wipe it, and install a new RHEV-H 3.1 and connect it to my RHEV-M 3.1.  But this means I need to physically travel to the site, which is logistically challenging. 

So instead, I want to ssh into my RHEV-H 2.2 host, run the ovirt-config-install script, and connect it to my new RHEV-M 3.1 server.  Once the RHEV-H 2.2 host is connected to the RHEV-M 3.1 server, I will install RHEV-H 3.1 from the RHEV-M GUI and end up with a fully supported RHEV-H 3.1 host.  From here, I can build out the rest of my new data center. 

I believe this worked with 3.0 - will it also work with 3.1?


- Greg Scott


Aw nuts - apologies for the multiple posting.  Apparently the "Save" button posts a new copy every time it's clicked.  If the moderators want to remove the earlier copies of this question, perfectly fine by me.


- Greg

RHEV 3.1 does not support 2.2 compatibility mode, so adding 2.2 hypervisors (RHEL 5.x based) may not work.

You can explore the PXE installation option: set up a PXE server to install RHEV-H, which will not require physical access to the server if configured properly.
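For anyone weighing the PXE route, the general shape is: extract the kernel and initrd from the RHEV-H ISO into your TFTP root (the `livecd-iso-to-pxeboot` tool from livecd-tools was the usual way to do this), then add a pxelinux menu entry pointing at them. This is only a sketch; the file paths, label name, and filenames below are assumptions, and the boot arguments should come from the tool's generated output rather than be typed by hand:

```
# /var/lib/tftpboot/pxelinux.cfg/default -- sketch; paths and label are assumptions
DEFAULT rhevh
TIMEOUT 50
LABEL rhevh
  KERNEL vmlinuz0
  # Paste here the APPEND line that livecd-iso-to-pxeboot generates for your ISO;
  # it wires initrd0.img and the live-image root arguments together.
```

Your DHCP server also has to hand out the TFTP server address and the `pxelinux.0` boot filename, and the host's BIOS must be set to network-boot first, which is the part that usually still needs one trip to the console.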


I was afraid of that.  Although "may not work" is different from "will not work".  I may still try it and see what happens, unless anyone knows for sure it won't work. 

It's a shame I don't have KVM over IP at this site; then I could set up the hosts to PXE boot. 

- Greg

Well, for what it's worth, I can report this definitely does not work.  Turns out, I had an old 2.2 host sitting in my lab, and I'm building a fresh 3.1 environment here.  So as long as I had the pieces sitting here....

I booted my old 2.2 host, ran the 2.2 ovirt-config-install script, and told this host about my new 3.1 RHEV-M.  After the host rebooted, my 3.1 RHEV-M found it.  So far so good.  But it went downhill from here. 

Next, RHEV-M complained this host was non-responsive and suggested I put it into maintenance mode.  Good idea - that's what I wanted to do anyway.  When I put my 2.2 host into maintenance mode, the task took a very long time, but it eventually completed successfully.  I have a hunch something timed out in the process and nobody told the host about it. 

Next, I tried to upgrade/reinstall from my 3.1 RHEV-M.  This failed twice.  Each time I tried the upgrade, the host console flashed a message about ISO9660 - I don't remember the exact text - but the host seemed to find the .ISO file on my RHEV-M system.  But that's as far as it got.  Right after that, RHEV-M reported an upgrade failure.

So for anyone migrating a 2.2 installation to 3.n where n>0, you'll have to physically travel to the site to get your hosts on board. 

I guess a feature request for 4.n and beyond is to keep support for mixing older 3.n hosts with future RHEV-M installations.  As data centers grow in size and complexity, this will become a necessity; it's just not reasonable to expect customers to do fresh installs of RHEV-M and all hosts, and to migrate terabytes of virtual machines to a new data center with each major upgrade. 

An ugly upgrade path from 2.2 to 3.n is OK - but won't be OK going forward.


- Greg

Hey Greg, thanks for letting us know about your experience with this. Hopefully it will be helpful to other users.