Migrating from RHEL 6.2 KVM to RHEV 3.0


I currently have a RHEL KVM host running VMs from a local disk (LVs on a separate partition from the OS).


I would like to use RHEV-M to manage my existing VMs. Is this wise? Also, is there anything I should look out for? I'm trying to figure out my approach. I would like to keep RHEL, but I don't want to run RHEV in an unsupported configuration.


I'd like to assume that I can add the RHEL boxes to RHEV-M and that on boot all the VMs would come up in RHEV-M.


Any advice or input would be greatly appreciated!





I have a few questions, which may also help a few others respond.


What is your goal of using RHEV-M over KVM on RHEL?

Are you planning on implementing some type of "shared storage" (e.g. iSCSI, NFS, or Fibre Channel)?

Does your system have power management (iLO, DRAC)?

What do you hope to gain by having RHEV-M manage your VMs?

Do you currently have experience with RHEV and RHEV-M?


When you import a RHEL box into RHEV-M, it "takes over" a few things.  Sorry I can't be more specific, but it expects networks to be set up in a certain way, you need to have a few storage domains, it will install certain packages, etc. Nothing terribly surprising, but it may catch you off guard a bit.
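As a concrete illustration of the networking expectation: in RHEV 3.0 the management network lives on a bridge named rhevm on each host, so a host's NIC configuration ends up looking roughly like this (device name and addresses below are made up for illustration):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- physical NIC enslaved to the bridge
DEVICE=eth0
ONBOOT=yes
BRIDGE=rhevm

# /etc/sysconfig/network-scripts/ifcfg-rhevm -- the management bridge RHEV-M expects
DEVICE=rhevm
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
```

If your host already bridges its NICs some other way, expect RHEV-M to reconfigure or reject that layout when the host is added.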


I don't have an opinion on whether it's a good idea, but there is a level of commitment you will need to accept.  I'm certainly not trying to talk you out of the idea, as RHEV is a great product.


I believe you can still evaluate the product, and I highly recommend that if you have not worked with it much.  You can get by with only two machines for your testing/eval and get a good feel for how it works.  The eval provides an outstanding guide that walks you through a number of scenarios.


As for your question about whether you can "auto-start" your guests, that is a good question that came up during class (and which I had forgotten about).  I hope someone else will chime in with an answer.

RHEV provides a console manager that lets you define guests, storage, and networks.  It lets you connect to a guest with a SPICE viewer at the click of a button - it's a little like having your mother sitting beside you telling you how to run the system, but all in all it's pretty slick.


The advantage is that it's essentially 'point-and-click' to do these things, provided you have the backend features in place - iSCSI and NFS storage would be examples of this. It does give you the option at install time of specifying an ISO domain that is local to the machine, for example, but that's it.


As for restarting your guests, one of the nicest features - though, in my opinion, a little lacking in maturity - is the load balancing. You can define your guests as low-, medium-, or high-priority HA guests, or as not HA at all. RHEV will then do its best to move your guests around based on these criteria should one of your nodes fail. You can also give it some parameters to specify host-load criteria. I don't know how well that portion works; I haven't tested it all that much.


RHEV has power management (fencing) also, and in my opinion it's a MUST to implement it. There's nothing more dangerous than a guest that's flapping on and off line.


As for the downside, you essentially give up direct control of your virt environment. For example, you can't mix storage types within a data center. If you start with NFS, then all of your hosts must use it - they can't use iSCSI, for example.  You can define a new data center with a new type, but within that particular data center you are committed. Not a big deal for me, because I put NFS in front of all my iSCSI storage anyway - you may or may not want to do this.


Also, the console manager is not HA out of the box, and there is no 'easy' HA option. You can implement it (the tech note is here: https://access.redhat.com/knowledge/techbriefs/setting-rhev-m-highly-available-cluster-rhev-30 ) - but you need to know what you are doing.  The same goes for NFS, for example: it's up to you to make sure your storage is HA ( https://access.redhat.com/knowledge/refarch/2011-deploying-highly-available-nfs-red-hat-enterprise-linux-6 ).


RHEV is a great start, and there are lots of features to come. You can be up and running in a few hours, no problem at all (assuming you have storage set up already). You do lose some ability to easily manipulate the files manually, and so on.


My decision on this was based on the sophistication of the ultimate end user. I wanted the average IT tech to be able to log on and create new VMs for Windows or RHEL or CentOS or Fedora or, or, or... easily and quickly.  RHEV gives you this in spades.


But there's no reason you can't mix and match. I have some guests running under virsh that can't/shouldn't live in RHEV - the console manager, for example, shouldn't live in the RHEV setup it's managing (you CAN put it in a separate RHEV setup that it's not managing, of course...). So it resides under virsh on two of the storage/NFS servers, and that works fine.  Of course, you can't migrate across those domains into RHEV - but that wasn't my goal anyway.
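For guests you keep outside RHEV, plain virsh management on the host still works as usual - a quick sketch (the guest name rhevm-console is a placeholder for whatever your guest is called):

```shell
# list every guest libvirt knows about on this host, running or not
virsh list --all

# have libvirt start this guest automatically when the host boots
virsh autostart rhevm-console

# start it by hand and check its state
virsh start rhevm-console
virsh domstate rhevm-console
```

This also answers the auto-start question for virsh-managed guests: libvirt's autostart flag handles it on host boot, independently of RHEV.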


If you want fast, easy, and reliable, then do it. If your end user wants slick and easy VM setup and load balancing, it's a great solution.  Your best option is probably to trial it if you have the hardware available.


Good luck, hope this helps!

OK, so let me take it from the start - your VMs running on plain RHEL KVM will not be automatically picked up if you simply join the hosts to RHEV-Manager as hypervisor nodes. Standalone KVM on RHEL and a RHEV hypervisor are different virtualization setups, and in order to move VMs between them you will need to run virt-v2v to convert and import those VMs.
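To expand on the virt-v2v step: on RHEL 6 you would typically convert each libvirt-managed guest into a RHEV export storage domain over NFS, then import it from the RHEV-M UI. A rough sketch - the NFS path, network name, and guest name are placeholders, and you should check man virt-v2v for the options your version supports:

```shell
# convert a local libvirt guest and write it to a RHEV export storage domain on NFS;
# shut the guest down before converting
virt-v2v -i libvirt -ic qemu:///system \
         -o rhev -os storage.example.com:/exports/rhev-export \
         --network rhevm \
         myguest
```

Once the conversion finishes, the guest appears under the export domain in RHEV-M, from where you import it into a data center.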


While you can run VMs on a host's local storage, that causes you to miss out on a lot of the clustering features RHEV is especially good at, like Joe said, so if it is possible to use shared storage, using RHEV will make much more sense.
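If you go the NFS route for shared storage, one gotcha worth knowing up front: RHEV's vdsm service accesses storage as uid/gid 36, so the export has to be owned accordingly before RHEV-M will attach it. A sketch of the server-side setup (the export path is a placeholder):

```shell
# on the NFS server: create the export and give it the ownership vdsm expects (uid/gid 36)
mkdir -p /exports/rhev-data
chown 36:36 /exports/rhev-data
chmod 0755 /exports/rhev-data

# add an /etc/exports entry along these lines, then re-export:
#   /exports/rhev-data  *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
exportfs -ra
```

Getting the ownership wrong is one of the most common reasons an NFS storage domain fails to attach.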