RFE: Wishlist and feature requests for RHEV 3.1

I've been testing 3.0 for a little while now, and there are some features I'd like to see added. I'm sure I'm not alone:

 
Increase CPU and RAM at runtime, i.e. the ability to scale a VM up or down, via KVM's virtio balloon driver for memory and CPU hotplugging.
 
 
I have heard that this functionality will be included in RHEV 3.1.
Can anyone confirm this?

Responses

If you want to know what features are coming to RHEV, it's very easy to simply take a look at oVirt and see what's already there and what is just about to come in. Chances are these will make it into RHEV if they are stable and beneficial to the product.

The oVirt 3.2 release notes

http://www.ovirt.org/OVirt_3.2_release_notes

are full of new features, many of which should end up in RHEV 3.2 onwards.

Personally, I would love to see a feature to reboot a VM via RHEV-M, without having to shut it down, wait for it, and run it again; and the same for the RHEL hypervisor (instead of logging into the host and rebooting it).

Thanks Dan - I looked over the oVirt release notes a few days ago.  One biggie I was hoping to find was support for more than one type of storage domain.  If we had this, then we could build a RHEV cluster with only two RHEL hosts and no physical shared storage.

Two physical boxes and a bunch of software to make virtual shared storage.  This is a biggie to me because I lost a customer, and will never get them back, because I didn't have a RHEV solution like this to offer, and my cost with RHEV was much higher than the competition's cost with VMware.  VMware has a solution called a VSA, using SUSE Linux and a virtual cluster that advertises itself back to the hosts as an NFS server.  HP/LeftHand also has a solution that runs with VMware.  RHEV desperately needs something like this.

Another possibility is using Red Hat Storage as a backend, but I don't know what the Red Hat Storage subscription costs or whether it's ready yet.

thanks

- Greg

Different storage types in a single DC is a feature very high on our priority list. It will, however, require some very serious changes to the way storage is currently managed, so it will take a bit of time to get there.

The question is: WHEN is RHEV 3.2 going to be available ;-)

I would like to see the "rhevm-shell" bulked up a bit. Ideally, I would like to be able to script the following:

1) Create a snapshot
2) Create a VM (clone) based on that snapshot
3) Create a template based on that VM (clone)
4) Delete the Clone
5) Delete the Snapshot

It would make it easier than doing it manually from the UI.
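For illustration, the five steps above could look something like the following rhevm-shell batch (run non-interactively with `rhevm-shell -c -f <file>`). The VM, clone, template, and cluster names are placeholders, and the exact option names are a sketch based on the shell's `--parent-*` convention, not a verified recipe - check `help add snapshot`, `help add vm`, etc. against your own 3.1 installation:

```shell
# Hypothetical clone-then-template workflow for rhevm-shell.
# Names (MyVM, MyClone, MyNewTemplate, Default) and flags are illustrative.
add snapshot --parent-vm-name MyVM --description pre-template   # 1) create a snapshot
add vm --name MyClone --cluster-name Default \
       --vm-name MyVM                                           # 2) clone from that snapshot (flags are a guess)
add template --vm-name MyClone --name MyNewTemplate             # 3) template from the clone
remove vm MyClone                                               # 4) delete the clone
remove snapshot --parent-vm-name MyVM --id <snapshot id>        # 5) delete the snapshot
```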

Also, it would be nice to be able to export a VM to an OVF file.

http://www.redhat.com/rhecm/rest-rhecm/jcr/repository/collaboration/sites%20content/live/redhat/web-cabinet/static-files/documents/2012-07-26-Red-Hat-Enterprise-Virtualization-3-1

 

RHEV 3.2, target H1-2013

- RHEL 6.4 hypervisor

- SLA / QoS for CPU, memory and network

- Extension framework for RHEV plugins

- Offload basic storage operations to array – clone, delete, etc

 

RHEV 4.0, target H2-2013 

- Based on RHEL7 Hypervisor

- Cluster-wide Service Level / QOS Management

- Third party plugin framework for RHEV-H

- Network Management Service - multilayer virtual switch (I've heard rumors about both Open vSwitch and something Cisco Nexus 1000V-like)

- Remove need for SPM

- Mixed storage types in same pool – iSCSI, FC, NFS

Thanks for the updates and timetable.  This is very helpful.  Keep it coming.

I guess as long as we're wishlisting -

Don't force users into a painful export/import process for upgrades.  Provide a reasonable path to upgrade from 3.n to any later version by building a new RHEV-M against the old RHEV-M DC, and then, with the new RHEV-M, upgrading the old DC to whatever new version it needs to be to support all the cool new features.  Please, please, please don't force users into a painful export/import cycle for all the VMs.  This is OK for 2.n to anything newer, but not OK for 3.n to anything newer, because now we're emphasizing the Enterprise part of RHEV.

Provide an easy upgrade or merge from RHEL/KVM/Libvirt to a RHEV managed DC without going through a painful export/import cycle. 

Provide a way for RHEV-M to be inside its own RHEV environment. 

In the near term - support for multiple storage types is lots of work and a while away - OK, fair enough, now we know the score.  But for now, what about supporting two or more RHEL/KVM/Libvirt systems using Gluster (Red Hat Storage) bricks for VM backend storage?  This at least provides an answer for smaller virtualization environments.  This is theoretically available right now, but was unsupported and not recommended because Gluster was designed as a NAS to replicate files, not for backend VM storage.  But that was a year ago - will it work now?

thanks

- Greg Scott

> Don't force users into a painful export/import process for upgrades.  Provide a reasonable path to
> upgrade from 3.n to any later version by building a new RHEV-M against the old RHEV-M DC, and then,
> with the new RHEV-M, upgrading the old DC to whatever new version it needs to be to support all the
> cool new features.  Please, please, please don't force users into a painful export/import cycle for
> all the VMs.  This is OK for 2.n to anything newer, but not OK for 3.n to anything newer, because
> now we're emphasizing the Enterprise part of RHEV.

This is exactly the case: you can upgrade from 3.0 to 3.1, and will be able to upgrade from 3.1 to 3.2 when it comes out. Moreover, you can do that with almost no downtime. And there is a procedure available for upgrade (or rather migration) from 2.2 to 3.0. The process between 3.0 and 3.1 is:

1. Upgrade RHEV-M - you'll get a new RHEV-M with older version clusters in there, still running the VMs

2. Create a new cluster of the current version, in the same DC

3. Turn VMs off in the old clusters, change the VMs' cluster affinity, and start them in the new clusters.

You can do that by gradually taking the existing hosts, upgrading them, and starting them in the newer version clusters. Once all the VMs are moved, delete the old clusters.

 

> Provide an easy upgrade or merge from RHEL/KVM/Libvirt to a RHEV managed DC without going through a painful export/import cycle.

virt-v2v is the way to do that, and it goes through an export domain.
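As a rough sketch of that path today (hostname, export path, and guest name below are placeholders for your own environment): virt-v2v pulls the guest from libvirt and copies it into a RHEV export storage domain, from which RHEV-M can then import it.

```shell
# Convert a local libvirt/KVM guest and drop it into a RHEV export domain
# (NFS server, export path, and "myguest" are placeholder names).
virt-v2v -ic qemu:///system \
         -o rhev -os nfs.example.com:/rhev-export \
         --network rhevm \
         myguest
```

After the copy completes, the VM shows up in the export domain and is imported into the DC from the RHEV-M UI.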
 

> Provide a way for RHEV-M to be inside its own RHEV environment.

Working on that one; very high priority.

 

> In the near term - support for multiple storage types is lots of work and a while away - OK, fair enough, now we know the score.  But for now, what about supporting two or more RHEL/KVM/Libvirt systems using Gluster (Red Hat Storage) bricks for VM backend storage?  This at least provides an answer for smaller virtualization environments.  This is theoretically available right now, but was unsupported and not recommended because Gluster was designed as a NAS to replicate files, not for backend VM storage.  But that was a year ago - will it work now?

You can use RHS as a PosixFS storage domain. Whether that would suit your needs is down to benchmarking, of course - you are right about the initial design of Gluster as a file NAS. AFAIK (and I'm a bit detached from that particular side of things, so don't take this as an official statement), RHS as a RHEV store is still a Technology Preview and is not supported on the hypervisors themselves, only on separate nodes for now.
 

"""

The process between 3.0 and 3.1 is:

1. Upgrade RHEV-M - you'll get a new RHEV-M with older version clusters in there, still running the VMs

2. Create a new cluster of the current version, in the same DC

3. Turn VMs off in old clusters, change cluster affinity in VMs and start them in the new clusters.

You can do that by gradually taking the existing hosts, upgrading them, and starting them in the newer version clusters. Once all the VMs are moved, delete the old clusters.

"""

 

That sounds like a strange procedure. We didn't think we needed to turn off any VMs or change cluster affinity for the VMs. What we did to upgrade was:

1. Upgrade RHEV-M

2. Upgrade hypervisors to RHEVH-20121212+.

3. When all hypervisors in a cluster were upgraded, change cluster compatibility to v3.1.

4. When all clusters had v3.1 compatibility, change DC compatibility to v3.1.

No VM downtime needed. Hope the upgrade routine for 3.1->3.2 will be similar.

Why not simply create a template from a VM, without all the extra steps? 

add template --vm-name MyVM --name MyNewTemplate

You're right - I still had the upgrade from 2.2 in mind, where a reinstall from RHEL 5 to RHEL 6 was required.

I probably should have been more clear:

> This is exactly the case, you can upgrade from 3.0 to 3.1, and will be able to upgrade from 3.1 to 3.2
> when it comes out. Moreover, you can do that with almost no downtime. And there is a procedure
> available for upgrade (or rather migration) from 2.2 to 3.0. The process between 3.0 and 3.1 is: ...

This is cool, but my wishlist item here is for upgrades from 3.nn to 4.nn and beyond. 

I tried a 2.2 to 3.0 upgrade and it was, well, ugly.  And if I wanted to take advantage of the 3.0 good stuff, I would eventually need to build a new 3.0 DC anyway and export/import the VMs from the old DC and into the new DC.  This is ugly but OK for 2.2 upgrades because everyone realizes 2.2 is an early version and can plan accordingly. 

I appreciate that upgrades from 3.a to a newer 3.b are smooth and easy.  But going forward, from 3.n to 4.n and beyond, everyone's expectations will rise, and forcing everyone to export/import from the old DC into a new DC just will not fly - especially with rapid development and major new versions coming out at least annually.  Instead, build a new 4.nn (or 5.nn, or 6.nn - pick your version) RHEV-M, connect it to the existing DC, upgrade the hypervisors one by one, then click a button to give the upgraded DC all the features of the new version.

Think about this whole idea of using the export domain.  Let's say I have a VM that uses a couple of 500 GB virtual disks.  I need 1 TB for the original VM, another 1 TB for the export domain, and another 1 TB for the new VM: 3 TB total to support a 1 TB VM (1 TB plus 2 temporary TB).  I can see lots of storage vendors salivating at that prospect, but end-user customers rioting in the streets.  And all the arguments about RHEV saving the customer money go out the window, because I need all that temp storage, plus downtime, for major upgrades.
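To make the arithmetic explicit, here's a tiny sketch of the peak storage the export/import path implies (the 3x factor is simply original + export copy + imported copy, all existing at once during the move):

```python
# Peak storage needed to move a VM through an export domain:
# the original disks, the staging copy in the export domain, and the
# imported copy at the destination all exist at the same time.
def peak_storage_tb(vm_disks_tb: float) -> float:
    original = vm_disks_tb       # source data domain
    export_copy = vm_disks_tb    # export domain staging copy
    imported = vm_disks_tb       # destination data domain
    return original + export_copy + imported

print(peak_storage_tb(1.0))  # two 500 GB disks -> 3.0 TB at peak
```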

> Provide an easy upgrade or merge from RHEL/KVM/Libvirt to a RHEV managed DC without going
> through a painful export/import cycle.

> virt-v2v is the way to do that, and that goes through an export domain.

Right...  The wishlist item is to get rid of that requirement long term.  Imagine a customer who builds, say, a new Windows email server in a VM.  The customer doesn't have budget for a SAN, so uses a RHEL/KVM/Libvirt VM for now.  A year later, email volume is booming; now he has budget for a SAN and is ready for RHEV.  To get there, I have to take his email server offline for at least 4 hours, virt-v2v it to the export domain, import it into the RHEV DC, boot and test the imported VM, and then clean up the leftover old copies.  And this all assumes everything works properly the first time.  If there are any issues, it's another several hours of downtime to try again.

Meanwhile, the VMware guys are laughing at us and telling the customer he can put his email server inside an ESXi system for now and live-storage-migrate it into a full-blown cluster later on when ready, all with no downtime.  I know merging RHEV and RHEL/KVM/Libvirt is a big deal, lots of work, and will take time - but this really, really, really needs to become part of the plan.

- Greg

My wishlist items:

  • Much more dynamic network changes. Forcing us to stop whole clusters to change the MTU is just terrible. Not allowing us to do a rolling conversion from the untagged "rhevm" network to a tagged network is causing lots of problems for us.
  • OSX spice plugin
  • Delete snapshots without stopping VM.
  • Add/remove memory to running VM.
  • Mix local-storage and SAN-storage on same hypervisor/cluster. 
  • Serial-console access to all VMs. I don't need fancy graphical consoles, so a simple text-based (serial?) console access would be great.  http://www.ovirt.org/Features/Serial_Console_in_CLI
  • Better VM crash detection. We've often seen VMs that look OK in the rhevm web UI, but which are stuck/crashed/hung.
  • Better dashboard. I think the webadmin-dashboard is a lot less useful than what we had in the ActiveX performance view in pre v3.1.
  • Support for simple expressions controlling where VMs run: "Don't run on the same hypervisor as VM X", "Run on the same hypervisor as VM Y", "Start if VM Z is dead, and fence it".
  • Automated snapshots, and export to exportdomain for backup purposes.
  • Possibility of sending signals to the kernel. (NMI crash dump, sysrq)

I would say most of these are being considered as future features; it's just a matter of priority and of their availability upstream. E.g., memory hot add/remove will be available in RHEL 7, so RHEV 4 is expected to have it. Changing networking without stopping the whole cluster is also under review. The OS X SPICE plugin is under development upstream, but is not stable enough to port to RHEV at this time.

Priority of each Feature Request is decided based on customer demand. We measure customer demand by looking at how many customers have officially requested this by opening a case with GSS. We link each case opened by cusotmers to the RFE request opened with Product Management and Engineering and they use that to decide its priority. So if you can open a case with GSS for these RFEs, they definitely going to get higher priority.