Convert IDE drive to virtio
Hi, I have a Win2008 R2 x64 VM which is configured with an IDE-type drive.
I would like to convert the IDE drive to a virtio drive without losing the Windows installation. I can't find any official procedure to do this.
I was thinking of creating a second virtio HD the same size as the IDE drive and then transferring everything from one drive to the other by imaging the first drive. But even then there are other problems, because RHEV-M won't let me add a second bootable drive.
I need advice on this, and on what the best options would be without reinstalling the whole Windows VM.
Any suggestions?
thank you
Marcello
Responses
A few questions, before I suggest anything... 1) What version of Windows? 2) Is VM already in RHEV, or are you moving the VM from some other environment? 3) What is the VM disk format?
For example, in Windows 2008 Server, device discovery is a bit more robust than in Windows 2003 Server, so in theory you only need to inject the drivers into the VM for Windows 2008; Windows 2003 might take a bit more work. This addresses the issue that VirtIO is not a native configuration in Windows. The same holds true for Windows XP versus Windows 7: the device manager in newer versions of Windows is a bit smarter (well, is configured to be more forgiving).
Now for the interesting question: what is the format of the VM disk? Is it RAW? Or some other format? If RAW, you don't have to do any explicit conversion (say, with qemu-img); as long as the VirtIO driver is injected into the Windows driver repository, you should be able to flip the controller type to VirtIO, and Windows 2008, for example, should discover the VM disk as a valid bootable disk.
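For illustration, if the image did need converting, a qemu-img run would look something like this (a sketch, not an official procedure; the file paths are hypothetical, and you should always work on a copy of the image):

# convert a qcow2 image to raw; the source file is left untouched
qemu-img convert -f qcow2 -O raw /path/to/vm-disk.qcow2 /path/to/vm-disk.raw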
In the past, we have moved a lot of VMs between VMware and KVM, and given that KVM likes IDE for the system drive, and VMware would usually default to SCSI, as long as we had the right drivers in the Windows driver repository, it was painless. Although with older Windows versions we had to tweak the registry to get the Windows device manager to scan for IDE devices during the boot sequence when moving from VMware to KVM, since the system disk moved from SCSI to IDE.
Your situation would be similar, I think. As long as you embed the VirtIO drivers, Windows should see the change in the controller type, sense it, and discover the attached disk. I would copy the VM and work on the copy, to test the theory of course.
If you are familiar with Windows deployment methods and how to inject drivers into the Windows OS, and the image format is RAW, it should be doable, in theory.
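For offline driver injection, a DISM invocation would look roughly like this (a sketch, and my own assumption rather than anything confirmed in this thread; it assumes a Windows 7 / 2008 R2 era image already mounted at C:\mount, with the virtio driver files, viostor.inf and friends, under D:\virtio):

rem inject all drivers found under D:\virtio into the offline image
dism /Image:C:\mount /Add-Driver /Driver:D:\virtio /Recurse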
Let's set things straight. First of all, some terminology:
IDE and virtio are types of virtual disk controllers
RAW and COW are image formats, where RAW is faster but has less features, and COW has a bit of overhead, but can support snapshots.
This means you have 2x2 options - raw+virtio, raw+ide, cow+virtio and cow+ide.
The VM disk image is either COW or RAW, and it is set and cannot be changed without some low level conversion. How the image will be accessed is really a matter of what controller you use, and this is where virtio and ide come into play - or rather the driver for them, used by the guest OS.
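If you are unsure which format an image uses, qemu-img can report it (a sketch; the path is hypothetical):

# report the image format (raw, qcow2, etc.) and its virtual size
qemu-img info /path/to/vm-disk.img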
RHEV will not allow changing disk controller types after a disk has been created because the guest OS usually doesn't like it. Take a physical machine with an LSI controller, install Windows, and then take the disk to another machine with a 3ware controller - you'll get a BSOD at boot time, and the same will happen if you change IDE to virtio and vice versa.
Same as in physical environments, drivers can be installed, injected and generally tampered with, but that is really out of scope for RHEV support.
If this change is critical for your environment nonetheless, and you are certain it will not make your VMs malfunction, you can open a support ticket and ask for a SQL script that will update the relevant disk types. This is something I really advise against in production, always preferring to plan ahead and use the right controller when I create my VMs, but I do agree there are different cases and scenarios.
Moreover, if you consider being able to change disk controller types a critical feature for your business, please ask for this as a feature request, through the support ticket.
Dan,
I disagree with you on one point. True, selecting the right controller at the right time is best practice. However, a virtual platform should not force a guest OS to align to the virtual platform. On bare metal, the fact is, you can change controllers, disks, etc., as needed, when needed; the only issue is whether you know how to do it right. This is why we have dynamic device discovery methods in modern OSes.
RHEV is just restrictive and inflexible, by its current design. This is understandable at its current level of maturity, but with Xen, VMware, Hyper-V, etc. all allowing easier virtual device changes, it does make RHEV look like it is restrictive for the sake of being restrictive. This is a topic that has come up more than once in discussions I have heard about RHEV adoption. I have, when possible, defended RHEV, because I know this is not the strategic goal, and I know RHEV will become more flexible over time. That said, Red Hat should avoid setting the impression, with customers, that things have to be done the RHEV way, or take a hike.
RHEV has a relatively short window to be seen as the ease-of-use winner in the KVM space, something KVM has suffered from for years. Did not Xen and VMware prove that it is the flexibility of the platform, the management control of the platform, that wins customers?
Microsoft is struggling with VMM because 3rd-party solutions are maturing Hyper-V support, providing better flexibility in management and control of Hyper-V. This is why Microsoft has been focused on System Center enhancements geared to their private cloud initiative, to try to (re)capture the ease-of-use scope for Hyper-V, but it is not going to be an easy path for Microsoft to follow.
I would like to see Red Hat avoid the same pitfall.
And again, I do not disagree with you, Schorschi; moreover, removing certain limitations, like this one, is exactly the direction development is moving in. Though, as I've already mentioned, if other vendors are doing something, it doesn't mean it's always right :)
Have you seen any other parts of RHEV that require a lessening of restriction? I'll be happy to hear about those.
Yes, given that I am part of an emerging technologies team, I often am pushing for ideas, concepts, etc. that are 12 and even 24 months ahead of finalized design elements. To that end, my firm recently identified in the RHEV 3.0 beta (and a bit before) about 100 items for discussion, of which about 20 or a bit more seemed reasonable as actionable feature requests/enhancements for 3.1 and later. This short list was published to our Red Hat TAM, and a few others. I will dig up the short list and forward it to you. I will not publish it here in detail, since some of the items were specific to the firm I work for.
However, in general, from memory, the following came out of the analysis of RHEV 3.0...
1) Storage related... The domain model design is due for some strategic changes: ease of use, ease of recovery, import and export of individual VMs, a logical association between UUIDs and human-friendly names of objects in the storage domain tree, eliminating the need for a storage domain versus an export domain, etc. We have discussed these before.
Hardware management related... Establish a CIM provider/subscriber model for hardware monitoring, allow HP, Dell, IBM, etc. CIM (agent) provider injection into the ovirt-node/RHEV-H image, and enhance the REST API to allow hardware monitoring and management. Establish custom VM BIOS/EFI support and custom NIC firmware support (which KVM VMs can do already), supporting iPXE (a la gPXE) customization. Do not do power management via an out-of-band device; use a direct API. Some environments require out-of-band devices such as HP iLO, Dell RAC, and IBM IMM to be disabled, so RHEV-H nodes need a better power control/management option. Hyper-V and VMware deal with this well, both at the bare-metal and VM level.
2) Security related, and default configuration related... Removal of default objects (no default named objects, an empty slate), security banners added, removal of branding (no marketing), removal of admin@internal explicit references and dependencies (when AD integration is enabled, the local security context must be disabled), etc. These all apply to a maturity of RHEV that any enterprise-class organization would need. Live migration and long-distance live migration are already on the roadmap... expected, as I recall. Removal of SSH host certificates; establish LDAP/Kerberos-based tokenized access (AD integration) for all aspects of VM interaction, including SPICE/VNC session control. Inclusion of 3rd-party authentication/authorization using public and/or private CA, PKI infrastructure.
3) More radical ideas... Establish direct support for VHDs. Improve the virt-v2v tool to allow migration of VMs that are already visible on shared storage, such as between VMware and RHEV; if they can see the same NFS mount, then why use OVF, why re-embed the VM disk data? We wrote scripts to move VMs back and forth from VMware to KVM that allowed a VM to be moved in a few seconds when the NFS mount was in common. RHEV should allow this. Add support for LXC to RHEV-M; we are moving towards LXC, in preparation for RHEL 7's possible inclusion of an official variant of LXC. So, the logical management tool for LXC containers would be RHEV-M, no? A fully memory-resident variant of ovirt-node, a la RHEV-H, so that a stateless variant of RHEV-H is possible, targeting a true competitor to ESXi (auto-deploy, memory resident).
I trust the above is sufficient to feed thoughtful discussion and debate. As for your latest comment... I laughed, because after reading it, the phrase "The customer is always right" came to mind. My hope is that my comments encourage others to chime in, suggest ideas, express views, etc. I have a long history with virtualization, and with commenting on the strategic aspects of design... my name is rather unique, at least in the IT world, and especially applicable to virtualization... Google may be my friend or my enemy, I have not decided which... yet!
Lots of great points here, and since they've been fed to the TAM, I trust they are already in the system as RFEs.
There actually is a thread dedicated as a wishlist tracker here: https://access.redhat.com/discussion/rhev-wishlist-feature-requests
I think you could contribute a lot to it. Besides, if you're into development, the oVirt project is happy to accept code commits, and it is actually a good way to monitor upcoming RHEV features.
I'm attempting this on a Windows 7 x64 guest in RHEV 3.0. Here's what I've done so far:
Shutdown guest, add a second disk (data), type virtio
Boot up guest, allow Windows to detect disk and add driver (I already had latest RHEV tools loaded)
Verified that I could now access and format the second (virtio) disk
Shutdown guest
Export guest to NFS export domain
Mount export domain from RHEV-M interface
Connect to one of the RHEV-H hosts via admin account over SSH; hit F2 to get into shell as root
Use 'mount' to find the mount point for the export domain and determine the correct OVF file to edit -- I was able to deduce this based on date. For my environment, the path looked like this:
/rhev/data-center/mnt/<YOUR NFS SHARE>/c2b0fea0-44ad-4969-88dd-0194b39f1442/master/vms/9a7e3456-6f1d-4863-bfd0-8c0dd0a9443b/9a7e3456-6f1d-4863-bfd0-8c0dd0a9443b.ovf
(Can anyone shed some light on how these paths are generated?)
Using vi, edited the .ovf file and changed the section referencing the first disk (a scripted version of this edit is sketched after these steps):
ovf:disk-interface="IDE"
to:
ovf:disk-interface="VirtIO"
Exited out of RHEV-H console
Deleted the original exported VM from RHEV. (RHEV will refuse to import a VM with the same "VM ID", and renaming it won't change the ID. I haven't figured out how to alter a VM ID.)
Imported the edited VM into RHEV
Powered on. Success! Windows 7 came up with no problems -- no reconfiguring required. Both disks show up in Device Manager as a "Red Hat Virtio SCSI Disk Device".
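For reference, the locate-and-edit steps above can be scripted from the RHEV-H shell roughly like this (a sketch using the example paths from this post; back up the OVF first and test on a copy of the VM):

# locate OVF files under the mounted export domain
find /rhev/data-center/mnt -name '*.ovf'

# back up the VM's OVF, then flip the disk interface from IDE to VirtIO
cd /rhev/data-center/mnt/<YOUR NFS SHARE>/c2b0fea0-44ad-4969-88dd-0194b39f1442/master/vms/9a7e3456-6f1d-4863-bfd0-8c0dd0a9443b
cp 9a7e3456-6f1d-4863-bfd0-8c0dd0a9443b.ovf 9a7e3456-6f1d-4863-bfd0-8c0dd0a9443b.ovf.bak
sed -i 's/ovf:disk-interface="IDE"/ovf:disk-interface="VirtIO"/' 9a7e3456-6f1d-4863-bfd0-8c0dd0a9443b.ovf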
Red Hat needs to make this a normal function within the RHEV-M interface. In the meantime, I hope this helps!
Best,
Daniel
Hi Daniel,
c2b0fea0-44ad-4969-88dd-0194b39f1442 is your Export SD's UUID, and 9a7e3456-6f1d-4863-bfd0-8c0dd0a9443b is the VM UUID. These numbers can be obtained from the CLI or the API
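For example, a search query against the API should return the VM element whose id attribute is that UUID (a sketch in the style of the curl commands later in this thread; "myvm" is a placeholder name):

curl -X GET -H "Accept: application/xml" -k -u user@domain:"yourpassword" "https://<YOUR_RHEV-M_Server>:8443/api/vms?search=name%3Dmyvm"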
As for the normal procedure, please file an RFE to be able to change the VM interface type, through a support ticket.
Hi Dan.
Yes, an RFE has been filed. But so far there isn't even a KB article. So in the hopes that it helps, here is an easier method that doesn't involve exporting/importing the guest:
Ensure guest has latest RHEV Tools installed.
Shutdown guest.
For the purposes of adding storage driver, add a small (e.g. 8GB) second disk, as VirtIO.
Startup guest.
Confirm the disk is accessible in Disk Management.
Confirm presence of driver:
dir %WINDIR%\system32\drivers\viostor.sys
Shutdown guest.
Optionally, delete the second disk from RHEV-M interface. It has served its purpose.
From any Linux terminal, issue something like this to view all guests and their disks, etc.:
curl -X GET -H "Accept: application/xml" -H "Content-Type: application/xml" -k -u user@domain:"yourpassword" https://<YOUR_RHEV-M_Server>:8443/api/vms
Drill down further to view your guest's disks:
curl -X GET -H "Accept: application/xml" -H "Content-Type: application/xml" -k -u user@domain:"yourpassword" https://<YOUR_RHEV-M_Server>:8443/api/vms/<yourvm id>/disks
Finally, this PUT command changes a disk's interface to "virtio":
curl -X PUT -H "Accept: application/xml" -H "Content-Type: application/xml" -k -u user@domain:"yourpassword" -d "<disk><interface>virtio</interface></disk>" https://<YOUR_RHEV-M_Server>:8443/api/vms/<yourvm id>/disks/<your disk id>
Startup guest. It should just work.
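As a small usage note, the XML these commands return is easier to read when piped through xmllint, if it is installed (a sketch; curl's -s just silences the progress meter):

curl -s -X GET -H "Accept: application/xml" -k -u user@domain:"yourpassword" https://<YOUR_RHEV-M_Server>:8443/api/vms/<yourvm id>/disks | xmllint --format -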
Cheers,
Daniel
Daniel, sorry to be a dolt, but I tried your curl command several different ways and googled the heck out of the error I'm getting, and I still can't get it to work. Can you help me out?
[root@rhevm ~]# curl -X GET -H "Accept: application/xml" -H "Content-Type: application/xml" -k -u RHEVAdmin@mldemo.net:xxxxxx https://192.168.0.31:8443/api/vms
curl: (7) couldn't connect to host
[root@rhevm ~]# curl -X GET -H "Accept: application/xml" -H "Content-Type: application/xml" -k -u RHEVAdmin@mldemo.net:"xxxxxx" https://192.168.0.31:8443/api/vms
curl: (7) couldn't connect to host
Regards,
James
Heh, well, I found I don't have to do this at all. I just went into the RHEV-M Manager and changed both disks on my W2008 VM to VirtIO, and it comes right up.
But it didn't work with W2012; still investigating that. Will try with my XP and Win7 machines.
Am I missing something else simple?
James
So I verified I can do this on my Windows XP, 7, and 2008 VMs. Thanks for the tip, Daniel. It's even easier than you posted:
- Install with IDE drivers
- Install RHEVM Tools
- shutdown
- Add 2nd small disk with virtio driver
- reboot machine, go into Device Manager and make sure the disk is there with virtio driver
- shutdown the machine
- delete the 2nd small disk if desired
- change the interface on the boot disk in RHEVM from IDE to VirtIO
- reboot the VM and it should come up. Windows will probably complain that you have to restart the system, but after that it should be good.
James