Scientific Linux desktop (VDI) in RHEV environment.

Hi folks,

I need some help here regarding VDI setup environment in RHEV.

Scenario: Groups of students need to access Scientific Linux desktops assigned to them as a group. They will access the User Portal and boot up a VM. After they are done with their work, the VM will revert to its original settings, erasing any work they saved locally. Each individual VM will boot up with its own hostname settings; the 10 VMs will not all share the same hostname.

Simple setup but there are some questions.

  1. I intend to create a pool of virtual machines based on a template with Stateless settings in place. Every other setting in the pool remains at its default.

  2. For a pool of 10 VMs, I intend to have each of the VMs boot up with customized settings such as hostname1, hostname2, ..., hostname10. This is where I am stuck.

  3. Stateless VMs don't seem to work.

If provisioning the VMs to students via a virtual machine pool doesn't work as described above, I would appreciate any suggestions for this setup and its requirements. Thanks.

  • Jack

Responses

Welcome, Jack! I think someone here should be able to help out with this.

Hi David. Wow, I am really hoping someone can help me out with this. :(

I'm really stuck on quite a number of factors and the project is not going smoothly, with the deadline a week away. I called up tech support but the call couldn't get through.

I'm pleased to see you're getting some great advice from community experts in this discussion, but a bit concerned that you weren't able to get through to Red Hat Support. If you encounter this problem again, you can always open a support case online here: https://access.redhat.com/support/cases/new/

Yes, David. I'm really glad to be receiving help here. I guess we all ought to be concerned about the support from Red Hat.

I have yet to get updates from them since my last comment on the case more than 30 hours ago, not to mention their support line, which I cannot even get through to.

Back to the feature of template deployment: hopefully we can give feedback to Red Hat to facilitate the automatic deployment of VMs; something the competitors' products can do.

For what it's worth, I've had great experiences with Red Hat support. I've had teams literally around the world support me with some truly nasty problems. I've never had a problem getting through to the North America number, but that number forwards to some of the follow-the-sun sites after hours on my side of the world and I've had some 2AM calls go off in the weeds. But when that happened, the person I was working with gave me a cell number to call so we could still stay connected.

  • Greg

Hi Jack,

I have been providing EL6 virtual desktops with RHEV for a while.

I believe that any VM that is a member of a pool is stateless. If the Pool Type is Automatic, then the state of the virtual disk will be rolled back without manual intervention.

You should seal your master VM before making a template. Instructions for sealing RHEL guests are in the RHEV Evaluation Guide. For virtual desktops, I skip the "touch /.unconfigured" step.
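
For reference, the sealing boils down to something like this on an EL6 master VM (a rough sketch along the lines of the Evaluation Guide, not a substitute for it):

    # Sketch of sealing an EL6 master VM before creating the template.
    # Run as root inside the master VM, then let it shut down.
    rm -f /etc/ssh/ssh_host_*                        # host keys regenerate on next boot
    rm -f /etc/udev/rules.d/70-persistent-net.rules  # drop the recorded MAC-to-eth0 binding
    sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0  # clones get new MACs
    # touch /.unconfigured   # triggers first-boot prompts; I skip this for virtual desktops
    poweroff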

The hostnames of the EL6 guests are assigned via DHCP and DNS.

I may be missing out on the sealing portion; that may be why my VMs don't seem to be stateless.

However, how do I make the pool of 10 VMs boot up as separate machines with individual hostnames, etc.? Will that come with sealing the master image?

The steps are:

  1. Create a VM which will be the master.
  2. Seal the VM as per the Evaluation Guide.
  3. Create a Template from the sealed VM.
  4. Create a Pool from the Template.
  5. Add users to the Permissions for the Pool.
  6. When creating the Pool you specify the number of VMs to create in the Pool. (You can add more later.) Each of the VMs in the Pool is an individual VM.

Pool member VMs are provisioned automatically through the User Portal when a user clicks on the "Play" button for the Pool. At that point, the title of the Pool icon changes from the name of the Pool to the name of the member VM.
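
Incidentally, steps 3, 4 and 6 can also be scripted. A rough rhevm-shell sketch, assuming 3.x syntax; the names (sealed-master, sl6-template, student-pool) and the cluster are made up:

    # Make a template from the sealed master, then a pool of 10 VMs from it.
    # All names here are made up; the master VM must be shut down first.
    rhevm-shell -c -E "add template --name sl6-template --vm-name sealed-master"
    rhevm-shell -c -E "add vmpool --name student-pool --cluster-name Default \
        --template-name sl6-template --size 10"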

I have followed the steps as mentioned above. For point number 2, I have tried 2 scenarios, both with and without touch /.unconfigured when sealing the VM. For point number 5, I have tried both assigning specific users to the pool and assigning user groups to the pool. My users and user groups are in my Active Directory.

Pool VMs are indeed provisioned automatically when the user clicks on them, and the name changes to that of the member VM. However, when I get into the SPICE console, all I can see is the blue screen prompting me to change the language, root password, keyboard layout, DNS configuration, etc. After that, I see no difference between the VMs that are provisioned. All of them have the same hostname, localhost.localdomain. Also, the hostnames are not assigned by the DHCP server, as you mentioned in the previous comment.

I am not sure if we are talking about the same functional requirements, but what I have in mind is to allow the users to provision their VMs automatically and have the VMs boot up with different hostnames, without the need to manually configure the settings.

I believe that if you touch /.unconfigured then you get the configuration prompts that you refer to. If you get configuration prompts from a Pool VM without touch /.unconfigured in the sealing process, then we would have to investigate that further.

I forgot to mention that you should remove the HOSTNAME line from /etc/sysconfig/network when you are sealing the master VM.

I have tried both scenarios, with and without touching /.unconfigured. I have removed the HOSTNAME line as well.

This idea is probably a little off the wall, but what if you made an individual VM for each of those 10 users? Start them all from the same template so they're the same. Log in to each VM as root and set up hostnames, making each VM unique. Shut down and snapshot each VM, so that when a user starts their VM, the snapshot moves forward and you have a point-in-time copy from before the boot. When the user shuts down their VM for the day - I'm not sure how to automate this - come up with a hook that reverts the snapshot at every shutdown, and then maybe another hook that creates a new snapshot at every boot. If this angle could work, then you would have individual VMs that stay virgin.

Or I might be pitching one of those ideas that seems really good late at night, not so good after the sun comes up. And I'm not sure how to automate those preboot snapshots and post-shutdown reverts, but maybe that REST API has something that might work? This capability would seem right up its alley.
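
To make the snapshot angle concrete, here is a rough curl sketch against the RHEV-M 3.x REST API. The URL, credentials and IDs are all made up, and the VM has to be down for the restore:

    # Take a point-in-time snapshot before handing the VM to a user:
    curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
         -d '<snapshot><description>pre-boot</description></snapshot>' \
         https://rhevm.example.com/api/vms/VM-ID/snapshots

    # After shutdown, revert the (stopped) VM to that snapshot:
    curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
         -d '<action/>' \
         https://rhevm.example.com/api/vms/VM-ID/snapshots/SNAPSHOT-ID/restore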

I've heard the arguments in favor of having the DHCP server assign individual hostnames and such. The part nobody talks about is, how does the DHCP server know which hostnames go with which hosts (or with which VMs in this case)? The answer is, by the MAC Address. So in your DHCP server, you have to set up this ugly block of parameters for each MAC Address you care about, and then you have to keep it up with every change in virtual or physical systems. You have to know all the MAC Addresses in advance so you can configure DHCP accordingly. A bunch of VMs might be a little easier to set up this way versus physical systems - except that if each VM in one of these pools is created dynamically, how are you supposed to know their MAC Addresses in advance so you can tell the DHCP server about them? Maybe the MAC Addresses have some kind of predictable pattern? I'm writing this a little bit out of ignorance here because I haven't set up a pool of VMs like this yet.
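
For the record, that "ugly block of parameters" in ISC dhcpd looks roughly like this per VM (the MAC, address and names here are made up):

    host vdi-01 {
      hardware ethernet 00:1a:4a:10:00:01;   # the VM's VNIC MAC address
      fixed-address 192.168.10.101;          # optional IP reservation
      option host-name "hostname1";          # hostname offered to the client
    }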

  • Greg

Hello Greg,

Your comments are certainly helpful in providing some insight into how I can try to overcome this issue.

I believe you are referring to the snapshots described in the first portion as the stateless VMs; that is, everything resets back to the original intended state upon shutdown.

Now, for the DHCP portion, I will need to find out how the whole thing works, such that the VMs are able to get their hostnames changed automatically to the desired ones, be it via MAC addresses or meaningful names. The thing is that I have no idea how this "automatic setup" even works. The sealing of the Linux VMs as stated in the RHEV Evaluation Guide does not provide further details. I'm still seeing blue screens asking for manual input of information. I need the system to proceed automatically without manual intervention from the users.

The auto-allocated MAC address range is specified in the MacPoolRanges property that can be accessed with rhevm-config.

RHEV-M can update a VNIC to a specific MAC address which is outside of the auto-allocation range. When you create a Pool of VMs, you can use a rhevm-shell script to update the MAC addresses to a fixed range.

After you have first set up a new Pool:

  1. decide on the fixed MAC address range
  2. update the VNICs with a rhevm-shell script (see the sketch after this list)
  3. add the MAC addresses to your DHCP configuration file
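
A sketch of those steps, assuming rhevm-shell 3.x syntax, the default VNIC name nic1, and made-up pool and MAC values (pool members are typically named <pool>-1 through <pool>-N):

    # 1. Check the auto-allocation range so your fixed range does not collide:
    rhevm-config -g MacPoolRanges

    # 2. Point each pool VM's VNIC at a fixed MAC (the VMs must be down):
    for i in $(seq 1 10); do
      mac=$(printf '00:1a:4a:fe:00:%02x' "$i")
      rhevm-shell -c -E "update nic nic1 --parent-vm-name student-pool-$i --mac-address $mac"
    done

    # 3. Add the same MACs to the host entries in /etc/dhcp/dhcpd.conf.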

Eventually, you will want to update the VMs in the pool:

  1. remove Pool
  2. create a new Pool with the same name from an updated template
  3. update the MAC addresses of the new Pool with the rhevm-shell script

With this method, you can update the Pool without having to modify the DHCP configuration file.

Yes - the snapshot idea should accomplish the same goal of being stateless if there's a way to make it work. The idea is, if you're trying to reach the sky and one mountain doesn't work, maybe take a look at a different mountain.

On DHCP - the hostnames themselves don't need to be MAC Addresses. The DHCP server needs to know the MAC Address for each VM and then the DHCP configuration has a paragraph for each one with the hostname, maybe an IP Address reservation, and maybe other parameters. Windows DHCP servers might not support some of this stuff, so you may need to use Linux DHCP - but that may create a hassle on your Windows side of the house. So the statement, "The DHCP server can assign the hostname" is literally true, but doesn't tell the whole story.

The act of sealing seems to be pretty much what a system builder would do. Forget virtual machines - let's say you buy a new desktop PC from your favorite vendor and it comes preloaded with RHEL. The system builder probably sealed it the same way the documentation suggests sealing your virtual machines.

Hang on a second - those blue screens asking for information - that's probably what is supposed to happen at first login of a sealed RHEL machine. Just like Windows mini-setup. You fire up your brand new RHEL system and the first thing it does is ask you some mini-setup questions. The behavior makes sense; it just doesn't work for your situation.

Here is an experiment - clone an existing RHEL VM if you have one handy. Disconnect its NIC, or maybe connect it to a different network, then boot it and log in to it. Do that same touch /.unconfigured step and shut it down. Fire it back up and I'll bet you get those same blue-screen mini-setup questions. And if you chase down the logic at boot time, I'll bet the startup scripting uses that empty file named /.unconfigured as a flag to prompt for all that stuff.
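
If you want to chase that logic down yourself, something like this on an EL6 guest should show where the flag gets handled (I'm assuming EL6 paths here):

    # Find the /.unconfigured handling in the EL6 boot scripts:
    grep -n -A 5 'unconfigured' /etc/rc.d/rc.sysinit
    # On EL6, rc.sysinit prompts for keyboard, root password, timezone, etc.
    # when /.unconfigured exists, then removes the file.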

  • Greg

I have yet to find any other way to automatically assign hostnames to self-provisioned machines. I am now manually binding the MAC addresses to the hostnames.

Will proceed to try sealing the machines again in another 10 hours' time.

Yup, I think binding MAC addresses with hostnames is the only way to do it. Apparently these pools have a predictable way to assign MAC Addresses?

I was a little bit loose in my language last night. The word, "hostname" has different meanings depending on context. There are DNS hostnames and the computer hostname. Or maybe a more general way to put it - what I call myself and what everyone else calls me. These are not always the same.

When the DHCP server assigns a hostname, what really happens is, the DHCP server leases an IP Address to the client and then either the DHCP client or the DHCP server tells the DNS server about it, depending on how you set it up. In your case, you probably want the DHCP server to register the client with DNS since you found a way to bind a MAC Address and hostname together. That takes care of what everyone calls you. I'm not sure if the DHCP client will actually modify its own hostname with a hostname command - but you don't care, because this is what you call yourself. If everyone calls themselves the same name, say, localhost.localdomain, but answers externally to, say, myhost{n}.example.com, then you should be OK.
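
In ISC dhcpd terms, the "DHCP server tells the DNS server" piece looks roughly like this (the zone, key and addresses are all made up):

    ddns-update-style interim;   # let dhcpd send the DNS updates itself
    ignore client-updates;       # the server, not the client, registers the name

    key dhcpupdate {
      algorithm hmac-md5;
      secret "base64-secret-here";
    }

    zone example.com. {
      primary 192.168.10.5;      # the DNS server to update
      key dhcpupdate;
    }

    host vdi-01 {
      hardware ethernet 00:1a:4a:10:00:01;
      fixed-address 192.168.10.101;
      ddns-hostname "myhost1";   # the name registered in DNS for this client
    }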

  • Greg

From Jack:

Back to the feature of template deployment: hopefully we can give feedback to Red Hat
to facilitate the automatic deployment of VMs; something the competitors' products can do.

Note that the DHCP issue - and somehow uniquely identifying otherwise identical VMs created from a template - is architectural. Everyone, all competing products, has to deal with this, somehow, some way.

I don't know how the other guys do it, but Aram's comments about essentially setting up a pool of MAC Addresses to go along with the pool of VMs seem like a sensible way to do it.

I'd love to know how this project turns out.

  • Greg

Sorry guys, but I haven't updated this for a while. The Scientific Linux client wasn't bootable after I installed a bunch of applications. No idea why, but I had to re-install the packages from the last known good template. Lesson learned: create multiple snapshots at different stages to identify where things went wrong. I got no issues after re-installing the packages, though.

I am now stuck with USB redirection from client machines to the Scientific Linux guests. I got through to Red Hat Global Support Services (GSS), but they were unable to advise further as SL is not explicitly stated in the supported guest list. Now, has anyone here come across SL guests being able to get USB redirection?

And yes, Greg, I'll be posting a mini write-up after I get this project settled. :)

For USB redirection to an EL6 guest, the VM setting for USB Support should be set to Native.

On the client, USB redirection software has to be installed.

  • For EL6 clients, the package is usbredir.
  • For Windows clients, the program name is UsbClerk. For 64-bit Windows, you have to make sure that you install 64-bit UsbClerk. In case it is not installed automatically, the UsbClerk installer (usbclerk-setup.exe) is in the same folder as virt-viewer.exe. This may be in Program Files or the user's AppData\Local folder.
    When USB redirection is working, the "USB device selection" option should be enabled in Remote Viewer's File menu.
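
On an EL6 client, you can sanity-check the pieces with something like:

    rpm -q usbredir virt-viewer   # both packages should be installed
    lsusb                         # confirm the client actually sees the USB device
    # Then open the SPICE console and check that File > USB device selection
    # is enabled in Remote Viewer.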

Hi guys,

This project has been handed over successfully. Thanks for all the help from you guys. Here's a little write-up of the project.

This project tender was an abrupt one, I would say. I was notified of this project 2 weeks prior to the actual kick-off date, which left me about 1 week to read up and get an idea of RHEV. I did have a pre-sales consultant coming down with me, though ultimately some items needed to be changed half-way through the project.

Hardware: 1x Windows AD (Dell R620), 1x Windows File Server (R620), 1x RHEV Manager (Dell R620), 2x Hypervisors (Dell R815), 2x Disk Arrays (Dell MD3200i), 2x dedicated SAN switches (Cisco C2970), 1x production core switch.

Scope of work: to provide a VDI solution for students, where each VM image is tailored to a particular module. VMs are to be stateless. Users are to be authenticated against AD, with DHCP and DNS features installed. The File Server is to be accessible from the VMs to allow storage of documents. The disk arrays are to provide 10TB, after RAID 5, for the RHEV datastore and the Windows File Server volume.

Each controller in the Dell MD has 2 ports utilized, 1 port to each C2970. Each hypervisor has 2 ports utilized, 1 port to each C2970, as well. The other 2 are reserved for the rhevm and VDI networks. All in all, the network and IP addressing posed no problems.

I had a colleague working with me on the storage portion. Volume provisioning from the MD is relatively easy; not much effort needed. Adding a data domain in RHEV is not difficult either. Of course, we had to decide on naming conventions, since we have 2 MDs and they are separate enclosures. We started with 2 volumes of 500GB, 1 volume from each MD.

Master image preparation was not much of an issue, but we had to think of ways to allow the students to log in to the Scientific Linux VMs using their AD accounts. We installed Centrify Free and it worked for us. DHCP leases were showing up and all was good. Access to the Windows File Server from the VM worked fine too, via the file browser. Out of goodwill, we installed the software packages for the customer according to the list they provided. We did that on a Friday afternoon. All seemed good, but when we came back on Monday, we got a shock. The VM just couldn't start. Panic.

We had to revert to the last known bootable template and install the software packages all over again. Of course, being the paranoid engineers we are, I made multiple snapshots - one every 2 or 3 packages installed. No problems after all the software packages were in, though. :)

Making USB redirection work is a pain, though. We tried SL guests, Windows 7 guests, Windows 7 clients and Windows 8 clients, and it just doesn't seem to work. We have tried disabling firewalls and placing guests and clients in the same subnet, and it still doesn't work. Hopefully the log collector will give Red Hat support some clues. (SL is not supported by GSS, so I had the customer handle this particular case since the project was handed over.)

We had a problem: having all VMs in the pool with the same hostname doesn't seem like a good idea. Following the good suggestions from the guys here, we created a list of VMs in pools with specific MAC addresses and IP addresses assigned to them. Of course, we had to reserve the entries and assign individual hostnames one by one, going down the list of 50 VMs. This method is tested and proven to work, but it is a real test of your ability to keep yourself awake while doing the same thing 50 times.
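
For anyone repeating this, here is the kind of throwaway script I wish I had written up front to generate the 50 dhcpd entries (a sketch; the MAC range, IP range and names are made up and must match what you set on the VNICs):

    #!/bin/bash
    # Generate 50 dhcpd host entries for pool VMs with a fixed MAC range.
    for i in $(seq 1 50); do
      printf 'host vdi-%02d {\n' "$i"
      printf '  hardware ethernet 00:1a:4a:fe:00:%02x;\n' "$i"
      printf '  fixed-address 192.168.10.%d;\n' $((100 + i))
      printf '  option host-name "sl-vdi-%02d";\n' "$i"
      printf '}\n'
    done >> /etc/dhcp/dhcpd.conf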

By the way, some other BUs from the customer want the VMs' hostnames to be the students' user account names. I told them straight to their faces that it isn't going to be a real VDI if that is the case, and don't even talk about the VMs being stateless. Anyway, that's just some "interesting" opinion from down there. You know, organizational red tape and stuff.

UAT and project sign-off took a couple of hours too. We tested explicitly placing hypervisor hosts in maintenance mode, live migrations, shutting down one of the iSCSI ports (we had 2 paths), placing one of the MD controllers into offline mode, etc. The test results were positive.

Oh, I forgot to mention that the documentation - the installation document, UAT document and a simple SAN training document - really drains your mental power. I should have quoted more man-days for that. 6 hours of work felt like an hour of laptop-staring gone just like that, at the snap of your fingers.

The project is done and the stress of not meeting the deadline is gone. It has been handed over and I learned a lot of stuff. The documents took 3 days to complete, and I felt satisfied. (Did someone say working on documentation is @#$!#$?) I'm glad I had help from the RHEV Discussion Group here, with the folks' contributions from real-life experiences.

Wishing smooth projects to all. Cheers.

  • Jack

Thanks for the comprehensive writeup, Jack! Much appreciated. I'm glad the community here was able to assist with your successful project, and hopefully others will find this helpful also.
