Not able to attach NFS storage as a Storage domain


Hi,

I want to add an old NFS storage domain that was previously connected to my old RHEV-H hypervisor, which I have since removed. My VMs are stored on that old NFS storage, and I want to attach it to my new RHEV-H, but I am getting the error message below:

Error in creating a Storage Domain. The selected storage path is not empty (probably contains another Storage Domain). Either remove the existing Storage Domain from this path, or change the Storage path.
(Error code: 361)

Responses

RHEV-M wants to own the file structure/layout on the NFS mount, so if RHEV-M does not have a reference to the domain layout, you routinely get this error message. If RHEV-M once owned the storage domain on the NFS mount, you would be able to import it. But even then, I have had a few issues with RHEV-M not importing domains. For example, you create a RHEV-M instance, destroy it, and re-create it (this includes a wipe of the back-end database, of course), and then try to add/import a domain; sometimes it works, sometimes it does not.

 

I have yet to find any way to force the import of a storage domain that RHEV-M refuses to import. What would be nice is a way to walk the storage domain 'tree' and select VMs to import, per VM. This is one area where RHEV is weak compared to Microsoft Hyper-V or VMware vSphere. Since RHEV 1.0, we have suggested in strong terms to Red Hat to allow more flexibility in storage domain control and recovery.

 

Most of the time, we have to manually pull the VM information files and data files out of the old storage domain, destroy the storage domain structure on the NFS, let RHEV-M create a new structure, and then create empty VMs and drop the VM disks into the structure: a bait and switch, if you will. This is neither easy nor straightforward, validating metadata files by hand is complex and not for the faint of heart, and doing all of this via the CLI under RHEV-M's ownership of the storage domain is error-prone.
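To give a concrete (and heavily simplified) idea of the copy step only, a rough sketch follows. The images/ layout, the paths, and the UUID mapping are assumptions and placeholders, not anything RHEV-M provides, and the metadata still has to be validated by hand as described above.

```python
# Very rough sketch of the "bait and switch" copy step: move disk volumes from
# an orphaned domain tree into the image directories of freshly created empty
# VMs. Assumes a file-domain layout of <domain_uuid>/images/<image_uuid>/ and
# a mapping you have worked out by hand; all paths and UUIDs are placeholders.
import shutil
from pathlib import Path

OLD_DOMAIN = Path("/mnt/old-nfs/9b5ae493-fb37-4c29-afed-e6323a8bc2f8")
NEW_DOMAIN = Path("/mnt/new-nfs/<new-domain-uuid>")  # placeholder

# Old image-group UUID -> image-group UUID of the empty VM created in RHEV-M.
IMAGE_MAP = {
    "<old-image-uuid>": "<new-image-uuid>",  # fill in from your own records
}

def copy_images():
    for old_uuid, new_uuid in IMAGE_MAP.items():
        src = OLD_DOMAIN / "images" / old_uuid
        dst = NEW_DOMAIN / "images" / new_uuid  # assumed to exist already
        for volume in src.iterdir():
            # Copies volumes and their metadata files as-is; the metadata still
            # has to be checked (and possibly edited) by hand, as noted above.
            shutil.copy2(volume, dst / volume.name)

if __name__ == "__main__":
    copy_images()
```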

The error means you already have a RHEV domain at the same export path. Basically, it's a directory with a UUID-like name. Remove it, or change the export path, and it will work.
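For reference, that check amounts to roughly the sketch below; the mount point is a placeholder, and the UUID pattern is only an illustration of what a storage domain directory name looks like.

```python
# Rough sketch of what the "path is not empty" check amounts to: list the
# export directory and flag UUID-named entries (a likely existing storage
# domain) plus anything else (e.g. lost+found) that makes the path non-empty.
import os
import re

UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def inspect_export_path(path):
    """Print what RHEV-M would find at the export path."""
    for entry in sorted(os.listdir(path)):
        if UUID_RE.match(entry):
            print(entry, "- looks like an existing storage domain")
        else:
            print(entry, "- stray content (e.g. lost+found); path is not empty")

if __name__ == "__main__":
    inspect_export_path("/mnt/old-nfs-export")  # hypothetical mount point
```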

See my other answer in this thread; it explains what RHEV is looking for in NFS exports. The assumption is that if a similar directory already exists, there might be data there used by another RHEV-M instance, and we do not want to delete that.

 

 

Proper importing will work for export domains, but existing data domains are not meant to be imported or exported; their structure and metadata are not built for that.
 

Why don't you use export domains?

Hi Dan,

 

You mean I have to change this folder name "9b5ae493-fb37-4c29-afed-e6323a8bc2f8"? After that, can I get all my VMs back?

 

What do you mean by "get back"? Does the domain already hold VMs from another RHEV instance, or from other domains in this RHEV instance?

Once a storage domain is orphaned, there is no clean way to recover it. RHEV-M will tell you that you must re-initialize the domain, which wipes everything out, or you have to bait and switch, renaming the UUID and creating a new tree structure. Neither of these is easy to deal with; even if you know how to copy the VMs back into the new tree, RHEV-M is less than forgiving.

 

What RHEV-M should do is let you select an existing UUID via the RHEV-M GUI, thus instructing RHEV-M to re-parse the tree and re-discover the VMs. Or let you import each VM from a damaged or orphaned tree into a new tree if needed.

 

The idea that you can only wipe out an entire tree that is damaged or orphaned, including any VMs in it, is a weak design. This has been a documented issue since we evaluated the RHEV 1.0 beta.

 

I guess it just has not been a priority for Red Hat to address. Understandable during RHEV 2.0, but missing from 3.0? Not understandable from our perspective. Microsoft Hyper-V and especially VMware have various methods to recover orphaned VMs that are easy and straightforward; RHEV does not. This is a significant gap.

 

This NFS storage was connected to my old RHEV-M, which has already been deleted, and now I want to attach this storage to my new RHEV-M.

If attaching it is not possible, how can I import my VMs from this storage?

I ran into the same issue when my NFS mount point was also a volume exported from my RHEV-M host (lab environment).

  • I.e. I created a volume and mounted it as /export/nfs, which obviously has lost+found. Apparently RHEV-M would find lost+found in that base directory and determine it was not "empty".
  • I had to tell my NFS configuration to use /export/nfs/blah (or whatever) instead; see the sketch after this list.
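Something along these lines prepares a dedicated, empty subdirectory to export instead of the filesystem root. A sketch only: the path is just an example, and the 36:36 (vdsm:kvm) ownership is what vdsm usually expects, so verify it for your version.

```python
# Minimal sketch: create a dedicated, empty subdirectory to export for RHEV
# instead of the filesystem root (which contains lost+found).
import os

EXPORT_DIR = "/export/nfs/rhev"  # hypothetical path; adjust to your layout

os.makedirs(EXPORT_DIR, exist_ok=True)
os.chown(EXPORT_DIR, 36, 36)   # vdsm:kvm - verify these IDs on your hosts
os.chmod(EXPORT_DIR, 0o755)
# Then export this subdirectory in /etc/exports instead of /export/nfs itself,
# so lost+found never appears in the path RHEV-M sees.
```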

 

A VM is more than a set of disks; to be discovered, it also requires additional metadata: the VM name, which disk images belong to it, how much RAM was allocated, what CPU configuration was used, etc. A RHEV Data Domain does not necessarily hold this kind of data, because that is the job of the RHEV-M database. This is why, if a domain is orphaned, it turns into a collection of disk images, without much of a way to find out which image belonged to which VM.

 

When you use export domains, an OVF file is generated for every VM, with the VM description and disk affinity; this is why export domains can be easily moved between setups, attached, detached, and whatever else you want to do with them.
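As a rough illustration, assuming the usual export-domain layout of master/vms/<vm_uuid>/<vm_uuid>.ovf under the domain directory (check your own tree before relying on this), a sketch like the following can list which VMs an export domain describes:

```python
# Sketch of why export domains are portable: each VM gets an OVF descriptor
# with its name, devices and disk references. Layout and path are assumptions.
import xml.etree.ElementTree as ET
from pathlib import Path

def list_exported_vms(domain_root):
    # Assumed layout: <domain_uuid>/master/vms/<vm_uuid>/<vm_uuid>.ovf
    for ovf in sorted(Path(domain_root).glob("master/vms/*/*.ovf")):
        tree = ET.parse(ovf)
        # The VM name sits in a "Name" element; namespaces vary between
        # versions, so match on the local tag name only (a heuristic).
        names = [e.text for e in tree.iter()
                 if isinstance(e.tag, str) and e.tag.endswith("Name") and e.text]
        print(ovf.parent.name, "->", names[0] if names else "<no name found>")

if __name__ == "__main__":
    list_exported_vms("/mnt/export-domain/<domain-uuid>")  # placeholder path
```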

 

This design has some flaws, of course, but the pros usually outweigh them, because under normal conditions a domain should not become an orphan; and if you have a backup of the database and the storage is intact, it is not hard to restore a setup to an operational state.

If this was a simple data domain and not an export domain, there is no supported way to import the VMs. The domain should only contain VM images, while your RHEV-M database had all the VM definitions. Without both, all you have is an image with no way of knowing where it belongs.

Was this with 3.0? Because with 2.2, it would only check the export path for UUID-like directories, ignoring everything else.

 

Generally speaking, of course, it is better to keep the export path clean; it should be dedicated to RHEV anyway.

This is the very point I was discussing above, in my earlier reply. The re-use of storage domains that are populated is very... how to say it... quirky. We routinely have issues with RHEV-M not honoring existing domains, and this has nothing to do with export versus non-export domains. There is no way to force RHEV-M to honor an existing domain it decides it does not like. This has been an issue to some degree since RHEV 1.0, when we first identified it.

 

To be honest, I don't like the UUID-based model. I understand it and I see the benefit; I just don't like it, and it drives the operational teams completely nuts. It is too abstracted from human understanding without extensive time spent with the design. This is an expensive learning-curve issue for us.

 

VMware's use of named directories to find VMs, and to re-import VMs individually on demand, allows for a more operationally friendly option if a datastore is lost or not recognized. RHEV-M needs the same option, or the same level of administrative flexibility. To be blunt, the idea of a 'normal' storage domain versus an 'export' storage domain is goofy and seems to be an afterthought in the overall RHEV-M design. Yes, I understand why it was done, but given the maturity of other solutions on this point, the oVirt Project design could have been improved. Instead, it feels like it was designed to be different on purpose, just so it would not be similar to what VMware and, to an extent, Hyper-V have done in the past.

 

There should also be a way to force explicit selection of the master storage domain on demand, overriding the auto-discovery, or better yet, get rid of the master-based concept completely. VMware and Hyper-V both have more operationally friendly ways to control storage and VM discovery, although Hyper-V is a bit more abstracted than VMware.

I have not attempted the same task with the recent GA release; I just changed my approach ;-)  I can test it out again on a new build in a few days and see if the behavior remains.

 

thanks

You make quite a few valid points here, and the "master" paradigm is getting reworked, though it's quite a serious overhaul, so I am not sure it'll be ready soon. Probably in the V3 or V4 storage version.

 

I do not, however, understand why you run into so many storage issues. A typical RHEV setup does not require anyone to look at the storage internals; everything is managed by RHEV-M and the hosts, with no need to touch anything manually. Maybe you're doing something non-trivial that requires special treatment? If so, I would like to hear some details; maybe those will turn into future features that improve the way we do storage. Seriously though, "VMware does it in a different way" is not a valid argument here, I'm sure you can see that.

Glad to hear (or see) that the storage model is getting some TLC. I should provide a bit of context on our experience. First, I am part of the emerging technologies group within the firm I work for, so we routinely focus on future feature sets and try to influence key features, requesting the same of vendors/developers/publishers where it will benefit our strategic objectives. We often focus on a 'roadmap' 12 to 24 months into the future.

 

As for RHEV issues, we created a number of issues on purpose to stress and evaluate RHEV 3.0, and this exposed how easy it is to confuse RHEV with respect to the creation, import, and control of storage domains. As I noted, we have been evaluating RHEV since the 1.0 beta. We have noted a number of issues since then, some of which have been addressed and others have not. The storage model is one area that needs rework, IOHO (in our humble opinion).

 

In reference to the RHEV storage domain design, we would suggest the following:

1) Eliminate the export domain versus storage domain concept; a storage domain is a storage domain, period.
2) If the current UUID model is to be retained, establish logical symbolic links with human-friendly names, so when operations needs to get into the weeds, they can, and they understand what they see.
3) Using OVF is not efficient unless OVF is changed to allow references to VM disks, avoiding always embedding VM disk data. Embedding VM disk data is not needed if a VM is moved from one domain to another on the same storage frame, especially in a cold-migration scenario. Maybe when RHEV supports live storage migration, more to the point a long-distance live migration feature where the back-end storage changes, this could be part of that scope.
4) Learn from others: VMware, XenServer, Hyper-V, Parallels, OpenVZ, LXC, etc. As 'The Art of War' suggests... know your competition... what did they get right, in concept, that supports easier support and recovery? Ease of use when things go wrong always translates into lower TCO and higher ROI. And things always go wrong; that is the nature of IT. :)

 

Thus, the idea of an export domain, for example, is completely out of line with what other solutions provide. RHEV will always be judged against other solutions. So, although I can understand your perspective about VMware, I think it blinds Red Hat to opportunities when you do not target a similar feature set, with the same ease-of-use elements, and then move beyond it to an improved design. When something was done well, leverage it. Right now, RHEV is not as easy to use or support as other environments, and this must be addressed; especially when things break, ease of use is critical. A real, factual example: XenCenter/XenServer is in the same situation as RHEV. When XenCenter works it is smooth, but when XenCenter runs into issues it is horrible to resolve them, forcing use of the CLI, and the XenCenter/XenServer CLI has pitfalls and gaps in ease of use. This significantly impacts its potential adoption from our perspective, to the extent that we have routinely disqualified XenServer from consideration as a parallel solution to VMware.

 

Faster, Better, Cheaper is fine, as long as Better is given equal time to Faster and Cheaper. VMware gained market share by focusing on Better, meaning ease of use, to a significant degree (and it took them a while to get it right, to be sure). However, we expect Red Hat to get it right, (cough) Better, faster than VMware or Microsoft did!

 

We like RHEV, but we see great need for improvement, especially from an operational perspective: ease of use in broken scenarios. RHEV has come a long way, but it will be compared to VMware forever; unless, some day, it is the leading solution based on ease of use, and then the tables will be turned.

Thanks for this elaborate feedback; this kind of feedback is exactly what we are looking for.

 

 

The current structure was not selected just to be different from VMware; it was selected to leverage existing, commonly used open-source storage, specifically LVM and NFS, neither of which was designed (then or now) to serve as a clustered file system. This was done in a way that allows considerable scale (200 hosts per cluster at the moment, and that is the QE'd limit).

 

 

Fortunately, since RHEV has been in the wild for some time now, we are aware of most of the valid issues you have raised. The master domain concept is to be changed, making each storage domain stand-alone. This will allow a mix of different domain types (FC, iSCSI, NFS, etc.) in the same data center. You will be able to move storage domains with their VMs between data centers and RHEV setups. The export domain is planned to be changed to be used only for inter-platform exporting and importing (in RHEV 3 it already only exposes an NFS structure, and in the future it will have a simple file structure). Live storage migration and more.

 

 

You can read more at:  http://ovirt.org/wiki/Storage_-_oVirt_workshop_November_2011

 

 

Thanks again for this feedback. It is clear that it comes from years of experience, and it assures us that we are on the right track.