Trouble creating an export storage domain

Hello,

I am trying to create an export storage domain so that I can import VMs from our production VMware system for testing.

I have spent half the day on this doing what I think needs to be done, but I am having a LOT of trouble for sure...

So I have one of my hypervisor hosts - I have 4 Dell 2950s in my lab that I am testing this on. One of them has 3 extra disks in it, configured in RAID 5 (hardware RAID), and I would like to use that array for the export storage domain. I identified the disk (lsscsi was VERY useful).

Then I created a partition on the disk with fdisk and set the type to 83. Writing the changes worked fine. This is where things begin to come unglued.
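
For reference, this is roughly the fdisk session (from memory - /dev/sdb here stands in for whatever lsscsi showed for the RAID 5 array):

fdisk /dev/sdb       # n = new primary partition 1, accepted the defaults
                     # t = change type, entered 83 (Linux)
                     # w = write the table and exit
partprobe /dev/sdb   # re-read the partition table so the kernel sees /dev/sdb1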

I then tried to make an ext3 FS on it. I can't - it keeps saying the device is busy, and everything I see on Google points to it being part of a software RAID... It's NOT. Never has been... it used to have a windoze FS on there, NTFS. Guess I will try to low-level format it using the RAID controller next.
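
These are more or less the commands involved (again from memory, device names approximate):

mkfs.ext3 /dev/sdb1   # fails, says the device is busy / in use by the system
cat /proc/mdstat      # no md arrays at all, so it is NOT software RAID
dmsetup ls            # wondering if device-mapper (multipath?) has grabbed the disk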

I did manage to partition it and put an EXT2 FS on it with parted. But then I realised that the mount point I had created was gone, as the hypervisor nodes are not persistent. So I can persist the mount point, I guess, but then when I try to mount it, I can't, as it says it is already mounted! Just going around in circles.
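
Roughly what I tried for the persistence part (I believe RHEV-H has a persist command for keeping files across reboots, though I may have the usage slightly off):

mkdir /data          # the mount point - gone again after a reboot
persist /etc/fstab   # supposed to keep the fstab entry across reboots
mount /data          # and this is where it says it is already mounted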

So basically I thought that with this extra space not doing anything, and not needing to be shared, I could put the NFS EXPORT domain there. But there seem to be so many obstacles that I am starting to wonder if I am doing it the right way. Am I, or is it stupid to try and put this on the hypervisor? I guess I could try to put it on the RHEV-M system, but that isn't very big and I may have some big VMs to import. Also, when I tried to create an export storage domain, it listed the hosts there, so I assumed it belonged on a host - and now I am not sure.

Thanks
Bill

Responses

Hey Bill - I still struggle with the Storage Domains (and I have been using RHEV for almost 3 years and am certified). I don't mess with them enough and they do not seem intuitive to me at times - so, don't be too hard on yourself ;-)

It sounds like you have a lot of flexibility with your configuration.

When I am building out my test lab, I do the following:
* Install RHEL on the system with several disks (the 2 smallest disks, mirrored, for the OS; the remaining disks will be used for VM images and your NFS storage domains) - hostname: rhel6a. Make everything as redundant/resilient as possible (mirror disks, bond network interfaces, etc.). I recommend using LVM for your 2nd LUN; this will provide greater flexibility.
* Install KVM on rhel6a
* Build the RHEV Manager in a VM on rhel6a. I typically put my RHEV Manager on the same volume as my OS - leaving all available I/O to the NFS storage domain you will export. Be sure to size it large enough to host your ISO-Domain (I create a separate volume for ISOs to allow for growth later)
** Select NFS as your data center storage choice
* Create your NFS storage domain export and assign 36:36 (vdsm:kvm) for the share permissions. Share it out to your Hypervisor (subnet or specific IPs)
* Build the remaining systems using RHEV-Hypervisor or RHEL (rhel6b, rhel6c, rhel6d)
* Attach your Hypervisors to your RHEV-Manager and activate the storage domains (ISO-Domain and Data Domain)

On rhel6a (I am doing this from memory, so be sure to validate my suggestions, and there may be some missing steps)
NOTE: if the parted command complains about the alignment, change it to "ext3 2048s 100% set 1 lvm on"

# partition the data disk and set it up for LVM
parted -s /dev/sdb mklabel gpt mkpart primary ext3 0 100% set 1 lvm on
partprobe /dev/sdb
pvcreate /dev/sdb1
vgcreate vg_RHEV /dev/sdb1

# data domain: LV, filesystem, persistent mount (owned by vdsm:kvm), NFS export
lvcreate -L100g -nlv_datadomain vg_RHEV
mkfs.ext4 /dev/mapper/vg_RHEV-lv_datadomain
echo "/dev/mapper/vg_RHEV-lv_datadomain /exports/nfs/datadomain ext4 defaults 1 2" >> /etc/fstab
mkdir -p /exports/nfs/datadomain && mount /exports/nfs/datadomain && chown 36:36 /exports/nfs/datadomain
echo "/exports/nfs/datadomain   0.0.0.0/0.0.0.0(rw) #rhev data domain" >> /etc/exports

# export domain: same pattern on its own LV
lvcreate -L100g -nlv_exportdomain vg_RHEV
mkfs.ext4 /dev/mapper/vg_RHEV-lv_exportdomain
echo "/dev/mapper/vg_RHEV-lv_exportdomain /exports/nfs/exportdomain ext4 defaults 1 2" >> /etc/fstab
mkdir -p /exports/nfs/exportdomain && mount /exports/nfs/exportdomain && chown 36:36 /exports/nfs/exportdomain
echo "/exports/nfs/exportdomain 0.0.0.0/0.0.0.0(rw) #rhev export domain" >> /etc/exports

exportfs -a

The RHEV Getting Started Guide does a great job explaining how to get up and running (and I believe it even focuses on NFS). Another option is iSCSI, which works great for a situation like yours.
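
If you want to try iSCSI instead, a rough sketch on RHEL 6 would look something like this (again from memory - the IQN and LV name are made up, so validate before using):

yum install scsi-target-utils    # provides tgtd, the iSCSI target daemon
lvcreate -L100g -nlv_iscsi vg_RHEV
cat >> /etc/tgt/targets.conf <<'EOF'
<target iqn.2013-01.com.example:rhev.datadomain>
    backing-store /dev/mapper/vg_RHEV-lv_iscsi
</target>
EOF
service tgtd start && chkconfig tgtd on
tgt-admin --show                 # confirm the target and LUN are visible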

I, personally, would not put an NFS EXPORT domain on the Hypervisor. Even in a pinch. I tend to leave all management on the Hypervisors to the RHEV Manager (firewall, shares, etc.) so there is no conflict. If this were going to be a production environment, I would do this a bit differently.

Hey James, thanks for that! Really appreciate it. I was wondering if I shouldn't have just put RHEL on one host and loaded KVM - I can certainly do that anyway, and then it looks like it should be pretty straightforward. I am guessing that RHEV-H must be messing with what I am trying to do on it.

Regarding the Hypervisor - some reasons I leave it be:
* networking foo (bridging, IPtables, etc...)
* Volume Group management
* user management

All of those areas are impacted by RHEV. In my environment I run RHEL as a Hypervisor, and I learned the hard way that you can't mess with sudoers... or try to manipulate a VG/LV that is "owned" by RHEV ;-)
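
A quick way to see what RHEV considers its own (these are standard LVM options, but validate on your version):

vgs -o vg_name,vg_tags   # storage domain VGs carry RHEV's tags - leave those alone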