LVM problems on RHEL with KVM installed

Latest response


I am pretty much completely out of ideas now. I have been away from Linux for a while and kind of missed out on the whole LVM thing.

I have read a LOT of documents and am trying to implement something now.

I have a RHEV lab and one of the hosts I have loaded RHEL with the hypervisor installed.

The host is a Dell 2950 and it has two RAID groups in it: a boot RAID 1 of 67GB and a RAID 5 of 576GB that I did not include in the drives used by RHEL during the basic server install.

This host is connected to a SAN as well.

So basically, I want to use the local RAID 5 for an NFS export: add it to an LVM volume group so that I can then create my NFS export on it.

The problem is that I cannot do a pvcreate on this device.

[root@rhel-h-215 ~]# pvcreate -v /dev/sdq
DEGRADED MODE. Incomplete RAID LVs will be processed.
Device /dev/sdq not found (or ignored by filtering).
Wiping cache of LVM-capable devices

I have googled this error extensively and tried everything. Following the Red Hat KB, I have done

parted -s /dev/sdq mklabel msdos

pvcreate /dev/sdq

I have also tried making a GPT partition and deleting partitions. Basically, I ONLY get the device-not-found message. I also don't have any filtering; it is the standard lvm.conf, though I did try changing some values.

vgscan does not show my device. I guess that is the problem... but why? And also, when I try to pipe the output of vgscan to more, less, a file, etc., it doesn't work; it doesn't notice I sent it to more, and I can't grep the output either??? I am not getting a warm comfy feeling from LVM at this point.
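A likely explanation for the piping confusion (an assumption on my part, not something the posted output confirms): LVM tools print their informational and diagnostic messages on stderr, and a bare pipe only carries stdout. Merging the two streams makes the messages pageable and greppable:

```shell
# vgscan's progress messages go to stderr; a plain pipe sees nothing.
# Redirect stderr into stdout before piping:
vgscan 2>&1 | more
vgscan -v 2>&1 | grep -i "volume group"
```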

Can anyone maybe give me some direction?



Hey Bill - some of us helped a person a few months back with a similar issue (but it's going to take me a while to find it). In the interim: try installing gparted (sorry to direct you to a GUI ;-). It's a GUI for managing disks, but it also has an area that explains what it is doing in the background.

Also - using a "whole disk" for LVM is perfectly fine; I, however, prefer to first put a partition on it (old habit, I guess):

parted -l | grep ^Disk
parted -s /dev/sdq mklabel msdos mkpart primary ext3 0% 100% set 1 lvm on
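Assuming pvcreate eventually succeeds, the rest of the chain toward the NFS export could look roughly like this. The volume group, logical volume, and mount point names (nfs_vg, nfs_lv, /export/nfs) are placeholders of mine, not anything from this thread:

```shell
pvcreate /dev/sdq1                       # the partition created above
vgcreate nfs_vg /dev/sdq1                # hypothetical VG name
lvcreate -l 100%FREE -n nfs_lv nfs_vg    # one LV spanning the RAID 5
mkfs -t ext3 /dev/nfs_vg/nfs_lv
mkdir -p /export/nfs
mount /dev/nfs_vg/nfs_lv /export/nfs
# then export it, e.g. by adding a line like this to /etc/exports:
#   /export/nfs *(rw,sync)
exportfs -ra
```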

One thing that came to mind: since you have a SAN connected, you have likely implemented multipath. However, you probably have not excluded the Dell controller (and its LUNs). I would search for "Dell /etc/multipath.conf" or "Dell /etc/multipath.conf RHEL", etc.
If I recall correctly, I have had similar issues in the past when multipath would "own" the device and basically lock it. If you happen to be a Solaris guy, recall the situation when Veritas and Solaris would "fight" over the devices you were trying to manage.

This is a bit challenging to explain in this small of an area.
* Check multipath to see if the device is currently included. If it is, it will show as a "dm" device. Here is an example I found online that should be accurate:

 36782bcb070070100166a7a1640a66db2 dm-0 DELL,PERC 6/i

Run the multipath command to see all devices and find the "dm-*" device you need to remove.

multipath -ll -v2
find /dev | egrep 'sdq|dm-'
dmsetup remove /dev/mapper/dm-0

At this point you should be able to manage your device again.
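As an aside, if dmsetup feels too low-level, multipath can flush its own map. This is a sketch using the example WWID from above; substitute whatever map name `multipath -ll` actually reports on your host:

```shell
multipath -ll                                      # note the map name wrapping the PERC volume
multipath -f 36782bcb070070100166a7a1640a66db2     # flush just that one map
pvcreate -v /dev/sdq                               # should now reach the device
```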

To make sure it doesn't happen again, you can add the following to your /etc/multipath.conf (please find some examples online to know exactly where):

blacklist {
      device {
              vendor "DELL"
              product "PERC.*"
      }
}
So - try that and see what happens. The more I think about it, the more it reminds me of the multipath issue I had in the past. You may want to also research the specific multipath configuration for the Array you are attaching to (i.e. HDS or whatever). The "vanilla" multipath.conf is generally pretty good with the defaults, but sometimes you need to tweak a few settings.
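After editing multipath.conf, the running daemon still holds the old maps, so a flush/reload cycle along these lines is usually needed (service name assumed to be multipathd on RHEL):

```shell
multipath -F                  # flush all unused multipath maps
service multipathd reload     # re-read /etc/multipath.conf
multipath -v2                 # rebuild maps; blacklisted devices stay out
multipath -ll                 # the DELL/PERC entry should be gone
```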

EDITING TIP: To post your code to the forum, wrap it in three tildes (~~~) ;-)

~~~
code goes here
~~~

James, thank you! Multipath was exactly the problem! I blacklisted all Dell.* devices and pvcreate now works. I am not sure exactly how I will fold this into my current system... but at least I am making progress again!