RHBA-2007:0046 - Bug Fix Advisory
lvm2-cluster bug fix update
Updated lvm2-cluster packages that include various bug fixes are now
available.
The lvm2-cluster packages contain support for logical volume management in
a clustered environment.
This update ensures that the bugs fixed by lvm2 advisory RHBA-2007:9103 and
device-mapper advisory RHBA-2007:9100 are also fixed in a clustered
environment.
Additionally, this update:
- Changes the default 'locking_type' from 'external shared library' (which
has the value 2) to 'built-in' (value 3). This is stored in the 'global'
section of '/etc/lvm/lvm.conf'. You can use 'lvmconf --enable-cluster' and
'--disable-cluster' to toggle this setting. Check this after installing or
updating this package. All machines in your cluster should use the same
setting.
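With cluster locking enabled, the 'global' section of '/etc/lvm/lvm.conf'
contains a setting like the following (a sketch; the comments are
illustrative, not part of the shipped file):

global {
    # 3 = built-in cluster-wide locking (the new default for lvm2-cluster);
    # 2 = external shared-library locking (the previous default)
    locking_type = 3
}

Running 'lvmconf --enable-cluster' or 'lvmconf --disable-cluster' rewrites
this value for you.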
- Adds new configuration parameters 'fallback_to_clustered_locking' and
'fallback_to_local_locking'. The default settings still allow you to
manipulate local Volume Groups when your cluster is unavailable.
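A sketch of how these parameters might appear in the 'global' section of
'/etc/lvm/lvm.conf' (the values shown reflect the defaults described above;
treat the exact layout as illustrative):

global {
    # If cluster-wide locking cannot be initialised, fall back first to
    # clustered locking via the external library, then to local file-based
    # locking, so local Volume Groups remain manageable without the cluster.
    fallback_to_clustered_locking = 1
    fallback_to_local_locking = 1
}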
- Fixes several bugs in 'clvmd' that can cause the daemon to hang or crash.
- Fixes some cases where a locking bug allowed metadata to be read
incorrectly whilst it was being changed.
- Displays more detailed error messages in many cases and suppresses a few
superfluous ones.
- Fixes 'vgsplit' to handle clustered Volume Groups correctly and require
them to be resizeable.
- Adds debugging messages to 'clvmd' if invoked with '-d'.
- Prevents one node from removing a Logical Volume if it is in exclusive
use on another node.
- Adds a startup timeout command line option '-T' to 'clvmd'.
- Adds a 'reinitialise' command line option '-R' to 'clvmd' to instruct a
running daemon to reinitialise its internal caches after devices have
changed externally. (This option does not always work.)
- Changes the 'status' exit code from the 'clvmd' initialisation script to
bring it into line with similar scripts.
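The convention the updated script follows is the standard one for init
scripts: 'status' exits 0 when the daemon is running and non-zero (3 for
"not running") otherwise. A minimal sketch of such a check, assuming a
hypothetical pidfile location for illustration:

```shell
# Sketch of an init-script-style status check (illustrative only;
# the pidfile path is an assumption, not taken from the package).
status_clvmd() {
    pidfile=/var/run/clvmd.pid
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "clvmd is running"
        return 0
    else
        echo "clvmd is stopped"
        return 3    # conventional "program is not running" status code
    fi
}

status_clvmd
echo "exit code: $?"
```

This is the behaviour bug 187271 asked for: scripts such as 'service clvmd
status' can now be trusted in shell conditionals.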
- Updates the 'clvmd' initialisation script to wait for 'clvmd' to exit
before restarting it and to use the new '-T' option.
All users of lvm2-cluster should upgrade to these updated packages, which
include these bug fixes.
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
To update all RPMs for your particular architecture, run:
rpm -Fvh [filenames]
where [filenames] is a list of the RPMs you wish to upgrade. Only those
RPMs which are currently installed will be updated. Those RPMs which are
not installed but included in the list will not be updated. Note that you
can also use wildcards (*.rpm) if your current directory *only* contains the
desired RPMs.
Please note that this update is also available via Red Hat Network. Many
people find this an easier way to apply updates. To use Red Hat Network,
launch the Red Hat Update Agent with the following command:
up2date
This will start an interactive process that will result in the appropriate
RPMs being upgraded on your system.
- Red Hat Enterprise Linux Resilient Storage 4 (x86_64)
- Red Hat Enterprise Linux Resilient Storage 4 (ia64)
- Red Hat Enterprise Linux Resilient Storage 4 (i386)
- BZ - 187271 - `service clvmd status` returns 0 when clvmd is NOT running (should return non 0)
- BZ - 187812 - clvmd init script should time out and fail if there's no quorate cluster
- BZ - 208131 - service clvmd restart fails with connect fail on local socket
- BZ - 209370 - node shouldn't be able to remove an LV if another node has it activated exclusively
- BZ - 210511 - vgsplit can cause clvmd to not propagate vg activation cmds
- BZ - 213484 - RFE: lvremove man page should mention that LVs need to now be deactivated before deleting
- BZ - 213737 - an attempt to stop clvmd when it's already stopped will cause invalid VG errors
- BZ - 214529 - vgremove reports LV exists after all lvs have been deleted
- BZ - 232707 - The clvmd daemon may be deactivated with open vgs
- BZ - 235092 - clvmd operations timeout under heavy load == bad for recovery scenarios