RHEL 7.x: KickStart LVM provisioning issues

Latest response

I have created a customized kickstart script that dynamically provisions disk(s) based on some simple logic & a template. Installation fails with the error 'Unable to allocate requested partition scheme.' Please see the attached screenshots at the bottom of the post, which show that all disks are selected.

Looking at anaconda.log, it seems the errors start immediately after the 'is_valid_stage1_device(sda)' function call (which returns 'true'):

stage1 device cannot be of type lvmvg
stage1 device cannot be of type lvmlv

I know some of the anaconda API has changed:
- A breaking change in 6.7 introduced issues when using --percent & --grow without --size
- A regression bug in 7.0 affected the same configuration parameters
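For reference, the shape of the workaround for that --percent/--grow change is to keep an explicit minimal --size on the volume. This logvol line is purely illustrative (it is not part of my config):

```
# Illustrative workaround: give --grow/--percent volumes a minimal explicit --size
logvol /data --vgname=rootvg --name=datalv --size=1 --grow --percent=80
```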

I see this with RHEL 7.0, 7.1 & 7.2. The configuration shown below works fine with 6.5, 6.6 & 6.7.

Am I missing something in the API for LVM partitions? Any help, tips & resources are appreciated.

The following /tmp/disks configuration file is generated in %pre based on the number of physical disks & system information, then pulled into the kickstart with %include /tmp/disks.

# Zero the MBR
zerombr

# Clear out partitions for sda
clearpart --all --initlabel --drives=sda

# Create a /boot partition on sda of 500MB
part /boot --size=500 --fstype="ext4" --ondisk=sda

# Create an LVM partition of 172337MB on sda
part pv.root --size=172337 --ondisk=sda --grow --asprimary

# Create the root volume group
volgroup rootvg --pesize=4096 pv.root

# Create a memory partition of 31962MB
logvol swap --fstype="swap" --name="swaplv" --vgname="rootvg" --size=31962

# Create logical volume for the / mount point
logvol / --fstype="ext4" --name="rootlv" --vgname="rootvg" --size=102400

# Create logical volume for the /var mount point
logvol /var --fstype="ext4" --name="varlv" --vgname="rootvg" --size=40960

# Create logical volume for the /export/home mount point
logvol /export/home --fstype="ext4" --name="homelv" --vgname="rootvg" --size=10240

# Create logical volume for the /tmp mount point
logvol /tmp --fstype="ext4" --name="tmplv" --vgname="rootvg" --size=2048

# Create new physical volume on sdb as pv.optapp.1
part pv.optapp.1 --size=78741 --grow --ondisk=sdb

# Create new physical volume on sdc as pv.optapp.2
part pv.optapp.2 --size=100352 --grow --ondisk=sdc

# Create new physical volume on sdd as pv.optapp.3
part pv.optapp.3 --size=100352 --grow --ondisk=sdd

# Create new physical volume on sde as pv.optapp.4
part pv.optapp.4 --size=78741 --grow --ondisk=sde

# Create a new volume group with all the physical volumes
volgroup optappvg pv.optapp.1 pv.optapp.2 pv.optapp.3 pv.optapp.4 --pesize=4096

# Create a new logical volume for /opt/app mount point
logvol /opt/app --fstype="ext4" --name="optapplv" --vgname="optappvg" --size=268639

Kickstart halts here

As you can see, all disks are currently selected despite the error shown in the previous screenshot.

Looking at the same disks under RHEL 6.5, 6.6 & 6.7, they are all shown not just as in use, but with the root/boot partition validated as correct.

And the error message is less than informative about why it is failing.


The following resolved the issue:

clearpart --all --initlabel --drives=sda,sdb,sdc,sdd,sde

I dynamically get the network (NET_KS) and storage (DEV_KS) devices with the following in %pre, if this helps ...

I.e., setting and using the DEV_KS in the clearpart ... --drives= option might be helpful.
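To make the fix dynamic rather than hard-coding sda,sdb,sdc,sdd,sde, %pre could build the --drives list itself. This is only a sketch under my assumptions: the function name filter_disks and its skip list are illustrative, not from the original post.

```shell
#!/bin/sh
# filter_disks - print a comma-separated clearpart --drives list from
# candidate device names, skipping pseudo/removable devices (sketch).
filter_disks() {
    local out="" b=""
    for b in "$@" ; do
        case "${b}" in
            loop*|ram*|sr*|dm-*|md*) continue ;;    # not clearpart targets
        esac
        out="${out:+${out},}${b}"
    done
    echo "${out}"
}

# In %pre, feed it the whole-disk entries under /sys/block:
#   DRIVES_KS="$(filter_disks $(ls /sys/block))"
#   echo "clearpart --all --initlabel --drives=${DRIVES_KS}" > /tmp/disks
```

The emitted clearpart line then covers every disk the installer will touch, which is exactly what resolved the allocation error above.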


## %pre - Functions

# fndSysDev - find a device, in decreasing order of preference (first = most, last = least)
#   $1  = root path under /sys -- e.g., 'block', 'class/net', etc...
#   $2+ = list (space separated) of devices, in decreasing order of preference
#   ret = echoes the first device found, empty if none
fndSysDev() {
    local P="$1" ; shift
    local l="" d="" r=""
    if [ "${P}" != "" ] ; then
        for l in "$@" ; do
            # Expand glob patterns (e.g. enp*) against /sys/${P}, not the CWD
            for d in /sys/${P}/${l} ; do
                [ -d "${d}" ] && r="${d##*/}" && break 2
            done
        done
    fi
    echo "${r}"
}

# NET_KS - Network Device
export NET_KS="$(fndSysDev 'class/net' eno1 eth0 enp*)"

# DEV_KS - Storage Device
export DEV_KS="$(fndSysDev 'block' nvme0n1 vda sda)"
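Since fndSysDev echoes an empty string when nothing matches, it may be worth failing fast in %pre rather than feeding an empty device name into the kickstart. A small guard sketch (require_dev is my illustrative name, not part of the original script):

```shell
#!/bin/sh
# require_dev - fail fast when device discovery came back empty (sketch)
#   $1  = device name discovered by fndSysDev (may be empty)
#   ret = 0 if a device was found, 1 otherwise
require_dev() {
    if [ -z "$1" ] ; then
        echo "No supported device found" >&2
        return 1
    fi
}

# In %pre: require_dev "${DEV_KS}" || exit 1
```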

This, of course, won't work for multi-disk (md) devices, including much firmware/fake software RAID (FRAID) that gets picked up by md, or when md should be set up for kernel software RAID.