Max size of filesystem for kickstart?


I am attempting to kickstart a RHEL 7 VM (on VMware 6.0) with a 10 TB sda. If I reduce the size to 1.9 TB, it kickstarts just fine.

Is there a cap somewhere that limits the size of a kickstart sda for a RHEL 7 host? Is there a way to get around this that doesn't involve just growing the disk after the system is kickstarted?

Responses

Mike,

What error/symptoms are you seeing when you attempt to provision with the 10TB sda?

After netboot, during disk configuration it complains about an incorrect disk setup, suggesting there was not enough space and that we needed a "biosboot" partition.

The problem appears to be BIOS versus EFI boot type. Documentation I've found suggests that a BIOS/MBR layout has a 2 TB cap and EFI (with GPT) does not. I'm sorting out the required EFI boot files to see if this is true.
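One quick way to confirm which firmware path the installer actually took is to check for the EFI directory in sysfs from a %pre script. This is a minimal sketch, and the /tmp/boot-mode file name is just an illustration:

%pre
# /sys/firmware/efi only exists when the system booted via UEFI
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot" > /tmp/boot-mode
else
    echo "legacy BIOS boot" > /tmp/boot-mode
fi
%end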

We asked Red Hat for an answer on this, and they provided the following...

==== begin quote

Options:

- Perform the partitioning and LVM setup in %pre and exclude any partitioning or clearpart from the partition section of the kickstart.
- Or configure smaller partitions in the kickstart, then use %post to increase their size.

%post
# Grow the logical volume (and, with -r, its filesystem) into all remaining free space
lvextend -r -l +100%FREE /dev/disk0/6Tvol
%end

(Or something much more complex, based on your needs.)

==== end quote
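For the second option, the kickstart side just defines the large logical volume well under its final size and lets the quoted %post grow it. A minimal sketch, reusing the disk0/6Tvol names from the example further down this thread (the mount point and size are placeholders):

# Create 6Tvol smaller than its target size here...
logvol /hugelogicalvolumemountpoint --name=6Tvol --vgname=disk0 --size=102400
# ...then the %post lvextend above grows it into the volume group's free space after install.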

Below is another option that we used at our location as well, under a one-time, unique situation. (Credit for what follows goes to my kind co-worker.)

It goes without saying that you have to build and initialize the physical RAID first, but I have found that "fast initialization" is anything but fast. In my experience it is as much as 10 times faster not to use fast initialization. It is much better to create the RAID, start the full initialization separately, and wait until it has finished completely, rather than trying to build on a RAID that is available but not fully initialized (as you can if you fast-initialized it). That said, we have found that we had to use "fast" with some vendors' RAID array BIOSes.

(NOTE: We generally do not make our primary OS disk as huge as shown below. This is one example based on a very unique situation we were directed to handle; change it to whatever disk/partition layout you need.)

%pre
#!/bin/sh
# Relabel sda as GPT so partitions larger than 2 TB are possible
/usr/sbin/parted -s /dev/sda mklabel gpt
%end
  • You have to use clearpart --linux instead of clearpart --all (because you don't want to clear the GPT label just written in %pre); see the example below.
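A minimal sketch of that line; the --drives restriction to sda is an illustration, not part of the original post:

# Remove only existing Linux partitions; per the note above, this leaves the GPT label from %pre in place
clearpart --linux --drives=sda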

Then you can set your partitions as we always have, with the exception that the physical extent size has to be large enough for the largest desired logical volume. For example, we needed a 6 TB volume in the above-mentioned kickstart, so I had to set the physical extent size to 128 MiB (--pesize is given in KiB, so 131072 in the third line below).

IMPORTANT NOTE: CHANGE BELOW TO ACCOMMODATE EFI

part /boot --fstype=ext4 --size=500 --ondisk=sda --asprimary
part pv.20 --size=100 --grow --ondisk=sda
volgroup disk0 --pesize=131072 pv.20
# If you have a lot of memory you may want to set a reasonable fixed swap size instead of using --recommended
part swap --recommended --asprimary
# Then set your desired logical volumes (some examples below)
logvol / --name=slash --vgname=disk0 --size=20480
logvol /desiredmountpoint --name=desiredLVname --vgname=disk0 --size=25600
logvol /hugelogicalvolumemountpoint --name=6Tvol --vgname=disk0 --size=614400
logvol /reserve --name=ReserveRemainingSpace4FutureUse --vgname=disk0 --size=2000 --grow

ADDED: Change the filesystem type as needed, e.g. use xfs instead of ext4 if you are on RHEL 7 (xfs is of course the default there).
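Regarding the "CHANGE BELOW TO ACCOMMODATE EFI" note: a UEFI install also needs an EFI System Partition mounted at /boot/efi (and no biosboot partition). A hedged sketch of how the top of the block could change; the 200 MiB size is a common choice, not a requirement:

# EFI System Partition, required for UEFI booting
part /boot/efi --fstype=efi --size=200 --ondisk=sda
part /boot --fstype=xfs --size=500 --ondisk=sda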

apologies for the multiple edits...

I was able to resolve this by adding this one line to the kickstart script (on a GPT-labeled disk booted in legacy BIOS mode, GRUB needs a small BIOS boot partition to embed itself into):

part biosboot --size=1

In context, the partitioning lines now look like this:

part /boot --fstype=ext4 --size=1000
part pv.008002 --grow --size=1
part biosboot --size=1
