Partition table format

Latest response

"Would someone care to give us an example or two?" This is one of the most useless pieces of "documentation" I have ever read.


Hi James,

I'm a member of the Satellite Documentation Team. We're planning on adding more conceptual information and examples to the Satellite 6 documentation in the future. I've created a bug to add more details to the provisioning table section. Are there any particular kinds of examples or information you would find useful?


Basically anything. Right now the section has zero information. How do we define partitions? What about LVM groups? What are the commands?

When I click "New Partition Table" I'm presented with two blank text boxes. The help simply says: "Enter the Layout for the Partition Table. The Layout textbox also accepts dynamic disk partitioning scripts." It doesn't tell us what any of that means, nor does it link to information that would.

Hi James,

Thanks for the feedback and for taking the time to respond. I've added your comment to the bug. So far I can see we definitely need to include detailed commands, definitions, and more concrete examples. If there's anything specific you feel should be included beyond that, feel free to let me know and I can pass it along to the writer who works on improving that section. Any examples of use cases would also be really helpful, as they help us better understand how customers are using the product.

Thanks again,


The partition layouts seem to be identical to those you might use in any other RHEL kickstart. I literally copied and pasted the partition layout from my RHEL 7 kickstart on Sat 5.6 into the partition layout box on Sat 6. The full syntax for kickstart partitioning is in the various RHEL Installation Guides (chapter 23 in the RHEL 7 Installation Guide).
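For anyone arriving here without a Sat 5 kickstart to copy from, a minimal LVM-based layout of the kind that can be pasted into the partition table box might look like the following. Sizes, the volume group name, and the mount points are illustrative only, not a recommendation; the syntax is standard kickstart as covered in the Installation Guide chapter mentioned above.

```
zerombr
clearpart --all --initlabel
part /boot --fstype=xfs --size=1024
part pv.01 --fstype=lvmpv --size=1 --grow
volgroup vg_sys --pesize=4096 pv.01
logvol /    --fstype=xfs  --size=10240 --name=slash --vgname=vg_sys
logvol swap --fstype=swap --size=4096  --name=swap  --vgname=vg_sys
logvol /var --fstype=xfs  --size=1 --grow --name=var --vgname=vg_sys
```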


Glad to hear it. That'll make conversion at least a little easier.

I'm not assuming anything is the same with Sat 6, which is why the lack of clarity in the documentation is so frustrating.

I'm curious whether anyone has experience with what partitioning layout worked well for Satellite 6.x on RHEL 7.x.

Summary: We have about 2 TB of storage available (a separate array away from the operating system) and want to provision enough storage that we don't have to rebuild the server for storage needs any time soon. We're hoping those who have had actual time with a Satellite server can tell us what sizes they gave their partitions and what worked. We will use LVM, but are hoping for an idea of what has worked for others over time.

I found what's recommended at installation, but the partition sizes listed at the link below only show 1) the minimum sizes for installation and 2) some allowance for minimal growth (except when syncing all RHEL channels from RHEL 5-7). However, I'd really rather build a Satellite server once and not have to add more partitions later.

I've discovered that with Satellite partitioning, there's a difference between what's recommended and what actually happens in the field.

I do see the following in the Red Hat docs for Satellite installation:

    Packages that are duplicated in different repositories are only stored once on the disk. Additional repositories containing duplicate packages will require less additional storage. The bulk of storage resides in the /var/lib/mongodb/ and /var/lib/pulp/ directories. These end points are not manually configurable. Make sure that storage is available on the /var file system to prevent storage issues.

For each directory the docs give an installation size, a runtime size with RHEL 5, 6, and 7 synchronized, and considerations:

/var/cache/pulp/
Installation size: 1 MB; runtime size: 10 GB (minimum)
Considerations: See the notes in this section's introduction.
Has anyone had any experience with the size of this over time?

/var/lib/pulp/
Installation size: 1 MB; runtime size: 200 GB
Considerations: Will continue to grow as content is added to Satellite Server. Plan for expansion over time. Symbolic links cannot be used.
Has anyone had any experience with the size of this over time?

/var/lib/mongodb/
Installation size: 3.5 GB; runtime size: 25 GB
Considerations: Will continue to grow as content is added to Satellite Server. Plan for expansion over time. Symbolic links cannot be used. NFS is not recommended with MongoDB.
Has anyone had experience with this partition's growth over time?

/var/log/
Installation size: 10 MB; runtime size: 250 MB
Considerations: None.
We're obviously going to make /var/log/ bigger than 250 MB; it would be madness to use that. The /var/log sizing we normally use will, from experience, work fine.

/var/lib/pgsql/
Installation size: 100 MB; runtime size: 2 GB
Considerations: A minimum of 2 GB of available storage in /var/lib/pgsql/, with the ability to grow the partition containing this directory as data storage requirements grow.
Even 2 GB seems rather minimalist. Has anyone had any experience with the size of this over time?

/usr
Installation size: 3 GB; runtime size: not applicable
Considerations: None.

/opt (connected installations)
Installation size: 500 MB; runtime size: not applicable
Considerations: Software collections are installed into the /opt/rh/ and /opt/theforeman/ directories. Write and execute permissions by root are required for installation into the /opt directory.

/opt (disconnected installations)
Installation size: 2 GB; runtime size: not applicable
Considerations: Software collections are installed into the /opt/rh/ and /opt/theforeman/ directories. Write and execute permissions by root are required for installation into the /opt directory. A copy of the repositories used for installation is stored in this directory.
Has anyone had any experience with the size of this over time?

I copied/pasted all of the above from the documentation. It seems somewhat minimal. If anyone has some idea of what they actually used, it would be greatly appreciated. For the obvious ones like /var/log we will follow suit with our normal partitioning; the ones I'm concerned about are Satellite-specific.
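In the meantime, one low-tech way to answer the "how big does this actually get" question on an existing server is simply to measure the directories the docs call out. A monitoring sketch (the directory list is taken from the sizing table above; run it as root on the Satellite server so du can descend everywhere):

```shell
# Report current usage for each directory given as an argument; directories
# that don't exist are reported rather than silently skipped.
check_dirs() {
    for d in "$@"; do
        du -sh "$d" 2>/dev/null || echo "$d: not present"
    done
}

# The Satellite-specific directories from the sizing table above.
check_dirs /var/cache/pulp /var/lib/pulp /var/lib/mongodb /var/lib/pgsql /var/log
```

Run it periodically (for example from cron) and keep the output; after a few months of syncing, you have real growth numbers instead of the documented minimums.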

Thanks, R. Hinton

Hello, I recommend reading this guide, which explains some partitioning schemes.

Hi Lukas,

Thanks for the link, but I couldn't access it; I tried 3 different accounts I have access to, all with valid, current subscriptions. In all three instances, I received "access denied" at the link you mentioned.

The link actually said:

Access Denied

You do not have permission to access the page you requested.

If this is knowledge content, it may be unpublished or retired. 
Documents may be retired when they are outdated, duplicated, 
or no longer necessary. Please try searching the Customer 
Portal for the most current, published information.

We've read the documentation already and have taken our best guess at this. We were hoping that in this discussion I re-activated, someone might post their own results and lessons learned.

While the "life of the party" seems to be under /var, we were curious about the breakdown, and we'd rather --not-- provision minimalist partitioning just because it's the lowest documented method on paper. We really do want an idea of what heavy-handed partitioning would look like, so that we don't have to partition again for the foreseeable future. It's not that we don't have storage; we have plenty. We would rather measure twice and cut once, and were hoping someone in the field who has already faced some pain could help.

Thanks R. Hinton

That doc (which was the 2015 Performance Tuning Guide) was deprecated and replaced with the 2016 version, available here, so our apologies for that.

Regarding your questions above:

  • /var/cache/pulp - This is the Satellite 6 equivalent of /var/cache/rhn (if you are familiar with Sat 5). It is where Satellite stores data it is working on before that data is added to permanent storage in /var/lib/pulp. This data is transient by design, so the recommendation of 10 GB might be a bit small (there are a few repos larger than 10 GB), but this isn't a 'grow over time' directory; it's a 'size it for the high watermark' directory.
  • /var/lib/pulp - This is the hardest directory to plan for, as it depends entirely on 1) which repos you download and 2) which RPMs are duplicated between repos. What I have found helpful is a tool like pulp-planner, which lets you size /var/lib/pulp based on the repos you actually plan to download.

And one directory that appears to have been missed is /var/lib/qpidd/. Plan for 2 MB per client that runs katello-agent (the 2 MB is the reserved space for each client's durable messaging queue). And now I'll file a BZ on that.
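That rule of thumb is easy to turn into a back-of-envelope number; a sketch, with a purely hypothetical client count:

```shell
# Size /var/lib/qpidd assuming ~2 MB of reserved durable-queue space per
# katello-agent client, per the note above. CLIENTS is a made-up fleet size.
CLIENTS=5000
QPIDD_MB=$((CLIENTS * 2))   # 2 MB per client
echo "/var/lib/qpidd needs roughly ${QPIDD_MB} MB for ${CLIENTS} clients"
```

So a 5,000-client fleet needs on the order of 10 GB in /var/lib/qpidd before you even start on the other directories.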

Apologies for providing a dead link; thanks, Rich, for correcting me.

Unfortunately, even with all the documentation updates, it's still fairly lacking. What should be entered if users are not using kickstart/Satellite for provisioning systems and partitioning is handled at the template level? Can I just copy/paste my fstab contents?

What "templates" are you referring to?

So you're just talking about deploying virtual machines using an existing machine as a template? If this is the case you wouldn't be doing anything with the 'partition table' items within Satellite. Your deployed virtual machines would just have the same disk layout (same actual disk really) as the template. The partition table configuration items within Satellite are only pertinent to kickstart provisioning.

Right, that was the plan at first, but it's a requirement to have a Partition Table selected under Host Groups... Is "Kickstart default" a safe option?

Ah, right, that old goofiness. Yes, if you are not actually doing kickstart provisioning this will actually never have an effect so it's all safe.

Do we have any specific documentation for partition tables that name multipath devices?

I put in a case a long time ago on this. After much discussion, I closed the case, and went with this, which is excessive overkill, but we're good with excessive overkill.

Filesystem                           Type      Size   Used  Avail  Use%  Mounted on
/dev/mapper/raid1_disk0-slash        xfs       85G    1.4G  84G    2%    /
/dev/mapper/raid1_disk0-usr          xfs       27G    2.8G  24G    11%   /usr
/dev/sda1                            xfs       1014M  191M  824M   19%   /boot
/dev/mapper/raid5_varsat-opt         xfs       254G   930M  253G   1%    /opt
/dev/mapper/raid5_varsat-growlvm     xfs       66G    33M   66G    1%    /growlvmsdb
/dev/mapper/raid5_varsat-var         xfs       4.1T   940G  3.2T   23%   /var
/dev/mapper/raid1_disk0-tmp          xfs       37G    139M  36G    1%    /tmp
/dev/mapper/raid1_disk0-varspool     xfs       13G    35M   13G    1%    /var/spool
/dev/mapper/raid1_disk0-home         xfs       7.9G   394M  7.6G   5%    /home
/dev/mapper/raid1_disk0-growlvm      xfs       66G    33M   66G    1%    /growlvmsda
/dev/mapper/raid1_disk0-vartmp       xfs       21G    33M   21G    1%    /var/tmp
/dev/mapper/raid1_disk0-varlog       xfs       57G    1.9G  55G    4%    /var/log
/dev/mapper/raid1_disk0-varlogaudit  xfs       25G    67M   25G    1%    /var/log/audit

From the kickstart...

part biosboot --fstype="biosboot" --ondisk=sdb --size=1
part pv.221 --fstype="lvmpv" --ondisk=sda --size=570751
part /boot --fstype="xfs" --size=1024
part pv.227 --fstype="lvmpv" --ondisk=sdb --size=4576253
volgroup raid5_varsat --pesize=4096 pv.227
volgroup raid1_disk0 --pesize=4096 pv.221
logvol /var/log/audit  --fstype="xfs" --size=25000 --fsoptions="nodev,nosuid" --name=varlogaudit --vgname=raid1_disk0
logvol /home  --fstype="xfs" --size=8096 --fsoptions="nodev" --name=home --vgname=raid1_disk0
logvol /growlvmsda  --fstype="xfs" --size=67600 --name=growlvm --vgname=raid1_disk0
logvol /growlvmsdb  --fstype="xfs" --size=67600 --name=growlvm --vgname=raid5_varsat
logvol /tmp  --fstype="xfs" --size=37000 --fsoptions="nodev,nosuid" --name=tmp --vgname=raid1_disk0
logvol /var/spool  --fstype="xfs" --size=13000 --fsoptions="nodev" --name=varspool --vgname=raid1_disk0
logvol /var  --fstype="xfs" --grow --size=100 --fsoptions="nodev" --name=var --vgname=raid5_varsat
logvol /usr  --fstype="xfs" --size=27288 --name=usr --vgname=raid1_disk0
logvol /var/log  --fstype="xfs" --size=58000 --fsoptions="nodev" --name=varlog --vgname=raid1_disk0
logvol /var/tmp  --fstype="xfs" --size=21000 --fsoptions="nodev" --name=vartmp --vgname=raid1_disk0
logvol /  --fstype="xfs" --size=86665 --name=slash --vgname=raid1_disk0
logvol swap  --fstype="swap" --size=16064 --name=swap --vgname=raid1_disk0
logvol /opt  --fstype="xfs" --size=260000 --name=opt --vgname=raid5_varsat

Sorry, that probably does not specifically address multipathing Balaji. That being said, we have no issues with space.



Thanks RJ.

Because the kickstart documentation says not to use device names (for example, mpatha or sda) and to provide the WWIDs of the disks instead. But each server has different WWIDs, and we can't create a separate partition table for every server.

I recommend using labels, LVM, or UUIDs, never the raw device. The kickstarts I've made for the Satellite servers use the LVM-based partitioning cited in my previous post. Even if multipath is active, there are links to deal with that, which multipath creates by default.

It's a bad idea to have raw device names in any kickstart or /etc/fstab, because if one partition gets deleted for whatever reason, the other partitions can end up mounted on different mount points.

Another example of this: let's say you have SAN (storage area network) presented storage. If you mount it with "/dev/sdd1" instead of a UUID and someone moves it to a port on a different SAN switch, the device name will change. The UUID, however, will not change (unless someone reformats the LUN). I use UUIDs for SAN-presented storage for that reason.

To summarize, using one of these three methods, 1) labels, 2) LVM, or 3) UUIDs, avoids this.
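To make the UUID approach concrete, here is what the two styles look like side by side in /etc/fstab (the device name, mount point, and UUID are made up for illustration; `blkid /dev/sdd1` prints the real UUID):

```
# Fragile: breaks if the SAN path changes and the kernel renames the device
/dev/sdd1                                  /data  xfs  defaults  0 0

# Stable: survives device renames; only reformatting the LUN changes the UUID
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /data  xfs  defaults  0 0
```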



Hi, I notice this has been closed, but if you need documentation on dynamic disk partitioning, I found this to be very useful.

It has good examples to help you get a handle on how it all works. I've started to use it, especially for calculating swap sizing and for same-layout, different-size situations.
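For anyone who can't reach that link, the general shape of a Foreman/Satellite dynamic partition table is a shell script whose first line is #Dynamic and which writes the kickstart partitioning it computes to /tmp/diskpart.cfg, from where the install template picks it up. A minimal sketch; the swap-equals-twice-RAM policy and all sizes here are just examples, not recommendations:

```shell
#Dynamic
# Compute swap as 2x installed RAM, then emit a kickstart layout to the
# file the installer includes (/tmp/diskpart.cfg).
MEM_MB=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
SWAP_MB=$(( MEM_MB * 2 ))
cat > /tmp/diskpart.cfg <<EOF
zerombr
clearpart --all --initlabel
part /boot --fstype=xfs --size=1024
part swap --size=${SWAP_MB}
part / --fstype=xfs --size=1024 --grow
EOF
```

Because the script runs on the target host at install time, it can branch on disk count, memory, or anything else visible in %pre, which is exactly the "same layout, different sizes" case above.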

Regards David