Separate file systems / volumes / partitions for the OS?

I was wondering what folks are doing these days as far as creating separate file systems (volumes, partitions)? In the past I have typically liked to separate out:

/
/var
/usr
/home
/opt
/tmp
/boot

into separate LVM volumes and file systems on, say, a RHEL 6 system.
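
For concreteness, a layout like that can be expressed at install time in a RHEL 6 kickstart. The sizes and names below are purely illustrative, not a sizing recommendation:

    # /boot stays a plain partition; everything else is an LVM logical volume
    part /boot --fstype=ext4 --size=500
    part pv.01 --size=1 --grow
    volgroup vg_root pv.01
    logvol /     --vgname=vg_root --name=lv_root --size=8192 --fstype=ext4
    logvol /usr  --vgname=vg_root --name=lv_usr  --size=8192 --fstype=ext4
    logvol /var  --vgname=vg_root --name=lv_var  --size=4096 --fstype=ext4
    logvol /opt  --vgname=vg_root --name=lv_opt  --size=4096 --fstype=ext4
    logvol /tmp  --vgname=vg_root --name=lv_tmp  --size=2048 --fstype=ext4
    logvol /home --vgname=vg_root --name=lv_home --size=4096 --fstype=ext4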

The arguments for doing this:
1. filling up /var, /tmp, or /home still doesn't fill up your root partition /
2. ease of monitoring / diagnosing full file systems
3. fsck generally completes faster on smaller file systems
4. security: in theory you could mount /usr read-only (see the fstab sketch after this list)
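
As a rough illustration of point 4, separate file systems let you set restrictive mount options per mount point in /etc/fstab. The device and volume names here are hypothetical:

    # /usr read-only; /tmp hardened against device nodes, setuid, and execution
    /dev/vg_root/lv_usr  /usr  ext4  defaults,ro                   1 2
    /dev/vg_root/lv_tmp  /tmp  ext4  defaults,nodev,nosuid,noexec  1 2

The catch with a read-only /usr is that package updates then need a "mount -o remount,rw /usr" first, which is part of why it tends to stay theoretical.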

I'm thinking about this in the context of a virtual machine of some sort.

In general it seems like doing this is more trouble than it's worth these days, especially with cloud-type deployments. Most cloud providers seem to give you a straight-up "/" and that is it for the OS. If you want a scheme with separate volumes like the above, it can be done, but you generally have to jump through a bunch of hoops, either importing images into the cloud or doing surgery on their images.

In general I'd like to stay away from customizing OS cloud images as much as possible, and instead use the cloud-provided image and apply customization on top of it. The more VM / cloud solutions one uses, the more time one spends updating images.

So I guess I'm looking for what folks think is "best practice" these days. I'm leaning toward living with one big root partition for the OS.

Thanks!

Responses

It all depends on what you are trying to achieve. If you need to be, say, PCI DSS compliant, you will need at least two more file systems:

/var/log
/var/log/audit
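
On a system already using LVM, those could be carved out after the fact with something like the following; the volume group name vg_root and the sizes are placeholders:

    # dedicated logical volumes for logs and audit logs
    lvcreate -L 4G -n lv_log   vg_root
    lvcreate -L 2G -n lv_audit vg_root
    mkfs.ext4 /dev/vg_root/lv_log
    mkfs.ext4 /dev/vg_root/lv_audit
    # then migrate the existing contents (ideally from single-user mode)
    # and add the matching /etc/fstab entries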

Why more file systems are appropriate in the corporate world - some ideas:

  1. Separation of management of the O/S from applications;
  2. Easier backup management (decide what to back up and what not);
  3. Simpler and fine-grained system monitoring;
  4. Updates to the O/S do not affect applications;
  5. Data growth for applications does not affect the O/S and vice versa (a one-line lvextend sketch follows this list);
  6. File system separation is required by various standards;
  7. Analogous to how humans organize their storage at home (I doubt anyone would put all socks, shirts, and hardware tools in one drawer);
  8. Re-deployment of O/S does not affect applications and can be treated separately.
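
To illustrate point 5: with separate logical volumes you can grow just the file system that is filling up, online, without touching anything else. A minimal sketch, assuming an LVM2 layout with free extents in the volume group (names hypothetical):

    # grow only the application volume by 10G; -r resizes the file
    # system in the same step; / and the other volumes are untouched
    lvextend -r -L +10G /dev/vg_root/lv_opt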

Best wishes and good luck

We don't have any choice on whether to partition: unless a system's partitioning meets security guidelines, it won't be allowed onto production networks.

That said, at least in the context of AWS, reliably creating a compliant, partitioned AMI is both dead easy and quick. No real need to "jump through a bunch of hoops either importing images into the cloud or doing surgery on their images". All you need to do is a chroot'ed install to an attached storage object that has been partitioned via your desired means (typically LVM).
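
In outline, such a build looks something like the sketch below. The device names, volume group layout, and package set are all assumptions for illustration, not a definitive recipe:

    # on a build instance, against a freshly attached volume (/dev/xvdf)
    parted -s /dev/xvdf mklabel msdos mkpart primary 1MiB 500MiB mkpart primary 500MiB 100%
    mkfs.ext4 /dev/xvdf1                  # /boot stays outside LVM
    pvcreate /dev/xvdf2
    vgcreate vg_root /dev/xvdf2
    lvcreate -L 8G -n lv_root vg_root
    lvcreate -L 4G -n lv_var  vg_root
    mkfs.ext4 /dev/vg_root/lv_root
    mkfs.ext4 /dev/vg_root/lv_var

    # assemble the target tree and install into the chroot
    mount /dev/vg_root/lv_root /mnt/ami
    mkdir -p /mnt/ami/boot /mnt/ami/var
    mount /dev/xvdf1 /mnt/ami/boot
    mount /dev/vg_root/lv_var /mnt/ami/var
    yum -y --installroot=/mnt/ami groupinstall core

    # then fix up fstab and the bootloader inside the chroot,
    # unmount, snapshot the volume, and register the AMI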

The length of time it takes will depend on how quickly you can do a yum-based install. With our process, it's less than 20 minutes from the time the process is kicked off until the AMIs are being copied to all the regions we deploy systems to.

The biggest problem with LVM in such environments comes if you try to treat such instances like cattle rather than pets. Fixing an instance's root volume group becomes problematic if you try to mount it on another host built from the same AMI - particularly when it comes to UUID collisions.
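
For what it's worth, LVM2 ships vgimportclone for exactly that duplicate-UUID situation; a rough rescue sketch, with the device and volume group names assumed:

    # on the rescue host, after attaching the broken instance's volume as
    # /dev/xvdf: give the cloned PVs/VG fresh UUIDs and a new name so they
    # don't collide with the rescue host's own vg_root
    vgimportclone --basevgname vg_rescue /dev/xvdf2
    vgchange -ay vg_rescue
    mount /dev/vg_rescue/lv_root /mnt/rescue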

I concur with Tom; we've experienced what he describes. Oh, and (speaking solely about partitioning) we use LVM to achieve this. Also see this discussion: https://access.redhat.com/discussions/641923
