File system size different from LV size

Hi all,

I am trying to build a highly available cluster. This is the procedure I followed to create the six file systems:

I first created the physical volumes using the multipath user-friendly names:

pvcreate /dev/mapper/mpathau  etc. 

Then I created the VGs:

vgcreate -c y ggs_vg /dev/mapper/mpathau /dev/mapper/mpathav /dev/mapper/mpathat /dev/mapper/mpathas etc

And then I created the LVs:

lvcreate -n ggs_lv -l 100%FREE ggs_vg

Finally, I created a GFS2 file system on each LV:

mkfs -t gfs2 -p lock_dlm -t golgat00:ggs_fs -j 8 /dev/ggs_vg/ggs_lv
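
A quick way to cross-check that each LV really got the expected size before trusting df (a minimal sketch, using the ggs volume as an example):

lvs --units g /dev/ggs_vg/ggs_lv          # the size as LVM reports it
blockdev --getsize64 /dev/ggs_vg/ggs_lv   # the raw block device size in bytes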

I added them all as cluster resources and configured a service group for each, so that each file system can be managed individually. Everything looks fine at the cluster level:

[root@node1 ggs_vg]# clustat
Cluster Status for golgat00 @ Thu Aug 13 23:02:31 2015
Member Status: Quorate

 Member Name                                  ID   Status
 ------ ----                                  ---- ------
 node1                                        1    Online, Local, rgmanager
 node2                                        2    Online
 /dev/mapper/mpathawp1                        0    Online, Quorum Disk

 Service Name                                 Owner (Last)           State
 ------- ----                                 ----- ------           -----
 service:ggs_sg                               node1                  started
 service:oradata01_sg                         node1                  started
 service:oraflsh01_sg                         node1                  started
 service:orared01_sg                          node1                  started
 service:orared02_sg                          node1                  started
 service:p1edm1d5_sg                          node1                  started
[root@node1 ggs_vg]#

But my problem is that the file system sizes are not correct. If I check the file systems:

[root@node1 ggs_vg]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              59G  4.4G   52G   8% /
tmpfs                 253G   62M  253G   1% /dev/shm
/dev/sda1             283M   79M  190M  30% /boot
/dev/sda6             198G   60M  188G   1% /data1
/dev/sda5             9.8G   24M  9.2G   1% /home
/dev/sdb1             275G   63M  261G   1% /data2
/dev/mapper/tools_vg-tools_lv
20G   44M   19G   1% /opt/tools
/dev/mapper/swm_vg-swm_lv
15G   38M   14G   1% /opt/app/swm
/dev/mapper/p1edm1f2_vg-p1edm1f2_lv
197G   60M  187G   1% /opt/app/p1edm1f2
/dev/mapper/ggs_vg-ggs_lv
52G  1.1G   51G   2% /opt/app/p1edm1d5/ggs
/dev/mapper/oradata01_vg-oradata01_lv
52G  1.1G   51G   2% /opt/app/p1edm1d5/oradata01
/dev/mapper/orared01_vg-orared01_lv
52G  1.1G   51G   2% /opt/app/p1edm1d5/oraredo01
/dev/mapper/orared02_vg-orared02_lv
52G  1.1G   51G   2% /opt/app/p1edm1d5/oraredo02
/dev/mapper/oraflsh01_vg-orafls01_lv
52G  1.1G   51G   2% /opt/app/p1edm1d5/oraflsh01
/dev/mapper/p1edm1d5_vg-p1edm1d5_lv
52G  1.1G   51G   2% /opt/app/p1edm1d5

This has nothing to do with the logical volumes I configured:

[root@node1 ggs_vg]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  ggs_vg         4   1   0 wz--nc 415.98g    0
  oradata01_vg  11   1   0 wz--nc   3.40t    0
  oraflsh01_vg   5   1   0 wz--nc 899.98g    0
  orared01_vg    8   1   0 wz--nc  47.97g    0
  orared02_vg    8   1   0 wz--nc  47.97g    0
  p1edm1d5_vg    4   1   0 wz--nc  51.98g    0
  p1edm1f2_vg    4   1   0 wz--n- 199.98g    0
  swm_vg         1   1   0 wz--n-  15.00g    0
  tools_vg       1   1   0 wz--n-  20.00g    0
[root@node1 ggs_vg]#

These are the correct sizes I should see, but every file system shows as 52 GB, which is unexpected. I tried creating a fresh GFS2 file system on the logical volumes. That resolved my problem only temporarily: weirdly, the file system size was normal at first, but then it went back to 52 GB. I believe I can reproduce the same thing by re-creating the file system. Rebooting both nodes didn't resolve it either.

Responses

Hello,
Your vgs output does indeed show the volume group at 415G, and the command you pasted shows the logical volume being created with "100%FREE". Can you post the output of lvs ggs_vg, just to confirm whether the logical volume ended up getting created at the full, expected size, or whether it somehow only saw 52G as being free? This would at least confirm whether the problem is in how LVM is creating the volume or in how mkfs.gfs2 is creating the file system.

Thanks,
John Ruemker, RHCA
Principal Software Maintenance Engineer
Red Hat Global Support Services

[root@node1 ~]# lvs /dev/ggs_vg/ggs_lv
LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
ggs_lv ggs_vg -wi-a----- 415.98g
[root@node1 ~]#

I think it looks fine. BTW, I later discovered that this 52 GB is the size of the base file system. I mean, I have six shared file systems:

/opt/app/p1edm1d5               50 GB
/opt/app/p1edm1d5/ggs          414 GB
/opt/app/p1edm1d5/oradata01   3500 GB
/opt/app/p1edm1d5/oraflsh01    883 GB
/opt/app/p1edm1d5/oraredo01     46 GB
/opt/app/p1edm1d5/oraredo02     46 GB

As you see, /opt/app/p1edm1d5 is the common prefix of all the mount points, and p1edm1d5_lv is 52 GB, which is exactly the size df shows for every file system. So I have some ideas in mind, such as the cluster stopping reading the mount point name after some character limit: if I change the cluster configuration in luci and rename all the mount points to something like /test1 /test2 /test3 /test4 /test5 /test6, everything works perfectly, with the correct file system sizes. Thanks for the comment.
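
One way to test that theory is to compare what df reports with what the kernel itself returns for each path (a minimal sketch; on RHEL 6 df consults /etc/mtab):

grep p1edm1d5 /proc/mounts        # the kernel's view of the mounts
grep p1edm1d5 /etc/mtab           # the userspace mtab that df(1) reads
stat -f /opt/app/p1edm1d5/ggs     # statfs() directly on one nested mount point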

Oh, I see now. That makes more sense, although it is definitely still unexpected behavior. It does look to be a problem with df, then, rather than an issue with how large mkfs makes the file system. Whatever it is, it looks likely to be a bug that we'll want to investigate further.

I just did a quick test using the same paths you've specified, and didn't have any issues getting df to report the full file system size. I'm using the latest gfs2-utils and coreutils.

/dev/mapper/clust-lv2--gfs2
1.0G 259M 766M 26% /opt/app/p1edm1d5
/dev/mapper/clust-lv3--gfs2
2.0G 259M 1.8G 13% /opt/app/p1edm1d5/ggs
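
For reference, a test along those lines can be set up with something like the following (a sketch; the VG name clust matches the output above, while the lock table name, sizes, and journal count are assumptions):

lvcreate -n lv2-gfs2 -L 1G clust
lvcreate -n lv3-gfs2 -L 2G clust
mkfs.gfs2 -p lock_dlm -t mycluster:lv2 -j 2 /dev/clust/lv2-gfs2
mkfs.gfs2 -p lock_dlm -t mycluster:lv3 -j 2 /dev/clust/lv3-gfs2
mkdir -p /opt/app/p1edm1d5
mount /dev/clust/lv2-gfs2 /opt/app/p1edm1d5        # lock_dlm mounts need the cluster stack up;
mkdir -p /opt/app/p1edm1d5/ggs                     # add -o lockproto=lock_nolock for a
mount /dev/clust/lv3-gfs2 /opt/app/p1edm1d5/ggs    # single-node sanity check
df -h /opt/app/p1edm1d5 /opt/app/p1edm1d5/ggs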

At this point my suggestion is for you to open a case so we can collect more information about your environment and configuration and look more deeply at what's happening. If you'd rather continue troubleshooting here, then my general questions would be:

  • Are you doing anything special in how you mount these file systems? Any unusual mount options or processes for mounting them, or just using /etc/fstab and mostly the default options?

  • Did you just set up this cluster, or has it been running for a while? I'm curious whether this problem started after something changed, or whether it's been like this since you first set the cluster up.

  • If you mount these file systems somewhere else, not in a subdirectory of another GFS2 file system, does df show the correct size? (See the sketch after this list.)
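
For that last check, something like this would do (a sketch; /mnt/gfs2test is an arbitrary scratch directory):

mkdir -p /mnt/gfs2test
mount -t gfs2 -o noatime /dev/ggs_vg/ggs_lv /mnt/gfs2test   # needs the cluster stack up for lock_dlm
df -h /mnt/gfs2test
umount /mnt/gfs2test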

Thanks,
John

Here are the clusterfs resources from my cluster.conf:

<clusterfs device="/dev/oradata01_vg/oradata01_lv" force_unmount="1" fsid="39965" fstype="gfs2" mountpoint="/opt/app/p1edm1d5/oradata01" name="oradata01_rs" options="noatime,nodiratime"/>
<clusterfs device="/dev/ggs_vg/ggs_lv" force_unmount="1" fsid="48964" fstype="gfs2" mountpoint="/opt/app/p1edm1d5/ggs" name="ggs_rs" options="noatime,nodiratime"/>
<clusterfs device="/dev/orared01_vg/orared01_lv" force_unmount="1" fsid="56974" fstype="gfs2" mountpoint="/opt/app/p1edm1d5/oraredo01" name="orared01_rs" options="noatime,nodiratime"/>
<clusterfs device="/dev/p1edm1d5_vg/p1edm1d5_lv" force_unmount="1" fsid="50602" fstype="gfs2" mountpoint="/opt/app/p1edm1d5" name="p1edm1d5_rs" options="noatime,nodiratime"/>
<clusterfs device="/dev/oraflsh01_vg/orafls01_lv" force_unmount="1" fsid="52748" fstype="gfs2" mountpoint="/opt/app/p1edm1d5/oraflsh01" name="oraflsh01_rs" options="noatime,nodiratime"/>
<clusterfs device="/dev/orared02_vg/orared02_lv" force_unmount="1" fsid="13299" fstype="gfs2" mountpoint="/opt/app/p1edm1d5/oraredo02" name="orared02_rs" options="noatime,nodiratime"/>
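
Since all of these share /opt/app/p1edm1d5 as a prefix, the order in which rgmanager established the mounts may matter: if the parent p1edm1d5_lv ended up mounted over its children, every one of those paths would resolve inside the 52G parent, which is what df seems to show. A quick way to check (a sketch; earlier lines in the output were mounted first):

grep p1edm1d5 /proc/self/mountinfo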

This is a new deployment, so it has been the same since the beginning.
As I said, if I mount them with different names such as /test1 ... /test6, then yes, df shows the sizes correctly. I haven't used any other file system. This is an active/passive cluster, so only one node will be active at a time. Do you recommend GFS2 or ext4?

I have no entries in fstab, as the cluster doesn't want them.
