File system size different from LV size
Hi all,
I am trying to build a highly available cluster. This is the procedure I followed to create all six file systems:
I first created physical volumes using multipath friendly names:
pvcreate /dev/mapper/mpathau etc.
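To double-check them, a listing like the following should show every mpath device with its full size and owning VG (just a sanity check, not part of the procedure):
pvs -o pv_name,pv_size,vg_name   # should list each /dev/mapper/mpath* PV at its real size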
Then I created the VGs:
vgcreate -c y ggs_vg /dev/mapper/mpathau /dev/mapper/mpathav /dev/mapper/mpathat /dev/mapper/mpathas etc
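The -c y makes the VGs clustered; that can be confirmed from the attribute column, e.g.:
vgs -o vg_name,vg_attr,vg_size,vg_free   # the 6th attr character is "c" for a clustered VG (as in wz--nc)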
And then I created the LVs:
lvcreate -n ggs_lv -l 100%FREE ggs_vg
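Since each LV takes 100%FREE of its VG, its size should match the VG size; a quick check (ggs as an example):
lvs -o lv_name,vg_name,lv_size ggs_vg   # ggs_lv should report ~415.98g, the same as ggs_vg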
Finally, I created a GFS2 file system on it:
mkfs -t gfs2 -p lock_dlm -t golgat00:ggs_fs -j 8 /dev/ggs_vg/ggs_lv
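The other five file systems were created the same way, each with its own lock table name under the golgat00 cluster name, for example (the lock table name shown here is illustrative):
mkfs -t gfs2 -p lock_dlm -t golgat00:oradata01_fs -j 8 /dev/oradata01_vg/oradata01_lv   # lock table name must be unique per file system within the cluster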
I added all of them as cluster resources and configured a service group for each, so that each file system can be managed individually (a sketch of one resource/service definition follows the clustat output below). Everything looks fine at the cluster level:
[root@node1 ggs_vg]# clustat
Cluster Status for golgat00 @ Thu Aug 13 23:02:31 2015
Member Status: Quorate
 Member Name                       ID   Status
 ------ ----                       ---- ------
 node1                                1 Online, Local, rgmanager
 node2                                2 Online
 /dev/mapper/mpathawp1                0 Online, Quorum Disk

 Service Name                Owner (Last)                State
 ------- ----                ----- ------                -----
 service:ggs_sg              node1                       started
 service:oradata01_sg        node1                       started
 service:oraflsh01_sg        node1                       started
 service:orared01_sg         node1                       started
 service:orared02_sg         node1                       started
 service:p1edm1d5_sg         node1                       started
[root@node1 ggs_vg]#
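For reference, the relevant part of /etc/cluster/cluster.conf for one of the file systems looks roughly like this (a simplified sketch with only the ggs entries; attribute values such as force_unmount are examples, and the real file also carries the fencing and quorum disk configuration):
<rm>
  <resources>
    <!-- one clusterfs resource per GFS2 file system; device/mountpoint as in the df output below -->
    <clusterfs name="ggs_fs" device="/dev/ggs_vg/ggs_lv"
               mountpoint="/opt/app/p1edm1d5/ggs" fstype="gfs2" force_unmount="1"/>
  </resources>
  <!-- one service group per file system so each can be started/stopped on its own -->
  <service name="ggs_sg" autostart="1" recovery="relocate">
    <clusterfs ref="ggs_fs"/>
  </service>
</rm>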
But my problem is that the file system sizes are not correct. If I check the file systems:
[root@node1 ggs_vg]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 59G 4.4G 52G 8% /
tmpfs 253G 62M 253G 1% /dev/shm
/dev/sda1 283M 79M 190M 30% /boot
/dev/sda6 198G 60M 188G 1% /data1
/dev/sda5 9.8G 24M 9.2G 1% /home
/dev/sdb1 275G 63M 261G 1% /data2
/dev/mapper/tools_vg-tools_lv
20G 44M 19G 1% /opt/tools
/dev/mapper/swm_vg-swm_lv
15G 38M 14G 1% /opt/app/swm
/dev/mapper/p1edm1f2_vg-p1edm1f2_lv
197G 60M 187G 1% /opt/app/p1edm1f2
/dev/mapper/ggs_vg-ggs_lv
52G 1.1G 51G 2% /opt/app/p1edm1d5/ggs
/dev/mapper/oradata01_vg-oradata01_lv
52G 1.1G 51G 2% /opt/app/p1edm1d5/oradata01
/dev/mapper/orared01_vg-orared01_lv
52G 1.1G 51G 2% /opt/app/p1edm1d5/oraredo01
/dev/mapper/orared02_vg-orared02_lv
52G 1.1G 51G 2% /opt/app/p1edm1d5/oraredo02
/dev/mapper/oraflsh01_vg-orafls01_lv
52G 1.1G 51G 2% /opt/app/p1edm1d5/oraflsh01
/dev/mapper/p1edm1d5_vg-p1edm1d5_lv
52G 1.1G 51G 2% /opt/app/p1edm1d5
This has nothing to do with the logical volumes I configured:
[root@node1 ggs_vg]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  ggs_vg          4   1   0 wz--nc 415.98g    0
  oradata01_vg   11   1   0 wz--nc   3.40t    0
  oraflsh01_vg    5   1   0 wz--nc 899.98g    0
  orared01_vg     8   1   0 wz--nc  47.97g    0
  orared02_vg     8   1   0 wz--nc  47.97g    0
  p1edm1d5_vg     4   1   0 wz--nc  51.98g    0
  p1edm1f2_vg     4   1   0 wz--n- 199.98g    0
  swm_vg          1   1   0 wz--n-  15.00g    0
  tools_vg        1   1   0 wz--n-  20.00g    0
[root@node1 ggs_vg]#
These are the correct sizes I should see, but all the file systems show up as 52 GB each, which is unexpected. I tried creating a new GFS2 file system on the logical volumes, and that resolved the problem only temporarily: right after the mkfs the file system size was normal, but then it went back to 52 GB. I believe I can reproduce the same behaviour by re-creating the file system. Rebooting both nodes didn't resolve it either.
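In case it helps with diagnosing this, a check like the following (ggs as an example; gfs2_grow -T is a dry run that only reports and changes nothing) should show whether the 52 GB comes from the device node itself or from what GFS2 believes its own size is:
blockdev --getsize64 /dev/mapper/ggs_vg-ggs_lv   # size of the device node under the fs, in bytes
gfs2_grow -T /opt/app/p1edm1d5/ggs               # dry run on the mounted fs: reports current fs size vs. device size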