Is it possible to Shrink an OpenShift Persistent Volume (PV) or its associated Gluster Volume ?

Solution Verified

Issue

We have a PV of 10GB that was expanded to 20GB:

[root@node-10 ~]# oc describe pv pvc-12345-xxxxx-6789
Name:            pvc-12345-xxxxx-6789
Labels:          <none>
Annotations:     Description=Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id=111222333444  <<--- gluster volume
                 gluster.org/type=file
                 kubernetes.io/createdby=heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid=2018
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/glusterfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    glusterfs-storage   <<------------!!!!
...
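The Heketi volume ID can also be read directly from the PV annotations with a jsonpath query (a sketch, assuming an oc client logged in to the cluster; note the escaped dots in the annotation key):

[root@node-10 ~]# oc get pv pvc-12345-xxxxx-6789 -o jsonpath='{.metadata.annotations.gluster\.kubernetes\.io/heketi-volume-id}'
111222333444

The vol_<ID> naming used by the Gluster volumes below corresponds to this annotation value.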

This expansion was possible because the StorageClass "glusterfs-storage" had allowVolumeExpansion set to true by the customer:

[root@node-10 ~]# oc describe sc glusterfs-storage
...
allowVolumeExpansion: true                    <------------- !!!
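For reference, expansion is enabled on the StorageClass and then requested by raising the storage request on the PVC. A minimal sketch (the PVC name <pvc-name> and the 20Gi target are placeholders, not taken from this case):

[root@node-10 ~]# oc patch sc glusterfs-storage -p '{"allowVolumeExpansion": true}'
[root@node-10 ~]# oc patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

Note that allowVolumeExpansion only permits growing a volume; it does not make the reverse operation possible.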

The underlying Gluster volume for this PV was initially a replica-3 volume. After the expansion, three more bricks were added, turning it into a Distributed-Replicate 2 x 3 = 6 volume with a size of 20GB:

sh-4.2# gluster volume info vol_111222333444

Volume Name: vol_111222333444
Type: Distributed-Replicate
Volume ID: xxxxxxx-aaaaaaa-xxxxx
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/var/lib/heketi/mounts/vg_1111111111/brick_01010101010101/brick
Brick2: 10.0.0.2:/var/lib/heketi/mounts/vg_2222222222/brick_02020202020202/brick
Brick3: 10.0.0.3:/var/lib/heketi/mounts/vg_3333333333/brick_03030303030303/brick
Brick4: 10.0.0.1:/var/lib/heketi/mounts/vg_1111111111/brick_04040404040404/brick
Brick5: 10.0.0.2:/var/lib/heketi/mounts/vg_2222222222/brick_05050505050505/brick
Brick6: 10.0.0.3:/var/lib/heketi/mounts/vg_3333333333/brick_06060606060606/brick
Options Reconfigured:
features.barrier: disable
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
server.tcp-user-timeout: 42
cluster.brick-multiplex: on
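The same volume can be inspected from Heketi's point of view with heketi-cli inside the heketi pod (a sketch; the glusterfs namespace, the heketi-storage deploymentconfig name, and the admin credentials are assumptions typical of an OCS 3.11 deployment, so adjust them to the environment):

[root@node-10 ~]# oc rsh -n glusterfs dc/heketi-storage
sh-4.2# heketi-cli --user admin --secret "$HEKETI_ADMIN_KEY" volume info 111222333444

This shows the size and brick layout Heketi has recorded for the volume, which must stay consistent with the Gluster-side view above.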
  • Q. Is it possible to remove the last three added bricks to go back to the previous size and replica-3 type?

Environment

  • OCS 3.11
