6.2. Restoring original label values for /dev/log

To restore the original SELinux label, execute the following commands:
  1. Create a directory and symbolic links on all nodes that run gluster pods. Linking each unit name to /dev/null masks that systemd unit once the link is mounted into the pod in Step 2:
    # mkdir /srv/<directory_name>
    # cd /srv/<directory_name>/
    # ln -sf /dev/null systemd-tmpfiles-setup-dev.service
    # ln -sf /dev/null systemd-journald.service
    # ln -sf /dev/null systemd-journald.socket
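    If you manage many nodes, this step can be applied from a workstation with SSH access to each of them. The following is a minimal sketch; the node names (node1 to node3) and the directory name (masked-units) are hypothetical placeholders:
    # for node in node1 node2 node3; do
    >     ssh root@$node 'mkdir -p /srv/masked-units &&
    >         cd /srv/masked-units &&
    >         ln -sf /dev/null systemd-tmpfiles-setup-dev.service &&
    >         ln -sf /dev/null systemd-journald.service &&
    >         ln -sf /dev/null systemd-journald.socket'
    > done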
  2. On the node that has the oc client, edit the daemonset that creates the glusterfs pods:
    # oc edit daemonset <daemonset_name>
    Under the volumeMounts section, add a mount for each of the three services:
    - mountPath: /usr/lib/systemd/system/systemd-journald.service
      name: systemd-journald-service
    - mountPath: /usr/lib/systemd/system/systemd-journald.socket
      name: systemd-journald-socket
    - mountPath: /usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service
      name: systemd-tmpfiles-setup-dev-service
    Under the volumes section, add a hostPath volume for each service listed:

    Note

    The paths specified here must match the directory created in Step 1.
    - hostPath:
        path: /srv/<directory_name>/systemd-journald.socket
        type: ""
      name: systemd-journald-socket
    - hostPath:
        path: /srv/<directory_name>/systemd-journald.service
        type: ""
      name: systemd-journald-service
    - hostPath:
        path: /srv/<directory_name>/systemd-tmpfiles-setup-dev.service
        type: ""
      name: systemd-tmpfiles-setup-dev-service
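    For orientation, the fragments above belong inside the daemonset's pod template: the volumeMounts list goes under the glusterfs container, and the volumes list at the pod spec level. An abridged, illustrative excerpt follows; the container name and image are placeholders, and only one of the three entries is shown because the other two follow the same pattern:
    spec:
      template:
        spec:
          containers:
          - name: glusterfs                # placeholder container name
            image: <glusterfs_image>       # placeholder image
            volumeMounts:
            - mountPath: /usr/lib/systemd/system/systemd-journald.service
              name: systemd-journald-service
          volumes:
          - hostPath:
              path: /srv/<directory_name>/systemd-journald.service
              type: ""
            name: systemd-journald-service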
  3. Run the following command on all nodes that run gluster pods. This resets the label of /dev/log to the default defined in the SELinux policy:
    # restorecon /dev/log
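    As with Step 1, this can be run from a workstation with SSH access; a minimal sketch with hypothetical node names:
    # for node in node1 node2 node3; do ssh root@$node restorecon -v /dev/log; done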
  4. On the node that has the oc client, execute the following command to delete the gluster pod (the daemonset recreates the pod automatically):
    # oc delete pod <gluster_pod_name>
  5. To verify that the pod is ready, execute the following command:
    # oc get pods -l glusterfs=storage-pod
    The pod is ready when the READY column shows 1/1.
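    Alternatively, recent oc clients provide oc wait, which blocks until the pod reports Ready; a sketch using the same label selector (the timeout value is an arbitrary choice):
    # oc wait --for=condition=Ready pod -l glusterfs=storage-pod --timeout=300s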
  6. Log in to the node hosting the pod and check the SELinux label of /dev/log:
    # ls -lZ /dev/log
    The output should show the devlog_t label.
    For example:
    # ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log
    Exit the node.
  7. In the gluster pod, verify that the label value is devlog_t:
    # oc rsh <gluster_pod_name>
    # ls -lZ /dev/log
    For example:
    # ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log
  8. Execute the following command to check the self-heal status of all volumes:
    # oc rsh <gluster_pod_name>
    # for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
    The grep filter matches only volumes that still have pending heal entries, so repeat the command until it produces no output; that indicates self-heal is complete.
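    To avoid re-running the check by hand, the wait can be scripted from the oc client node; a minimal sketch, assuming bash is available in the pod (the 30-second polling interval is an arbitrary choice):
    # while oc rsh <gluster_pod_name> bash -c 'for v in $(gluster volume list); do gluster volume heal $v info; done' | grep -q "Number of entries: [^0]$"; do
    >     sleep 30
    > done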
  9. Repeat steps 4 through 8 for each of the remaining gluster pods, recycling one pod at a time and waiting for self-heal to complete before moving on to the next. A scripted version of this cycle is sketched below.
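    A sketch of the scripted cycle, assuming the glusterfs=storage-pod label from step 5 selects every gluster pod and that oc wait is available; the label and self-heal checks from steps 6 to 8 still have to be performed for each pod:
    # for pod in $(oc get pods -l glusterfs=storage-pod -o name); do
    >     oc delete $pod                  # the daemonset recreates the pod
    >     oc wait --for=condition=Ready pod -l glusterfs=storage-pod --timeout=300s
    >     # verify the devlog_t label and wait for self-heal to complete
    >     # (steps 6 to 8) before continuing with the next pod
    > done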