6.2. Restoring original label values for /dev/log
To restore the original SELinux label, execute the following commands:
- Create a directory and soft links on all nodes that run gluster pods:
# mkdir /srv/<directory_name>
# cd /srv/<directory_name>/    # same directory as created above
# ln -sf /dev/null systemd-tmpfiles-setup-dev.service
# ln -sf /dev/null systemd-journald.service
# ln -sf /dev/null systemd-journald.socket
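As an illustration only, with a hypothetical directory name of logfix (any directory name under /srv works, as long as the same name is used in the daemonset edit below):
# mkdir /srv/logfix
# cd /srv/logfix/
# ln -sf /dev/null systemd-tmpfiles-setup-dev.service
# ln -sf /dev/null systemd-journald.service
# ln -sf /dev/null systemd-journald.socket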
- On the node that has the oc client, edit the daemonset that creates the glusterfs pods:
# oc edit daemonset <daemonset_name>
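If you are unsure of the daemonset name, you can first list the daemonsets in the current project. In the following sketch, glusterfs-storage is only a hypothetical name; use the name reported on your cluster:
# oc get daemonset
# oc edit daemonset glusterfs-storage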
Under the volumeMounts section, add a mapping for the volume:
- mountPath: /usr/lib/systemd/system/systemd-journald.service
  name: systemd-journald-service
- mountPath: /usr/lib/systemd/system/systemd-journald.socket
  name: systemd-journald-socket
- mountPath: /usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service
  name: systemd-tmpfiles-setup-dev-service
Under the volumes section, add a new host path for each service listed:
Note
The path mentioned here must be the same as the directory created in Step 1.
- hostPath:
    path: /srv/<directory_name>/systemd-journald.socket
    type: ""
  name: systemd-journald-socket
- hostPath:
    path: /srv/<directory_name>/systemd-journald.service
    type: ""
  name: systemd-journald-service
- hostPath:
    path: /srv/<directory_name>/systemd-tmpfiles-setup-dev.service
    type: ""
  name: systemd-tmpfiles-setup-dev-service
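After saving the edit, one way to sanity-check that the new mounts and volumes were recorded is to dump the daemonset definition and search for the service names. This is only a sketch; substitute your daemonset name:
# oc get daemonset <daemonset_name> -o yaml | grep systemd-journald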
- Run the following command on all nodes that run gluster pods. This will reset the label:
# restorecon /dev/log
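If you want restorecon to report what it relabels, the -v (verbose) option prints each file whose context is changed; for example:
# restorecon -v /dev/log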
- Execute the following command on the node that has the oc client to delete the gluster pod:
# oc delete pod <gluster_pod_name>
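Because the pod is managed by the daemonset, a replacement pod with the new volume mounts is scheduled automatically after the deletion. As an illustration only, with a hypothetical pod name:
# oc delete pod glusterfs-storage-4nmq2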
- To verify if the pod is ready, execute the following command:
# oc get pods -l glusterfs=storage-pod
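Since the next step requires logging in to the node that hosts the pod, it can be convenient to include the node column in the listing; as a sketch, using the same label selector:
# oc get pods -l glusterfs=storage-pod -o wide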
- Log in to the node hosting the pod and check the SELinux label of /dev/log:
# ls -lZ /dev/log
The output should show the devlog_t label.
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
Exit the node.
- In the gluster pod, check if the label value is devlog_t:
# oc rsh <gluster_pod_name>
# ls -lZ /dev/log
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
- Execute the following command to check the self-heal status of all volumes:
# oc rsh <gluster_pod_name>
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
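If you prefer to inspect a single volume rather than loop over all of them, the heal information of one volume can be checked directly; <volume_name> is a placeholder:
# oc rsh <gluster_pod_name>
# gluster volume heal <volume_name> info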
Wait for self-heal to complete.
- Perform steps 4 to 8 for the other pods.
