Unable to boot the server configured with mdraid for /boot and root filesystems
Issue
- We have a system configured to use `mdraid` devices for the `/boot` and root filesystems, but after recent patching activity it began to fail during boot. The console log shows the following errors before the boot process fails:

  ```
  [   10.192497] testhost1 systemd-udevd[925]: starting '/sbin/mdadm -I /dev/sdm2'
  [   10.194130] testhost1 systemd-udevd[643]: '/sbin/mdadm -I /dev/sdm1'(err) 'mdadm: we match both /dev/md/boot and /dev/md/boot - cannot decide which to use.'
  [   10.194665] testhost1 systemd-udevd[643]: '/sbin/mdadm -I /dev/sdm1' [924] exit with return code 2
  [   10.195528] testhost1 systemd-udevd[644]: '/sbin/mdadm -I /dev/sdm2'(err) 'mdadm: we match both /dev/md/rootvg and /dev/md/rootvg - cannot decide which to use.'
  [   10.196089] testhost1 systemd-udevd[644]: '/sbin/mdadm -I /dev/sdm2' [925] exit with return code 2
  ```

  Below is the `mdraid` configuration used on the server:

  ```
  $ cat /etc/mdadm.conf
  # mdadm.conf written out by anaconda
  MAILADDR root
  AUTO +imsm +1.x -all
  ARRAY /dev/md/boot level=raid1 num-devices=2 UUID=123456789:c001dee1:457829631:1d1cb1e1
  ARRAY /dev/md/rootvg level=raid1 num-devices=2 UUID=54a54ds6:554d54saa:da454s5ad:1d1cb222
  ARRAY /dev/md/boot metadata=1.0 UUID=123456789:c001dee1:457829631:1d1cb1e1 name=testhost1:boot
  ARRAY /dev/md/rootvg metadata=1.2 UUID=54a54ds6:554d54saa:da454s5ad:1d1cb222 name=testhost1:rootvg
  ```

  Once the boot process fails and control is left at the `dracut` prompt, we are able to activate the `mdraid` devices manually and start the system.
- The mdadm array goes into a degraded state after every reboot.
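Note that each array is listed twice in the `mdadm.conf` above (one plain `ARRAY` line and one with `metadata=`/`name=` for the same `UUID=`), which is consistent with the "we match both ... cannot decide which to use" errors in the console log. A minimal sketch for spotting such duplicate entries — the sample file path is illustrative; on the affected host you would point it at `/etc/mdadm.conf`:

```shell
# Write the ARRAY lines from this article to a sample file for illustration.
cat > /tmp/mdadm.conf.sample <<'EOF'
ARRAY /dev/md/boot level=raid1 num-devices=2 UUID=123456789:c001dee1:457829631:1d1cb1e1
ARRAY /dev/md/rootvg level=raid1 num-devices=2 UUID=54a54ds6:554d54saa:da454s5ad:1d1cb222
ARRAY /dev/md/boot metadata=1.0 UUID=123456789:c001dee1:457829631:1d1cb1e1 name=testhost1:boot
ARRAY /dev/md/rootvg metadata=1.2 UUID=54a54ds6:554d54saa:da454s5ad:1d1cb222 name=testhost1:rootvg
EOF

# Print any UUID= value that appears on more than one ARRAY line;
# a clean config prints nothing.
awk '/^ARRAY/ { for (i = 1; i <= NF; i++) if ($i ~ /^UUID=/) print $i }' \
    /tmp/mdadm.conf.sample | sort | uniq -d
```

In this sample the check prints both UUIDs, since each one is referenced by two `ARRAY` lines.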
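For the degraded-array symptom, a mirror that lost a member shows a missing-device marker such as `[U_]` in `/proc/mdstat`. A small sketch using sample output modeled on this kind of host (the `md126`/`md127` names and sizes are assumptions; on a live system you would grep `/proc/mdstat` directly):

```shell
# Sample /proc/mdstat content for illustration: md126 is degraded ([U_]),
# md127 is healthy ([UU]).
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md126 : active raid1 sdm2[0]
      52395008 blocks super 1.2 [2/1] [U_]
md127 : active raid1 sdm1[0] sdn1[1]
      1047552 blocks super 1.0 [2/2] [UU]
EOF

# Print status lines with at least one missing member ("_" in the [..] map),
# plus the preceding line naming the array.
grep -B1 -E '\[U*_+U*\]' /tmp/mdstat.sample
```

Here only the `md126` lines are printed, flagging it as the degraded array.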
Environment
- Red Hat Enterprise Linux 9
- Red Hat Enterprise Linux 8
- Red Hat Enterprise Linux 7
- `mdraid` devices used for `/boot` and root