I have this problem as well (4 disks + hot spare, with one RAID10 array for LVM and one RAID1 array for /).
Workaround:
Setting BOOT_DEGRADED=true in /etc/initramfs-tools/conf.d/mdadm works like a charm.
You need to run
sudo update-initramfs -u
afterwards.
Drawback: booting takes longer than normal, since mdadm waits 30s before assembling the (presumably) degraded array, and I'm not sure what happens if the arrays are *really* broken.
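
For reference, here is roughly what the workaround looks like on the command line. This is just a sketch: the tee command overwrites /etc/initramfs-tools/conf.d/mdadm, so if that file already contains other settings you will want to edit it by hand instead.

# write the setting and rebuild the initramfs
echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u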