(bionic) boot with degraded RAID array for non-root device enters emergency mode
Bug #1825075 reported by Trent Lloyd
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
mdadm (Ubuntu) | Confirmed | High | Unassigned |
Bug Description
When booting a Bionic system where an mdadm RAID device is degraded (e.g. only 1 of 2 disks is present) and that device backs a non-root filesystem (e.g. /home), the boot enters emergency mode and the MD device stays inactive.
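For illustration, here is a minimal sketch of what this looks like from the emergency shell, assuming the array is /dev/md0 and is listed in /etc/fstab as /home (both names are placeholders):

    # The degraded array shows up as "inactive" rather than auto-starting:
    cat /proc/mdstat
    # Manual recovery: force-start the degraded array, mount it, and
    # let systemd resume the normal boot target.
    mdadm --run /dev/md0
    mount /home
    systemctl default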
This can be reproduced using the live server installation CD with 3 disks: 1 for /, and 2 for a software RAID array that is then formatted as ext4 and mounted at /home.
After the first boot, shut down the system, remove one of the two RAID disks, and boot the system again.
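For reference, the equivalent setup done by hand would look roughly like the following; the device names /dev/vdb and /dev/vdc are assumptions, so substitute the actual disks:

    # Hypothetical recreation of the installer's RAID setup.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
    mkfs.ext4 /dev/md0
    echo '/dev/md0 /home ext4 defaults 0 2' >> /etc/fstab
    # Shut down, detach one of the two RAID disks, and boot again:
    # the system drops to emergency mode instead of starting md0 degraded.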
A degraded array used for the root filesystem DOES boot as expected on Bionic. My guess is that during boot the "poor man's mdadm-last-resort@.timer" code from debian/initramfs/script.local-block is executed, whereas for non-root devices it is not, and they should potentially use the actual mdadm-last-resort@.timer unit instead?
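For context, the fallback in the initramfs works roughly like this (a paraphrase of the idea, not the verbatim script; the counter file name and the retry threshold of 30 are assumptions): local-block is invoked repeatedly while the initramfs waits for devices, and after enough attempts the script force-runs any still-inactive arrays in degraded mode:

    # Paraphrased sketch of the degraded-boot fallback in
    # debian/initramfs/script.local-block.
    COUNT_FILE=/run/count.mdadm.initrd
    COUNT=$(cat "$COUNT_FILE" 2>/dev/null || echo 0)
    COUNT=$((COUNT + 1))
    echo "$COUNT" > "$COUNT_FILE"
    if [ "$COUNT" -ge 30 ]; then
        # Enough retries have elapsed: start any remaining arrays degraded.
        mdadm -q --run /dev/md?*
    fi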
Any filesystem in /etc/fstab that fails to appear results in an emergency-mode boot (I'm not sure that is always sensible, but nevertheless you can apparently specify 'nofail'). That leaves me guessing that the main issue is simply that the device doesn't appear; the question is why the timers etc. don't promote the degraded array to running after the timeout and allow the system to boot. Does the emergency-mode timeout happen before they can do so?
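As a workaround sketch (the device path and mount point are placeholders), marking the mount nofail lets the boot proceed even when the array is absent, and whether the upstream last-resort units ever fired can be checked afterwards:

    # Hypothetical /etc/fstab entry; 'nofail' stops systemd from
    # entering emergency mode when /dev/md0 is missing at boot.
    /dev/md0  /home  ext4  defaults,nofail  0  2

    # After boot, check whether the last-resort timer/service ran:
    systemctl list-units 'mdadm-last-resort@*'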