boot with degraded RAID5 (not system drive) triggers initramfs and requires user input to proceed
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
mdadm (Ubuntu) | Confirmed | Undecided | Unassigned |
Bug Description
I have a remote Ubuntu 12.04 system with 5 hard drives. Ubuntu and swap are installed on /dev/sde, and in addition I have a software RAID5 array over the sda0, sdb0, sdc0 and sdd0 partitions; the RAID5 array is used as a physical volume for LVM.
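For reference, a quick way to confirm this kind of layout looks like the sketch below (the array name /dev/md0 is an assumption; the other names are taken from the description above and may differ on your system):

```
# Overall RAID state as seen by the kernel
cat /proc/mdstat

# Detailed view of the array members and its degraded/clean state
sudo mdadm --detail /dev/md0

# Confirm the array is the LVM physical volume
sudo pvs
```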
The system is a headless remote system that must restart/boot automatically without any user input.
I tried to simulate a disk failure on the RAID5 array by pulling one drive out of the system while it was running.
Everything looked fine: I could continue to use the RAID5 array, it was reported as degraded, and mdadm sent an event-triggered email.
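For what it's worth, the same kind of failure can also be simulated purely in software, which may be easier on a remote box; a sketch (array and partition names are assumptions based on the description above):

```
# Mark one member of the array as failed, then remove it from the array
sudo mdadm --manage /dev/md0 --fail /dev/sdd0
sudo mdadm --manage /dev/md0 --remove /dev/sdd0

# The array should now report itself as degraded
cat /proc/mdstat
```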
BUT - when I shut the system down and try to boot, it detects the degraded RAID5 array and drops into initramfs, where I have to respond manually and exit the prompt before the boot continues. As I said, this is a remote unit that should boot/restart automatically.
I already tried the following:
1. I modified /etc/initramfs- to allow booting with a degraded array (see the sketch after this list).
2. Just to make sure, I also ran sudo dpkg-reconfigure mdadm and set the boot-degraded option through that tool as well.
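The exact file in step 1 was cut off above; on a typical Ubuntu 12.04 install the degraded-boot setting usually lives in the initramfs-tools mdadm configuration, so a sketch of that approach (treat the exact path and variable as assumptions to verify against your own system) is:

```
# Enable booting from a degraded array (path/variable as commonly used on Ubuntu 12.04)
echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm

# Rebuild the initramfs so the change is actually present at the next boot
sudo update-initramfs -u
```

If update-initramfs is not re-run after editing the file, the running initrd will not pick up the change, which can make it look as though the setting is being ignored.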
Is this a bug in Ubuntu, or am I doing something wrong?
If it is a bug, is there a workaround for this issue until it is fixed?
I am relatively new to both Linux and Ubuntu, so I really need your help to resolve this problem.
I appreciate your time and your help.
If this is indeed a bug, is there a reasonable temporary workaround that lets the system boot automatically without manual intervention, with initramfs continuing past the degraded-RAID prompt?
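One possible stopgap (an assumption to verify against your installed initramfs scripts, not a confirmed fix for this bug) is to pass the degraded-boot hint on the kernel command line via GRUB:

```
# /etc/default/grub: append the degraded-boot hint to the default kernel command line
# (bootdegraded=true is the parameter the Ubuntu mdadm initramfs scripts check for;
#  verify the exact spelling against your system before relying on it)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash bootdegraded=true"
```

followed by `sudo update-grub` so the new command line is written out for the next boot.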