boot fails for LVM on degraded raid due to missing device tables at premount
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
lvm2 (Ubuntu) | Confirmed | Undecided | Unassigned |
Bug Description
This is a Trusty installation with a combined root + /boot filesystem within LVM on top of an mdraid (metadata 1.x) RAID1. The RAID1 array was built with one disk missing (degraded).
[Method: create the RAID/VG/LV setup in a shell first, then point the installer at the LVM volumes. At the end of the install, create a chroot, add the mdadm package, and run update-initramfs before rebooting.]
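For reference, a minimal sketch of that preparation, assuming hypothetical device and volume names (/dev/sda1, vg0, root) since the report does not give them:

    # build the RAID1 degraded, with the second member missing
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 /dev/sda1 missing

    # LVM stack on top of the array
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 20G -n root vg0

    # ... run the installer and point it at /dev/vg0/root ...

    # after the install, from a chroot into the target system
    mount /dev/vg0/root /mnt
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt apt-get install mdadm
    chroot /mnt update-initramfs -u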
The boot process fails with the following messages:
Incrementally starting RAID arrays...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
Incrementally starting RAID arrays...
and then slowly repeats the above messages from this point on.
Workaround:
- add break=premount to the GRUB kernel line entry
- for continued visibility of the text boot output, also remove quiet and splash, and possibly set gfxmode=640x480 (see the example kernel line below)
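For illustration, the edited kernel line at the GRUB menu (press 'e' on the entry) might look roughly like the following; the kernel version and root device are placeholders, not taken from the report:

    linux /boot/vmlinuz-3.13.0-24-generic root=/dev/mapper/vg0-root ro break=premount

with quiet and splash removed from the end of the line.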
Now, at the (initramfs) prompt:
- mdadm --detail /dev/md0 should indicate a state of "clean, degraded"; the array is started, so this part is OK
- the lvm lvs output shows the attributes -wi-d---- (instead of the expected -wi-a----)
- according to the lvs man page this means the device(-mapper) tables are missing
FIX: simply run lvm vgchange -ay and exit the initramfs shell. This leads to a booting system.
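Put together, the recovery at the (initramfs) prompt looks roughly like this (the comments summarise the observations above; output is not reproduced verbatim):

    (initramfs) mdadm --detail /dev/md0   # State : clean, degraded -- array is assembled
    (initramfs) lvm lvs                   # Attr : -wi-d---- -- LVs known but device tables missing
    (initramfs) lvm vgchange -ay          # activate the LVs, creating the device-mapper tables
    (initramfs) exit                      # continue the boot, which now succeeds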
affects: cryptsetup (Ubuntu) → lvm2 (Ubuntu)
summary: boot fails on raid (md raid1) + LVM (combined / + /boot) + degraded → boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
summary: boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time → LVM based boot fails on degraded raid due to missing device tables at premount
summary: LVM based boot fails on degraded raid due to missing device tables at premount → boot fails for LVM on degraded raid due to missing device tables at premount
description: updated
Here is an interesting find:
Replacing the missing disk (mdadm --add /dev/md0 /dev/sdb1) and waiting for the sync to complete leads to a properly booting system.
The system continued to boot even after I --failed and --removed the second disk.
I could not return the system to its original boot-fail state until I zeroed the superblock on the second disk.
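A rough sketch of that sequence, run from the booted system (/dev/sdb1 being the second disk mentioned above):

    # re-add the missing member and let the resync finish -> system then boots normally
    mdadm --add /dev/md0 /dev/sdb1
    cat /proc/mdstat                    # wait for the recovery to complete

    # failing and removing the member again does NOT bring the boot failure back
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1

    # only wiping the md superblock on the removed member reproduces the original failure
    mdadm --zero-superblock /dev/sdb1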
Some additional messages that I had not seen before (after the boot is failing again):
device-mapper: table: 252:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:1: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
(then repeats the above once more)