Activity log for bug #2004485

Date | Who | What changed | Old value | New value | Message
2023-02-01 15:28:51 | Bjarne Schmidt | bug |  |  | added bug
2023-02-01 15:29:49 | Bjarne Schmidt | description

Old value:

The new autoinstall installer, which replaced the previously used debian-installer, supports creating a RAID array during setup (in my case RAID 1). As part of the installation, the /dev/md0 device is registered as an LVM Physical Volume, a Volume Group is created, and a Logical Volume is created and mounted at "/" for the root filesystem. Creating such an array works fine and results in a flawlessly working system. After the installation I verified that the array is in sync by issuing "cat /proc/mdstat".

To simulate the failure of one disk, I erased one of the disks completely and rebooted the system.

Expected behaviour: the system boots normally, since one disk is still available.

Actual behaviour: the system is stuck during boot, displaying the following relevant messages:

  Begin: Mounting root file system ... Begin: Running /scripts/local-top ... Volume Group "VG" not found
  Cannot process volume group VG
  Gave up waiting for root file system device

What helped resolve the issue (before removing wiping one of the disks):

  echo "sleep 60" > /target/etc/initramfs-tools/scripts/init-premount/init-mdfix
  chmod 744 /etc/initramfs-tools/scripts/init-premount/init-mdfix
  update-grub
  update-initramfs -u

I assume the system is trying to find the root filesystem, but it is not yet available because the RAID array takes too long to initialize after one of the disks has been wiped. After applying the above fix and wiping one of the system booted normally after waiting for 60 seconds. This seems to be enough for the RAID to initialize in a degraded state.

New value: the same text, with the command lines set off as an indented block and the final paragraph corrected to read "... and wiping one of the disks the system booted normally after waiting for 60 seconds."
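For reference, a minimal way to perform the sync check the reporter describes, and to see the degraded state after the wipe test (a sketch; the /dev/md0 device name is taken from the description above, and the commands need root):

  # Kernel view of the array: a healthy two-disk RAID 1 shows [UU],
  # a degraded one shows [U_] or [_U].
  cat /proc/mdstat

  # Per-array detail, including the state ("clean" vs "clean, degraded")
  # and which member devices are active, faulty, or removed.
  mdadm --detail /dev/md0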
2023-02-01 16:25:13 | Ubuntu Foundations Team Bug Bot | tags | array autoinstall boot curtin raid | array autoinstall boot bot-comment curtin raid
2023-02-01 16:47:12 | Bjarne Schmidt | description

Old value: the description as of the previous edit.

New value: the same text, with the path in the echo command corrected from /target/etc/initramfs-tools/scripts/init-premount/init-mdfix to /etc/initramfs-tools/scripts/init-premount/init-mdfix, matching the chmod line. (During an autoinstall run the installed system is mounted at /target, so the same file is /target/etc/... from the installer environment and /etc/... on the running system.)
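The /target prefix matters when the workaround is applied from the installer shell rather than from the installed system. A sketch of applying it at install time (assuming the standard Subiquity layout where the installed system is mounted at /target; the chroot step is an assumption, not part of the report):

  # From the live installer environment: write the hook into the target
  # filesystem, make it executable, then rebuild the initramfs inside
  # the target so the hook is actually included in the new image.
  echo "sleep 60" > /target/etc/initramfs-tools/scripts/init-premount/init-mdfix
  chmod 744 /target/etc/initramfs-tools/scripts/init-premount/init-mdfix
  chroot /target update-initramfs -u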
2023-02-01 16:47:29 | Bjarne Schmidt | description

Old value: the description as of the previous edit.

New value: the same text, with "(before removing wiping one of the disks)" corrected to "(before wiping one of the disks)".
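As a side note, initramfs-tools boot scripts conventionally handle a "prereqs" argument; a version of the reporter's one-liner written in that conventional form might look like the following (a sketch: the PREREQ boilerplate follows the initramfs-tools script convention and is not part of the reported fix; the script name init-mdfix comes from the report):

  #!/bin/sh
  # /etc/initramfs-tools/scripts/init-premount/init-mdfix
  # Delay the early boot sequence so a degraded RAID array has time to
  # assemble before LVM scans for the volume group that sits on top of it.

  PREREQ=""
  prereqs() { echo "$PREREQ"; }

  case "$1" in
      prereqs)
          prereqs
          exit 0
          ;;
  esac

  sleep 60

As in the report, the script must be made executable and the initramfs rebuilt (chmod 744 ... && update-initramfs -u) for the delay to take effect.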
2023-04-11 19:30:24 | Paul White | affects | ubuntu | subiquity (Ubuntu)
2023-04-11 19:30:49 | Paul White | tags | array autoinstall boot bot-comment curtin raid | array autoinstall boot bot-comment curtin jammy raid