md_d0 array fabricated, prevents mounting md0 partitions
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
mdadm (Ubuntu) | In Progress | Undecided | Surbhi Palande |
Bug Description
Binary package hint: mdadm
I've had this problem happen a few times in the past with previous versions of Ubuntu (I'm now on Lucid); I forget how I got rid of it then. What happened now is that I just added a couple of eSATA disks (/dev/sdd and /dev/sde) (they function like regular SATA disks). I had set up my /dev/md0 array from /dev/sdb and /dev/sdc months ago. Now, all of a sudden, it fabricates a /dev/md_d0 array somehow (based on /dev/md0p1) and the system fails to mount /dev/md0p1. This is the output from cat /proc/mdstat:
mike@hegemon:~$ cat /proc/mdstat
Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md_d0 : inactive md0p1[1](S)
245111616 blocks
md0 : active raid1 sdc[1] sdb[0]
245117312 blocks [2/2] [UU]
unused devices: <none>
My workaround for now is to manually remove /dev/md_d0 every time I reboot the system, as follows:
mike@hegemon:~$ sudo mdadm --manage --stop /dev/md_d0
mdadm: stopped /dev/md_d0
I can then run mount -a and it successfully mounts /dev/md0p1.
I get this output after deleting the array; I believe it is the same as before I delete it:
mike@hegemon:~$ sudo ls /dev/md*
/dev/md0 /dev/md_d0 /dev/md_d0p2 /dev/md_d0p4
/dev/md0p1 /dev/md_d0p1 /dev/md_d0p3
/dev/md:
d0 d0p1 d0p2 d0p3 d0p4
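For context, the commonly suggested way to make such an array definition persistent, so that /dev/md0 is assembled directly at boot instead of a spurious /dev/md_d0, is sketched below. This is only a sketch and not something the reporter ran; it assumes the ARRAY line for /dev/md0 is missing or stale in /etc/mdadm/mdadm.conf, which is not confirmed for this machine:
sudo mdadm --detail --scan      # prints an ARRAY line for each running array
sudo /usr/share/mdadm/mkconf    # Debian/Ubuntu helper that generates a full mdadm.conf
# merge the ARRAY line for /dev/md0 into /etc/mdadm/mdadm.conf, then rebuild the initramfs:
sudo update-initramfs -u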
ProblemType: Bug
DistroRelease: Ubuntu 10.04
Package: mdadm 2.6.7.1-1ubuntu15
ProcVersionSign
Uname: Linux 2.6.32-24-generic x86_64
Architecture: amd64
Date: Sun Aug 8 21:05:33 2010
InstallationMedia: Ubuntu 10.04 LTS "Lucid Lynx" - Release amd64 (20100429)
MDadmExamine.
MachineType: Gigabyte Technology Co., Ltd. 965P-S3
ProcCmdLine: BOOT_IMAGE=
ProcEnviron:
PATH=(custom, no user)
LANG=en_US.utf8
SHELL=/bin/bash
ProcMDstat:
Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sdb[0]
245117312 blocks [2/2] [UU]
unused devices: <none>
SourcePackage: mdadm
dmi.bios.date: 06/25/2009
dmi.bios.vendor: Award Software International, Inc.
dmi.bios.version: F14
dmi.board.name: 965P-S3
dmi.board.vendor: Gigabyte Technology Co., Ltd.
dmi.chassis.type: 3
dmi.chassis.vendor: Gigabyte Technology Co., Ltd.
dmi.modalias: dmi:bvnAwardSof
dmi.product.name: 965P-S3
dmi.sys.vendor: Gigabyte Technology Co., Ltd.
etc.blkid.tab: Error: [Errno 2] No such file or directory: '/etc/blkid.tab'
tags: added: patch
Hi Michael DePaulo,
Thanks a lot for your bug report. I have created an mdadm test package which I believe should fix your bug. Would you please try this package? Remember that this is a test package; if it fixes the bug for you (and for others too), we will fold these changes into the mdadm updates.
JFYI, for existing Ubuntu releases the mdadm package will stay at 2.7.1; however, Natty will have mdadm at 3.4.1. This procedure is intended to test the mdadm fixes for 2.7.1. Here is the rough procedure that needs to be followed (a consolidated command sketch of the setup steps appears at the end of this comment):
Testing auto-assembly of your md array when your rootfs lies on it:
1) Install the mdadm package and initramfs package kept at: https://edge.launchpad.net/~csurbhi/+archive/mdadm-autoassembly
2) Run /usr/share/mdadm/mkconf and ensure that your /etc/mdadm/mdadm.conf has the array definition.
a) Save your original initramfs in /boot itself, say as /boot/initrd-old.img.
b) Then run "update-initramfs -c -k <your-kernel-version>". Store this initramfs as /boot/initrd-new.img. We shall use this initramfs as a safety net: if you cannot boot with the auto-assembly fixes, you should not land in a foot-in-the-mouth situation. Through grub's edit menu you can then resort to this safety net by editing the boot entry to use initrd=initrd-new.img (or, if that does not work for some random reason, fall back to your older initrd=initrd-old.img). This way you will be sure that you can still boot your precious system.
c) Now comment out or remove the ARRAY definitions from your /etc/mdadm/mdadm.conf and once again run the same "update-initramfs -c -k <your-kernel-version>" to generate a brand new initramfs.
3) Run "mdadm --detail --scan" and note the UUIDs in the array. Note the hostname stored in your array: does it match your real hostname? If not, we can fix that at the initramfs prompt that you will inevitably land at when you try auto-assembly. Also note the device components that form the root md device. Keep this paper for cross-checking when you reboot.
4) Reboot.
5) If you land at the initramfs prompt, here are the things that you should first ensure:
a) ls /bin/hostname /etc/hostname - are these files present?
b) Run "hostname". Does this show the hostname that your system is intended to have? Is it the same as the contents of /etc/hostname?
c) ls /var/run - is this directory there?
If you answer yes to all three questions, then so far, so good. Now run the following command:
mdadm --assemble -U uuid /dev/<md-name> <dev-components-listed here>
The "mdadm --detail --scan" that you ran previously should have given you the component names if you don't know them right now. Hopefully you have them listed on your paper.
E.g. in my case I ran:
mdadm --assemble -U uuid /dev/md0 /dev/sda1 /dev/sdb1
Again run:
mdadm --detail --scan <md-device> and verify that the UUIDs are indeed updated and that the hostname reflects what is stored in /etc/hostname. You can now press Ctrl+D and you should come back to the root prompt. However, you still need to test auto-assembly of your root md device. To do that, simply reboot; you should not see the face of the initramfs this time. You should land gently on your root prompt as you ex...
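For convenience, the setup steps above can be condensed into the following command sketch. It is only a sketch under stated assumptions: the PPA shortcut name (ppa:csurbhi/mdadm-autoassembly), the use of initramfs-tools for the "initramfs package", and $(uname -r) as the kernel version are guesses drawn from the instructions, so adjust them to your system.
# 1) Install the test packages (PPA shortcut name is an assumption based on the archive URL):
sudo add-apt-repository ppa:csurbhi/mdadm-autoassembly
sudo apt-get update
sudo apt-get install mdadm initramfs-tools

# 2) Ensure mdadm.conf carries the array definition, then keep safety-net initramfs images:
sudo /usr/share/mdadm/mkconf                   # compare its ARRAY lines with /etc/mdadm/mdadm.conf
sudo cp /boot/initrd.img-$(uname -r) /boot/initrd-old.img
sudo update-initramfs -c -k $(uname -r)        # use -u instead of -c if an image already exists
sudo cp /boot/initrd.img-$(uname -r) /boot/initrd-new.img

# 2c) Comment out the ARRAY lines in /etc/mdadm/mdadm.conf, then rebuild once more:
sudo update-initramfs -c -k $(uname -r)

# 3) Record the UUIDs, hostname and component devices on paper before rebooting:
sudo mdadm --detail --scan
hostname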