I use RAID1 everywhere, and I have seen both loose SATA cables and BIOSes that do not allow enough delay for drives to spin up lead to degraded RAID1 scenarios, so I am worried about the overall impact of this bug. My current use case is neither of those, but it could apply to anyone leveraging the flexibility of eSATA and RAID1 for replication across systems.
My current use case is that I have two laptops (one work, one personal), and I use RAID1 to a disk attached by eSATA to each in order to keep a series of LVM volumes (home, virtual machines, etc.) synced between them. Typically my work laptop was the master, and whenever I plugged a newer external image into my personal laptop before booting, it would auto-rebuild on boot. My RAID1 was created with three devices (n=3), but I am not sure that actually affects how it handles degraded disks, except that I suppose it is *always* degraded, with only 2 of 3 disks ever active on one system.
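For reference, a three-slot RAID1 with one member permanently missing can be created roughly like this (the device names are placeholders, and --metadata=0.90 matches the v0.9 superblock I mention below):
mdadm --create /dev/md0 --level=1 --raid-devices=3 --metadata=0.90 /dev/sda2 /dev/sdb1 missing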
I had to modify the original Intrepid udev rule when I first set this up, I believe to avoid a delay or prompt when starting degraded, and my changes are as follows:
# Original Intrepid rule (I believe), left commented in my custom 85-mdadm
#SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
#RUN+="watershed /sbin/mdadm --incremental --run /dev/%k"
# My current rule from my custom /etc/udev/rules.d/85-mdadm
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
RUN+="watershed /sbin/mdadm --assemble --scan"
Looking at the current /lib/udev rules, there appears to be little change that would have any effect:
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
However, now on my home laptop, whenever I bring a new image from work, it starts up with both images active and I end up with a corrupted disk. Every time. Only since 10.10. So I am now always logging in before attaching my eSATA disk, failing the local RAID1 member and removing it, stopping the array, starting it degraded with the external member, and re-adding the internal one, roughly the commands sketched below. It is not something I can keep doing efficiently. I was considering upgrading my RAID superblock from v0.9 to v1.2, but from this bug report I am not sure that will help me. There is some sort of regression here from my perspective.
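For the record, the manual dance goes roughly like this (md0, sda2 and sdb1 are placeholders for my array, the internal member and the eSATA member):
mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
mdadm --stop /dev/md0
mdadm --assemble --run /dev/md0 /dev/sdb1
mdadm /dev/md0 --add /dev/sda2
The final --add triggers a full resync from the external copy, which is part of why this is so painful to repeat.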
If I can change my udev rule to make this work again, great; skimming through the thread, though, it doesn't appear that I have an actual workaround. It just stopped working.
If I ever start my laptop with an old eSATA image attached to my current RAID1 laptop image, I am screwed, and my home directory, which has survived many Debian and now Ubuntu distros and various hardware upgrades, might actually come to an end.