Comment 15 for bug 925280

Revision history for this message
iMac (imac-netstatz) wrote :

IMHO, this is the *new* expected behavior. If both RAID members left the array in a good state (i.e. you unplugged one while the system was off), then you need to zero the superblock on the removed member to get it back into the array.
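For reference, a minimal sketch of that re-add procedure, assuming the removed member is /dev/sdb1 and the array is /dev/md0 (substitute your own devices; this destroys the md metadata on the member):

```shell
# Wipe the stale superblock so mdadm treats the disk as a fresh member
sudo mdadm --zero-superblock /dev/sdb1

# Add it back; the array rebuilds onto it as a new member
sudo mdadm /dev/md0 --add /dev/sdb1

# Watch the rebuild progress
cat /proc/mdstat
```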

I suspect your test case would work with a disk that only had the md structures and not clean data inside. Perhaps do a live pull on the cable (to simulate a controller failure) for your test, in an environment where you don't care about the data.

In that case, upon restart, I would expect the "dirty" and "old" md disk to be automatically rebuilt.
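To see which member mdadm considers "old", you can compare the per-member event counters and state. A hedged sketch, assuming the two members are /dev/sda1 and /dev/sdb1:

```shell
# The member with the lower event count is the stale one; mdadm will
# not merge it back automatically when both members claim to be clean
sudo mdadm --examine /dev/sda1 | grep -E 'Events|State'
sudo mdadm --examine /dev/sdb1 | grep -E 'Events|State'
```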

In one of my use cases, where I use mdadm slightly differently across two computers, this change solves a problem where the older disk was sometimes mounted when both md members were clean. In that case the new data was overwritten by the old, which can be a real issue caused by the old behavior.

Factors that previously allowed old data to overwrite new data include individual disk spin-up times and the availability of disks at boot (especially with remote block devices), which is probably the reason for this 'feature'. My observations are in the dupe below.

Your test case should probably use a real 'spare' rather than an old member in a good state (which should probably not be overwritten by default).

https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/945786