pvmove wipes data when issue_discards=1 on SSD
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| lvm2 | Fix Released | Critical | | |
| lvm2 (Debian) | Fix Released | Unknown | | |
| lvm2 (Ubuntu) | Fix Released | High | Dimitri John Ledkov | |
| Quantal | Won't Fix | Medium | Unassigned | |
Bug Description
[Impact]
* Setting issue_discards=1 in /etc/lvm/lvm.conf (non-default) results in data loss if pvmove is performed on a logical volume which is moved to or from an SSD or another block device that supports discards.
* As this bug *directly causes a loss of user data*, this fix should be uploaded to quantal (lvm2 in precise is not affected, because it does not support the issue_discards option).
[Test Case]
* Enable issue_discards=1 in /etc/lvm/lvm.conf
* Create a volume group with two physical volumes (at least one of them must support discards, e.g. an SSD)
* Create a test logical volume
* Create a filesystem on this logical volume
* With pvmove, move the underlying logical volume to the other physical volume
=> Data loss occurs (in my experiments the whole logical volume was zeroed, checked with hexdump /dev/vgtest/lvtest)
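For concreteness, the steps above can be sketched as the following command sequence. This is a destructive sketch, not something to run blindly: /dev/sdX and /dev/sdY are placeholder device names (sdX standing in for the discard-capable SSD), and vgtest/lvtest match the names used in the test case.

```
# Prerequisite: issue_discards = 1 in the devices section of /etc/lvm/lvm.conf
pvcreate /dev/sdX /dev/sdY                 # placeholder devices; sdX supports discards
vgcreate vgtest /dev/sdX /dev/sdY
lvcreate -n lvtest -l 10 vgtest /dev/sdX   # place the LV on the discard-capable PV
mkfs.ext4 /dev/vgtest/lvtest
pvmove /dev/sdX /dev/sdY                   # move all extents off sdX
hexdump -C /dev/vgtest/lvtest | head       # on affected versions the LV reads back as zeroes
```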
[Regression Potential]
* The upstream fix is fairly self-contained and separates the discard and move operations.
The patches can be found at:
https:/
https:/
https:/
An SRU of just the upstream-*.patches from the -5 upload fixes this bug.
Related branches
Changed in lvm2 (Ubuntu):
importance: Undecided → High
summary:
- pvmove wipe data when issue_discards=1
+ pvmove wipes data when issue_discards=1 on SSD
Changed in lvm2 (Ubuntu):
assignee: nobody → Dmitrijs Ledkovs (xnox)
Changed in lvm2 (Debian):
status: Unknown → Fix Released
description: updated
Changed in lvm2 (Ubuntu Quantal):
status: New → Triaged
importance: Undecided → Medium
Changed in lvm2:
importance: Unknown → Critical
status: Unknown → Fix Released
Description of problem:
If a user enables issue_discards=1 and runs a pvmove command, and the underlying device supports discards, they will lose data from the moved chunks.
Here is a short log of what is going on (the chunk is released and discarded prior to its move):
#metadata/lv_manip.c:3013 Creating logical volume pvmove0
#metadata/lv_manip.c:3938 Inserting layer pvmove0 for segments of lvol0 on /dev/loop0
#metadata/lv_manip.c:3852 Matched PE range /dev/loop0:0-126 against /dev/loop0 0 len 1
#metadata/lv_manip.c:3798 Inserting /dev/loop0:0-0 of test/lvol0
#libdm-config.c:853 Setting devices/issue_discards to 1
#device/device.c:428 Device /dev/loop0 queue/discard_max_bytes is 4294966784 bytes.
#device/device.c:428 Device /dev/loop0 queue/discard_granularity is 4096 bytes.
#metadata/pv_manip.c:223 Discarding 1 extents offset 2048 sectors on /dev/loop0.
#device/dev-io.c:577 Closed /dev/loop0
#device/dev-io.c:524 Opened /dev/loop0 RW O_DIRECT
#device/dev-io.c:318 Discarding 4194304 bytes offset 1048576 bytes on /dev/loop0.
#metadata/lv_manip.c:432 Stack lvol0:0[0] on LV pvmove0:0
#metadata/lv_manip.c:86 Adding lvol0:0 as an user of pvmove0
#pvmove.c:164 Moving 1 extents of logical volume test/lvol0
#mm/pool-fast.c:59 Created fast mempool allocation at 0x16f1810
#libdm-config.c:866 allocation/mirror_logs_require_separate_pvs not found in config: defaulting to 0
#libdm-config.c:866 allocation/maximise_cling not found in config: defaulting to 1
#metadata/pv_map.c:55 Allowing allocation on /dev/loop1 start PE 0 length 127
#metadata/lv_manip.c:967 Parallel PVs at LE 0 length 1: /dev/loop0
#metadata/lv_manip.c:2023 Trying allocation using contiguous policy.
#metadata/lv_manip.c:1635 Still need 1 total extents:
#metadata/lv_manip.c:1638 1 (1 data/0 parity) parallel areas of 1 extents each
#metadata/lv_manip.c:1640 0 mirror logs of 0 extents each
#metadata/lv_manip.c:1329 Considering allocation area 0 as /dev/loop1 start PE 0 length 1 leaving 126.
#metadata/lv_manip.c:1112 Allocating parallel area 0 on /dev/loop1 start PE 0 length 1.
Version-Release number of selected component (if applicable):
lvm 2.02.96
Additional info:
As a workaround, set issue_discards=0.
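In lvm.conf terms, the workaround is the following fragment (issue_discards lives in the devices section; the rest of the file is elided here):

```
devices {
    # 0 = never send discards for freed physical extents. This avoids the
    # pvmove data loss described above, at the cost of not trimming the
    # SSD when logical volumes are removed or moved.
    issue_discards = 0
}
```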