snapshot_volume_backed races, could result in data corruption
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Confirmed | Low | Unassigned |
Bug Description
snapshot_volume_backed does:
  if vm_state == ACTIVE:
      quiesce()
  snapshot()
  if vm_state == ACTIVE:
      unquiesce()
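For concreteness, here is that flow as a minimal runnable Python sketch. Guest and the 'active' string are stand-in names for illustration, not nova's actual classes; the point is only that vm_state is checked twice with nothing holding it steady in between:

  ACTIVE = 'active'

  class Guest:
      # Stand-in for the hypervisor guest; real nova drives libvirt here.
      def quiesce(self):
          print('freezing guest filesystems')
      def snapshot(self):
          print('snapshotting attached volumes')
      def unquiesce(self):
          print('thawing guest filesystems')

  def snapshot_volume_backed(instance, guest):
      if instance['vm_state'] == ACTIVE:
          guest.quiesce()
      guest.snapshot()
      # Nothing prevents a concurrent request from changing vm_state
      # (or thawing/resuming the guest) before this second check runs.
      if instance['vm_state'] == ACTIVE:
          guest.unquiesce()

  snapshot_volume_backed({'vm_state': ACTIVE}, Guest())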
There is no exclusion here, though, which means a user could do:

  quiesce()
  snapshot()
  unquiesce()
  --snapshot() now running after unquiesce -> corruption
or:

  suspend()
  snapshot()
  NO QUIESCE (we're suspended)
  resume()
  --snapshot() now running after resume -> corruption
Same goes for stop/start.
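One way to close the window would be to serialize snapshot with the lifecycle operations under a per-instance lock. The sketch below only illustrates the idea with a plain threading.Lock; the helper name and lock table are hypothetical, and in nova this would more naturally be an instance-scoped synchronized decorator:

  import threading
  from collections import defaultdict

  # One lock per instance UUID (hypothetical, for illustration only).
  _instance_locks = defaultdict(threading.Lock)

  def run_locked(instance_uuid, operation, *args, **kwargs):
      # snapshot/suspend/stop/unquiesce for the same instance now run
      # strictly one after another, so snapshot() can no longer still
      # be in flight when an unquiesce()/resume()/start() completes.
      with _instance_locks[instance_uuid]:
          return operation(*args, **kwargs)

For example, run_locked(uuid, snapshot_volume_backed, instance, guest) and run_locked(uuid, resume, instance) would then be mutually exclusive for the same instance.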
Note that snapshot_volume_backed …
tags: added: libvirt snapshot volumes
Changed in nova:
  status: New → Confirmed
  importance: Undecided → Low
Changed in nova:
  assignee: nobody → Jianle He (hejianle1989)
Changed in nova:
  assignee: Jianle He (hejianle1989) → nobody
I believe this is an actual bug. I tested this on my devstack, in an Ubuntu 14.04 environment running on VMware, by running the command to snapshot an instance booted from volume from two different command prompts at almost the same time. When I did this I ended up with multiple 'cirros-rootfs' volumes listed in my dashboard sidebar, yet when I deleted the instance and its associated images only one of the 'cirros-rootfs' volumes would be deleted.
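For anyone else trying to reproduce this, the test above amounts to firing two snapshot requests at the same volume-backed instance at nearly the same moment. A sketch of that experiment driven from Python (the server name cirros-bfv is made up; any volume-booted instance should do):

  import subprocess
  import threading

  SERVER = 'cirros-bfv'  # hypothetical volume-backed instance

  def make_snapshot(name):
      # CLI equivalent of the two simultaneous snapshot commands above.
      subprocess.run(['openstack', 'server', 'image', 'create',
                      '--name', name, SERVER], check=True)

  threads = [threading.Thread(target=make_snapshot, args=('snap-%d' % i,))
             for i in range(2)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()

Afterwards, 'openstack volume list' should show whether duplicate or orphaned cirros-rootfs volumes were left behind.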