Volume status is changed to "available" even though the volume is still attached to a VM instance
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Incomplete | Medium | Rikimaru Honjo |
Bug Description
* (06/06/2016) Corrected the steps to reproduce based on the description in comment #3.
* (15/09/2016) Improved the steps to reproduce based on the description in comment #5.
[Summary]
Volume status is changed to "available" even though the volume is still attached to a VM instance.
[Version]
Later than 13.0.0
[Impact]
Under a specific condition, the volume status is changed to "available" even though the volume is still attached to a VM instance.
In this state, the guest OS of the VM instance can still perform I/O on the volume.
If the volume is then attached to another VM instance, the volume data can be corrupted by I/O from both VM instances.
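Because this bug makes the "available" status untrustworthy, it can help to cross-check a volume's attachment records before reusing it. Below is a minimal sketch using python-cinderclient and keystoneauth1; the auth values and the volume UUID are illustrative placeholders, not taken from this report.
-------
# Hedged sketch: cross-check a volume's status against its attachment list.
# All credentials and the volume UUID are illustrative placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from cinderclient import client as cinder_client

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)
cinder = cinder_client.Client('3', session=sess)

vol = cinder.volumes.get('11111111-2222-3333-4444-555555555555')
print(vol.status, vol.attachments)
# Under this bug the two fields can disagree: status may read "available"
# while the hypervisor still has the volume attached to a running instance.
if vol.status == 'available' and vol.attachments:
    print('WARNING: status/attachments mismatch; do not attach elsewhere')
-------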
[Steps to reproduce]
(A scripted sketch of the attach/detach steps appears after this list.)
1. Create a volume named "volume-A".
2. Add the following breakpoint to nova-compute, then restart the nova-compute service.
-------
diff --git a/nova/
index 9783d39..948a02e 100644
--- a/nova/
+++ b/nova/
def _build_
[...]
+ import pdb;pdb.set_trace()
-------
3. Launch "VM-A" without volume.
Please wait until "VM-A"'s status is changed to "ACTIVE".
4. Launch "VM-B" with "volume-A".
(Please specify "block-
5. Kill the nova-compute process while it is stopped at the breakpoint.
(Use the "kill" command. This stands in for an unexpected failure.)
After killing it, restart nova-compute.
6. "VM-B"'s status is changed to "ERROR" from "BUILD" as a result.
"Volume-A"'s status is still "available".
7. Attach "volume-A" to "VM-A" by volume-attach API.
8. Volume-attach API is completed.
"volume-A"'s status is changed to "in-use" from "available".
9. Delete "VM-B".
10. Deleting "VM-B" is completed.
And "volume-A"'s status is changed to "available" from "in-use"!
Even "volume-A" is still attached to "VM-A"!
I think "volume-A"'s status should not be changed by deleting "VM-A" at step-10 because "volume-A" was attached to "VM-B".
summary:
- Unexpected volume-detach happen under a specific condition
+ Volume status will be change "available" in spite of still attached to VM instance
summary:
- Volume status will be change "available" in spite of still attached to VM instance
+ Volume status will be changed to "available" in spite of still attached to VM instance
description: updated
description: updated
Changed in nova:
assignee: nobody → Rikimaru Honjo (honjo-rikimaru-c6)
Changed in nova:
assignee: Rikimaru Honjo (honjo-rikimaru-c6) → nobody
tags: added: volumes
tags: added: compute
Changed in nova:
assignee: nobody → Rikimaru Honjo (honjo-rikimaru-c6)
Changed in nova:
status: Incomplete → Opinion
status: Opinion → Incomplete
tags: removed: needs-attention
tags: added: mitaka-backport-potential
description: updated
Rikimaru-san,
The case you are describing looks unreproducible to me.
The code snippet posted by you
volume_bdm = self._create_volume_bdm(
    context, instance, device, volume_id, disk_bus=disk_bus,
    device_type=device_type)
+ raise Exception
would raise an exception at that point, which won't be handled, as there is no corresponding exception-handling code. As such, nova wouldn't get any further.
Assuming the first exception was not present and only the second one was, a proper cleanup would be done and the BDM database entry would be destroyed.
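To make the cleanup pattern described above concrete, here is a hedged, self-contained illustration; it is not the actual nova code, and the BlockDeviceMapping stand-in and do_attach callback are invented for the example.
-------
# Illustration only (not nova source): create a BDM record, and destroy it
# again if the attach work that follows raises, so no stale entry survives.
class BlockDeviceMapping(object):
    """Stand-in for nova's BDM object with an illustrative destroy()."""
    def __init__(self, instance_uuid, volume_id, device):
        self.instance_uuid = instance_uuid
        self.volume_id = volume_id
        self.device = device

    def destroy(self):
        print('BDM for volume %s destroyed' % self.volume_id)

def attach_volume(instance_uuid, volume_id, device, do_attach):
    bdm = BlockDeviceMapping(instance_uuid, volume_id, device)
    try:
        do_attach(bdm)      # the real attach work; may raise
    except Exception:
        bdm.destroy()       # proper cleanup: remove the just-created entry
        raise
    return bdm

def failing_attach(bdm):
    raise RuntimeError('attach failed')

try:
    attach_volume('vm-a-uuid', 'volume-a-uuid', '/dev/vdb', failing_attach)
except RuntimeError:
    pass  # the BDM stand-in was destroyed before the exception propagated
-------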
Can I please ask whether you have faced such a situation anywhere without the code hack?