[Summary]
An unexpected volume detach can happen under a specific condition.
[Version]
Nova releases later than 13.0.0 (Mitaka)
[Steps to reproduce]
1. Attach "volume-A" to "VM-A" with the volume-attach API.
e.g.
$ nova volume-attach <VM-A id> <volume-A id> /dev/vdb
2. The volume-attach API fails in nova-api for an unexpected reason (e.g. nova-api is terminated mid-request), and destroying the block_device_mapping fails too, leaving a stale row behind (see the check after the patch below).
"volume-A"'s status is still "available".
* To reproduce this, apply the following patch instead of terminating nova-api.
(Terminating nova-api with exactly the right timing is very difficult.)
-------------------------------------------------------
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -3108,10 +3108,12 @@ class API(base.Base):
         volume_bdm = self._create_volume_bdm(
             context, instance, device, volume_id, disk_bus=disk_bus,
             device_type=device_type)
+        raise Exception
         try:
             self._check_attach_and_reserve_volume(context, volume_id,
                                                   instance)
             self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
         except Exception:
+            raise Exception
             with excutils.save_and_reraise_exception():
                 volume_bdm.destroy()
-------------------------------------------------------
* Before doing step 3, remove the applied patch and restart nova-api.
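* You can confirm the stale block_device_mapping survived: the attachment should still show up for "VM-A" (a quick check using the existing nova CLI; output details vary by deployment).
-------------------------------------------------------
$ nova volume-attachments <VM-A id>
-------------------------------------------------------
If <volume-A id> is listed here while cinder still reports the volume as "available", the failed destroy left the stale row in place.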
3. Attach "volume-A" to "VM-B" with the volume-attach API.
e.g.
$ nova volume-attach <VM-B id> <volume-A id> /dev/vdb
4. The volume-attach API completes.
"volume-A"'s status changes from "available" to "in-use".
5. Delete "VM-A".
e.g.
$ nova delete <VM-A id>
6. Deleting "VM-A" completes.
And "volume-A"'s status changes from "in-use" back to "available"!
I think "volume-A"'s status should not be changed by deleting "VM-A" in step 6, because "volume-A" is attached to "VM-B". It looks like the delete path walks all of VM-A's block_device_mappings, including the stale row left behind in step 2, and detaches volume-A in Cinder without checking which server the volume is actually attached to.
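One possible direction for a fix (a minimal sketch only, not the actual Nova change; the helper name and the surrounding delete-path pseudocode are hypothetical):
-------------------------------------------------------
# Hypothetical guard: before detaching a volume while deleting an
# instance, check that the Cinder volume's attachment really points
# at the instance being deleted.

def _is_attached_to(volume, instance_uuid):
    # 'attachments' is the list Cinder returns in the volume dict;
    # each entry carries the server_id it is attached to.
    attachments = volume.get('attachments') or []
    return any(att.get('server_id') == instance_uuid
               for att in attachments)

# In the instance delete path, for each bdm of the instance:
#     volume = self.volume_api.get(context, bdm.volume_id)
#     if _is_attached_to(volume, instance.uuid):
#         self.volume_api.detach(context, bdm.volume_id)
#     else:
#         pass  # skip: the volume now belongs to another server
-------------------------------------------------------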