2016-05-31 07:28:13 |
Rikimaru Honjo |
bug |
|
|
added bug |
2016-06-03 06:34:26 |
Prateek Arora |
nova: status |
New |
Incomplete |
|
2016-06-06 06:45:05 |
Rikimaru Honjo |
summary |
Unexpected volume-detach happen under a specific condition |
Volume status will be change "available" in spite of still attached to VM instance |
|
2016-06-06 06:46:37 |
Rikimaru Honjo |
summary |
Volume status will be change "available" in spite of still attached to VM instance |
Volume status will be changed to "available" in spite of still attached to VM instance |
|
2016-06-06 06:55:07 |
Rikimaru Honjo |
description |
[Summary]
An unexpected volume detach happens under a specific condition.
[Version]
Later than 13.0.0
[Steps to reproduce]
1. Attach "volume-A" to "VM-A" by the volume-attach API.
e.g.
$ nova volume-attach <VM-A id> <volume-A id> /dev/vdb
2. The volume-attach API fails in nova-api for an unexpected reason (e.g. nova-api is terminated), and destroying the block_device_mapping record fails too.
"volume-A"'s status is still "available".
* If you want to reproduce this, apply the following code instead of terminating nova-api.
(Terminating nova-api with exactly the right timing is very difficult.)
-------------------------------------------------------
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -3108,10 +3108,12 @@ class API(base.Base):
volume_bdm = self._create_volume_bdm(
context, instance, device, volume_id, disk_bus=disk_bus,
device_type=device_type)
+ raise Exception
try:
self._check_attach_and_reserve_volume(context, volume_id, instance)
self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
except Exception:
+ raise Exception
with excutils.save_and_reraise_exception():
volume_bdm.destroy()
-------------------------------------------------------
* Before doing step 3, remove the applied code and restart nova-api.
3. Attach "volume-A" to "VM-B" by the volume-attach API.
e.g.
$ nova volume-attach <VM-B id> <volume-A id> /dev/vdb
4. The volume-attach API completes.
"volume-A"'s status changes from "available" to "in-use".
5. Delete "VM-A".
e.g.
$ nova delete <VM-A id>
6. Deleting "VM-A" completes.
And "volume-A"'s status changes from "in-use" back to "available"!
I think "volume-A"'s status should not be changed by deleting "VM-A" in step 6, because "volume-A" is attached to "VM-B". |
[Summary]
The volume status will be changed to "available" even though the volume is still attached to a VM instance.
[Version]
Later than 13.0.0
[Impact]
Under a specific condition, a volume's status will be changed to "available" even though it is still attached to a VM instance.
In this case, the guest OS of the VM instance can still perform I/O on the volume.
If this volume is then attached to another VM instance, the volume data may be corrupted.
[Steps to reproduce]
* (06/06/2016) I corrected the steps based on the description in comment #3.
1. Add the following break-point to nova-api.
-------------------------------------------------------
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -3108,10 +3108,12 @@ class API(base.Base):
volume_bdm = self._create_volume_bdm(
context, instance, device, volume_id, disk_bus=disk_bus,
device_type=device_type)
try:
+ import pdb;pdb.set_trace()
self._check_attach_and_reserve_volume(context, volume_id, instance)
self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
except Exception:
with excutils.save_and_reraise_exception():
volume_bdm.destroy()
-------------------------------------------------------
2. Launch two nova-api processes, as in a high-availability setup. (*1)
(The two processes use the same DB and listen on different addresses and ports.)
3. Attach "volume-A" to "VM-A" by the volume-attach API.
4. Kill the nova-api process that received the volume-attach request while it is stopped at the break-point.
As a result, "volume-A"'s status is still "available".
5. Attach "volume-A" to "VM-B" by the volume-attach API.
(Send the request to the nova-api process that was not killed.)
6. Press "c" at the break-point to continue the volume-attach.
7. The volume-attach API completes.
"volume-A"'s status changes from "available" to "in-use".
8. Delete "VM-A".
(Send the request to the nova-api process that was not killed.)
9. Deleting "VM-A" completes.
And "volume-A"'s status changes from "in-use" back to "available"!
*1: If there is only one nova-api process, the remaining BDM record will be cleaned up when nova-api is restarted.
I think "volume-A"'s status should not be changed by deleting "VM-A" in step 9, because "volume-A" is attached to "VM-B". |
|
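To make the failure mode in the entry above easier to follow, here is a minimal, self-contained Python sketch. It is not actual Nova code; FakeCinder, delete_instance and the BDM dictionaries are invented for illustration. The failed attach leaves a block_device_mapping (BDM) row pointing "VM-A" at "volume-A"; deleting "VM-A" later processes that stale row and flips the volume back to "available" even though the volume is by then attached to "VM-B".
-------------------------------------------------------
class FakeCinder(object):
    """Illustrative stand-in for Cinder: it keeps a single status per volume."""

    def __init__(self):
        self.volume_status = {}

    def attach(self, volume_id):
        self.volume_status[volume_id] = 'in-use'

    def detach(self, volume_id):
        # Cinder is only told "detach this volume"; it cannot tell that the
        # BDM being cleaned up belonged to a different, failed attachment.
        self.volume_status[volume_id] = 'available'


def delete_instance(bdm_rows, instance, cinder):
    """Delete an instance and clean up every BDM row that references it."""
    for bdm in bdm_rows:
        if bdm['instance'] == instance:
            cinder.detach(bdm['volume_id'])


cinder = FakeCinder()
bdm_rows = [
    {'instance': 'VM-A', 'volume_id': 'volume-A'},  # stale row left by the failed attach
    {'instance': 'VM-B', 'volume_id': 'volume-A'},  # current, real attachment
]
cinder.attach('volume-A')                    # the attach to "VM-B" succeeds
delete_instance(bdm_rows, 'VM-A', cinder)    # deleting "VM-A" processes the stale row
print(cinder.volume_status['volume-A'])      # prints "available" -> the reported bug
-------------------------------------------------------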
2016-06-06 07:44:35 |
Rikimaru Honjo |
description |
[Summary]
The volume status will be changed to "available" even though the volume is still attached to a VM instance.
[Version]
Later than 13.0.0
[Impact]
Under a specific condition, a volume's status will be changed to "available" even though it is still attached to a VM instance.
In this case, the guest OS of the VM instance can still perform I/O on the volume.
If this volume is then attached to another VM instance, the volume data may be corrupted.
[Steps to reproduce]
* (06/06/2016) I corrected the steps based on the description in comment #3.
1. Add the following break-point to nova-api.
-------------------------------------------------------
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -3108,10 +3108,12 @@ class API(base.Base):
volume_bdm = self._create_volume_bdm(
context, instance, device, volume_id, disk_bus=disk_bus,
device_type=device_type)
try:
+ import pdb;pdb.set_trace()
self._check_attach_and_reserve_volume(context, volume_id, instance)
self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
except Exception:
with excutils.save_and_reraise_exception():
volume_bdm.destroy()
-------------------------------------------------------
2. Launch two nova-api processes, as in a high-availability setup. (*1)
(The two processes use the same DB and listen on different addresses and ports.)
3. Attach "volume-A" to "VM-A" by the volume-attach API.
4. Kill the nova-api process that received the volume-attach request while it is stopped at the break-point.
As a result, "volume-A"'s status is still "available".
5. Attach "volume-A" to "VM-B" by the volume-attach API.
(Send the request to the nova-api process that was not killed.)
6. Press "c" at the break-point to continue the volume-attach.
7. The volume-attach API completes.
"volume-A"'s status changes from "available" to "in-use".
8. Delete "VM-A".
(Send the request to the nova-api process that was not killed.)
9. Deleting "VM-A" completes.
And "volume-A"'s status changes from "in-use" back to "available"!
*1: If there is only one nova-api process, the remaining BDM record will be cleaned up when nova-api is restarted.
I think "volume-A"'s status should not be changed by deleting "VM-A" in step 9, because "volume-A" is attached to "VM-B". |
[Summary]
The volume status will be changed to "available" even though the volume is still attached to a VM instance.
[Version]
Later than 13.0.0
[Impact]
Under a specific condition, a volume's status will be changed to "available" even though it is still attached to a VM instance.
In this case, the guest OS of the VM instance can still perform I/O on the volume.
If this volume is then attached to another VM instance, the volume data may be corrupted by I/O from both VM instances.
[Steps to reproduce]
* (06/06/2016) I corrected the steps based on the description in comment #3.
1. Add the following break-point to nova-api.
-------------------------------------------------------
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -3108,10 +3108,12 @@ class API(base.Base):
volume_bdm = self._create_volume_bdm(
context, instance, device, volume_id, disk_bus=disk_bus,
device_type=device_type)
try:
+ import pdb;pdb.set_trace()
self._check_attach_and_reserve_volume(context, volume_id, instance)
self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
except Exception:
with excutils.save_and_reraise_exception():
volume_bdm.destroy()
-------------------------------------------------------
2. Launch two nova-api processes, as in a high-availability setup. (*1)
(The two processes use the same DB and listen on different addresses and ports.)
3. Attach "volume-A" to "VM-A" by the volume-attach API.
4. Kill the nova-api process that received the volume-attach request while it is stopped at the break-point.
As a result, "volume-A"'s status is still "available".
5. Attach "volume-A" to "VM-B" by the volume-attach API.
(Send the request to the nova-api process that was not killed.)
6. Press "c" at the break-point to continue the volume-attach.
7. The volume-attach API completes.
"volume-A"'s status changes from "available" to "in-use".
8. Delete "VM-A".
(Send the request to the nova-api process that was not killed.)
9. Deleting "VM-A" completes.
And "volume-A"'s status changes from "in-use" back to "available"!
*1: If there is only one nova-api process, the remaining BDM record will be cleaned up when nova-api is restarted.
I think "volume-A"'s status should not be changed by deleting "VM-A" in step 9, because "volume-A" is attached to "VM-B". |
|
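The diff in the entry above places the break-point between creating the BDM record and the try/except block that would destroy it on failure. A rough sketch of that pattern follows (oslo_utils.excutils is the real library used in the diff; FakeBDM and simulate_attach are invented names): the cleanup only runs if the process survives long enough to reach the except block, which is why killing nova-api at the break-point leaves the record behind.
-------------------------------------------------------
from oslo_utils import excutils


class FakeBDM(object):
    """Illustrative stand-in for the block_device_mapping record."""

    def __init__(self):
        self.destroyed = False

    def destroy(self):
        self.destroyed = True


def simulate_attach(killed_at_breakpoint=False, rpc_fails=True):
    bdm = FakeBDM()          # the record is created before the try block
    if killed_at_breakpoint:
        # nova-api dies here: the except block below never runs,
        # so the BDM record stays in the database.
        raise SystemExit('nova-api killed at the break-point')
    try:
        if rpc_fails:
            raise RuntimeError('attach_volume RPC failed')
    except Exception:
        with excutils.save_and_reraise_exception():
            bdm.destroy()    # cleanup happens only on this code path
    return bdm
-------------------------------------------------------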
2016-06-06 10:17:25 |
Rikimaru Honjo |
nova: assignee |
|
Rikimaru Honjo (honjo-rikimaru-c6) |
|
2016-06-06 11:07:38 |
Rikimaru Honjo |
nova: assignee |
Rikimaru Honjo (honjo-rikimaru-c6) |
|
|
2016-06-08 02:41:51 |
Takashi Natsume |
tags |
|
volumes |
|
2016-06-08 02:43:25 |
Takashi Natsume |
tags |
volumes |
compute volumes |
|
2016-06-15 01:16:40 |
Rikimaru Honjo |
nova: assignee |
|
Rikimaru Honjo (honjo-rikimaru-c6) |
|
2016-08-10 01:59:00 |
Rikimaru Honjo |
nova: status |
Incomplete |
Opinion |
|
2016-08-10 01:59:11 |
Rikimaru Honjo |
nova: status |
Opinion |
Incomplete |
|
2016-08-12 09:42:38 |
OpenStack Infra |
nova: status |
Incomplete |
In Progress |
|
2016-08-12 20:42:43 |
Maciej Szankin |
tags |
compute volumes |
compute needs-attention volumes |
|
2016-08-15 14:44:55 |
John Garbutt |
nova: importance |
Undecided |
Low |
|
2016-08-15 14:45:21 |
John Garbutt |
nova: importance |
Low |
Medium |
|
2016-08-16 05:28:40 |
Maciej Szankin |
tags |
compute needs-attention volumes |
compute volumes |
|
2016-08-23 10:58:40 |
Rikimaru Honjo |
tags |
compute volumes |
compute mitaka-backport-potential volumes |
|
2016-09-15 06:14:58 |
Rikimaru Honjo |
description |
[Summary]
The volume status will be changed to "available" even though the volume is still attached to a VM instance.
[Version]
Later than 13.0.0
[Impact]
Under a specific condition, a volume's status will be changed to "available" even though it is still attached to a VM instance.
In this case, the guest OS of the VM instance can still perform I/O on the volume.
If this volume is then attached to another VM instance, the volume data may be corrupted by I/O from both VM instances.
[Steps to reproduce]
* (06/06/2016) I corrected the steps based on the description in comment #3.
1. Add the following break-point to nova-api.
-------------------------------------------------------
--- a/nova/compute/api.py
+++ b/nova/compute/api.py
@@ -3108,10 +3108,12 @@ class API(base.Base):
volume_bdm = self._create_volume_bdm(
context, instance, device, volume_id, disk_bus=disk_bus,
device_type=device_type)
try:
+ import pdb;pdb.set_trace()
self._check_attach_and_reserve_volume(context, volume_id, instance)
self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
except Exception:
with excutils.save_and_reraise_exception():
volume_bdm.destroy()
-------------------------------------------------------
2. Launch two nova-api processes, as in a high-availability setup. (*1)
(The two processes use the same DB and listen on different addresses and ports.)
3. Attach "volume-A" to "VM-A" by the volume-attach API.
4. Kill the nova-api process that received the volume-attach request while it is stopped at the break-point.
As a result, "volume-A"'s status is still "available".
5. Attach "volume-A" to "VM-B" by the volume-attach API.
(Send the request to the nova-api process that was not killed.)
6. Press "c" at the break-point to continue the volume-attach.
7. The volume-attach API completes.
"volume-A"'s status changes from "available" to "in-use".
8. Delete "VM-A".
(Send the request to the nova-api process that was not killed.)
9. Deleting "VM-A" completes.
And "volume-A"'s status changes from "in-use" back to "available"!
*1: If there is only one nova-api process, the remaining BDM record will be cleaned up when nova-api is restarted.
I think "volume-A"'s status should not be changed by deleting "VM-A" in step 9, because "volume-A" is attached to "VM-B". |
* (06/06/2016) I corrected the steps to reproduce based on the description in comment #3.
* (15/09/2016) I improved the steps to reproduce based on the description in comment #5.
[Summary]
The volume status will be changed to "available" even though the volume is still attached to a VM instance.
[Version]
Later than 13.0.0
[Impact]
Under a specific condition, a volume's status will be changed to "available" even though it is still attached to a VM instance.
In this case, the guest OS of the VM instance can still perform I/O on the volume.
If this volume is then attached to another VM instance, the volume data may be corrupted by I/O from both VM instances.
[Steps to reproduce]
1. Create a volume named "volume-A".
2. Add the following break-point to nova-compute, then restart nova-compute.
-------------------------------------------------------
diff --git a/nova/compute/manager.py b/nova/compute/manager.py
index 9783d39..948a02e 100644
--- a/nova/compute/manager.py
+++ b/nova/compute/manager.py
def _build_and_run_instance(self, context, instance, image, injected_files,
admin_password, requested_networks, security_groups,
block_device_mapping, node, limits, filter_properties):
[...]
self._validate_instance_group_policy(context, instance,
filter_properties)
image_meta = objects.ImageMeta.from_dict(image)
+ import pdb;pdb.set_trace()
with self._build_resources(context, instance,
requested_networks, security_groups, image_meta,
block_device_mapping) as resources:
-------------------------------------------------------
3. Launch "VM-A" without a volume.
Wait until "VM-A"'s status changes to "ACTIVE".
4. Launch "VM-B" with "volume-A".
(Specify the "block-device-mapping" option.)
5. Kill the nova-compute process while it is stopped at the break-point.
(Use the "kill" command; this stands in for an unexpected failure.)
After killing it, restart nova-compute.
6. As a result, "VM-B"'s status changes from "BUILD" to "ERROR".
"volume-A"'s status is still "available".
7. Attach "volume-A" to "VM-A" by the volume-attach API.
8. The volume-attach API completes.
"volume-A"'s status changes from "available" to "in-use".
9. Delete "VM-B".
10. Deleting "VM-B" completes.
And "volume-A"'s status changes from "in-use" back to "available"!
Even though "volume-A" is still attached to "VM-A"!
I think "volume-A"'s status should not be changed by deleting "VM-B" in step 10, because "volume-A" is attached to "VM-A". |
|
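For reference, the intermediate volume states described in the steps above can be checked with the standard clients (a usage sketch, assuming python-novaclient and python-cinderclient are installed; the IDs are placeholders as in the steps).
e.g.
$ cinder show <volume-A id> | grep " status "      # shows "available" or "in-use"
$ nova volume-attachments <VM-A id>                 # lists volumes attached to VM-A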
2016-09-15 21:32:16 |
Chet Burgess |
bug |
|
|
added subscriber Chet Burgess |
2017-08-01 10:17:00 |
Sean Dague |
nova: status |
In Progress |
Incomplete |
|
2017-08-01 10:17:03 |
Sean Dague |
nova: assignee |
Rikimaru Honjo (honjo-rikimaru-c6) |
|
|
2017-08-02 09:40:02 |
Rikimaru Honjo |
nova: assignee |
|
Rikimaru Honjo (honjo-rikimaru-c6) |
|