Instance can't run normally after volume migration (with swap volume) fails.
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | New | Undecided | Unassigned |
OpenStack Compute (nova) | Confirmed | Medium | Unassigned |
Bug Description
Steps to reproduce:
1. Create a volume from an image.
[root@2C5_10_DELL05 ~(keystone_admin)]# cinder create --image-id fd8330b3-
+------
| Property | Value |
+------
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-02-
| description | None |
| encrypted | False |
| id | a0dae16a-
| metadata | {} |
| multiattach | False |
| name | test_image_volume |
| os-vol-
| os-vol-
| os-vol-
| os-vol-
| os-volume-
| os-volume-
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 8b34e1ab75024fc
| volume_type | KSIP |
+------
2. Boot an instance from the volume created in step 1.
[root@2C5_10_DELL05 ~(keystone_admin)]# nova boot --flavor 1 --block-device id=a0dae16a-
+------
| Property | Value |
+------
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-STS:vm_state | building |
| OS-SRV-
| OS-SRV-
| accessIPv4 | |
| accessIPv6 | |
| adminPass | JEeW4BR4WL3a |
| autostart | TRUE |
| boot_index_type | |
| config_drive | |
| created | 2016-02-
| flavor | m1.tiny (1) |
| hostId | |
| id | a740b3da-
| image | Attempt to boot from volume - no image supplied |
| key_name | - |
| metadata | {} |
| move | TRUE |
| name | test_vm |
| novnc | TRUE |
| os-extended-
| priority | 50 |
| progress | 0 |
| qos | |
| security_groups | default |
| status | BUILD |
| tenant_id | 181a578bc97642f
| updated | 2016-02-
| user_id | 8b34e1ab75024fc
+------
3. Migrate the in-use volume.
[root@2C5_10_DELL05 ~(keystone_admin)]# cinder migrate a0dae16a-
4. The volume migration fails; nova-compute.log shows:
2016-02-27 11:32:47.986 29370 TRACE oslo_messaging.
[...]
5. The instance still shows as running/ACTIVE, but after logging into the virtual machine, the guest OS filesystem has been remounted read-only.
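One way to confirm the read-only symptom from inside the guest is to inspect the mount options reported in /proc/mounts. A minimal sketch (the device path /dev/vda1 and the helper name are illustrative, not from the original report):

```shell
# Parse a /proc/mounts-style line and report whether the mount is read-only.
# Field 4 holds the comma-separated mount options (e.g. "ro,relatime").
check_mount_flags() {
  opts=$(printf '%s\n' "$1" | awk '{print $4}')
  case ",$opts," in
    *,ro,*) echo ro ;;
    *)      echo rw ;;
  esac
}

# In the affected guest, the root entry would look like this:
check_mount_flags "/dev/vda1 / ext4 ro,relatime 0 0"   # prints "ro"
```

On a healthy guest, `grep ' / ' /proc/mounts` should show `rw` among the options instead.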
Changed in nova: | |
assignee: | nobody → YaoZheng_ZTE (zheng-yao1) |
summary: |
- After migrate volume being attached instance, the instance cann't run
- normally
+ Instance can't run normally after volume migration (with swap volume)
+ fails. |
Changed in nova: | |
status: | Incomplete → Confirmed |
importance: | Undecided → Medium |
Changed in nova: | |
status: | Confirmed → In Progress |
The failure in step 4 itself is not the concern; it was caused by a problem in my own environment. The point is that even when a volume migration fails, Nova should roll back to a normal state and do its best to keep the virtual machine's storage usable. To fix this, the exception raised by disconnect_volume should be caught and handled.
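The proposed handling can be sketched as follows. This is a minimal illustration of the idea, not actual nova code: the driver class, method signature, and helper name are hypothetical, standing in for the cleanup path that calls disconnect_volume after a failed swap_volume.

```python
import logging

LOG = logging.getLogger(__name__)


def swap_volume_cleanup(virt_driver, connection_info, instance):
    """Detach the old volume after a (possibly failed) swap_volume.

    Hypothetical sketch: a disconnect_volume failure is logged and
    swallowed instead of propagating, so the rollback completes and
    the running instance keeps a usable disk. Any leaked host-side
    connection can be cleaned up out of band later.
    """
    try:
        virt_driver.disconnect_volume(connection_info, instance)
    except Exception:
        LOG.exception("disconnect_volume failed during swap_volume "
                      "cleanup; continuing so the instance stays usable")
```

With this guard in place, an error while tearing down the old attachment no longer aborts the rollback path, which is what left the guest with a read-only filesystem in the scenario above.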