On most of the xen deploys I've seen, the 'root_device_name' attribute on the instance is None, and instance_block_mapping short-circuits the return path without evaluating bdms.
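A rough sketch of that short-circuit (paraphrased from the shape of the helper, not the exact Nova source; _DEFAULT_MAPPINGS here just stands in for whatever default the real code falls back to):

def instance_block_mapping(instance, bdms):
    root_device_name = instance['root_device_name']
    if root_device_name is None:
        # Short-circuit: with no root_device_name on the instance we
        # bail out with the defaults, and the bdms passed in are never
        # evaluated -- any volume that's already attached is invisible
        # to device-name selection.
        return _DEFAULT_MAPPINGS
    # ... only past this point would the bdms actually be walked ...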
This is the attach request, after do_reserve has already returned the selected "mountpoint" (it's not really a *mount*, is it?).
2012-10-04 23:07:49 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-45592fb5-143b-4f40-a168-d1ff0c0c0aa4', u'_context_quota_class': None, u'_context_project_name': u'openStackCastleLab', u'_context_service_catalog': None, u'_context_user_name': u'clayg', u'_context_auth_token': '<SANITIZED>', u'args': {u'mountpoint': u'/dev/xvdb', u'instance': {u'vm_state': u'active', u'availability_zone': None, u'terminated_at': None, u'ephemeral_gb': 0, u'instance_type_id': 1, u'user_data': None, u'vm_mode': u'xen', u'deleted_at': None, u'reservation_id': u'r-pax0p2jn', u'id': 1, u'security_groups': [{u'deleted_at': None, u'user_id': u'ed4bd089a2f449dca0828a2c42dbfb77', u'name': u'default', u'deleted': False, u'created_at': u'2012-10-04T01:29:31.000000', u'updated_at': None, u'rules': [], u'project_id': u'3f4884d9c31d4393a11158e09b816a5b', u'id': 1, u'description': u'default'}], u'disable_terminate': False, u'user_id': u'ed4bd089a2f449dca0828a2c42dbfb77', u'uuid': u'6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca', u'server_name': None, u'default_swap_device': None, u'info_cache': {u'instance_uuid': u'6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca', u'deleted': False, u'created_at': u'2012-10-04T01:29:31.000000', u'updated_at': u'2012-10-04T01:29:34.000000', u'network_info': u'[{"network": {"bridge": "xenbr0", "subnets": [{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.127.0.130"}], "version": 4, "meta": {}, "dns": [{"meta": {}, "version": 4, "type": "dns", "address": "10.6.23.4"}, {"meta": {}, "version": 4, "type": "dns", "address": "10.6.23.5"}], "routes": [], "cidr": "10.127.0.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.127.0.1"}}, {"ips": [], "version": null, "meta": {}, "dns": [], "routes": [], "cidr": null, "gateway": {"meta": {}, "version": null, "type": "gateway", "address": null}}], "meta": {"tenant_id": null}, "id": "130460c1-8dd6-45d0-94bb-870777dc1f21", "label": "public"}, "meta": {}, "id": "0b8f347d-b301-439e-8b88-274a43f7cc2e", "address": "fa:16:3e:2a:89:f7"}]', u'deleted_at': None, u'id': 1}, u'hostname': u'test01', u'launched_on': u'localhost.localdomain', u'display_description': u'test01', u'key_data': None, u'kernel_id': u'', u'power_state': 1, u'default_ephemeral_device': None, u'progress': 100, u'project_id': u'3f4884d9c31d4393a11158e09b816a5b', u'launched_at': u'2012-10-04T01:34:19.000000', u'scheduled_at': u'2012-10-04T01:29:31.000000', u'ramdisk_id': u'', u'access_ip_v6': None, u'access_ip_v4': None, u'deleted': False, u'key_name': None, u'updated_at': u'2012-10-04T01:35:30.000000', u'host': u'localhost.localdomain', u'display_name': u'test01', u'task_state': None, u'shutdown_terminate': False, u'architecture': None, u'root_gb': 40, u'locked': False, u'name': u'instance-6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca', u'created_at': u'2012-10-04T01:29:31.000000', u'launch_index': 0, u'metadata': [], u'memory_mb': 4096, u'instance_type': {u'disabled': False, u'root_gb': 40, u'deleted_at': None, u'name': u'm1.medium', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 4096, u'vcpus': 2, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'flavorid': u'3', u'vcpu_weight': None, u'id': 1}, u'vcpus': 2, u'image_ref': u'6a86087a-2c71-4f50-90ae-1b884eb6bb22', u'root_device_name': None, u'auto_disk_config': None, u'os_type': None, u'config_drive': u''}, u'volume_id': u'bf5ddd27-6f8e-427b-b0a1-e66ad67c6bce'}, u'_context_instance_lock_checked': False, u'_context_is_admin': True, u'version': u'2.0', u'_context_project_id': u'3f4884d9c31d4393a11158e09b816a5b', u'_context_timestamp': u'2012-10-04T23:07:49.337942', u'_context_read_deleted': u'no', u'_context_user_id': u'ed4bd089a2f449dca0828a2c42dbfb77', u'method': u'attach_volume', u'_context_remote_address': u'10.6.61.108'} from (pid=4897) _safe_log /opt/nova/nova/openstack/common/rpc/common.py:195
Even though this instance already had an "auto device_name" volume attached, do_reserve is returning (and the attach call is sending):
'mountpoint': u'/dev/xvdb'
And the problem seems mostly to be that the instance has:
'root_device_name': None
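Since the bdms are never evaluated, the reserve path can't see that device 1 is already taken, and it hands out /dev/xvdb a second time. In XenAPI terms the failure then looks something like this (hypothetical refs, just to illustrate the failure mode):

# The first volume already owns userdevice '1' (/dev/xvdb) on this VM,
# but the reserve path never saw that bdm, so /dev/xvdb is re-selected
# and becomes userdevice '1' again when the VBD record is built:
vbd_rec = {
    'VM': vm_ref,        # hypothetical refs for illustration
    'VDI': vdi_ref,
    'userdevice': '1',   # /dev/xvdb -> device number 1
    'bootable': False,
}
session.call_xenapi('VBD.create', vbd_rec)
# -> XenAPI.Failure: ['DEVICE_ALREADY_EXISTS', '1']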
Which unsurprisingly leads to a traceback:
2012-10-04 23:08:59 ERROR nova.virt.xenapi.volumeops [req-c6af01f8-1d42-4f93-ada0-4fc217c56bdc ed4bd089a2f449dca0828a2c42dbfb77 3f4884d9c31d4393a11158e09b816a5b] ['DEVICE_ALREADY_EXISTS', '1']
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops Traceback (most recent call last):
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops   File "/opt/nova/nova/virt/xenapi/volumeops.py", line 178, in attach_volume
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops     dev_number, bootable=False)
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops   File "/opt/nova/nova/virt/xenapi/vm_utils.py", line 332, in create_vbd
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops     vbd_ref = session.call_xenapi('VBD.create', vbd_rec)
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops   File "/opt/nova/nova/virt/xenapi/driver.py", line 714, in call_xenapi
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops     return session.xenapi_request(method, args)
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops   File "/usr/local/lib/python2.6/dist-packages/XenAPI.py", line 133, in xenapi_request
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops     result = _parse_result(getattr(self, methodname)(*full_params))
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops   File "/usr/local/lib/python2.6/dist-packages/XenAPI.py", line 203, in _parse_result
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops     raise Failure(result['ErrorDescription'])
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops Failure: ['DEVICE_ALREADY_EXISTS', '1']
2012-10-04 23:08:59 TRACE nova.virt.xenapi.volumeops
There's some more in the error handling:
2012-10-04 23:09:00 ERROR nova.compute.manager [req-c6af01f8-1d42-4f93-ada0-4fc217c56bdc ed4bd089a2f449dca0828a2c42dbfb77 3f4884d9c31d4393a11158e09b816a5b] [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca] Failed to attach volume 8d8b78b8-bff3-48fe-be34-a3d589076028 at /dev/xvdb
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca] Traceback (most recent call last):
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca]   File "/opt/nova/nova/compute/manager.py", line 2001, in _attach_volume
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca]     mountpoint)
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca]   File "/opt/nova/nova/virt/xenapi/driver.py", line 381, in attach_volume
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca]     mountpoint)
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca]   File "/opt/nova/nova/virt/xenapi/volumeops.py", line 183, in attach_volume
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca]     ' instance %(instance_name)s') % locals())
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca] Exception: Unable to use SR OpaqueRef:fd0c55a7-7584-2907-4791-654dd06428ed for instance instance-6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca
2012-10-04 23:09:00 TRACE nova.compute.manager [instance: 6819fd4d-e1db-4cdf-86f3-3cb8594eb1ca]