That's one of the TestMultiAttachVolumeSwap servers being deleted, and the volume BDM that represents the multiattach volume is already showing up as deleted, so we don't detach it, but it's not yet clear to me what is deleting it.
I have a suspicion that in the original scenario:
a) volume1 attached to server1 and server2
b) swap volume1 to volume2 on server1
c) delete server1 and server2
d) delete volume1 fails because it's not disconnected
There is something going on with step (b) where server1 is linked back to both volume1 and volume2, and when deleting server1 we delete the volume1 BDM even though we fail to disconnect it (because it's still attached to server2).
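To make the suspected ordering concrete, here is a minimal, purely illustrative Python sketch. This is NOT Nova's actual code; the `BDM` class, `delete_server` function, and the volume/server names are made up for the example. It only models the hypothesis: on server delete, the BDM row gets destroyed unconditionally, even when the volume was not disconnected on the host because another server still has it attached.

```python
# Hypothetical model of the suspected buggy delete path (not Nova code).

class BDM:
    """Toy stand-in for a block device mapping row."""
    def __init__(self, volume_id, instance_uuid):
        self.volume_id = volume_id
        self.instance_uuid = instance_uuid
        self.deleted = False


def active_attachments(volume_id, bdms):
    """All non-deleted BDMs referencing this volume."""
    return [b for b in bdms if b.volume_id == volume_id and not b.deleted]


def delete_server(instance_uuid, all_bdms):
    """Suspected ordering: we skip the host-level disconnect when the
    multiattach volume is still attached to another server, but then
    destroy the BDM record anyway. Returns volume ids we actually
    disconnected."""
    disconnected = []
    for bdm in [b for b in all_bdms if b.instance_uuid == instance_uuid]:
        others = [b for b in active_attachments(bdm.volume_id, all_bdms)
                  if b.instance_uuid != instance_uuid]
        if not others:
            # Safe: no other server uses this volume, so disconnect it.
            disconnected.append(bdm.volume_id)
        # Hypothesized bug: the BDM is destroyed regardless of whether
        # the disconnect actually happened.
        bdm.deleted = True
    return disconnected
```

With volume1 attached to server1 and server2, deleting server1 in this model disconnects nothing (server2 still uses the volume) yet still marks server1's BDM deleted, which matches the "BDMs were already deleted" symptom above.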
With the debug patch I see that we're definitely having some weird DB issue with BDMs during server delete:
logs.openstack.org/78/606978/5/check/tempest-slow/5a90cad/controller/logs/screen-n-api.txt.gz#_Dec_10_20_29_39_767102
Dec 10 20:29:39.767102 ubuntu-xenial-rax-ord-0001105586 <email address hidden>[23323]: ERROR nova.compute.api [None req-04f71fae-15dd-4cc0-b211-4a4f53a7cbc8 tempest-TestMultiAttachVolumeSwap-1722594678 tempest-TestMultiAttachVolumeSwap-1722594678] [instance: c3c9407c-e2af-4d04-94ed-f334844ea6bf] No volume BDMs found for server.
Dec 10 20:29:39.775519 ubuntu-xenial-rax-ord-0001105586 <email address hidden>[23323]: ERROR nova.compute.api [None req-04f71fae-15dd-4cc0-b211-4a4f53a7cbc8 tempest-TestMultiAttachVolumeSwap-1722594678 tempest-TestMultiAttachVolumeSwap-1722594678] [instance: c3c9407c-e2af-4d04-94ed-f334844ea6bf] BDMs were already deleted: [BlockDeviceMapping(attachment_id=None,boot_index=0,connection_info=None,created_at=2018-12-10T20:28:24Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=1,image_id='863afc54-1096-4382-b8f2-6103641a65c1',instance=<?>,instance_uuid=c3c9407c-e2af-4d04-94ed-f334844ea6bf,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=2018-12-10T20:28:25Z,uuid=be69a559-05ee-43bd-8170-2c65cc2a518c,volume_id=None,volume_size=None,volume_type=None), BlockDeviceMapping(attachment_id=be11fe1f-5c65-4a64-a5c6-caa934f564c9,boot_index=None,connection_info='{"status": "reserved", "multiattach": true, "detached_at": "", "volume_id": "26af085c-f977-4508-8bb1-46a57c8f34ed", "attach_mode": "null", "driver_volume_type": "iscsi", "instance": "c3c9407c-e2af-4d04-94ed-f334844ea6bf", "attached_at": "", "serial": "26af085c-f977-4508-8bb1-46a57c8f34ed", "data": {"access_mode": "rw", "target_discovered": false, "encrypted": false, "qos_specs": null, "target_iqn": "iqn.2010-10.org.openstack:volume-26af085c-f977-4508-8bb1-46a57c8f34ed", "target_portal": "10.210.4.21:3260", "volume_id": "26af085c-f977-4508-8bb1-46a57c8f34ed", "target_lun": 1, "device_path": "/dev/sda", "auth_password": "***", "auth_username": "P2uFcW9nEpz3GcuKwqHv", "auth_method": "CHAP"}}',created_at=2018-12-10T20:28:37Z,delete_on_termination=False,deleted=True,deleted_at=2018-12-10T20:29:39Z,destination_type='volume',device_name='/dev/vdb',device_type=None,disk_bus=None,guest_format=None,id=4,image_id=None,instance=<?>,instance_uuid=c3c9407c-e2af-4d04-94ed-f334844ea6bf,no_device=False,snapshot_id=None,source_type='volume',tag=None,updated_at=2018-12-10T20:28:44Z,uuid=15aba0c7-c612-40a4-9445-dfe964764b02,volume_id='26af085c-f977-4508-8bb1-46a57c8f34ed',volume_size=1,volume_type=None)]