Comment 6 for bug 1415778

prameela kapuganti (prameela) wrote :

I request the bug reporter to close this bug, as it was fixed in the Mitaka version. Below is a detailed analysis and the delta between the Kilo and Mitaka versions related to this bug.

My Analysis:

1) Existing code snippet in Kilo:

In the _local_delete method (/opt/stack/nova/nova/compute/api.py):

connector = {'ip': '127.0.0.1', 'initiator': 'iqn.fake'}
try:
    self.volume_api.terminate_connection(context, bdm.volume_id, connector)
    self.volume_api.detach(elevated, bdm.volume_id)

# Here the connector IP is hard-coded to 127.0.0.1 when calling terminate_connection and detach. Since this is a loopback address, it has no effect on the storage server side, so the volume was never actually detached from the instance and its status remained unchanged (which is not the expected behaviour).
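# For contrast, a real connector (as gathered on the compute host, e.g. via os-brick) carries the host's actual IP, hostname and iSCSI initiator, which is what the storage backend needs to locate and remove the export. The values below are purely illustrative placeholders, not taken from the bug report:

    # Illustrative sketch only - exact keys depend on the backend and os-brick version.
    real_connector = {
        'ip': '10.0.0.5',                          # compute host's data-path IP (example)
        'host': 'compute-1',                       # compute host name (example)
        'initiator': 'iqn.1994-05.com.example:1',  # host's iSCSI initiator (example)
    }
    # With the Kilo fake connector ({'ip': '127.0.0.1', 'initiator': 'iqn.fake'}),
    # the backend cannot match any existing export, so terminate_connection is a no-op.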

2) Already fixed in Mitaka in the following way:

# A new method was added to look up the stashed connector, and the fake IP (127.0.0.1) was removed.

In the _get_stashed_volume_connector method (/opt/stack/nova/nova/compute/api.py):
connector = jsonutils.loads(bdm.connection_info).get('connector')
if connector:
    if connector.get('host') == instance.host:
        return connector

# This returns the stashed connector dict from bdm.connection_info, if it is set and the connector's host matches the instance's host. The method is called from _local_cleanup_bdm_volumes before performing terminate_connection and detach; see the sketch below.
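# As a rough sketch of that caller (simplified and not the verbatim Mitaka code; the real method name and argument order are as in nova/compute/api.py, but error handling and BDM destruction are omitted here), terminate_connection is only issued when a valid stashed connector is found, and the detach is performed afterwards:

    def _local_cleanup_bdm_volumes(self, context, instance, bdms):
        # Simplified sketch of the Mitaka-era cleanup path.
        for bdm in bdms:
            if not bdm.is_volume:
                continue
            connector = self._get_stashed_volume_connector(bdm, instance)
            if connector:
                # Only terminate the connection when the stashed connector
                # belongs to the instance's current host.
                self.volume_api.terminate_connection(
                    context, bdm.volume_id, connector)
            self.volume_api.detach(context, bdm.volume_id)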

REFERENCED FILES:

 /opt/stack/nova/nova/compute/api.py
 /opt/stack/nova/nova/compute/manager.py