RemoteFsConnector doesn't unmount volume on disconnect
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | New | Medium | Unassigned |
Bug Description
I am developing a Cinder driver and noticed that every time I create a volume from an image, an NFS mount of the backing volume is left behind. This causes trouble because my driver manages all of its mounts and knows nothing about this one, so when the volume is deleted we end up with a dangling mount.
After some digging I traced the problem to os_brick.
Here is the relevant code:
def connect_volume(self, connection_properties):
    """Ensure that the filesystem containing the volume is mounted.

    :param connection_properties: The dictionary that describes all
                                  of the target volume attributes.
           connection_properties must include:
           export - remote filesystem device (e.g. '172.18.194.100:/var/nfs')
           name - file name within the filesystem
    :type connection_properties: dict
    :returns: dict

    connection_properties may optionally include:
    options - options to pass to mount
    """
    path = self._get_volume_path(connection_properties)
    return {'path': path}

def disconnect_volume(self, connection_properties, device_info):
    """No need to do anything to disconnect a volume in a filesystem.

    :param connection_properties: The dictionary that describes all
                                  of the target volume attributes.
    :type connection_properties: dict
    :param device_info: historical difference, but same as connection_props
    :type device_info: dict
    """
So connect_volume mounts the filesystem, but disconnect_volume never unmounts it, leaving the export mounted forever. That is wrong for any driver that maintains its own mounts, because the driver knows nothing about this one.
IMO disconnect_volume should actually unmount it.
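A minimal sketch of what an unmounting disconnect_volume could look like. This assumes the connector can invoke umount through its root helper via self._execute, as other os_brick connectors do, and that get_mount_point on the RemoteFsClient resolves the share's mount point; it is an illustration, not a tested patch:

def disconnect_volume(self, connection_properties, device_info):
    """Unmount the filesystem that connect_volume mounted.

    :param connection_properties: same dict as passed to connect_volume
    :type connection_properties: dict
    :param device_info: historical difference, but same as connection_props
    :type device_info: dict
    """
    nfs_share = connection_properties['export']
    mount_point = self._remotefsclient.get_mount_point(nfs_share)
    # Unmounting requires root, hence the root helper.
    self._execute('umount', mount_point,
                  run_as_root=True, root_helper=self._root_helper)

As written, this unmounts unconditionally, which is exactly the racy behavior the comment below warns about: another attachment may still be using the same share.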
tags: added: os-brick remotefs
Changed in cinder:
importance: Undecided → Medium
Changed in cinder:
assignee: nobody → Sachin Yede (yede-sachin45)
The main reason the NFS exports are left mounted is that it is hard to determine, in a non-racy way, when it is safe to unmount them.
Nova solves this by attempting the unmount and letting it silently fail if the mount is in use. I think doing that in Cinder may introduce more issues than it fixes, though, because there is a race between the in-use check and another operation starting to use the mount point.
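For reference, the Nova-style best-effort approach described above would look roughly like the sketch below: attempt the umount and swallow the failure when the kernel reports the mount as busy. The helper name and the error-string matching are illustrative assumptions, not Nova's exact code:

from oslo_concurrency import processutils

def _best_effort_umount(self, mount_point):
    """Try to unmount; silently tolerate 'in use' failures.

    If another attachment still holds the mount, umount fails with
    EBUSY and we simply leave the share mounted for that user.
    """
    try:
        self._execute('umount', mount_point,
                      run_as_root=True, root_helper=self._root_helper)
    except processutils.ProcessExecutionError as exc:
        # 'device is busy' / 'target is busy' means another user holds
        # the mount; that is expected and safe to ignore.
        if 'busy' not in (exc.stderr or ''):
            raise

Even with this, a window remains between the busy check inside umount and another operation deciding to reuse the mount point, which is the race the comment is concerned about.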