Hello,
I'm not sure the information I'm reporting here is really related to this bug, but the problem I'm facing is very similar, at least in the observable results.
I created an Instance (Ubuntu 14.04 OS) and a Volume with cinder.
I attached the volume to the VM.
Then, in the instance, I created a logical volume using the attached cinder volume.
In order to do that, I submitted the following commands at the instance level:
$ sudo su
# apt-get -y update
# apt-get -y install lvm2
# apt-get -y install xfsprogs
# vgcreate VG /dev/vdb
# lvcreate -L 500M -n LV_DATA VG
# mkfs.xfs -d agcount=8 /dev/VG/LV_DATA
# mkdir -p -m 0700 /db/dbdata
# mount -t xfs -o noatime,nodiratime,attr2 /dev/VG/LV_DATA /db/dbdata
The situation at the instance level is the following:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 20G 0 disk
vda1 253:1 0 20G 0 part /
vdb 253:16 0 1G 0 disk
VG-LV_DATA (dm-0) 252:0 0 500M 0 lvm /db/dbdata
The logical volume I've just created (VG-LV_DATA) is mounted under /db/dbdata and resides on vdb (the external volume I've created through cinder).
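Just to double-check the layout from inside the guest, the plain LVM tools report the same thing (this is only a sanity check, nothing below depends on it):
# pvs -o pv_name,vg_name
# lvs -o lv_name,vg_name,devices VG
pvs should list /dev/vdb as the only PV of VG, and the devices column of lvs should show LV_DATA sitting on /dev/vdb.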
Now the trick begins.
I delete the instance using horizon (or the CLI, it doesn't matter).
As soon as I delete the instance, if I look at the physical node where the cinder volume is hosted, I see the following (pay attention to the last row of the output; my comments follow):
root@storm02:/home/openstack# ls -la /dev/mapper
total 0
drwxr-xr-x 2 root root 260 Jul 15 16:19 .
drwxr-xr-x 19 root root 4800 Jul 15 16:19 ..
lrwxrwxrwx 1 root root 7 Jul 14 10:53 cinder--volumes-_snapshot--5a085d24--ca10--4444--a9b1--cb1397a54a8c -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 14 10:53 cinder--volumes-_snapshot--5a085d24--ca10--4444--a9b1--cb1397a54a8c-cow -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 14 10:53 cinder--volumes-volume--0fbfd371--ec0a--4b25--93c0--53caa153f973 -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 14 10:53 cinder--volumes-volume--65e02f67--8e2f--471d--a1fe--ff4d1a14962a -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 14 10:53 cinder--volumes-volume--65e02f67--8e2f--471d--a1fe--ff4d1a14962a-real -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 15 16:19 cinder--volumes-volume--c45f2dec--af94--4ba5--9f1b--7b491f215580 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 14 10:53 cinder--volumes-volume--de3691a9--6e27--4ce8--9c6e--f86b9458398d -> ../dm-5
crw------- 1 root root 10, 236 Jul 14 10:53 control
lrwxrwxrwx 1 root root 7 Jul 14 10:53 storm02--vg-root -> ../dm-8
lrwxrwxrwx 1 root root 7 Jul 14 10:53 storm02--vg-swap_1 -> ../dm-9
lrwxrwxrwx 1 root root 7 Jul 15 16:19 VG-LV_DATA -> ../dm-7
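Side note for anyone reading the listing: in /dev/mapper the dashes inside a VG or LV name are doubled, so cinder--volumes-volume--c45f2dec-... is simply VG cinder-volumes / LV volume-c45f2dec-...; if in doubt, dmsetup can decode a name, for example:
# dmsetup splitname cinder--volumes-volume--c45f2dec--af94--4ba5--9f1b--7b491f215580 LVM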
A new device (VG-LV_DATA) appeared as soon as I deleted the instance! The cinder volume (the one that actually 'contained' VG-LV_DATA) is still there, but when I try to delete it I get an error, and the cinder log reports the following (after the log you'll find what I did to fix the problem):
stderr: ' Logical volume cinder-volumes/volume-c45f2dec-af94-4ba5-9f1b-7b491f215580 is used by another device.\n' to caller
2015-07-15 16:26:32.470 2990 ERROR oslo.messaging._drivers.common [req-3bbf6fea-3a65-408c-91ed-e016aa7f9dca 8e063d17cd0f44ff9e69a34f670e46a8 37bdd06718e54a90a14f4400ea6876d5 - - -] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 144, in lvo_inner1\n return lvo_inner2(inst, context, volume_id, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py", line 233, in inner\n retval = f(*args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 143, in lvo_inner2\n return f(*_args, **_kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 416, in delete_volume\n {\'status\': \'error_deleting\'})\n', ' File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 405, in delete_volume\n self.driver.delete_volume(volume_ref)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 233, in delete_volume\n self._delete_volume(volume)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 133, in _delete_volume\n self.vg.delete(name)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py", line 610, in delete\n root_helper=self._root_helper, run_as_root=True)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 136, in execute\n return processutils.execute(*cmd, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py", line 173, in execute\n cmd=\' \'.join(cmd))\n', "ProcessExecutionError: Unexpected error while running command.\nCommand: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvremove -f cinder-volumes/volume-c45f2dec-af94-4ba5-9f1b-7b491f215580\nExit code: 5\nStdout: ''\nStderr: ' Logical volume cinder-volumes/volume-c45f2dec-af94-4ba5-9f1b-7b491f215580 is used by another device.\\n'\n"]
2015-07-15 16:27:24.512 2990 INFO cinder.volume.manager [-] Updating volume status
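The 'used by another device' part can be checked by hand on the node: the cinder LV (dm-4 in the listing above) now has the guest's logical volume stacked on top of it. These are plain device-mapper/sysfs commands, nothing cinder-specific, but they should make the relationship visible:
# dmsetup deps /dev/mapper/VG-LV_DATA
# dmsetup ls --tree
# ls /sys/block/dm-4/holders
The first command should report the cinder volume's dm device (dm-4) as the device VG-LV_DATA is built on, and the holders directory of dm-4 should point back at dm-7, i.e. VG-LV_DATA.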
To fix the problem, I submitted the following on the node where the cinder volume was hosted:
# dmsetup remove /dev/mapper/VG-LV_DATA
and then, as cloud administrator (usually "admin"), I submitted
# cinder force-delete <VOLUME-ID-OR-VOLUME-NAME>
where <VOLUME-ID-OR-VOLUME-NAME> is either the name or the ID of the cinder volume.
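For completeness: I suppose deactivating the whole guest VG on the node would release the hold just as well, but I have not tried it, so treat it as a guess:
# vgchange -an VG
Either way, once dmsetup ls no longer shows VG-LV_DATA, the lvremove issued by cinder (and therefore the force-delete above) should go through.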
It seems like the logical volume I created at the 'virtual' level has 'leaked' down to the physical level when I deleted the VM, presumably because LVM on the node scanned the cinder volume once the guest released it and activated the volume group it found inside.
Very mysterious behavior...
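In case it helps anyone else hitting this: my understanding is that lvm2 on the storage node simply scans the cinder LV, finds the guest's LVM metadata inside it and brings up the VG-LV_DATA mapping on top of it. If that is right, a possible way to prevent it (untested on my side, and the sda/sdb paths below are only placeholders for whatever disks actually back your host VGs) is a scan filter in the devices section of /etc/lvm/lvm.conf on the node:
devices {
    # accept the disks backing the host's own VGs (the OS VG and cinder-volumes here)
    # and reject everything else, in particular the /dev/cinder-volumes/volume-* LVs
    # that hold the guests' data
    filter = [ "a|^/dev/sda.*|", "a|^/dev/sdb.*|", "r|.*|" ]
}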