2014-05-27 15:22:14
Aleksandr Didenko
description
{
"api": "1.0",
"astute_sha": "a7eac46348dc77fc2723c6fcc3dbc66cc1a83152",
"build_id": "2014-05-26_18-06-28",
"build_number": "24",
"fuellib_sha": "2f79c0415159651fc1978d99bd791079d1ae4a06",
"fuelmain_sha": "d7f86968880a484d51f99a9fc439ef21139ea0b0",
"mirantis": "yes",
"nailgun_sha": "bd09f89ef56176f64ad5decd4128933c96cb20f4",
"ostf_sha": "89bbddb78132e2997d82adc5ae5db9dcb7a35bcd",
"production": "docker",
"release": "5.0"
}
Environment:
multinode, 1 controller+ceph-osd, 1 compute+ceph-osd, 1 mongodb.
"volumes_lvm": False,
"volumes_ceph": True,
"images_ceph": True,
"murano": True,
"sahara": True,
"ceilometer": True,
"net_provider": 'neutron',
"net_segment_type": 'gre',
"libvirt_type": "kvm"
After several re-deployments of the same environment I hit the following deployment error on the controller+ceph-osd node:
/usr/bin/ceph-deploy osd prepare node-2:/dev/sda5
raise Error('Device is in use by a device-mapper mapping (dm-crypt?)' % dev, ','.join(holders))
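When ceph-disk rejects a partition with this error, the device-mapper mapping that holds it can be identified directly from sysfs, which is essentially the check ceph-disk performs. A minimal diagnostic sketch, assuming the partition is /dev/sda5 as in this report:
ls /sys/block/sda/sda5/holders/   # device-mapper holders of the partition (e.g. dm-0)
dmsetup ls --tree                 # map dm-0 back to its LV/VG name
lsblk /dev/sda5                   # also shows any LVM child hanging off the partition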
Inspecting the problematic controller+ceph-osd node showed a "mongo" LVM volume group on /dev/sda5, the partition that is supposed to be used for ceph-osd (mongodb runs on a different node):
# lvdisplay
--- Logical volume ---
LV Name /dev/mongo/mongodb
VG Name mongo
LV UUID EKaLrw-AYUR-e241-vqzo-zRMp-XxVZ-OMSrvB
LV Write Access read/write
LV Status available
# open 0
LV Size 173.66 GiB
Current LE 5557
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
# pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name mongo
PV Size 173.71 GiB / not usable 27.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 5558
Free PE 1
Allocated PE 5557
PV UUID aEx26m-XxSQ-8eUK-63Io-SqBn-vZAZ-UTnokS
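As a manual workaround on an affected node, the stale VG and its mapping can be torn down before re-running the osd prepare step. A sketch, assuming the stale VG is "mongo" on /dev/sda5 as shown above (destructive, but the partition is about to be handed to ceph-osd anyway):
vgchange -an mongo          # deactivate the LV, which removes the dm-0 mapping
vgremove -ff mongo          # drop the volume group and its logical volume
pvremove -ff -y /dev/sda5   # clear the LVM label from the partition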
It looks like this happens because we create the /dev/sda5 partition during provisioning after the LVM metadata cleaning step, so the new partition may retain old LVM metadata:
2014-05-27T09:48:41.291249+00:00 notice: find /dev ( -type l -o -type b ) -exec ls -l {} ;
2014-05-27T09:48:41.294841+00:00 notice: brw------- 1 root root 8, 4 May 27 09:48 /dev/sda4
2014-05-27T09:48:41.296618+00:00 notice: brw------- 1 root root 8, 3 May 27 09:48 /dev/sda3
2014-05-27T09:48:41.298284+00:00 notice: brw------- 1 root root 8, 2 May 27 09:48 /dev/sda2
2014-05-27T09:48:41.300282+00:00 notice: brw------- 1 root root 8, 1 May 27 09:48 /dev/sda1
...
2014-05-27T09:48:42.368051+00:00 notice: === before additional cleaning ===
2014-05-27T09:48:42.370608+00:00 notice: vgs -a --noheadings
2014-05-27T09:48:42.380603+00:00 notice: No volume groups found
....
2014-05-27T09:48:42.598125+00:00 notice: parted -a none -s $(readlink -f $( (ls /dev/disk/by-id/wwn-0x50014ee1033d5fd6 ||
2014-05-27T09:48:42.599108+00:00 notice: ls /dev/disk/by-id/scsi-SATA_WDC_WD2502ABYS-_WD-WCAT1H270422 || ls /dev/disk/by
2014-05-27T09:48:42.600087+00:00 notice: -id/ata-WDC_WD2502ABYS-18B7A0_WD-WCAT1H270422 || ls /dev/disk/by-path/pci-0000:0
2014-05-27T09:48:42.601049+00:00 notice: 0:1f.2-scsi-0:0:0:0) 2>/dev/null) ) unit MiB mkpart primary 60195 238078
....
2014-05-27T09:48:53.260547+00:00 notice: find /dev ( -type l -o -type b ) -exec ls -l {} ;
2014-05-27T09:48:53.264081+00:00 notice: lrwxrwxrwx 1 root root 7 May 27 09:48 /dev/mongo/mongodb -> ../dm-0
2014-05-27T09:48:53.267014+00:00 notice: brw------- 1 root root 252, 0 May 27 09:48 /dev/dm-0
2014-05-27T09:48:53.268727+00:00 notice: brw------- 1 root root 8, 5 May 27 09:48 /dev/sda5
2014-05-27T09:48:53.271018+00:00 notice: brw------- 1 root root 8, 4 May 27 09:48 /dev/sda4
2014-05-27T09:48:53.272770+00:00 notice: brw------- 1 root root 8, 3 May 27 09:48 /dev/sda3
2014-05-27T09:48:53.274522+00:00 notice: brw------- 1 root root 8, 2 May 27 09:48 /dev/sda2
2014-05-27T09:48:53.276428+00:00 notice: brw------- 1 root root 8, 1 May 27 09:48 /dev/sda1
As you can see in the logs above, we had no LVM data on our disks, but it appeared right after we created the /dev/sda5 partition.
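If this root cause is confirmed, a possible fix on the provisioning side would be to wipe stale signatures from each partition right after parted creates it, in addition to the metadata cleaning that currently runs before partitioning. A hedged sketch of such a step (wipefs is one option; zeroing the start of the device with dd also destroys the LVM label, which lives in the first sectors):
wipefs -a /dev/sda5                           # erase LVM/filesystem signatures on the new partition
dd if=/dev/zero of=/dev/sda5 bs=1M count=10   # alternative where wipefs is unavailable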