Steps to reproduce:
1. Configure a Ceph cluster using an alternative cluster name (e.g. 'my-ceph') instead of the default ('ceph')
2. Configure two Ceph backends in Cinder that use this cluster (see the example configuration and commands after this list)
3. Add the appropriate volume type(s)
4. Create a volume
5. Change the volume type of the volume
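As an illustration of steps 1-2, a hedged cinder.conf sketch (the backend and pool names are made up for this example; rbd_ceph_conf and rbd_cluster_name are existing RBD driver options, but this bug is precisely that os-brick ignores them):

    [DEFAULT]
    enabled_backends = rbd-a,rbd-b

    [rbd-a]
    volume_backend_name = rbd-a
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_cluster_name = my-ceph
    rbd_ceph_conf = /etc/ceph/my-ceph.conf
    rbd_pool = volumes-a
    rbd_user = cinder

    [rbd-b]
    volume_backend_name = rbd-b
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_cluster_name = my-ceph
    rbd_ceph_conf = /etc/ceph/my-ceph.conf
    rbd_pool = volumes-b
    rbd_user = cinder

Steps 3-5 can then be carried out with the cinder CLI, for example:

    cinder type-create rbd-a
    cinder type-key rbd-a set volume_backend_name=rbd-a
    cinder type-create rbd-b
    cinder type-key rbd-b set volume_backend_name=rbd-b
    cinder create --volume-type rbd-a --name test-vol 1
    cinder retype --migration-policy on-demand test-vol rbd-b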
Expected result:
Migration of the volume
Actual result:
Exception in cinder-volume: Error: error calling conf_read_file: errno EINVAL
Cause:
The os-brick RBDConnector instantiates an RBDClient specifying only the user and the pool. This causes the client to fall back to the default configuration file /etc/ceph/ceph.conf; with a non-default cluster name the configuration actually lives at /etc/ceph/<cluster>.conf, so reading the default file fails with the EINVAL error above.
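A simplified sketch of that behaviour (not the literal os-brick source), using the rados Python binding:

    import rados

    def connect(user, pool):
        # Only user and pool are known, so the default config file is
        # assumed. The Rados constructor calls conf_read_file on it and
        # raises "error calling conf_read_file" if it cannot be read -
        # which happens when the cluster is named 'my-ceph' and its
        # config lives at /etc/ceph/my-ceph.conf instead.
        client = rados.Rados(rados_id=user,
                             conffile='/etc/ceph/ceph.conf')
        client.connect()
        return client, client.open_ioctx(pool)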
The Cinder RBD volume driver does provide all the information required for a connection in the connection data (although to me it looks tailored to Nova/libvirt), but os-brick does not use it. If the data supplied by the volume driver cannot be consumed by the os-brick RBDClient as-is, then either the Ceph cluster name or the full path to the appropriate Ceph configuration file should be added to the connection data, and the os-brick connector should use that value, as sketched below.
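A hedged sketch of that suggestion (the 'cluster_name' key is an assumed addition for illustration; 'auth_username' and 'name' are fields the RBD driver already puts into the connection data):

    import rados

    def connect(connection_data):
        user = connection_data['auth_username']
        pool = connection_data['name'].split('/')[0]  # 'pool/volume-...'
        # Proposed addition: derive the config file from the cluster
        # name instead of hardcoding /etc/ceph/ceph.conf.
        cluster = connection_data.get('cluster_name', 'ceph')
        conffile = '/etc/ceph/%s.conf' % cluster
        client = rados.Rados(rados_id=user, conffile=conffile)
        client.connect()
        return client, client.open_ioctx(pool)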