I would rather break up the request into two separate steps:
1. implement the missing "rbd default data pool" option in cinder-ceph's ceph.conf, because at least it allows erasure coding to be used with pre-created data and metadata pools (external ceph clusters, or pools created as a post-configuration step):
- add the rbd-default-data-pool variable to config.yaml
- render the following conditional part into ceph.conf (a context sketch follows the snippet):
[client]
rbd default data pool = {{ rbd_default_data_pool }}
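
For illustration only, a minimal sketch of how the proposed option could be fed into the template context; the rbd-default-data-pool option and the function name are assumptions of this proposal, while config() is the existing charmhelpers call:

# Sketch only (proposed option, hypothetical function name): expose the
# rbd-default-data-pool charm option to the ceph.conf template context so
# the [client] fragment above is rendered only when the option is set.
from charmhelpers.core.hookenv import config

def rbd_default_data_pool_context():
    ctxt = {}
    data_pool = config('rbd-default-data-pool')  # proposed new config.yaml option
    if data_pool:
        ctxt['rbd_default_data_pool'] = data_pool
    return ctxt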
In this case, the [client.{{poolname}}] entry is obsolete, because the cinder-ceph app defines a single ceph pool only. Multiple pools can be used by multiple cinder-ceph applications with different configurations. A properly configured cinder-ceph can result in the following EC pool volume:
$ rbd ls cinder-ceph-ec-meta
volume-d6d12723-46f0-4e40-8941-90a754b59f51
$ rbd info cinder-ceph-ec-meta/volume-d6d12723-46f0-4e40-8941-90a754b59f51
rbd image 'volume-d6d12723-46f0-4e40-8941-90a754b59f51':
size 1GiB in 256 objects
order 22 (4MiB objects)
data_pool: cinder-ceph-ec-data <--- the data pool name is here
block_name_prefix: rbd_data.16.16da6b8b4567
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool <--- the data pool flag is there
flags:
create_timestamp: Thu Feb 20 17:35:20 2020
2. implement the proper EC pool creation for the cinder-ceph charm. The basics seem to be already there for ceph-proxy and cinder-ceph, however for pool creation it calls the deprecated add_op_create_pool function: https://github.com/openstack/charm-cinder-ceph/blob/master/hooks/cinder_hooks.py#L112
The add_op_create_pool *always* creates a replicated pool: https://github.com/openstack/charm-cinder-ceph/blob/master/charmhelpers/contrib/storage/linux/ceph.py#L1219
So for the proper EC feature, additional config variables must be added to the cinder-ceph charm code, and either the add_op_create_replicated_pool() or the add_op_create_erasure_pool() call must be invoked accordingly.
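
As a rough sketch only (not the actual patch), the hook could branch on a new, hypothetical pool-type option and use the two CephBrokerRq helpers named above; everything beyond those two helper names is an assumption here:

# Sketch only: a hypothetical "pool-type" config option decides which
# CephBrokerRq helper is used; option and pool naming are assumptions,
# not the actual cinder-ceph code.
from charmhelpers.core.hookenv import config, service_name
from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

def get_ceph_request():
    rq = CephBrokerRq()
    pool = service_name()
    if config('pool-type') == 'erasure-coded':
        # Replicated metadata pool plus an EC data pool; the data pool is then
        # wired up via "rbd default data pool" from step 1.
        rq.add_op_create_replicated_pool(name=pool)
        rq.add_op_create_erasure_pool(name='{}-data'.format(pool))
    else:
        rq.add_op_create_replicated_pool(name=pool)
    return rq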