Add support for erasure-coded pool backend
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph RADOS Gateway Charm | Fix Released | Wishlist | Unassigned |
OpenStack Ceph-FS Charm | Fix Released | Wishlist | Unassigned |
OpenStack Cinder-Ceph charm | Fix Released | Wishlist | Unassigned |
OpenStack Glance Charm | Fix Released | Wishlist | Unassigned |
OpenStack Nova Compute Charm | Fix Released | Wishlist | Unassigned |
Bug Description
This is a feature request for supporting erasure-coded pools.
Erasure-coded pools require less storage space compared to replicated pools. Currently, cinder-ceph supports only replicated pools.
The goal is to set up an erasure-coded pool as a backend for Cinder. The problem is that images can only be created in a replicated pool due to the lack of omap support in an erasure-coded pool. To overcome this issue, one must create two pools:
1. Replicated metadata pool, e.g. 'cinder-ceph-ec-meta'
2. Erasure-coded data pool, e.g. 'cinder-ceph-ec-data'
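For illustration only, a minimal sketch of pre-creating such a pool pair via the ceph CLI (wrapped in Python here); only the pool names come from the examples above, while the PG counts and the 4+2 erasure profile are arbitrary assumptions:

import subprocess

def ceph(*args):
    """Run a ceph CLI command, raising on failure."""
    subprocess.run(("ceph",) + args, check=True)

# Replicated pool holding the RBD image headers and omap metadata.
ceph("osd", "pool", "create", "cinder-ceph-ec-meta", "32", "32", "replicated")

# Erasure-coded data pool, based on an example 4+2 profile.
ceph("osd", "erasure-code-profile", "set", "ec-42-profile", "k=4", "m=2")
ceph("osd", "pool", "create", "cinder-ceph-ec-data", "32", "32", "erasure",
     "ec-42-profile")

# RBD on an erasure-coded pool requires overwrites to be enabled.
ceph("osd", "pool", "set", "cinder-ceph-ec-data", "allow_ec_overwrites", "true")

# Tag both pools for the rbd application.
ceph("osd", "pool", "application", "enable", "cinder-ceph-ec-meta", "rbd")
ceph("osd", "pool", "application", "enable", "cinder-ceph-ec-data", "rbd")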
With such a setup, the following configuration should be rendered by the charm:

/etc/cinder/cinder.conf:

[cinder-ceph]
volume_backend_name = cinder-ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-ceph-ec-meta
rbd_user = cinder-ceph
rbd_secret_uuid = <uuid>
rbd_ceph_conf = /var/lib/charm/cinder-ceph/ceph.conf
[...]

/var/lib/charm/cinder-ceph/ceph.conf:

[client.cinder-ceph]
rbd default data pool = cinder-ceph-ec-data
Currently, rendering this configuration to ceph.conf is not implemented in the charm.
For the initial implementation, cinder-ceph could support already existing erasure-coded pools. As a next step, cinder-ceph could support creating the metadata and data pools itself.
Changed in charm-cinder-ceph:
importance: Undecided → Wishlist
status: New → Confirmed

Changed in charm-cinder-ceph:
status: Expired → Triaged

Changed in charm-ceph-radosgw:
status: New → Triaged
importance: Undecided → Wishlist

Changed in charm-glance:
status: New → Triaged

Changed in charm-nova-compute:
status: New → Triaged

Changed in charm-ceph-fs:
status: New → Triaged
importance: Undecided → Wishlist

Changed in charm-nova-compute:
importance: Undecided → Wishlist

Changed in charm-glance:
importance: Undecided → Wishlist

Changed in charm-ceph-fs:
milestone: none → 20.08

Changed in charm-ceph-radosgw:
milestone: none → 20.08

Changed in charm-cinder-ceph:
milestone: none → 20.08

Changed in charm-glance:
milestone: none → 20.08

Changed in charm-nova-compute:
milestone: none → 20.08

Changed in charm-cinder-ceph:
milestone: 20.08 → none

Changed in charm-ceph-radosgw:
milestone: 20.08 → none

Changed in charm-glance:
milestone: 20.08 → none

Changed in charm-nova-compute:
milestone: 20.08 → none

Changed in charm-ceph-fs:
milestone: 20.08 → none

Changed in charm-ceph-fs:
status: Triaged → Fix Committed

Changed in charm-ceph-radosgw:
status: Triaged → Fix Committed

Changed in charm-cinder-ceph:
status: Triaged → Fix Committed

Changed in charm-glance:
status: Triaged → Fix Committed

Changed in charm-nova-compute:
status: Triaged → Fix Committed

Changed in charm-ceph-fs:
milestone: none → 20.10

Changed in charm-ceph-radosgw:
milestone: none → 20.10

Changed in charm-cinder-ceph:
milestone: none → 20.10

Changed in charm-glance:
milestone: none → 20.10

Changed in charm-nova-compute:
milestone: none → 20.10

Changed in charm-cinder-ceph:
status: Fix Committed → Fix Released

Changed in charm-ceph-radosgw:
status: Fix Committed → Fix Released

Changed in charm-glance:
status: Fix Committed → Fix Released

Changed in charm-nova-compute:
status: Fix Committed → Fix Released

Changed in charm-ceph-fs:
status: Fix Committed → Fix Released
I would rather break up the request into two separate steps:

1. Implement the missing "rbd default data pool" option in cinder-ceph's rendered ceph.conf, because it at least allows using erasure coding with pre-created data and metadata pools (external Ceph clusters, or pools created as a post-configuration step):
- add the rbd-default-data-pool variable to config.yaml
- render the following conditional part into ceph.conf:

[client]
rbd default data pool = {{ rbd_default_data_pool }}
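A minimal sketch of that conditional rendering, assuming a hypothetical rbd_default_data_pool context variable and using jinja2 as a stand-in for the charm's actual templating:

# Sketch only: render the optional "rbd default data pool" line into the
# [client] section when the (assumed) rbd-default-data-pool option is set.
from jinja2 import Template

CEPH_CONF_CLIENT_SECTION = Template(
    "[client]\n"
    "{% if rbd_default_data_pool %}"
    "rbd default data pool = {{ rbd_default_data_pool }}\n"
    "{% endif %}"
)

def render_client_section(rbd_default_data_pool=None):
    # The data pool line appears only when a value is configured.
    return CEPH_CONF_CLIENT_SECTION.render(
        rbd_default_data_pool=rbd_default_data_pool)

# With the option set, the extra line is rendered:
print(render_client_section("cinder-ceph-ec-data"))
# Without it, only the bare [client] section is emitted:
print(render_client_section())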
In this case, a [client.{{ poolname }}] entry is obsolete, because the cinder-ceph app defines only a single Ceph pool. Multiple pools can be used by deploying multiple cinder-ceph applications with different configurations. A properly configured cinder-ceph can result in the following EC pool volume:
$ rbd ls cinder-ceph-ec-meta
volume-d6d12723-46f0-4e40-8941-90a754b59f51

$ rbd info cinder-ceph-ec-meta/volume-d6d12723-46f0-4e40-8941-90a754b59f51
rbd image 'volume-d6d12723-46f0-4e40-8941-90a754b59f51':
    size 1GiB in 256 objects
    order 22 (4MiB objects)
    data_pool: cinder-ceph-ec-data <--- the data pool name is here
    block_name_prefix: rbd_data.16.16da6b8b4567
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool <--- the data pool flag is there
    flags:
    create_timestamp: Thu Feb 20 17:35:20 2020
2. Implement proper EC pool creation in the cinder-ceph charm. The basics seem to be already there for ceph-proxy and cinder-ceph, however for pool creation it calls the deprecated add_op_create_pool function:
https://github.com/openstack/charm-cinder-ceph/blob/master/hooks/cinder_hooks.py#L112

The add_op_create_pool *always* creates a replicated pool:
https://github.com/openstack/charm-cinder-ceph/blob/master/charmhelpers/contrib/storage/linux/ceph.py#L1219
So for the proper EC feature, additional config variables must be added to the cinder-ceph charm, and the code must invoke either add_op_create_replicated_pool() or add_op_create_erasure_pool() as appropriate.
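A rough sketch of how cinder_hooks.py could branch between the two broker operations; the pool-type and ec-profile-name option names are assumptions for illustration only, and it presumes a charmhelpers version that already ships add_op_create_replicated_pool() and add_op_create_erasure_pool():

# Sketch, not the actual patch. pool-type and ec-profile-name are hypothetical
# options; ceph-osd-replication-count is an existing cinder-ceph option.
from charmhelpers.core.hookenv import config, service_name
from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

def get_ceph_request():
    rq = CephBrokerRq()
    replicas = config('ceph-osd-replication-count')
    if config('pool-type') == 'erasure-coded':
        # Replicated pool for the RBD image headers/omap metadata.
        rq.add_op_create_replicated_pool(name=service_name(),
                                         replica_count=replicas,
                                         app_name='rbd')
        # Erasure-coded pool for the data objects; RBD needs EC overwrites.
        rq.add_op_create_erasure_pool(name="{}-data".format(service_name()),
                                      erasure_profile=config('ec-profile-name'),
                                      allow_ec_overwrites=True,
                                      app_name='rbd')
    else:
        # Current behaviour: a single replicated pool.
        rq.add_op_create_replicated_pool(name=service_name(),
                                         replica_count=replicas,
                                         app_name='rbd')
    return rq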