When adding a storage-backend relation to a previously all-in-one cinder charm, cinder disables the old service/host
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Cinder Charm | Triaged | Medium | Unassigned | |
Bug Description
When adding a storage-backend relation to a production cinder deployment previously set up as "All-in-one using Ceph-backed RBD volumes" (as described in the README), cinder stops servicing volumes from the previous all-in-one backend.
Repro:
1. Configure cinder with a Ceph RBD backend within the cinder charm. After creating volumes, `rbd list -p cinder` should show them landing in Ceph's "cinder" pool.
2. Deploy cinder-ceph under the application name other-pool and relate it to cinder. An other-pool pool now appears in `ceph osd pool ls`.
3. Run `cinder service-list`: host "cinder@other-pool" shows State Up, but the cinder-volume binary for host 'cinder' is now Down.
4. All new volumes created will land in other-pool. I also suspect you won't be able to add/delete/
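The repro steps above can be sketched as a command sequence. This is a hedged sketch, not a verified transcript: the application name `other-pool` comes from the report, while the Ceph application name (`ceph-mon` here) and charm revisions are assumptions that may differ in a trusty/mitaka deployment.

```shell
# Deploy cinder-ceph under the name other-pool (assumed deployment form;
# adjust charm source/series for your environment).
juju deploy cinder-ceph other-pool

# Relate it to cinder (the storage-backend relation) and to Ceph
# (assumed application name ceph-mon; older clouds may use "ceph").
juju add-relation cinder other-pool
juju add-relation other-pool ceph-mon

# The new pool should now exist alongside the original "cinder" pool.
ceph osd pool ls

# Expected symptom: host cinder@other-pool comes up while the original
# all-in-one host "cinder" is reported down.
cinder service-list
```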
It appears that this results in the enabled_
This was discovered on 17.08 charms on a trusty/mitaka cloud.
It appears that this doc may help explain how the configuration can be modified so that the all-in-one config migrates to a named backend stanza once a storage-backend relation is added:
http://
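For illustration, a hedged sketch of what such a migration might look like in cinder.conf. The stanza names and values here are hypothetical examples, not what the charm actually renders; only the option names (`enabled_backends`, `volume_driver`, `rbd_pool`, `volume_backend_name`) follow upstream cinder conventions.

```ini
# Before: all-in-one RBD backend configured directly in [DEFAULT]
# (mitaka-era charm behaviour, as described above).
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder

# After: the same backend moved into a named stanza so it can coexist
# with the backend added by the cinder-ceph (other-pool) relation.
[DEFAULT]
enabled_backends = cinder-ceph,other-pool

[cinder-ceph]
volume_backend_name = cinder-ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder

[other-pool]
volume_backend_name = other-pool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = other-pool
```

Note that moving a backend into a named stanza changes its service host from `cinder` to `cinder@<stanza>`, which is consistent with the symptom seen in `cinder service-list` above.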
Changed in charm-cinder:
assignee: nobody → Shane Peters (shaner)
Changed in charm-cinder:
status: In Progress → New
Changed in charm-cinder:
status: New → Triaged
Agree this needs some untangling - at Ocata, the cinder charm moves to sectional config for directly related Ceph, so I'm wondering whether we need to backport that to mitaka as well (which still uses the DEFAULT-section configuration as outlined).