When adding storage-backend relation to previous all-in-one cinder charm, cinder disables old service/host

Bug #1768922 reported by Drew Freiberger
This bug affects 3 people
Affects: OpenStack Cinder Charm
Status: Triaged
Importance: Medium
Assigned to: (nobody)
Milestone: (none)

Bug Description

When adding a storage-backend relation to a cinder deployment previously set up in production as "All-in-one using Ceph-backed RBD volumes" (as in the README), cinder stops serving volumes from the old all-in-one backend.

1. Configure cinder with a Ceph RBD backend within the cinder charm itself. New volumes are created in Ceph's "cinder" pool; verify with 'rbd list -p cinder' after creating a volume.
2. Deploy cinder-ceph under the name "other-pool" and relate it to cinder. An "other-pool" pool now appears in 'ceph osd pool ls'.
3. Run 'cinder service-list': host "cinder@other-pool" shows State "up", but the cinder-volume binary for host "cinder" is now down.

All new volumes created will land in other-pool. I also suspect that volumes in the old 'cinder' pool can no longer be added, deleted, modified, attached, etc.

It appears that this results in 'enabled_backends = other-pool' being set, while the original all-in-one Ceph backend remains configured in the [DEFAULT] stanza of cinder.conf. Since the [DEFAULT] stanza is not a named volume backend, it cannot be listed in 'enabled_backends' from what I can tell. You also cannot 'thaw-host', and because the service status is already 'enabled', 'service-enable' doesn't help either.
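The resulting cinder.conf looks roughly like the following sketch (pool, backend, and driver names are illustrative, taken from the reproduction above). Once 'enabled_backends' is set, cinder-volume starts only the named backends and ignores the driver configured under [DEFAULT], which is why host "cinder" goes down:

```ini
[DEFAULT]
# Original all-in-one backend: addressed simply as host "cinder".
# Not a named backend, so it cannot appear in enabled_backends.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder
# Added when the storage-backend relation joins:
enabled_backends = other-pool

[other-pool]
# Backend from the cinder-ceph subordinate: addressed as "cinder@other-pool".
volume_backend_name = other-pool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = other-pool
```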

This was discovered on 17.08 charms on a trusty/mitaka cloud.

It appears that this doc may help with how the configuration can be modified to allow the all-in-one config to migrate to a named backend stanza once a storage-backend relation is added:

Revision history for this message
James Page (james-page) wrote :

Agree this needs some untangling. At Ocata, the cinder charm does move to sectional config for directly related Ceph, so I'm wondering whether we need to backport that to Mitaka as well (which still uses the [DEFAULT] configuration as outlined).

Changed in charm-cinder:
status: New → Triaged
importance: Undecided → Medium
Shane Peters (shaner)
Changed in charm-cinder:
assignee: nobody → Shane Peters (shaner)
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/567921

Changed in charm-cinder:
status: Triaged → In Progress
Frode Nordahl (fnordahl) wrote :

There is a subtle but critical difference in how Cinder represents a backend configured in the [DEFAULT] section versus one in its own named section: the former is addressed as `host`, the latter as `host@volume_backend_name` (for example, "cinder" versus "cinder@other-pool").

Any change to this representation affects consumers and their references to these volumes: they stop working until the reference is updated to point at the new location.

This makes it harder to backport the sectional-config change, as it would cause operational impact on a mere charm upgrade.

There are actions to help migrate once upgraded to Ocata or newer.
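If I read the upstream tooling right, `cinder-manage volume update_host` can rewrite the stored host reference for existing volumes once the backend has moved into a named section. The host names below are illustrative, matching this report, and the exact invocation should be verified against your release; the command is only printed here so the sketch stands alone:

```shell
# Illustrative migration of the host reference after moving to sectional
# config. The command is printed, not executed, so this sketch has no
# cinder dependency.
OLD_HOST="cinder"              # backend formerly under [DEFAULT]
NEW_HOST="cinder@other-pool"   # backend in its own named section
CMD="cinder-manage volume update_host --currenthost ${OLD_HOST} --newhost ${NEW_HOST}"
echo "${CMD}"
```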

OpenStack Infra (hudson-openstack) wrote : Change abandoned on charm-cinder (master)

Change abandoned by Shane Peters (<email address hidden>) on branch: master
Review: https://review.openstack.org/567921
Reason: Abandoning this in favor of the above approach. The solution was tested and successfully deployed in a production environment. Note, you'll need to define volume types to utilize this solution.

Shane Peters (shaner) wrote :

A cleaner approach is to deploy a second cinder charm that only runs the 'volume' service. Then, deploy and relate a cinder-ceph charm to it. This second cinder charm would then use a sectional config and wouldn't interrupt existing volumes defined under the 'DEFAULT' section of the primary cinder charm.
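A hedged sketch of that layout (application and relation endpoint names are illustrative; the second cinder application would also need its usual database, messaging, and identity relations, omitted here). The juju commands are printed rather than run so the sketch is self-contained:

```shell
# Hypothetical two-charm layout: a second cinder application running only
# the volume service, with cinder-ceph related to it via storage-backend.
# Commands are printed, not executed, so there is no juju dependency.
PLAN='juju deploy cinder cinder-volume --config enabled-services=volume
juju deploy cinder-ceph other-pool
juju add-relation cinder-volume:storage-backend other-pool:storage-backend
juju add-relation other-pool:ceph ceph-mon:client'
printf '%s\n' "$PLAN"
```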

Changed in charm-cinder:
assignee: Shane Peters (shaner) → nobody
Changed in charm-cinder:
status: In Progress → New
Changed in charm-cinder:
status: New → Triaged