external ceph cinder volume config breaks volumes on ussuri upgrade
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| kolla-ansible | In Progress | High | Michal Nasiadka | |
| Ussuri | Triaged | High | Unassigned | |
| Victoria | Triaged | High | Unassigned | |
| Wallaby | In Progress | High | Michal Nasiadka | |
Bug Description
**Bug Report**
What happened:
When refactoring to use the new external-ceph templates in Ussuri, the cinder-volume agents came up under their own hostnames, which results in three "different" storage hosts.
This leaves all pre-Ussuri volumes unmanageable, as they are still tied to rbd:volumes@rbd-1, and new volumes will also become unmanageable if their host's agent goes down.
What you expected to happen:
cinder-volume services to come up under a single host, so that a single node failure does not result in unmanageable volumes.
How to fix:
cinder.conf needs backend_host=rbd:volumes set, as in the Train external Ceph configuration.
This will make existing deployments work without changes, and fix the single-node-failure condition of the current settings.
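Until the template is fixed, a possible operator-side workaround (a sketch, assuming kolla-ansible's standard config merging from /etc/kolla/config/cinder.conf and the default rbd-1 backend name) is to pin the backend host explicitly and redeploy cinder:

[rbd-1]
# Assumed override file: /etc/kolla/config/cinder.conf, merged into the generated cinder.conf
backend_host = rbd:volumes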
How to reproduce it (minimal and precise):
**Environment**:
* Kolla-Ansible version: stable/ussuri
Changed in kolla-ansible:
status: New → Triaged
importance: Undecided → High
tags: added: docs
Train external ceph docs: https://docs.openstack.org/kolla-ansible/train/reference/storage/external-ceph-guide.html#cinder
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
Ussuri made the integration simpler, adding the following to cinder.conf:
{% if cinder_backend_ceph | bool %}
[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-1
rbd_pool = {{ ceph_cinder_pool_name }}
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = {{ ceph_cinder_user }}
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
report_discard_supported = True
image_upload_use_cinder_backend = True
{% endif %}
This is missing backend_host=rbd:volumes. There is a related TripleO bug [1], which explains that this option is used to set the same host for all backends in an environment with multiple cinder-volume services representing a single storage cluster.
[1] https://bugs.launchpad.net/bugs/1753596
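For illustration, a minimal sketch of the template-side fix, assuming the cinder.conf.j2 block quoted above (the exact form of the merged patch may differ), is to add the option alongside the other rbd-1 settings:

{% if cinder_backend_ceph | bool %}
[rbd-1]
backend_host = rbd:volumes
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-1
{# remaining rbd_* options unchanged from the block quoted above #}
{% endif %}

With backend_host set, every cinder-volume service managing the external cluster registers as rbd:volumes@rbd-1, matching the pre-Ussuri volumes and the behaviour described in the TripleO bug.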