Config change doesn't restart ceph-mon service
Bug #1979330 reported by Facundo Ciccioli
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph Monitor Charm | Triaged | Medium | Unassigned |
Bug Description
A production Ceph cluster was in HEALTH_WARN state, showing a mon low on available space. Both the warn and crit thresholds were at their defaults, 30% and 5% respectively.
We applied a config change via juju:
juju config ceph-mon monitor-
juju config ceph-mon monitor-
The ceph.conf file reflected the changes but we noticed that ceph was still reporting HEALTH_WARN.
Looking at the ceph-mon@*.service units we noticed that they hadn't been restarted. After roll-restarting all three mons, the cluster went back to HEALTH_OK.
Shouldn't the ceph-mon services be restarted after ceph.conf is modified?
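(For reference, a quick way to check whether a running mon has actually picked up a new threshold is to query its admin socket. This is only an illustrative sketch, assuming the standard `ceph daemon <name> config get` admin-socket command and using `mon_data_avail_warn` as the example option; the mon id below is hypothetical.)

    import json
    import subprocess

    def running_mon_option(mon_id, option='mon_data_avail_warn'):
        """Ask the running mon, via its admin socket, which value it is
        actually using for `option`, regardless of what ceph.conf says."""
        out = subprocess.check_output(
            ['ceph', 'daemon', 'mon.{}'.format(mon_id),
             'config', 'get', option])
        return json.loads(out)[option]

    # A mismatch with the value rendered into ceph.conf means the daemon
    # has not been restarted (or otherwise told) to pick up the change.
    print(running_mon_option('juju-0-lxd-1'))  # hypothetical mon id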
Changed in charm-ceph-mon:
importance: Undecided → Medium
status: New → Triaged
Given that we render the ceph.conf in one place [1], it's easy to confirm that we don't restart the mons. That said, we'd have to be _very_ careful about restarting the mons in response to a config-changed hook, as we have to ensure that we maintain quorum during a rolling restart. There is code in the charm today that handles a rolling upgrade and restart of the mons, and it could be leveraged to do this.
1: https://github.com/openstack/charm-ceph-mon/blob/d3b2494ee8a23fe58cff4eb3308306a24a8f1434/hooks/ceph_hooks.py#L225
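A minimal sketch of the shape such a config-triggered rolling restart could take, assuming each unit can restart its own `ceph-mon@<id>` service and use `ceph quorum_status` to confirm quorum. The function names here are illustrative and are not the charm's existing upgrade helpers:

    import json
    import subprocess
    import time

    def mon_in_quorum(mon_id, timeout=300, interval=5):
        """Poll `ceph quorum_status` until mon_id shows up in quorum_names."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(
                ['ceph', 'quorum_status', '--format', 'json'])
            if mon_id in json.loads(out).get('quorum_names', []):
                return True
            time.sleep(interval)
        return False

    def restart_mons_in_sequence(mon_ids):
        """Restart ceph-mon units one at a time, waiting for each to rejoin
        quorum before touching the next, so quorum is never lost."""
        for mon_id in mon_ids:
            subprocess.check_call(
                ['systemctl', 'restart', 'ceph-mon@{}'.format(mon_id)])
            if not mon_in_quorum(mon_id):
                raise RuntimeError(
                    'mon.{} did not rejoin quorum, aborting rolling restart'
                    .format(mon_id))

In practice this would also need cross-unit coordination (for example, the same kind of leader-mediated locking the existing upgrade path uses) so that only one mon across the whole cluster restarts at a time.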