Hi,
I ran into this issue as well. In my case, for some reason the ceph config on the faulty cinder-ceph unit wasn't generated correctly and was lacking the necessary entries to connect to ceph.
I fixed that by manually running the config-changed hook (juju run -u cinder-ceph/0 hooks/config-changed) on the affected unit.
That generated the config (I restarted all cinder services on the unit as well), but the unit was still stuck in waiting.
My guess from skimming the code: the unit is waiting for a response from Ceph that was never actually sent. I commented out the check for whether the response had already been sent (https://opendev.org/openstack/charm-cinder-ceph/src/commit/a973d9351ed6123d2be4dce909acca91bcca245d/charmhelpers/contrib/storage/linux/ceph.py#L2220), thereby forcing it to create a new request when I manually ran the ceph-relation-changed hook. I think just removing the "broker-rsp-cinder-ceph-0" relation data on the ceph-mon side might also have worked, without hacking the code.
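For reference, the guard I'm talking about sits around send_request_if_needed() in that charmhelpers module. The sketch below is a condensed paraphrase from memory, not a verbatim copy of the linked revision, but it shows the shape of the logic: a new broker request is only published if no equivalent request is already recorded on the relation, and completion is judged by the broker-rsp-<unit> key (broker-rsp-cinder-ceph-0 here) that ceph-mon writes back.

    # Condensed paraphrase of the broker-request guard in
    # charmhelpers/contrib/storage/linux/ceph.py -- not a verbatim copy
    # of the linked revision, just the shape of the logic.
    from charmhelpers.contrib.storage.linux.ceph import is_request_sent
    from charmhelpers.core.hookenv import relation_ids, relation_set

    def send_request_if_needed(request, relation='ceph'):
        """Publish a new broker request unless an equivalent one is already
        recorded on the relation."""
        if is_request_sent(request, relation=relation):
            # This is the branch I effectively disabled: the previous request
            # is still recorded as sent, so no new request is published, and
            # if ceph-mon never answered it with a broker-rsp-cinder-ceph-0
            # entry, the unit waits forever.
            return
        for rid in relation_ids(relation):
            # Publish the broker request on every ceph relation.
            relation_set(relation_id=rid, broker_req=request.request)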
Anyway, I hope this helps any future travelers coming across this issue.