juju ceph-mon status Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count

Bug #1810760 reported by Vidmantas
This bug affects 1 person
Affects             Status    Importance  Assigned to  Milestone
Canonical Juju      Invalid   Undecided   Unassigned
Ceph Monitor Charm  Invalid   Undecided   Unassigned

Bug Description

Ubuntu 18.04 LTS, OpenStack Queens bundle YAML.
juju --version: 2.6-beta1-artful-amd64

juju status is stuck at:

Unit Workload Agent Machine Public address Ports Message
ceph-mon/10 waiting idle 15/lxd/0 Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/11 waiting idle 14/lxd/0 Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/13 waiting idle 18/lxd/0 Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-osd/5* active idle 14 Unit is ready (1 OSD)
ceph-osd/6 active idle 15 Unit is ready (1 OSD)
ceph-osd/9 active idle 18 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 1/lxd/0 80/tcp Unit is ready
cinder/0* active idle 1/lxd/1 8776/tcp Unit is ready
  cinder-ceph/0* waiting idle Incomplete relations: ceph

The OSDs are ready.
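The three "Unit is ready (1 OSD)" messages add up to three OSDs in total, which matches the expected-osd-count of 3 in the waiting message. A quick sanity check over the status lines above (a sketch, not charm code):

```python
import re

# Workload messages copied from the `juju status` output above.
status_messages = [
    "Unit is ready (1 OSD)",
    "Unit is ready (1 OSD)",
    "Unit is ready (1 OSD)",
]

# Sum the OSD counts the ceph-osd units report in their messages.
total_osds = sum(
    int(m.group(1))
    for msg in status_messages
    if (m := re.search(r"\((\d+) OSDs?\)", msg))
)

expected_osd_count = 3  # the value shown in the ceph-mon waiting message
print(total_osds, total_osds >= expected_osd_count)  # → 3 True
```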

From ceph-mon:

ceph mon stat
e3: 3 mons at {juju-b054bf-14-lxd-0=10.202.0.33:6789/0,juju-b054bf-15-lxd-0=10.202.0.28:6789/0,juju-b054bf-18-lxd-0=10.202.0.41:6789/0}, election epoch 108, leader 0 juju-b054bf-15-lxd-0, quorum 0,1,2 juju-b054bf-15-lxd-0,juju-b054bf-14-lxd-0,juju-b054bf-18-lxd-0

 ceph df
GLOBAL:
    SIZE AVAIL RAW USED %RAW USED
    75.1TiB 75.1TiB 3.01GiB 0
POOLS:
    NAME ID USED %USED MAX AVAIL OBJECTS

ceph quorum_status
{"election_epoch":108,"quorum":[0,1,2],"quorum_names":["juju-b054bf-15-lxd-0","juju-b054bf-14-lxd-0","juju-b054bf-18-lxd-0"],"quorum_leader_name":"juju-b054bf-15-lxd-0","monmap":{"epoch":3,"fsid":"62133458-0e96-11e9-aa87-00163efa872e","modified":"2019-01-04 11:39:00.084200","created":"2019-01-03 11:03:29.696033","features":{"persistent":["kraken","luminous"],"optional":[]},"mons":[{"rank":0,"name":"juju-b054bf-15-lxd-0","addr":"10.202.0.28:6789/0","public_addr":"10.202.0.28:6789/0"},{"rank":1,"name":"juju-b054bf-14-lxd-0","addr":"10.202.0.33:6789/0","public_addr":"10.202.0.33:6789/0"},{"rank":2,"name":"juju-b054bf-18-lxd-0","addr":"10.202.0.41:6789/0","public_addr":"10.202.0.41:6789/0"}]}}
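The quorum output can also be checked programmatically; for example (a sketch over an abbreviated copy of the JSON above):

```python
import json

# Abbreviated copy of the `ceph quorum_status` output above.
quorum_status = json.loads("""
{"election_epoch": 108,
 "quorum": [0, 1, 2],
 "quorum_names": ["juju-b054bf-15-lxd-0",
                  "juju-b054bf-14-lxd-0",
                  "juju-b054bf-18-lxd-0"],
 "quorum_leader_name": "juju-b054bf-15-lxd-0"}
""")

# All three monitors are in quorum, led by juju-b054bf-15-lxd-0.
assert len(quorum_status["quorum_names"]) == 3
print(quorum_status["quorum_leader_name"])  # → juju-b054bf-15-lxd-0
```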

root@juju-b054bf-15-lxd-0:~# ceph health
HEALTH_OK

As far as I understand, Ceph is all good, but Juju is not detecting it.

Revision history for this message
Richard Harding (rharding) wrote :

This looks more like the charms aren't reacting to the required OSD count being reached. The waiting message there is coming from the charm logic. Pinging the OpenStack folks on this one.
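The gate behind that message presumably looks something like the following (a hypothetical, simplified sketch, not the actual charm code; `expected_osd_count` stands in for the charm config option of the same name):

```python
def osds_ready(reported_osds: int, expected_osd_count: int) -> bool:
    """Assumed shape of the charm's waiting condition: the monitor
    stays in 'waiting' until enough OSDs have joined the cluster."""
    return reported_osds >= expected_osd_count

# With 3 OSDs up and expected-osd-count=3, the unit should leave the
# waiting state -- which is why the status above looks like a bug in
# how the charm counts OSDs, not a problem in Ceph itself.
print(osds_ready(3, 3))  # → True
```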

Revision history for this message
Vidmantas (vidmantasvgtu) wrote :

What could I do to help?

Revision history for this message
Ryan Beisner (1chb1n) wrote :

The Artful series is end-of-life. If you are experiencing this issue on a currently-supported version of Ubuntu, such as Bionic 18.04 or Xenial 16.04, and you are using the latest stable version of the charms from cs:ceph-mon and cs:ceph-osd, then please re-open this bug. Thank you.

Changed in juju:
status: New → Invalid
Changed in charm-ceph-mon:
status: New → Invalid