Just after an OpenStack deployment, the Ceph cluster stays in HEALTH_WARN. This breaks status monitoring, for example via NRPE.
xenial + cloud:xenial-pike
$ sudo ceph version
sudo: unable to resolve host juju-3ea7d7-1
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
$ sudo ceph health detail
sudo: unable to resolve host juju-3ea7d7-1
HEALTH_WARN application not enabled on 4 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 4 pool(s)
application not enabled on pool 'default.rgw.control'
application not enabled on pool '.rgw.root'
application not enabled on pool 'glance'
application not enabled on pool 'cinder-ceph'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
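Following the hint in the health output, each pool can be tagged with the application that uses it. The pool-to-application mapping below is an assumption based on the pool names: the RGW pools get 'rgw', while the Glance and Cinder pools hold RBD images, so they get 'rbd'.

```shell
# Sketch: enable an application tag on each pool listed by
# 'ceph health detail' (mapping assumed from the pool names).
sudo ceph osd pool application enable default.rgw.control rgw
sudo ceph osd pool application enable .rgw.root rgw
sudo ceph osd pool application enable glance rbd
sudo ceph osd pool application enable cinder-ceph rbd

# The warning should clear once all pools are tagged:
sudo ceph health
```

Note that recent versions of the ceph-radosgw and OpenStack charms tag their pools automatically; manual tagging should only be needed for pools created before the upgrade to Luminous.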
http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-app-not-enabled
http://docs.ceph.com/docs/master/rados/operations/pools/#associate-pool-to-application