What was found:
From ceph.log on one of the controller nodes:
...
2015-01-29 23:20:51.670859 mon.0 192.168.0.3:6789/0 1 : [INF] mon.node-1@0 won leader election with quorum 0
...
2015-01-29 23:44:03.273977 mon.0 192.168.0.3:6789/0 31 : [INF] mon.node-1@0 won leader election with quorum 0,1,2
2015-01-29 23:44:03.276078 mon.0 192.168.0.3:6789/0 32 : [INF] monmap e3: 3 mons at {node-1=192.168.0.3:6789/0,node-44=192.168.0.46:6789/0,node-49=192.168.0.51:6789/0}
...
2015-01-29 23:53:01.390172 mon.0 192.168.0.3:6789/0 124 : [INF] osd.42 192.168.0.41:6800/13854 boot
2015-01-29 23:53:01.390511 mon.0 192.168.0.3:6789/0 125 : [INF] osdmap e22: 47 osds: 47 up, 47 in
...
2015-01-29 23:54:59.033482 osd.26 192.168.0.6:6800/14137 1 : [WRN] 3 slow requests, 3 included below; oldest blocked for > 30.781338 secs
...
2015-01-29 23:59:10.623681 osd.0 192.168.0.29:6800/13524 3 : [WRN] 6 slow requests, 6 included below; oldest blocked for > 384.902800 secs
...
2015-01-30 00:00:09.645041 mon.0 192.168.0.3:6789/0 212 : [INF] pgmap v109: 8384 pgs: 8384 active+clean; 694 bytes data, 98193 MB used, 43169 GB / 43265 GB avail
The Ceph cluster initialization took just over 30 minutes, from the first monitor election at 2015-01-29 23:20:51 to the last OSD boot at 2015-01-29 23:53:01; full monitor quorum (0,1,2) was only reached at 23:44:03.
The last log record above reports the cluster health: all 8384 PGs are active+clean, so the cluster is healthy at that point.
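(On a live cluster the same state can be confirmed directly with the "ceph health" or "ceph -s" commands, which print HEALTH_OK once the cluster is fully healthy.)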
Conclusion:
I believe the image was imported into Glance too early, before the Ceph cluster became operational: the slow-request warnings at 23:54-23:59 (requests blocked for up to ~385 seconds) show that I/O was already being issued while the cluster was still settling.
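A straightforward safeguard would be to block the image import until the cluster reports HEALTH_OK. Below is a minimal sketch of such a guard, assuming the ceph CLI is available on the node that runs the import; the function name wait_for_ceph_healthy and the timeout values are illustrative, not part of the actual deployment code:

import subprocess
import time

def wait_for_ceph_healthy(timeout_s=3600, poll_s=15):
    # Poll `ceph health` until the cluster reports HEALTH_OK
    # or the timeout expires.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = subprocess.run(["ceph", "health"],
                                capture_output=True, text=True)
        if result.stdout.strip().startswith("HEALTH_OK"):
            return True
        time.sleep(poll_s)
    return False

if not wait_for_ceph_healthy():
    raise RuntimeError("ceph cluster did not become healthy in time")
# Only now would the glance image import be started.

With a guard like this the import step simply waits out the ~30-minute initialization window seen in the log above instead of racing against it.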