Volumes can't be created without cinder nodes
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Mirantis OpenStack | Fix Committed | High | Maksim Malchuk |
Bug Description
Steps:
1. Create cluster with Neutron
2. Add 3 nodes with controller role
3. Add 3 nodes with compute and ceph-osd role
4. Deploy the cluster
5. Check ceph status
6. Run OSTF tests
Expected result: OSTF tests pass.
Actual result: 2 OSTF tests failed:
- Create volume and boot instance from it (failure) Failed to get to expected status. In error state. Please refer to OpenStack logs for more details.
- Create volume and attach it to instance (failure) Time limit exceeded while waiting for volume becoming 'in-use' to finish. Please refer to OpenStack logs for more details.
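Both failures come out of a wait-for-status loop: the test polls the volume until it reaches the expected state, aborts immediately if the volume lands in an error state, and gives up when the time limit runs out. A minimal sketch of such a loop (a hypothetical helper, not the actual OSTF code) shows how the two reported messages arise:

```python
import time

def wait_for_status(get_status, expected, timeout=60, interval=1,
                    error_states=("error",)):
    """Poll get_status() until it returns `expected`.

    Raises RuntimeError immediately on an error state, or
    TimeoutError when the time limit is exceeded -- the two
    failure modes reported by the OSTF tests above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == expected:
            return status
        if status in error_states:
            raise RuntimeError(
                f"Failed to get to expected status. In {status} state.")
        time.sleep(interval)
    raise TimeoutError(
        f"Time limit exceeded while waiting for volume becoming {expected!r}")
```

The first test hit the error-state branch (volume in `error`), the second the timeout branch (volume never reached `in-use`).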
Failed CI jobs:
https:/
https:/
Fuel snapshot attached.
tags: added: bvt-fail
Changed in mos:
status: Invalid → Confirmed
The logs (in particular, node-1/commands/ceph_s.txt) indicate that the ceph cluster is OK:
[10.109.0.4] out:     cluster b64e046f-653f-4e95-848e-1794b8298e98
[10.109.0.4] out:      health HEALTH_WARN
[10.109.0.4] out:             too many PGs per OSD (352 > max 300)
[10.109.0.4] out:      monmap e3: 3 mons at {node-1=10.109.2.3:6789/0,node-4=10.109.2.2:6789/0,node-5=10.109.2.5:6789/0}
[10.109.0.4] out:             election epoch 8, quorum 0,1,2 node-4,node-1,node-5
[10.109.0.4] out:      osdmap e33: 6 osds: 6 up, 6 in
[10.109.0.4] out:       pgmap v101: 704 pgs, 10 pools, 22052 kB data, 52 objects
[10.109.0.4] out:             12727 MB used, 283 GB / 296 GB avail
[10.109.0.4] out:                  704 active+clean
[10.109.0.4] out:
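The same reading of the status can be done mechanically. A small sketch (assuming the plain-text `ceph -s` format shown above, and the Fuel default of 3x replication, which is an assumption here) extracts the health flag, confirms every PG is active+clean, and reproduces the arithmetic behind the warning: 704 PGs x 3 replicas / 6 OSDs = 352 PGs per OSD, above the 300 cap.

```python
import re

CEPH_STATUS = """\
    cluster b64e046f-653f-4e95-848e-1794b8298e98
     health HEALTH_WARN
            too many PGs per OSD (352 > max 300)
     osdmap e33: 6 osds: 6 up, 6 in
      pgmap v101: 704 pgs, 10 pools, 22052 kB data, 52 objects
                 704 active+clean
"""

def summarize(status_text):
    """Pull the key health figures out of plain-text `ceph -s` output."""
    health = re.search(r"health (\S+)", status_text).group(1)
    pgs = int(re.search(r"(\d+) pgs", status_text).group(1))
    osds = int(re.search(r"(\d+) osds", status_text).group(1))
    clean = int(re.search(r"(\d+) active\+clean", status_text).group(1))
    return {
        "health": health,
        "all_clean": clean == pgs,
        # replication factor 3 is assumed (Fuel default), not read from output
        "pgs_per_osd": pgs * 3 // osds,
    }
```

So the only warning is the PG count per OSD, which is a tuning nit, not a reason for volume operations to fail.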
Also there's nothing unusual in the OSDs' logs (node-2/var/log/ceph/ceph-osd.{1,4}.log, node-3/var/log/ceph/ceph-osd.{0,3}.log, etc), and the same goes for the monitors.
Last but not least, just because the log file is called "fail_error_ceph_radosgw-blah-blah.tar.gz" does NOT mean the problem has anything to do with ceph.