[SWARM][8.0] RabbitMQ availability test failure after removing rabbit node
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Confirmed | Medium | MOS Maintenance QA team |
Mitaka | New | Undecided | Fuel QA Team |
Bug Description
Steps to reproduce:
1. Revert snapshot separate_
2. Add one rabbit node and re-deploy cluster
3. Run network verification
4. Run OSTF
5. Check hiera hosts are the same for different group of roles
6. Delete one rabbit node
7. Run network verification
8. Run OSTF
Note: better to use the separate_ snapshot.
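Step 5 above (checking that hiera hosts are the same for different groups of roles) amounts to comparing the host sets each role group resolves. A minimal Python sketch; the role names and host lists are illustrative, not taken from the test:

```python
def hiera_hosts_consistent(hosts_by_role_group):
    """Return True when every role group resolved the same set of hosts from hiera."""
    host_sets = [frozenset(hosts) for hosts in hosts_by_role_group.values()]
    # All sets identical means at most one distinct set remains.
    return len(set(host_sets)) <= 1

# Illustrative data: hosts as each role group would see them in hiera.
groups = {
    "controller": ["node-1", "node-2", "node-3"],
    "rabbitmq":   ["node-1", "node-2", "node-3"],
}
print(hiera_hosts_consistent(groups))  # → True
```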
The test fails at line:
https:/
Expected result: OSTF tests pass successfully.
Actual result: 2 OSTF tests failed:
- RabbitMQ availability (failure): Number of RabbitMQ nodes is not equal to the number of cluster nodes.
- RabbitMQ replication (failure): Failed to establish an AMQP connection to port 5673/tcp on 10.109.1.11 from the controller node!
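The first failure compares the number of running RabbitMQ nodes with the expected cluster size. That check can be sketched by parsing the `running_nodes` list from `rabbitmqctl cluster_status` output. A minimal Python sketch; the sample output and node names are illustrative, not from the failed run:

```python
import re

def running_rabbit_nodes(cluster_status_output):
    """Extract the running node names from `rabbitmqctl cluster_status` output."""
    match = re.search(r"\{running_nodes,\[(.*?)\]\}", cluster_status_output, re.S)
    if not match:
        return []
    return re.findall(r"'([^']+)'", match.group(1))

# Illustrative output in the classic Erlang-term format.
sample = """Cluster status of node 'rabbit@node-1' ...
[{nodes,[{disc,['rabbit@node-1','rabbit@node-2','rabbit@node-3']}]},
 {running_nodes,['rabbit@node-1','rabbit@node-3']}]"""

expected_cluster_nodes = 3  # nodes that should run RabbitMQ after redeploy
running = running_rabbit_nodes(sample)
if len(running) != expected_cluster_nodes:
    print("availability check fails: %d of %d nodes running"
          % (len(running), expected_cluster_nodes))
```

With the illustrative sample above, only 2 of 3 nodes are running, which mirrors the "number of RabbitMQ nodes is not equal to number of cluster nodes" failure.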
Reproducibility: rarely
Snapshot: https:/
Changed in fuel:
milestone: none → 8.0-updates
assignee: nobody → MOS Maintenance (mos-maintenance)
importance: Undecided → Medium
Changed in fuel:
status: New → Confirmed
Changed in fuel:
assignee: MOS Maintenance (mos-maintenance) → MOS Maintenance QA team (mos-maintenance-qa)
tags: added: non-release
A similar issue is present in the 9.x SWARM test(s):
https://product-ci.infra.mirantis.net/job/9.x.system_test.ubuntu.repetitive_restart/15/testReport/(root)/ceph_partitions_repetitive_cold_restart/ceph_partitions_repetitive_cold_restart/
Test scenario: 'load_ceph_ha'
1. Revert snapshot 'prepare_
2. Wait until MySQL Galera is UP on some controller
3. Check Ceph status
4. Run ostf
5. Fill ceph partitions on all nodes up to 30%
6. Check Ceph status
7. Disable UMM
8. Run RALLY
9. Repeat 100 times (repetitive reboot):
10. Cold restart of all nodes
11. Wait for HA services ready
12. Wait until MySQL Galera is UP on some controller
13. Run ostf
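Steps 2 and 12 above (waiting until MySQL Galera is UP on some controller) boil down to checking the wsrep status variables on a node. A minimal Python sketch, assuming values as returned by `SHOW STATUS LIKE 'wsrep_%'`; the sample values are illustrative:

```python
def galera_is_up(wsrep_status):
    """Galera is considered UP when the node is Synced in a Primary component."""
    return (wsrep_status.get("wsrep_cluster_status") == "Primary"
            and wsrep_status.get("wsrep_local_state_comment") == "Synced")

# Illustrative status variables from one controller after restart.
status = {
    "wsrep_cluster_status": "Primary",
    "wsrep_local_state_comment": "Synced",
    "wsrep_cluster_size": "3",
}
print(galera_is_up(status))  # → True
```

A wait loop in the test would poll each controller with this check until one returns True or a timeout expires.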
Note: there is one more similar bug: https://bugs.launchpad.net/fuel/+bug/1495885