Nailgun node status wasn't changed to "error" after a deletion of node's networkgroup
Bug #1644630 reported by Sergey Novikov
This bug affects 3 people
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Released | High | Dmitry |
Newton | Fix Committed | High | Georgy Kibardin |
Bug Description
Detailed bug description:
the issue was found by https:/
Steps to reproduce:
1. Deploy cluster with custom nodegroup
2. Save network configuration
3. Reset cluster
4. Remove custom nodegroup
5. Check nodes from custom nodegroup have 'error' status
6. Re-create custom nodegroup and upload saved network configuration
7. Assign 'error' nodes to new nodegroup
8. Check nodes from custom nodegroup are in 'discover' state
Expected results: all checks pass
Actual result: step #5 fails — the nodes from the deleted custom nodegroup are not moved to the 'error' status
Description of the environment:
snapshot #549
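The status transitions expected in the steps above can be sketched as follows. This is a minimal, purely illustrative model: the names (`Node`, `delete_node_group`, `assign_nodes`) are assumptions for the sketch and do not reflect Nailgun's actual data model or API. Deleting a custom nodegroup should move its nodes to 'error' (step 5), and re-assigning those nodes to a re-created nodegroup should return them to 'discover' (step 8).

```python
# Hypothetical sketch of the status transitions described in the bug.
# Node, delete_node_group and assign_nodes are illustrative names only,
# NOT Nailgun's real API.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    status: str = "ready"          # e.g. "ready" after deployment
    group: Optional[str] = None    # nodegroup the node belongs to

def delete_node_group(group: str, nodes: List[Node]) -> None:
    """Removing a nodegroup should move its nodes to 'error' (step 5)."""
    for node in nodes:
        if node.group == group:
            node.group = None
            node.status = "error"  # the bug: this transition did not happen

def assign_nodes(group: str, nodes: List[Node]) -> None:
    """Re-assigning 'error' nodes to a new group returns them to 'discover' (step 8)."""
    for node in nodes:
        if node.status == "error":
            node.group = group
            node.status = "discover"

nodes = [Node("node-1", group="custom"), Node("node-2", group="default")]
delete_node_group("custom", nodes)
print(nodes[0].status)   # -> error
assign_nodes("custom-new", nodes)
print(nodes[0].status)   # -> discover
```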
Changed in fuel: | |
assignee: | nobody → Fuel Sustaining (fuel-sustaining-team) |
importance: | Undecided → Medium |
status: | New → Confirmed |
tags: | added: area-library |
tags: | added: area-python removed: area-library |
Changed in fuel: | |
importance: | Medium → High |
tags: | added: swarm-fail |
Changed in fuel: | |
status: | Confirmed → Incomplete |
Changed in fuel: | |
assignee: | Fuel Sustaining (fuel-sustaining-team) → Fuel QA Team (fuel-qa) |
status: | Incomplete → Confirmed |
Changed in fuel: | |
assignee: | Vladimir Kuklin (vkuklin) → Georgy Kibardin (gkibardin) |
Changed in fuel: | |
milestone: | 9.2 → 9.3 |
Changed in fuel: | |
assignee: | Georgy Kibardin (gkibardin) → Bulat Gaifullin (bulat.gaifullin) |
Changed in fuel: | |
assignee: | Bulat Gaifullin (bulat.gaifullin) → Georgy Kibardin (gkibardin) |
Changed in fuel: | |
status: | Fix Committed → In Progress |
Changed in fuel: | |
milestone: | 9.x-updates → 9.2-mu-2 |
Changed in fuel: | |
status: | In Progress → Fix Committed |
tags: | added: on-verification |
Changed in fuel: | |
assignee: | Georgy Kibardin (gkibardin) → Alexey Stupnikov (astupnikov) |
Changed in fuel: | |
status: | Confirmed → Fix Committed |
Changed in fuel: | |
status: | Fix Committed → Fix Released |
I am not sure this behaviour is actually unexpected. From what I see, if a cluster has been reset, there is no need to mark its nodes as 'error' at all: 'error' should reflect the outcome of some real action, not of juggling nodes in the UI within an almost brand-new cluster.
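The suggestion in this comment could be expressed as a guard on the transition. Again this is purely illustrative: `cluster_was_reset` is an assumed flag for the sketch, not a real Nailgun attribute.

```python
# Illustrative only: skip the 'error' transition for nodes whose cluster was
# reset, per the comment's suggestion. 'cluster_was_reset' is an assumed flag.

def status_after_group_deletion(current_status: str, cluster_was_reset: bool) -> str:
    """Return the node status after its nodegroup is deleted."""
    if cluster_was_reset:
        # A reset cluster is effectively brand new: keep the node
        # discoverable instead of flagging a real error.
        return "discover"
    return "error"

print(status_after_group_deletion("ready", cluster_was_reset=True))   # -> discover
print(status_after_group_deletion("ready", cluster_was_reset=False))  # -> error
```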