overcloud node delete needs to support deployed-server delete by hostname
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
tripleo | Incomplete | High | Rabi Mishra | xena-1
Bug Description
The instructions for deployed-server scale-down[1] currently require doing a nested stack list and then manually parsing the output to figure out which uuid to pass to the node delete command.
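For illustration, a hedged sketch of that manual workflow; the stack name `overcloud` and the nesting depth are assumptions, and the exact listing steps are whatever the docs in [1] describe:

```console
# Manually dig the DeployedServer resource uuid out of the nested stacks,
# then feed it to node delete (stack name and depth assumed here)
$ openstack stack resource list --nested-depth 5 overcloud | grep -i deployedserver
$ openstack overcloud node delete --stack overcloud <uuid-from-listing>
```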
When using baremetal provisioning, this is automated by _translate_[2].
Currently it appears that node delete doesn't work for deployed-server when specifying either a uuid[3] or a hostname[4]; this bug can track supporting both (see the example invocations after the references).
[1] https:/
[2] https:/
[3] http://
[4] http://
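For reference, these are the two invocation forms this bug asks to support, both of which currently fail per [3] and [4]; the stack name is an assumption:

```console
# Delete a deployed-server node by its server uuid (fails today per [3])
$ openstack overcloud node delete --stack overcloud <server-uuid>
# Delete a deployed-server node by its hostname (fails today per [4])
$ openstack overcloud node delete --stack overcloud <server-hostname>
```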
Changed in tripleo:
assignee: nobody → Rabi Mishra (rabi)
status: Triaged → In Progress

Changed in tripleo:
milestone: ussuri-3 → ussuri-rc3

Changed in tripleo:
milestone: ussuri-rc3 → victoria-1

Changed in tripleo:
milestone: victoria-1 → victoria-3

Changed in tripleo:
milestone: victoria-3 → wallaby-1

Changed in tripleo:
milestone: wallaby-1 → wallaby-2

Changed in tripleo:
milestone: wallaby-2 → wallaby-3

Changed in tripleo:
milestone: wallaby-3 → wallaby-rc1

Changed in tripleo:
milestone: wallaby-rc1 → xena-1
Wouldn't it be better to change the documentation for deployed-server scale-down[1] above to create an environment with RemovalPolicies (listing the indexes to blacklist) and use that, rather than doing an inefficient search for the 'name' attribute of the DeployedServer heat resource?
IMO, we should probably deprecate and remove the node delete command and ask users to use environments with RemovalPolicies instead.
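A minimal sketch of that alternative, assuming a Compute role; the file name and indexes are hypothetical, while `<RoleName>RemovalPolicies` and `<RoleName>Count` are the existing Heat parameters:

```yaml
# removal-policies.yaml (hypothetical): blacklist the Compute nodes at
# indexes 1 and 3 and drop the role count to match
parameter_defaults:
  ComputeCount: 2
  ComputeRemovalPolicies: [{'resource_list': ['1', '3']}]
```

This would then be passed on the next deploy with `-e removal-policies.yaml`, letting Heat remove exactly the blacklisted nodes.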
The key issue with node delete is that it internally calculates the role count, which can differ from the counts provided in env files or role_data; users then frequently hit failures when doing an update after a scale-down, because there are not enough baremetal nodes to provision.
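To illustrate the mismatch: if node delete internally shrinks a role from 3 nodes to 2, the next update still sees the old count in the environment files unless it is edited to match (parameter name assumed for a Compute role):

```yaml
# hypothetical: keep the environment's count in sync with what node delete
# left in the stack, or the next 'overcloud deploy' will try to scale the
# role back up and fail if no spare baremetal nodes are available
parameter_defaults:
  ComputeCount: 2
```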