load balancers created for Kube API via the loadbalancer endpoint are not deleted
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| AWS Integrator Charm | Triaged | Medium | Unassigned | |
| Canonical Juju | Incomplete | High | Unassigned | |
| OpenStack Octavia Charm | Invalid | Undecided | Unassigned | |
| Openstack Integrator Charm | Triaged | Medium | Unassigned | |
Bug Description
Using openstack-
The load balancer is created properly, and the appropriate members (k8s-masters) are joined to the load balancer.
When the Kubernetes model is destroyed, the load balancer is still present in OpenStack.
Trying to manually delete the load balancer shows that it still has members.
Trying to delete the pool associated with the load balancer (in an effort to manually clean it up) gives the following error:
---
$ openstack loadbalancer pool delete e045da82-
Load Balancer f6f01eee-
---
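For anyone reproducing this, a quick way to confirm what state the load balancer is stuck in before attempting any cleanup (this assumes the standard Octavia CLI; the IDs below are placeholders, not the ones from this report):
---
# Substitute the real IDs from "openstack loadbalancer list" and "openstack loadbalancer pool list".
LB_ID="<load-balancer-id>"
POOL_ID="<pool-id>"

# A provisioning_status of PENDING_* is what normally makes Octavia report the LB as immutable.
openstack loadbalancer show "$LB_ID" -c provisioning_status -c operating_status

# Show which members are still attached to the pool.
openstack loadbalancer member list "$POOL_ID"
---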
summary:
- load balancers created via the loadbalancer endpoint are not deleted
+ load balancers created for Kube API via the loadbalancer endpoint are not deleted
Changed in charm-aws-integrator:
status: New → Triaged
Changed in charm-openstack-integrator:
status: New → Triaged
Changed in charm-aws-integrator:
importance: Undecided → Medium
Changed in charm-openstack-integrator:
importance: Undecided → Medium
Changed in juju:
status: Triaged → Incomplete
milestone: 2.8.1 → none
There is explicit cleanup logic in the charm ([1] and [2]), but it seems the stop hook may not be getting a chance to run it to completion. The options there are to move the cleanup earlier, into the relation hooks, and to provide an action for explicit manual cleanup, like the AWS integrator charm has. If it's a hook race condition during model destroy, it may still require manual intervention in the teardown to ensure the cleanup runs before destroy-model is invoked; if that's the case, it should be escalated to the Juju team.
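If manual intervention does end up being required, the teardown ordering would presumably look something like the following sketch (the action and model names here are hypothetical placeholders, not taken from this report or from the charm's current action list):
---
# Hypothetical workflow: run the integrator's cleanup explicitly, then destroy the model.
# "cleanup" is an assumed action name, not necessarily one the openstack-integrator charm provides today.
juju run-action --wait openstack-integrator/0 cleanup
juju destroy-model <kubernetes-model-name>
---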
Alternatively, if the cleanup is failing, we need to figure out why. As discussed on IRC, you're going to try running the specific cleanup command that the charm uses (openstack loadbalancer delete --cascade $lb_name) to see if that same "immutable" error occurs.
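A minimal sketch of that test, assuming the name the charm gave the load balancer is known (placeholder below):
---
# Substitute the name or ID of the load balancer the charm created.
LB_NAME="<kube-api-lb-name>"

# Check whether the LB is sitting in a PENDING_* (immutable) provisioning state first.
openstack loadbalancer show "$LB_NAME" -c provisioning_status

# The same command the charm's cleanup logic uses: --cascade removes the listener,
# pool, and members along with the load balancer itself.
openstack loadbalancer delete --cascade "$LB_NAME"
---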
[1]: https://github.com/juju-solutions/charm-openstack-integrator/blob/master/reactive/openstack.py#L125-L128
[2]: https://github.com/juju-solutions/charm-openstack-integrator/blob/master/lib/charms/layer/openstack.py#L140-L154