Unable to delete K8s cluster from the Horizon dashboard; backend Cinder volumes are not deleted with the VM

Bug #1983983 reported by Narinder Gupta
This bug affects 1 person
Affects: OpenStack Magnum Dashboard Charm
Status: Expired
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

After deploying a COE cluster with a Cinder volume backend for the instances, we are unable to delete the K8s cluster from the Horizon dashboard; deleting the VMs does not delete their backend Cinder volumes.

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Can you please provide a bit more information about what you are seeing and what the problem is? Specific reproduction steps and logs would be appreciated.

Changed in charm-magnum-dashboard:
status: New → Incomplete
Revision history for this message
TCSECP (tcsecp) wrote :

Please check the logs below for the volume.

Note: This error occurs only rarely, but because of it we were unable to delete the Kubernetes cluster.

test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu    84fd50b6-fc24-4f2a-84cb-c359d5084df1      1 week, 6 days    Delete Failed     Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use

test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu    84fd50b6-fc24-4f2a-84cb-c359d5084df1      1 week, 6 days    Delete Failed     Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use
0     efcb26c0-1926-45f9-ab14-1db141f8a5d0       1 week, 6 days      Delete Failed      Error: resources[0].resources.kube_node_volume: Volume in use
0     efcb26c0-1926-45f9-ab14-1db141f8a5d0       1 week, 6 days      Delete In Progress      state changed
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu    84fd50b6-fc24-4f2a-84cb-c359d5084df1      1 week, 6 days    Delete In Progress      Stack DELETE started
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu    84fd50b6-fc24-4f2a-84cb-c359d5084df1      1 week, 6 days    Delete Failed     Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use
0      efcb26c0-1926-45f9-ab14-1db141f8a5d0  1 week, 6 days    Delete Failed     Error: resources[0].resources.kube_node_volume: Volume in use
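
The "Volume in use" error suggests that Cinder still records an attachment for the kube_node_volume when Heat tries to delete it. As a rough diagnostic sketch (the volume ID below is a placeholder, not taken from these events), the stuck volume and its attachment can be inspected with:

openstack volume list --status in-use
openstack volume show <kube_node_volume-id> -c status -c attachments   # check which server the attachment points at

If the attachment points at a server that no longer exists, the attachment record is stale and the stack delete will likely keep failing until it is cleared.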

Revision history for this message
TCSECP (tcsecp) wrote :

Hi Team,

Below are the commands I used to delete the K8s cluster. After multiple attempts the VM and volumes were eventually deleted, but Magnum still shows the cluster status as DELETE_FAILED.

Regarding cluster ID 31ea7b45-04d4-49d0-b847-1a06668bb506:

openstack coe cluster list
+--------------------------------------+----------------------+--------------+------------+--------------+-----------------+
| uuid | name | keypair | node_count | master_count | status |
+--------------------------------------+----------------------+--------------+------------+--------------+-----------------+
| ffa56e2e-6ae8-4cf1-9183-74e8dc44e3a6 | test2 | jumphost-key | 1 | 1 | DELETE_FAILED |
| 31ea7b45-04d4-49d0-b847-1a06668bb506 | k8s-cluster-sri24-24 | jumphost-key | 1 | 1 | DELETE_FAILED |
| 12a704cf-a89e-4153-903c-574b4545ed74 | k8s-narinder | jumphost-key | 1 | 1 | CREATE_COMPLETE |
| 9743f6d6-ed91-4ddb-adb7-34c2e199c07c | k8s-cluster-masterha | jumphost-key | 1 | 2 | CREATE_FAILED |
+--------------------------------------+----------------------+--------------+------------+--------------+-----------------+

openstack coe cluster delete 31ea7b45-04d4-49d0-b847-1a06668bb506

openstack coe cluster list
+--------------------------------------+----------------------+--------------+------------+--------------+--------------------+
| uuid | name | keypair | node_count | master_count | status |
+--------------------------------------+----------------------+--------------+------------+--------------+--------------------+
| ffa56e2e-6ae8-4cf1-9183-74e8dc44e3a6 | test2 | jumphost-key | 1 | 1 | DELETE_FAILED |
| 31ea7b45-04d4-49d0-b847-1a06668bb506 | k8s-cluster-sri24-24 | jumphost-key | 1 | 1 | DELETE_IN_PROGRESS |
| 12a704cf-a89e-4153-903c-574b4545ed74 | k8s-narinder | jumphost-key | 1 | 1 | CREATE_COMPLETE |
| 9743f6d6-ed91-4ddb-adb7-34c2e199c07c | k8s-cluster-masterha | jumphost-key | 1 | 2 | CREATE_FAILED |
+--------------------------------------+----------------------+--------------+------------+--------------+--------------------+

openstack coe cluster list
+--------------------------------------+----------------------+--------------+------------+--------------+-----------------+
| uuid | name | keypair | node_count | master_count | status |
+--------------------------------------+----------------------+--------------+------------+--------------+-----------------+
| ffa56e2e-6ae8-4cf1-9183-74e8dc44e3a6 | test2 | jumphost-key | 1 | 1 | DELETE_FAILED |
| 31ea7b45-04d4-49d0-b847-1a06668bb506 | k8s-cluster-sri24-24 | jumphost-key | 1 | 1 | DELETE_FAILED |
| 12a704cf-a89e-4153-903c-574b4545ed74 | k8s-narinder | jumphost-key | 1 | 1 | CREATE_COMPLETE |
| 9743f6d6-ed91-4ddb-adb7-34c2e199c07c | k8s-...
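
If the volume attachment is what blocks the delete, one possible cleanup sequence is to remove the volume manually and then retry. This is only a sketch; the server and volume IDs are placeholders:

openstack server remove volume <minion-server-id> <kube_node_volume-id>   # detach, if the minion server still exists
openstack volume delete <kube_node_volume-id>                             # once the volume reports "available"
openstack coe cluster delete 31ea7b45-04d4-49d0-b847-1a06668bb506         # retry the cluster delete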


Revision history for this message
TCSECP (tcsecp) wrote :

I was unable to capture the volume logs, but I can see the following events in the Heat stack, which say the volume is in use.

test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete Failed Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete Failed Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete In Progress state changed
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete In Progress Stack DELETE started
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete Failed Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete Failed Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete In Progress state changed
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete In Progress Stack DELETE started
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete Failed Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete Failed Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete In Progress state changed
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete In Progress Stack DELETE started
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete Failed Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete Failed Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete In Progress state changed
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete In Progress Stack DELETE started
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete Failed Resource DELETE failed: Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete Failed Error: resources[0].resources.kube_node_volume: Volume in use
0 efcb26c0-1926-45f9-ab14-1db141f8a5d0 4 hours, 53 minutes Delete In Progress state changed
test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu 84fd50b6-fc24-4f2a-84cb-c359d5084df1 4 hours, 53 minutes Delete In Progress Stack DELETE started
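
Because the failing resource sits in a nested kube_minions stack, it can help to map the Heat resource to the actual Cinder volume. A sketch, using the minion stack name from the events above (the physical_resource_id placeholder is whatever that listing reports for kube_node_volume):

openstack stack resource list --nested-depth 2 test2-67jknqibxfr5-kube_minions-7v2p3nzsshsu   # locate kube_node_volume and its physical_resource_id
openstack volume show <physical_resource_id>   # confirm the volume's status and attachments in Cinder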

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Magnum Dashboard Charm because there has been no activity for 60 days.]

Changed in charm-magnum-dashboard:
status: Incomplete → Expired