On deploy, pods go into CrashLoopBackOff and leak mounts on workers
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Openstack Integrator Charm | Incomplete | High | Unassigned |
Bug Description
Pod status:
$ kubectl get po --all-namespaces | grep csi-cinder
kube-system csi-cinder-
kube-system csi-cinder-
kube-system csi-cinder-
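For completeness, the node placement of each pod can be captured as well, which is relevant to the per-worker mount counts further down (a sketch, not output from this deployment):
$ kubectl get po -n kube-system -o wide | grep csi-cinder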
Pod descriptions:
csi-cinder-
https:/
csi-cinder-
https:/
csi-cinder-
https:/
Pod logs, for controllerplugin:
https:/
And for nodeplugin (both seem identical):
https:/
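If it helps with triage, the pre-crash logs can be pulled as well; the pod and container names below are assumptions based on the upstream cinder-csi manifests, so substitute whatever kubectl get po actually shows:
$ kubectl -n kube-system logs csi-cinder-controllerplugin-0 -c cinder-csi-plugin --previous
$ kubectl -n kube-system logs <nodeplugin pod> -c cinder-csi-plugin --previous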
Mount leak on worker/0:
$ wc -l /proc/self/mounts
34914 /proc/self/mounts
And on worker/1:
$ wc -l /proc/self/mounts
49210 /proc/self/mounts
And details from both - suspiciously round numbers (this part may make more sense as a separate bug against Kubernetes itself, or some component of it?):
https:/
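As a rough way to see what is accumulating, the leaked entries can be grouped by filesystem type and by mount point prefix (in /proc/self/mounts the second field is the mount point and the third is the fstype):
$ awk '{print $3}' /proc/self/mounts | sort | uniq -c | sort -rn | head
$ awk '{print $2}' /proc/self/mounts | cut -d/ -f1-5 | sort | uniq -c | sort -rn | head
If the bulk of them turn out to be kubelet volume mounts under /var/lib/kubelet/pods, that would support splitting the leak out as a separate bug against kubelet/CSI rather than the charm.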
Changed in charm-openstack-integrator:
importance: Undecided → High
status: New → Triaged
tags: added: canonical-is
@Barry, any chance you could provide information about the OpenStack release (version used and charm versions) and also the Juju bundle used?
It would be interesting to see whether the same workload runs on a newer release of OpenStack, as Tom mentioned this is running on Icehouse.
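For reference, something like the following should capture the charm revisions and the bundle in one go (assuming a reasonably recent Juju client; the application name openstack-integrator is a guess at what it is called in this model):
$ juju status --format=yaml openstack-integrator
$ juju export-bundle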