Deploying a new kubernetes-master unit makes pods lose connectivity to cephfs volumes
Bug #1891757 reported by
Tiago Pasqualini da Silva
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Kubernetes Control Plane Charm | In Progress | High | Joseph Borg | —
Bug Description
Title says it all. I can consistently reproduce this with the following steps:
- Start with a simple deployment with cephfs, 3 ceph-monitor units, and 3 kubernetes-master units
- Deploy a pod with volumes backed by cephfs and verify that they are working
- Add new ceph-monitor and kubernetes-master units
- Check the volumes on the running pod and verify that they now show the error: "Transport endpoint is not connected"
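The steps above can be sketched as a command sequence. This is an illustrative reproduction script, not from the report itself: the application names (`kubernetes-master`, `ceph-mon`) are the charm defaults, and the pod name and mount path (`my-pod`, `/mnt/cephfs-volume`) are hypothetical placeholders for whatever workload you deployed.

```shell
# 1. Verify the cephfs-backed volume works from inside a running pod
kubectl exec my-pod -- ls /mnt/cephfs-volume

# 2. Scale out the cluster with one more unit of each application
juju add-unit ceph-mon
juju add-unit kubernetes-master

# 3. Re-check the same volume; the bug manifests as:
#    ls: /mnt/cephfs-volume: Transport endpoint is not connected
kubectl exec my-pod -- ls /mnt/cephfs-volume
```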
I tried isolating the steps by leaving the ceph-monitor units untouched, but that gave inconsistent results (sometimes I got the error, sometimes I didn't). I believe there is some sort of race condition in the kubernetes-master charm when it updates the ceph-mon addresses.
tags: added: sts
Changed in charm-kubernetes-master:
assignee: nobody → Joseph Borg (joeborg)
status: Triaged → In Progress
Thanks for the report. I believe this is enough information for us to at least look into it.
If you can, please provide the charm revisions and the Kubernetes version you were running when you encountered this.
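The requested details can typically be gathered with commands like the following. This is a sketch: the application names assume the default Charmed Kubernetes deployment and may differ in your model.

```shell
# Charm revisions appear in the applications section of juju status
juju status kubernetes-master ceph-mon

# Kubernetes client and server versions
kubectl version
```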