/root/.kube/config refers to only one kubernetes-master IP, causing API timeouts between kubelet and an offline clustered k8s-master
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Kubernetes Control Plane Charm | Invalid | High | Unassigned |
Kubernetes Worker Charm | Invalid | High | Unassigned |
Openstack Integrator Charm | Fix Released | High | Unassigned |
Bug Description
I am running Kubernetes 1.19 with these charms:
cs:~containers/
cs:~containers/
When one of the three deployed kubernetes-master units is offline, some kubelet processes, as well as the node health checks on those same nodes, may disconnect from the Kubernetes cluster, causing the nodes to appear NotReady.
When investigating this further, I found that /root/.kube/config only lists one of the kubernetes-master unit IPs.
It would be beneficial for the connectivity between kubelet and the controllers to make use of the load balancer provided by the relation to openstack-integrator.
To replicate, deploy 3 kubernetes-master units and several kubernetes-worker units, then add the relation to openstack-integrator.
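The single-endpoint problem described above can be sketched with a short check of which API server endpoints a kubeconfig points at. This is a minimal illustration, not part of the charm code; the sample kubeconfig text and IP address are hypothetical, and a real check would use a YAML parser rather than line scanning.

```python
# Hypothetical sample of a kubeconfig like /root/.kube/config that
# lists only one kubernetes-master IP (the situation in this bug).
SAMPLE_KUBECONFIG = """\
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://192.168.0.10:6443
  name: juju-cluster
"""

def api_servers(kubeconfig_text):
    """Return every 'server:' endpoint found in the kubeconfig text."""
    return [line.split("server:", 1)[1].strip()
            for line in kubeconfig_text.splitlines()
            if line.strip().startswith("server:")]

servers = api_servers(SAMPLE_KUBECONFIG)
print(servers)
if len(servers) == 1:
    # kubelet talks to exactly one master IP: if that unit goes
    # offline, API calls time out and the node can go NotReady.
    print("single API endpoint; prefer a load-balancer VIP here")
```

If the `server:` entry instead pointed at a load-balancer VIP (as provided via the openstack-integrator relation), the failure of a single kubernetes-master unit would not take kubelet's API endpoint with it.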
Changed in charm-kubernetes-master:
importance: Undecided → High
Changed in charm-kubernetes-worker:
importance: Undecided → High
Changed in charm-openstack-integrator:
importance: Undecided → High
no longer affects: charm-kubernetes-master
no longer affects: charm-kubernetes-worker
Changed in charm-openstack-integrator:
status: New → Triaged
tags: added: review-needed
Changed in charm-openstack-integrator:
status: Triaged → Fix Committed
milestone: none → 1.23+ck1
tags: added: backport-needed removed: review-needed
Changed in charm-openstack-integrator:
milestone: 1.23+ck1 → 1.24
tags: removed: backport-needed
Changed in charm-openstack-integrator:
status: Fix Committed → Fix Released
Subscribing field-high, as this causes kubelet/pod outages and failovers when a single kubernetes-master in a cluster fails.