kubernetes-control-plane installation hook failed: "coordinator-relation-changed"
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Kubernetes Control Plane Charm | Triaged | Medium | Unassigned |
Kubernetes Worker Charm | Triaged | Medium | Unassigned |
Bug Description
In testrun https:/
=======
kubernetes-
containerd/7 active idle 10.246.167.159 Container runtime available
filebeat/23 blocked idle 10.246.167.159 filebeat service not running
kube-ovn/7 waiting idle 10.246.167.159 Waiting to retry configuring Kube-OVN
nrpe/29 active idle 10.246.167.159 icmp,5666/tcp Ready
ntp/10 active idle 10.246.167.159 123/udp chrony: Ready, OK: offset is 0.000046
telegraf/23 active idle 10.246.167.159 9103/tcp Monitoring kubernetes-
kubernetes-
containerd/6 active idle 10.246.164.138 Container runtime available
filebeat/21 blocked idle 10.246.164.138 filebeat service not running
kube-ovn/6 active idle 10.246.164.138
nrpe/27 active idle 10.246.164.138 icmp,5666/tcp Ready
ntp/9 active idle 10.246.164.138 123/udp chrony: Ready, OK: offset is 0.000018
telegraf/20 active idle 10.246.164.138 9103/tcp Monitoring kubernetes-
kubernetes-
containerd/8 active idle 10.246.167.181 Container runtime available
filebeat/24 blocked idle 10.246.167.181 filebeat service not running
kube-ovn/8 active idle 10.246.167.181
nrpe/30 active idle 10.246.167.181 icmp,5666/tcp Ready
ntp/11 active idle 10.246.167.181 123/udp chrony: Ready, OK: offset is 0.000023
telegraf/24 active idle 10.246.167.181 9103/tcp Monitoring kubernetes-
=======
The logs show:
=======
unit-kubernetes-
unit-kubernetes
unit-kubernetes
Traceback (most recent call last):
  File "/var/lib/
    bus.
  File "/var/lib/
    _invoke(
  File "/var/lib/
    handler.
  File "/var/lib/
    self.
  File "/var/lib/
    create_
  File "/var/lib/
    check_
  File "/usr/lib/
    raise CalledProcessError
subprocess.
=======
Crashdumps and configs can be found here:
https:/
Tags added: cdo-qa foundations-engine
This looks like an internal error coming from systemd. There is not much we can do to prevent it, but we can make the charm handle it better.
The failed kubectl call occurred in create_kubeconfig. We could perhaps avoid the kubectl call entirely and render the kubeconfig ourselves via yaml.safe_dump and file writes. Failing that, we would have to handle failed kubectl calls and retry appropriately. Sketches of both approaches follow.