Kubernetes 1.20 running on AWS.
Crashdump here: https://oil-jenkins.canonical.com/artifacts/d00463db-1c80-4a35-9786-3e051d6393e4/generated/generated/kubernetes/juju-crashdump-kubernetes-2021-02-23-21.23.58.tar.gz
Bundle here: https://oil-jenkins.canonical.com/artifacts/d00463db-1c80-4a35-9786-3e051d6393e4/generated/generated/kubernetes/bundle.yaml
Full artifacts here: https://oil-jenkins.canonical.com/artifacts/d00463db-1c80-4a35-9786-3e051d6393e4/index.html
The kubernetes-worker charms are stuck waiting on kube-proxy; however, you can see in their journal that it is actually an issue with the kubelet talking to the Docker daemon:
kubernetes-worker_0 $ journalctl -xe --file var/log/journal/ec2ef349f15c05e1f46c14bc2ef82339/system.journal
...
Feb 23 16:25:14 ip-172-31-47-177 systemd-logind[550]: Removed session 15.
-- Subject: Session 15 has been terminated
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 15 has been terminated.
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: W0223 21:25:16.922473 225722 server.go:473] No api server defined - no events will be sent to API server.
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.922504 225722 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.922906 225722 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.922922 225722 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSys>
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.923028 225722 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.923042 225722 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.923048 225722 container_manager_linux.go:315] Creating device plugin manager: true
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: W0223 21:25:16.923129 225722 kubelet.go:300] Using dockershim is deprecated, please consider using a full-fledged CRI implementation
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.923148 225722 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: I0223 21:25:16.923159 225722 client.go:94] Start docker client with request timeout=2m0s
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: F0223 21:25:16.923366 225722 server.go:269] failed to run Kubelet: failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 1 [running]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000124001, 0xc000d2c000, 0xc4, 0xd0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x73014e0, 0xc000000003, 0x0, 0x0, 0xc000cc23f0, 0x6f52842, 0x9, 0x10d, 0x411200)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x73014e0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0001ef530, 0x1, 0x1)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc000266000, 0xc0001121a0, 0x0, 0x0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:269 +0x845
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000266000, 0xc0001121a0, 0x0, 0x0, 0xc000266000, 0xc0001121a0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000266000, 0x16667d81905fd4ec, 0x73010a0, 0x4091a5)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: main.main()
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: _output/local/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 19 [chan receive]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x73014e0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 104 [chan receive]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalContext.func1(0xc0002e19b0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:48 +0x36
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalContext
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:47 +0xf3
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 132 [chan receive]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4f38260, 0xc000a49500, 0xc0002035a0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:301 +0xaa
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 131 [chan receive]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc0000e3200)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:219 +0x66
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewBroadcaster
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:73 +0xf7
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 61 [select]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc0002a6640)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 74 [select]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a8bf18, 0x4f27660, 0xc000492e70, 0x1, 0xc0001020c0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a8bf18, 0x12a05f200, 0x0, 0xc00040e101, 0xc0001020c0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a8bf18, 0x12a05f200)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /build/kubelet/parts/kubelet/go/src/github.com/kubernetes/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: goroutine 77 [syscall]:
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: os/signal.signal_recv(0x0)
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /snap/go/7013/src/runtime/sigqueue.go:147 +0x9d
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: os/signal.loop()
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /snap/go/7013/src/os/signal/signal_unix.go:23 +0x25
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: created by os/signal.Notify.func1.1
Feb 23 16:25:16 ip-172-31-47-177 kubelet.daemon[225722]: /snap/go/7013/src/os/signal/signal.go:150 +0x45
Feb 23 16:25:16 ip-172-31-47-177 systemd[1]: snap.kubelet.daemon.service: Main process exited, code=exited, status=255/EXCEPTION
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit snap.kubelet.daemon.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 255.
Feb 23 16:25:16 ip-172-31-47-177 systemd[1]: snap.kubelet.daemon.service: Failed with result 'exit-code'.
...
Thanks for the report. You can ignore the messages in the kubelet logs about failing to connect to Docker; that's just a sign that kubernetes-worker has not configured kubelet yet. Kube-proxy has not been configured either.
The kubernetes-worker charms aren't running start_worker. They're missing the kube-control.auth.available flag.
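For context on why a missing flag stops start_worker from running, here is a simplified stand-in for reactive-style flag gating (this is not the actual kubernetes-worker charm code, and the kube-control.dns.available flag name here is only illustrative): a handler decorated with required flags fires only once every one of those flags has been set.

```python
# Simplified sketch of charms.reactive-style flag gating.
flags = set()
started = []

def when(*required):
    """Return a decorator that gates a handler on the given flags."""
    def decorator(fn):
        def maybe_run():
            if all(f in flags for f in required):
                fn()
        return maybe_run
    return decorator

@when("kube-control.dns.available", "kube-control.auth.available")
def start_worker():
    started.append("start_worker")

start_worker()                           # no flags set: handler skipped
flags.add("kube-control.dns.available")
start_worker()                           # auth flag missing: still skipped
flags.add("kube-control.auth.available")
start_worker()                           # both flags set: handler fires
```

Since kube-control.auth.available never gets set on the workers, the guarded handler never fires, which matches the units sitting in "waiting" status.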
Looking at the debug-log for kubernetes-master, it looks like it tried to authenticate the workers but failed:
2021-02-23 17:33:53 INFO juju-log grafana:59: Invoking reactive handler: reactive/kubernetes_master.py:569:setup_leader_authentication
relation-changed The Secret "auth-admin-ip.w9q-.9f" is invalid: metadata.name: Invalid value: "auth-admin-ip.w9q-.9f": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
relation-changed The Secret "auth-admin-vml..xq7si" is invalid: metadata.name: Invalid value: "auth-admin-vml..xq7si": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
2021-02-23 17:33:54 WARNING grafana-
2021-02-23 17:33:56 WARNING grafana-
2021-02-23 17:33:57 INFO juju-log grafana:59: Missing required tokens for kubelet startup; will retry
After this, setup_leader_authentication was never called again.
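You can check the rejected names against the RFC 1123 subdomain regex quoted in the error message; the helper function below is just for illustration. Both generated names fail: "w9q-" is a label ending in '-', and ".." produces an empty label.

```python
import re

# RFC 1123 subdomain regex, as quoted in the Secret validation error above.
SUBDOMAIN = re.compile(
    r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
)

def is_valid_secret_name(name: str) -> bool:
    """Return True if `name` is a valid RFC 1123 subdomain."""
    return bool(SUBDOMAIN.match(name))

print(is_valid_secret_name("auth-admin-ip.w9q-.9f"))   # False: label "w9q-" ends with '-'
print(is_valid_secret_name("auth-admin-vml..xq7si"))   # False: '..' yields an empty label
print(is_valid_secret_name("auth-admin-example"))      # True
```

So whatever is generating the auth-admin-* secret names is producing labels that end in '-' or '.', which the API server rejects, and the handler never retries.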