System host status different from Kubernetes node status
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
StarlingX | Triaged | Low | Unassigned |
Bug Description
Brief Description
-----------------
After the initial unlock of a worker, the following status can be observed. Most of the time it does not last long and is corrected on its own; however, I did observe it persisting for a few minutes. When I first looked at this, not all of the containers had started on the worker. Suggest having 'system host-list' verify that the Kubernetes node is in the "Ready" state before showing "available" (a sketch of such a check follows the command output below).
$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | tm0          | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
controller-0   Ready      master   4d20h   v1.15.3
controller-1   Ready      master   4d18h   v1.15.3
tm0            NotReady   <none>   4d18h   v1.15.3
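For illustration only, here is a minimal sketch of the kind of readiness check being suggested, written against the upstream Kubernetes Python client. The node name "tm0", the kubeconfig location, and the mapping of host name to node name are assumptions; this is not the existing sysinv/mtce implementation.

    # Sketch: only treat a host as "available" once its Kubernetes node
    # reports the Ready condition as True.
    # Assumptions: the 'kubernetes' Python client is installed, a usable
    # kubeconfig is accessible, and host name == node name (e.g. "tm0").
    from kubernetes import client, config

    def k8s_node_ready(node_name):
        """Return True only if the node's Ready condition is True."""
        config.load_kube_config()
        v1 = client.CoreV1Api()
        node = v1.read_node(node_name)
        for cond in node.status.conditions or []:
            if cond.type == "Ready":
                return cond.status == "True"
        return False  # node has not reported a Ready condition yet

    if __name__ == "__main__":
        name = "tm0"
        if k8s_node_ready(name):
            print("%s: kubernetes Ready - ok to report 'available'" % name)
        else:
            print("%s: kubernetes NotReady - hold 'available'" % name)

Run standalone, this would report "NotReady" for tm0 in the state captured above even though 'system host-list' already shows the host as available.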
Severity
--------
Minor
Steps to Reproduce
------------------
Install the system as normal and unlock a worker node.
Reproducibility
---------------
Intermittent; it seems to happen during the initial unlock.
System Configuration
--------------------
Multi-node controller storage system.
Branch/Pull Time/Commit
-----------------------
2019-09-23_20-00-00
Test Activity
-------------
Evaluation
low priority / not gating - doesn't seem to have any system impact. We would need input from the containers TL on whether we want to tie the node mtce state to the k8s node state.