NO_MONITOR for openstack-integrator provisioned Octavia LB in front of kubernetes-master
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Openstack Integrator Charm | Fix Released | High | Samuel Walladge |
Bug Description
By having the following relation, openstack-integrator provisions an Octavia load balancer in front of the kubernetes-master units:
- ['openstack-
However, the provisioned load balancer has no health monitor for the backend kubernetes-master members, so the member list is not managed properly and haproxy in the amphora keeps retrying requests against failed backends indefinitely. From an API user's point of view there is no obvious failure.
$ openstack loadbalancer list -c name -c provisioning_status -c operating_status -c provider
+------
| name | provisioning_status | operating_status | provider |
+------
| openstack-
+------
$ openstack loadbalancer member list openstack-
+------
| id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight |
+------
| 741c3862-
+------
^^^ NO_MONITOR
$ openstack loadbalancer healthmonitor list
-> (empty)
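As a manual workaround until the charm creates one, a TCP health monitor can be attached to the member pool so Octavia stops routing traffic to dead members. A hedged sketch (the monitor name is illustrative, and `<pool-id>` is a placeholder; look up the real pool with `openstack loadbalancer pool list`):

```shell
# Attach a port-based (TCP) monitor to the existing pool.
# <pool-id> is a placeholder and must be replaced with the real pool ID.
openstack loadbalancer healthmonitor create \
    --name k8s-master-monitor \
    --type TCP \
    --delay 5 --timeout 5 --max-retries 3 \
    <pool-id>
```

With a monitor attached, `openstack loadbalancer member list` should then report ONLINE or ERROR per member instead of NO_MONITOR.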
Changed in charm-openstack-integrator:
assignee: nobody → Samuel Walladge (swalladge)
status: New → In Progress

Changed in charm-openstack-integrator:
importance: Undecided → High
tags: added: review-needed

Changed in charm-openstack-integrator:
milestone: none → 1.24

Changed in charm-openstack-integrator:
status: Fix Committed → Fix Released
It looks like the health-check endpoints are not accessible to unauthenticated users by default, so we could use a port-based status check for the time being.
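A port-based check of the kind Octavia's TCP health monitor performs boils down to whether a TCP connection to the member's port succeeds within a timeout; no authentication against the apiserver is needed. A minimal sketch (the host and port here are illustrative, not taken from the charm):

```python
import socket


def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Demo against a local listener we control (illustrative only):
server = socket.socket()
server.bind(("127.0.0.1", 0))  # kernel picks a free port
server.listen(1)
port = server.getsockname()[1]

alive = port_is_open("127.0.0.1", port)       # listener up
server.close()
dead = port_is_open("127.0.0.1", port, 0.5)   # listener gone

print(alive, dead)
```

This is the trade-off the bug settles on: a TCP check cannot distinguish "port open" from "apiserver healthy", but unlike an HTTPS check against /livez it does not hit the 401 shown below.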
https://kubernetes.io/docs/reference/access-authn-authz/_print/#other-component-roles
> Allows read access to control-plane monitoring endpoints (i.e.
> kube-apiserver liveness and readiness endpoints (/healthz, /livez,
> /readyz), the individual health-check endpoints (/healthz/*, /livez/*,
> /readyz/*), and /metrics). Note that individual health check endpoints
> and the metric endpoint may expose sensitive information.
$ kubectl get --raw='/livez'
ok
$ kubectl get --raw='/livez?verbose'
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
livez check passed
$ curl -ks https://192.168.151.76:6443/livez
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}