Activity log for bug #1878954

Date | Who | What changed | Old value | New value | Message
2020-05-15 16:20:55 | Peter De Sousa | bug | | | added bug
2020-05-15 16:23:47 | Peter De Sousa | description | (initial text) | (same text with the typo "resutl" fixed to "result") |

    The description, as edited:

    Hello,

    Problem: When relating the kubernetes-workers and masters to the
    load-balancer, the services are configured twice as "target_service".

    Detail: As a result, nginx will fail to start and the hacluster will
    show a failed resource, with the nginx config looking like this:

        # /etc/nginx/sites-enabled/apilb
        upstream target_service {
          server 172.25.94.125:6443;
          server 172.25.94.126:6443;
        }
        upstream target_service {
          server 172.25.94.125:9103;
          server 172.25.94.126:9103;
          server 172.25.94.127:9103;
          server 172.25.94.128:9103;
          server 172.25.94.129:9103;
          server 172.25.94.130:9103;
          server 172.25.94.131:9103;
          server 172.25.94.132:9103;
          server 172.25.94.133:9103;
          server 172.25.94.134:9103;
        }

    Workaround: Remove the relations (see the juju command sketch after
    this log):

    - kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer
    - kubeapi-load-balancer:apiserver kubernetes-master:kube-api-endpoint
    - kubernetes-worker kubeapi-load-balancer

    Wait for all three relations to be removed, then re-add the relations.
    The file should come back without the duplicate entry.

    Cheers,
    Peter
2020-05-18 17:00:27 | George Kraft | charm-kubeapi-load-balancer: status | New | Incomplete |
2020-05-19 12:28:48 | Peter De Sousa | attachment added | | clean-bundle.yaml https://bugs.launchpad.net/charm-kubeapi-load-balancer/+bug/1878954/+attachment/5374113/+files/clean-bundle.yaml |
2020-05-19 19:00:41 | George Kraft | charm-kubeapi-load-balancer: importance | Undecided | High |
2020-05-19 19:00:43 | George Kraft | charm-kubeapi-load-balancer: status | Incomplete | Triaged |
2020-05-20 15:50:17 | George Kraft | charm-kubeapi-load-balancer: importance | High | Medium |
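
The workaround in the description maps onto juju CLI commands roughly as
follows. This is a minimal sketch, not taken from the bug report itself:
the application names are assumed to match those used in the relation
list above, and juju remove-relation, add-relation, and status are the
standard juju client commands.

    # Remove the three relations named in the workaround:
    juju remove-relation kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer
    juju remove-relation kubeapi-load-balancer:apiserver kubernetes-master:kube-api-endpoint
    juju remove-relation kubernetes-worker kubeapi-load-balancer

    # Wait until all three relations have disappeared, e.g. by watching:
    juju status --relations

    # Then re-add the same relations:
    juju add-relation kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer
    juju add-relation kubeapi-load-balancer:apiserver kubernetes-master:kube-api-endpoint
    juju add-relation kubernetes-worker kubeapi-load-balancer

    # /etc/nginx/sites-enabled/apilb on the load balancer should then be
    # regenerated without the duplicate "upstream target_service" block.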