haproxy ceilometer backends missing when no public-address is found
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Ceilometer Charm | Fix Released | High | Shane Peters |
Bug Description
When using ceilometer in an HA configuration with two VIPs (public and private), only the local unit's public IP is rendered as an available backend in haproxy.
Relevant Config:
=================
"openstack-origin": "cloud:
"os-admin-network": 192.168.1.0/22
"os-internal-
"os-public-
vip: "192.168.1.208 1.2.1.247"
ceilometer-
corosync_transport: unicast
Snippet from haproxy.cfg
=======
backend ceilometer_
balance leastconn
server ceilometer-3 1.2.1.13:8767 check
backend ceilometer_
balance leastconn
server ceilometer-6 192.168.1.134:8767 check
server ceilometer-4 192.168.1.45:8767 check
server ceilometer-3 192.168.1.47:8767 check
Note the missing public backends for 'ceilometer-4' and 'ceilometer-6'.
Since connections aren't being balanced across units, the single (by default) ceilometer api-worker on the one listed unit gets overwhelmed. Raising the api-worker count to match the CPU count is confirmed to help smooth things out.
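As a rough illustration of the tuning suggested above (this is a sketch only; the helper name and fallback behaviour are assumptions, not ceilometer charm code), a worker count can default to the CPU count when nothing is explicitly configured:

```python
# Illustrative sketch: choose an api-worker count, defaulting to the
# CPU count as the report suggests. Not actual ceilometer charm code.
import multiprocessing

def suggested_api_workers(configured=None):
    """Return the configured worker count, or fall back to the CPU count."""
    return configured if configured else multiprocessing.cpu_count()
```

With this fallback, an unset value yields one worker per CPU rather than a single process serving all traffic.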
description: updated
Changed in charm-ceilometer:
status: Confirmed → Triaged
milestone: none → 17.05
Changed in charm-ceilometer:
assignee: nobody → Shane Peters (shaner)
Changed in charm-ceilometer:
status: Fix Committed → Fix Released
milestone: 17.05 → 17.02
Hello,
I can confirm that this bug is reproducible. I configured a mitaka cloud with the latest stable release of the charms.
I deployed 3 units of ceilometer, related to hacluster, with the following settings:
juju set ceilometer vip="10.5.0.200 10.7.0.190"
juju set ceilometer os-internal-network="10.5.0.0/16"
juju set ceilometer os-public-network="10.7.0.0/24"
The resulting haproxy configuration lacks the backends for the configured os-public-network:
root@juju-niedbalski-xenial-machine-23:/home/ubuntu# more /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 20000
user haproxy
group haproxy
spread-checks 0
defaults
log global
mode tcp
option tcplog
option dontlognull
retries 3
timeout queue 5000
timeout connect 5000
timeout client 30000
timeout server 30000
listen stats 5WNPzTbZq6ddz5fVdz4Z2jj
bind 127.0.0.1:8888
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth admin:djT4pZm9L
frontend tcp-in_ceilometer_api
bind *:8777
acl net_10.7.0.100 dst 10.7.0.100/255.255.255.0
use_backend ceilometer_api_10.7.0.100 if net_10.7.0.100
acl net_10.5.0.96 dst 10.5.0.96/255.255.0.0
use_backend ceilometer_api_10.5.0.96 if net_10.5.0.96
default_backend ceilometer_api_10.5.0.96
backend ceilometer_api_10.7.0.100
balance leastconn
server ceilometer-1 10.7.0.100:8767 check
backend ceilometer_api_10.5.0.96
balance leastconn
server ceilometer-2 10.5.0.97:8767 check
server ceilometer-0 10.5.0.73:8767 check
server ceilometer-1 10.5.0.96:8767 check
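For readers unfamiliar with the config above: the frontend's ACLs route each connection by destination address, so traffic addressed to the public network should reach the public backend and everything else falls through to the internal one. A minimal sketch of that dispatch, using the networks from this deployment (function and variable names are illustrative, not haproxy internals):

```python
import ipaddress

# Map each network to its backend, mirroring the acl/use_backend pairs above.
BACKENDS = {
    ipaddress.ip_network("10.7.0.0/24"): "ceilometer_api_10.7.0.100",
    ipaddress.ip_network("10.5.0.0/16"): "ceilometer_api_10.5.0.96",
}
DEFAULT = "ceilometer_api_10.5.0.96"  # mirrors default_backend

def pick_backend(dst):
    """Return the backend whose network contains dst, else the default."""
    for net, backend in BACKENDS.items():
        if ipaddress.ip_address(dst) in net:
            return backend
    return DEFAULT
```

The routing itself works; the bug is that the public backend it selects contains only one server.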
The reason seems to be related to the lack of public-address on the cluster relation:
ubuntu@niedbalski-xenial-bastion:~/openstack-charm-testing$ juju run --unit ceilometer/1 "relation-get -r cluster:0 - ceilometer/0"
private-address: 10.5.0.73
ubuntu@niedbalski-xenial-bastion:~/openstack-charm-testing$ juju run --unit ceilometer/1 "relation-get -r cluster:0 - ceilometer/1"
private-address: 10.5.0.96
ubuntu@niedbalski-xenial-bastion:~/openstack-charm-testing$ juju run --unit ceilometer/1 "relation-get -r cluster:0 - ceilometer/2"
private-address: 10.5.0.97
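A minimal sketch of the failure mode this relation data suggests (the helper and its fallback logic are assumptions for illustration, not the charm's actual code): if the public backend's server list is built from peers' cluster relation data, and peers publish only private-address, every remote unit is skipped and the public backend ends up with just the local unit:

```python
# Peer data as seen on the cluster relation above: no public-address key.
peers = {
    "ceilometer/0": {"private-address": "10.5.0.73"},
    "ceilometer/1": {"private-address": "10.5.0.96"},
    "ceilometer/2": {"private-address": "10.5.0.97"},
}
local_public = "10.7.0.100"  # the local unit knows its own public address

def public_backend_servers(peers, local_unit="ceilometer/1"):
    """Collect public addresses for the public-network backend."""
    servers = {local_unit: local_public}
    for unit, data in peers.items():
        if "public-address" in data:  # never true here, so peers are skipped
            servers[unit] = data["public-address"]
    return servers
```

Run against the relation data above, this yields a one-server public backend, matching the broken haproxy.cfg.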