This long-standing bug was missing the proper subscriptions, so I've added field-high and canonical-boostack-cres.
The effect is persistent false alerts in Nagios for this particular cinder configuration.
Workaround: In the meantime, we can manually remove the nrpe check for haproxy_servers from the relation:

$ juju run -u cinder-ssd/1 -- relation-ids nrpe-external-master
nrpe-external-master:170
$ juju run -u cinder-ssd/1 -- relation-list -r170
nrpe/23
$ juju run -u nrpe/23 -- relation-get -r170 - cinder-ssd/1
monitors: |
  monitors:
    remote:
      nrpe:
        apache2: {command: check_apache2}
        cinder-volume: {command: check_cinder-volume}
        haproxy: {command: check_haproxy}
        haproxy_queue: {command: check_haproxy_queue}
        haproxy_servers: {command: check_haproxy_servers}
        memcached: {command: check_memcached}
primary: "True"
private-address: 10.36.1.231

# write monitors section to a file
$ juju run -u cinder-ssd/1 -- relation-set -r170 monitors="$(cat monitors.txt)"
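The monitors blob returned by relation-get is plain YAML, so the edit step can be scripted rather than done by hand. A minimal sketch, assuming the monitors section has been saved to monitors.txt and that a simple line filter is acceptable for this YAML layout (the grep-based filtering is my choice, not part of the charm):

```shell
# Recreate monitors.txt from the relation-get output above
# (in practice, paste the real output instead of this here-doc).
cat > monitors.txt <<'EOF'
monitors:
  remote:
    nrpe:
      apache2: {command: check_apache2}
      cinder-volume: {command: check_cinder-volume}
      haproxy: {command: check_haproxy}
      haproxy_queue: {command: check_haproxy_queue}
      haproxy_servers: {command: check_haproxy_servers}
      memcached: {command: check_memcached}
EOF

# Drop the haproxy_servers check; this assumes it occupies exactly one line.
grep -v 'haproxy_servers' monitors.txt > monitors.tmp && mv monitors.tmp monitors.txt

cat monitors.txt
# Then push it back into the relation, as in the workaround:
# juju run -u cinder-ssd/1 -- relation-set -r170 monitors="$(cat monitors.txt)"
```

Note this only works while the YAML keeps one check per line; if the charm ever reformats the blob, a YAML-aware tool would be safer.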