Getting message "vip not yet configured" on all Openstack Cluster based services
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Triaged | High | Unassigned |
OpenStack HA Cluster Charm | Invalid | Undecided | Unassigned |
Bug Description
I am trying to deploy a charm-based OpenStack and Contrail cluster, but each time the deployment gets stuck with "vip not yet configured" on all OpenStack cluster-based services.
```
glance-
glance/1 active idle 1/lxd/2 172.30.204.169 9292/tcp Unit is ready
glance-
glance/2 active idle 2/lxd/2 172.30.204.157 9292/tcp Unit is ready
glance-
heat/0* active idle 0/kvm/3 172.30.204.149 8000/tcp,8004/tcp Unit is ready
contrail-
heat-hacluster/0* blocked idle 172.30.204.149 Resource: res_heat_
ntp/6 active idle 172.30.204.149 123/udp chrony: Ready
heat/1 active idle 1/kvm/3 172.30.204.172 8000/tcp,8004/tcp Unit is ready
contrail-
heat-hacluster/2 waiting idle 172.30.204.172 Resource: res_heat_
ntp/12 active idle 172.30.204.172 123/udp chrony: Ready
heat/2 active idle 2/kvm/3 172.30.204.164 8000/tcp,8004/tcp Unit is ready
contrail-
heat-hacluster/1 waiting idle 172.30.204.164 Resource: res_heat_
ntp/11 active idle 172.30.204.164 123/udp chrony: Ready
keystone/0* active idle 0/lxd/4 172.30.204.155 5000/tcp Unit is ready
keystone-
keystone/1 active idle 1/lxd/3 172.30.204.165 5000/tcp Unit is ready
keystone-
keystone/2 active idle 2/lxd/3 172.30.204.160 5000/tcp Unit is ready
keystone-
memcached/0* active idle 0/lxd/5 172.30.205.178 11211/tcp Unit is ready and clustered
memcached/1 active idle 1/lxd/4 172.30.205.163 11211/tcp Unit is ready and clustered
memcached/2 active idle 2/lxd/4 172.30.205.206 11211/tcp Unit is ready and clustered
mysql/0* active idle 0/lxd/6 172.30.205.197 3306/tcp Unit is ready
mysql-
mysql/1 active idle 1/lxd/5 172.30.205.216 3306/tcp Unit is ready
mysql-hacluster/1 active idle 172.30.205.216 Unit is ready and clustered
mysql/2 active idle 2/lxd/5 172.30.205.169 3306/tcp Unit is ready
mysql-hacluster/2 active idle 172.30.205.169 Unit is ready and clustered
neutron-api/0* active idle 0/kvm/4 172.30.204.153 9696/tcp Unit is ready
contrail-
neutron-
ntp/5 active idle 172.30.204.153 123/udp chrony: Ready
neutron-api/1 active idle 1/kvm/4 172.30.204.171 9696/tcp Unit is ready
contrail-
neutron-
ntp/10 active idle 172.30.204.171 123/udp chrony: Ready
neutron-api/2 active idle 2/kvm/4 172.30.204.162 9696/tcp Unit is ready
contrail-
neutron-
ntp/8 active idle 172.30.204.162 123/udp chrony: Ready
nova-cloud-
ncc-hacluster/0* blocked idle 172.30.204.151 Resource: res_nova_
nova-cloud-
ncc-hacluster/2 waiting idle 172.30.204.167 Resource: res_nova_
nova-cloud-
ncc-hacluster/1 waiting idle 172.30.204.158 Resource: res_nova_
```
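The blocked/waiting hacluster units above can usually be inspected directly. A hedged sketch of how one might check whether a VIP was actually set and what Pacemaker sees (application and unit names are taken from the status output; `crm` assumes the crmsh tooling the hacluster charm installs, and a live Juju controller is required):

```shell
# Check whether a vip was actually set on the principal applications;
# an empty value here is what produces "vip not yet configured".
juju config heat vip
juju config nova-cloud-controller vip

# Inspect the Pacemaker resource state on a blocked hacluster unit.
juju run --unit heat-hacluster/0 -- sudo crm status
```

If `juju config <app> vip` comes back empty, the VIP anchors from the bundle were never applied to the principal charm, which matches the symptom reported here.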
Changed in juju:
status: New → Triaged
importance: Undecided → High
tags: added: cdo-qa
```yaml
series: bionic
variables:
  # https://wiki.ubuntu.com/OpenStack/CloudArchive
  # packages for an LTS release come in a form of SRUs
  # do not use cloud:<pocket> for an LTS version as
  # installation hooks will fail. Example:
  #openstack-origin: &openstack-origin distro
  openstack-origin: &openstack-origin cloud:bionic-rocky
  openstack-region: &openstack-region RegionOne
  # !> Important <!
  # configure that value for the API services as if they
  # spawn too many workers you will get inconsistent failures
  # due to CPU overcommit
  worker-multiplier: &worker-multiplier 0.25
  # Number of MySQL connections in the env. Default is not enough
  # for environment of this size. So, bundle declares default of
  # 2000. There's hardly a case for higher than this
  mysql-connections: &mysql-connections 2000
  # MySQL tuning level. Charm default is "safest", this however
  # impacts performance. For spinning platters consider setting this
  # to "fast"
  mysql-tuning-level: &mysql-tuning-level safest
  # Configure RAM allocation params for nova. For hyperconverged
  # nodes, we need to have plenty reserves for service containers,
  # Ceph OSDs, and swift-storage daemons. Those processes will not
  # only directly allocate RAM but also indirectly via pagecache, file
  # system caches, system buffers usage. Adjust for higher density
  # clouds, e.g. high OSD/host ratio or when running >2 service
  # containers/host adapt appropriately.
  reserved-host-memory: &reserved-host-memory 16384
  ram-allocation-ratio: &ram-allocation-ratio 0.999999 # XXX bug 1613839
  cpu-allocation-ratio: &cpu-allocation-ratio 4.0
  # This is Management network, unrelated to OpenStack and other applications
  # OAM - Operations, Administration and Maintenance
  oam-space: &oam-space oam-space
  # This is OpenStack Admin network; for adminURL endpoints
  admin-space: &admin-space oam-space
  # This is OpenStack Public network; for publicURL endpoints
  public-space: &public-space external-space
  # This is OpenStack Internal network; for internalURL endpoints
  internal-space: &internal-space oam-space
  # CEPH configuration
  # CEPH access network
  ceph-public-space: &ceph-public-space ceph-access-space
  # CEPH replication network
  ceph-cluster-space: &ceph-cluster-space ceph-replica-space
  sdn-transport: &sdn-transport sdn-transport
  # Workaround for 'only one default binding supported'
  oam-space-constr: &oam-space-constr spaces=oam-space
  ceph-access-constr: &ceph-access-constr spaces=ceph-access-space
  combi-access-constr: &combi-access-constr spaces=ceph-access-space,oam-space
  # Various VIPs
  aodh-vip: &aodh-vip "172.30.204.132 172.30.205.132"
  cinder-vip: &cinder-vip "172.30.204.133 172.30.205.133"
  dashboard-vip: &dashboard-vip "172.30.205.144"
  glance-vip: &glance-vip "172.30.204.134 172.30.205.134"
  gnocchi-vip: &gnocchi-vip "172.30.204.135 172.30.205.135"
  heat-vip: &heat-vip "172.30.204.136 172.30.205.136"
...
```
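For context, the VIP anchors above only take effect if they are dereferenced in the bundle's `applications` section and the hacluster subordinate is related to the principal charm. A minimal, hypothetical excerpt of how such a bundle typically wires this up (charm revisions and most options omitted; this is an illustration, not the reporter's actual bundle):

```yaml
applications:
  heat:
    charm: cs:heat
    num_units: 3
    options:
      vip: *heat-vip                        # dereferences the heat-vip anchor above
      worker-multiplier: *worker-multiplier
  heat-hacluster:
    charm: cs:hacluster
relations:
  - [ "heat", "heat-hacluster" ]
```

If the `vip: *heat-vip` line (or its equivalent for nova-cloud-controller) is missing, the hacluster subordinate has no VIP to manage and reports exactly the "vip not yet configured" state seen in the status output.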