Comment 7 for bug 2008509

Trent Lloyd (lathiat) wrote:

The test bundle uses the hacluster charm, which needs to set up a virtual IP (VIP) on the interface (using corosync/pacemaker).

To do this it requires the following options, which are specific to the deployed environment. Apologies, I didn't think about that in my original description.

(1) vip_iface must be set to the real network interface (on AWS: eth0).
(2) All 3 machines must have an IP address in the same subnet, and an unused IP in that subnet must be set as the config option "vip". Since AWS uses a different subnet for each availability zone (AZ), we must lock the application to a single AZ.
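
As a quick sanity check of the unit addresses before picking a VIP (assuming Juju 2.9 syntax; on Juju 3.x the equivalent command is "juju exec"):

    # Show the IPv4 address of eth0 on every keystone unit
    juju run --application keystone 'ip -4 addr show eth0'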

I made it work on AWS by doing the following:

(1) Change the keystone constraints to include "zones=us-east-1b" or any other specific AZ (e.g. ap-southeast-2a). For my usage, I also set instance-type=t2.micro as a constraint on both mysql-innodb-cluster and keystone.
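
Something like the following should do it (a sketch; constraints only apply to units added afterwards, so set them before deploying, or put the same strings under each application's "constraints" key in the bundle):

    juju set-constraints keystone "zones=us-east-1b instance-type=t2.micro"
    juju set-constraints mysql-innodb-cluster "instance-type=t2.micro"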

(2) Find the private VPC subnet for that specific AZ in your AWS console under VPC -> Subnets. For me it was:
172.31.32.0/20 (i.e. 172.31.32.0 - 172.31.47.255; you can calculate this range with sipcalc)
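
sipcalc is in the Ubuntu archive if you don't already have it:

    sudo apt install sipcalc
    # Print the address range covered by the subnet
    sipcalc 172.31.32.0/20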

(3) Modify the 'vip' option to be an IP in this subnet that is not in use; in my case I could use 172.31.47.240.
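
A minimal sketch of setting the VIP options, assuming they are exposed on the keystone application as in the classic OpenStack charms (adjust to match your bundle):

    # Point the charm at the chosen unused IP on the instances' real interface
    juju config keystone vip=172.31.47.240 vip_iface=eth0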

Then the model should deploy and reproduce the issue.

Note: this VIP won't actually work in an AWS environment, but that doesn't matter; we just have to convince the charm to deploy so that the Juju issue can be reproduced. There is no need to access or use the deployed application.

If that doesn't work, I'd suggest creating and publishing a test subordinate charm that supports 20.04 only, but without base-specific builds (otherwise you'll separately hit Bug #2008248), and trying with that.