I managed to restore the quorum manually using mysql-shell. Here are the steps:
1. juju ssh into the first non-leader instance
$ mysql-shell.mysqlsh
mysql-py> shell.connect('clusteruser:<cluster-password>@<leader-ip>')
mysql-py []> cluster = dba.get_cluster()
mysql-py []> cluster.force_quorum_using_partition_of('clusteruser:<cluster-password>@<leader-ip>')
mysql-py []> cluster.rejoin_instance('clusteruser:<cluster-password>@<leader-ip>')
<exit>
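Whether the forced quorum actually took effect can be double-checked from mysql-shell before moving on; a minimal sketch, assuming the same clusteruser credentials (cluster.status() is the stock AdminAPI call, nothing charm-specific):
$ mysql-shell.mysqlsh
mysql-py> shell.connect('clusteruser:<cluster-password>@<leader-ip>')
mysql-py []> cluster = dba.get_cluster()
mysql-py []> cluster.status()
<exit>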
This restored the quorum. The only thing left was to rejoin the instance on the second non-leader instance:
2. juju ssh into the second non-leader instance
$ mysql-shell.mysqlsh
mysql-py> shell.connect('clusteruser:<cluster-password>@<leader-ip>')
mysql-py []> cluster = dba.get_cluster()
mysql-py []> cluster.force_quorum_using_partition_of('clusteruser:<cluster-password>@<leader-ip>')
mysql-py []> cluster.rejoin_instance('clusteruser:<cluster-password>@<leader-ip>')
<exit>
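Before relying on the cluster again, it can be worth confirming that every member reports ONLINE. A minimal sketch using the stock Group Replication table in performance_schema, assuming the same clusteruser credentials (this check is not part of the original steps):
$ mysql-shell.mysqlsh --sql 'clusteruser:<cluster-password>@<leader-ip>'
mysql-sql> SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;
<exit>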
3. After a couple of seconds the cluster is back up and running:
$ juju status mysql-innodb-cluster
Model Controller Cloud/Region Version SLA Timestamp
neutron-work przemeklal-serverstack serverstack/serverstack 2.8.8 unsupported 09:22:29Z
App Version Status Scale Charm Store Rev OS Notes
mysql-innodb-cluster 8.0.23 active 3 mysql-innodb-cluster jujucharms 5 ubuntu
Unit Workload Agent Machine Public address Ports Message
mysql-innodb-cluster/0* active idle 0 10.5.0.7 Unit is ready: Mode: R/W
mysql-innodb-cluster/1 active idle 1 10.5.0.18 Unit is ready: Mode: R/O
mysql-innodb-cluster/2 active idle 2 10.5.0.9 Unit is ready: Mode: R/O
Machine State DNS Inst id Series AZ Message
0 started 10.5.0.7 7268ef34-31d8-492d-af7b-950d8f48f156 focal nova ACTIVE
1 started 10.5.0.18 489ae28a-43e3-4386-a90b-24eed1e04d3a focal nova ACTIVE
2 started 10.5.0.9 6e9d1f71-5580-4ae5-8841-d1151ea8a7a5 focal nova ACTIVE
Note: cluster-password can be obtained from:
$ juju run --unit mysql-innodb-cluster/leader leader-get
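If only the password is needed, leader-get also accepts a single key; a minimal sketch, assuming the charm stores it under the cluster-password key:
$ juju run --unit mysql-innodb-cluster/leader leader-get cluster-password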