Departing relation fails on the leader if the removed instance is already down.

Bug #2022092 reported by Alex Kavanagh
Affects                      Status          Importance  Assigned to    Milestone
MySQL InnoDB Cluster Charm   (status tracked in Trunk)
  Jammy                      Triaged         High        Alex Kavanagh
  Trunk                      In Progress     High        Alex Kavanagh

Bug Description

Traceback is:

unit-mysql-innodb-cluster-0: 15:53:38 INFO unit.mysql-innodb-cluster/0.juju-log cluster:3: Remove instance: 172.20.0.36.
unit-mysql-innodb-cluster-0: 15:53:42 ERROR unit.mysql-innodb-cluster/0.juju-log cluster:3: Failed removing instance 172.20.0.36: Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
NOTE: MySQL Error 2003 (HY000): Can't connect to MySQL server on '172.20.0.36' (113)
ERROR: The instance 172.20.0.36:3306 is not reachable and does not belong to the cluster either. Please ensure the member is either connectable or remove it through the exact address as shown in the cluster status output.
Traceback (most recent call last):
  File "<string>", line 3, in <module>
mysqlsh.Error: Shell Error (51104): Cluster.remove_instance: Metadata for instance 172.20.0.36:3306 not found

unit-mysql-innodb-cluster-0: 15:53:42 ERROR unit.mysql-innodb-cluster/0.juju-log cluster:3: Hook error:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/.venv/lib/python3.10/site-packages/charms/reactive/__init__.py", line 74, in main
    bus.dispatch(restricted=restricted_mode)
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/.venv/lib/python3.10/site-packages/charms/reactive/bus.py", line 390, in dispatch
    _invoke(other_handlers)
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/.venv/lib/python3.10/site-packages/charms/reactive/bus.py", line 359, in _invoke
    handler.invoke()
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/.venv/lib/python3.10/site-packages/charms/reactive/bus.py", line 181, in invoke
    self._action(*args)
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/reactive/mysql_innodb_cluster_handlers.py", line 490, in scale_in
    instance.remove_instance(
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1152, in remove_instance
    raise e
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1128, in remove_instance
    output = self.run_mysqlsh_script(_script).decode("UTF-8")
  File "/var/lib/juju/agents/unit-mysql-innodb-cluster-0/charm/lib/charm/openstack/mysql_innodb_cluster.py", line 1865, in run_mysqlsh_script
    return subprocess.check_output(cmd, stderr=subprocess.PIPE)
  File "/usr/lib/python3.10/subprocess.py", line 420, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/snap/bin/mysqlsh', '--no-wizard', '--python', '-f', '/root/snap/mysql-shell/common/tmpquhyqtxt.py']' returned non-zero exit status 1.

Need to trap the error in "remove_instance()": propagate the failure when it is called from the action, but handle it gracefully in the departing-relation handler.

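As a rough illustration of that split (not the charm's actual patch; the handler and action names below are placeholders, and it assumes remove_instance() raises subprocess.CalledProcessError as in the traceback above, with charmhelpers available in the charm venv):

import subprocess

import charmhelpers.core.hookenv as hookenv  # assumption: present in the charm's venv


def scale_in_handler(instance, departing_address):
    """Departing-relation path: tolerate an instance that is already gone."""
    try:
        instance.remove_instance(departing_address)
    except subprocess.CalledProcessError as e:
        # The departing unit is already down and its cluster metadata is gone,
        # so there is nothing left to remove; log and continue rather than
        # failing the hook.
        hookenv.log(
            "Ignoring failure removing {}: {}".format(departing_address, e),
            hookenv.WARNING)


def remove_instance_action(instance, address):
    """Action path: let the failure propagate so the operator sees it."""
    instance.remove_instance(address)
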
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-mysql-innodb-cluster (master)