Tempest tests in periodic-tripleo-ci-centos-9-standalone-full-tempest-scenario-master and periodic-tripleo-ci-centos-9-ovb-1ctlr_2comp-featureset020-master are failing with [1]:
```
{0} neutron_tempest_plugin.scenario.test_dhcp.DHCPTest.test_extra_dhcp_opts [411.917754s] ... FAILED
Captured traceback:
~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/lib/common/ssh.py", line 131, in _get_ssh_connection
    ssh.connect(self.host, port=self.port, username=self.username,
  File "/usr/lib/python3.9/site-packages/paramiko/client.py", line 435, in connect
    self._auth(
  File "/usr/lib/python3.9/site-packages/paramiko/client.py", line 764, in _auth
    raise saved_exception
  File "/usr/lib/python3.9/site-packages/paramiko/client.py", line 664, in _auth
    self._transport.auth_publickey(username, pkey)
  File "/usr/lib/python3.9/site-packages/paramiko/transport.py", line 1580, in auth_publickey
    return self.auth_handler.wait_for_response(my_event)
  File "/usr/lib/python3.9/site-packages/paramiko/auth_handler.py", line 250, in wait_for_response
    raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/common/utils/__init__.py", line 89, in wrapper
    return func(*func_args, **func_kwargs)
  File "/usr/lib/python3.9/site-packages/neutron_tempest_plugin/scenario/test_dhcp.py", line 89, in test_extra_dhcp_opts
    vm_resolv_conf = ssh_client.exec_command(
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
    return fut.result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
    result = fn(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/neutron_tempest_plugin/common/ssh.py", line 171, in exec_command
    return super(Client, self).exec_command(cmd=cmd, encoding=encoding)
  File "/usr/lib/python3.9/site-packages/tempest/lib/common/ssh.py", line 182, in exec_command
    ssh = self._get_ssh_connection()
  File "/usr/lib/python3.9/site-packages/tempest/lib/common/ssh.py", line 150, in _get_ssh_connection
    raise exceptions.SSHTimeout(host=self.host,
tempest.lib.exceptions.SSHTimeout: Connection to the 192.168.24.152 via SSH timed out.
```
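Note that the `SSHTimeout` at the bottom actually wraps a paramiko `AuthenticationException`: the TCP connection to the guest's SSH port succeeded, but the injected keypair was rejected. When debugging a node like this, it helps to separate "port unreachable" from "auth rejected". A minimal stdlib sketch (hypothetical helper, not part of tempest) for that first check:

```python
import socket

def ssh_failure_mode(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Classify an SSH failure: 'unreachable' if a TCP connect to the SSH
    port fails, 'port-open' if it succeeds (so a later failure, as in the
    traceback above, is an auth / key-injection problem instead)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "port-open"
    except OSError:
        return "unreachable"
```

In this run the connection phase clearly worked, which points at key injection (metadata/config-drive) rather than plain guest networking.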
While taking a look at the errors.txt file [2], I found:
```
2022-03-08 03:13:10.758 ERROR /var/log/containers/cinder/cinder-api.log: 6 ERROR oslo.messaging._drivers.impl_rabbit [-] [5c38f35a-de38-4bc9-9eb8-453526059529] AMQP server on standalone.ctlplane.localdomain:5672 is unreachable: <RecoverableConnectionError: unknown error>. Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: <RecoverableConnectionError: unknown error>
2022-03-08 03:13:09.387 ERROR /var/log/containers/neutron/server.log: 17 ERROR oslo.messaging._drivers.impl_rabbit [-] [97a86f43-64b6-4459-9bc6-bbe8a095717b] AMQP server on standalone.ctlplane.localdomain:5672 is unreachable: [Errno 104] Connection reset by peer. Trying again in 1 seconds.: ConnectionResetError: [Errno 104] Connection reset by peer
2022-03-08 03:13:09.175 ERROR /var/log/containers/nova/nova-conductor.log: 2 ERROR oslo.messaging._drivers.impl_rabbit [-] [e4479979-4796-4dcc-90fb-9a5495bb377d] AMQP server on standalone.ctlplane.localdomain:5672 is unreachable: [Errno 104] Connection reset by peer. Trying again in 1 seconds.: ConnectionResetError: [Errno 104] Connection reset by peer
```
The SSH failure might be linked with the AMQP errors above: the RabbitMQ server at standalone.ctlplane.localdomain:5672 was repeatedly unreachable (`[Errno 104] Connection reset by peer`) across cinder, neutron, and nova. I also compared with the errors.txt of a passing run [3]; there is no such error there.
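The comparison amounts to pulling the AMQP "unreachable" lines out of the service logs and sorting them by timestamp so the reset burst can be lined up with the tempest failure window. A runnable sketch of that (the sample lines below are copied from errors.txt [2]; in a real run they would be read from `/var/log/containers/<service>/*.log` on the node):

```python
import re

# Sample lines in the errors.txt format; normally read from the node's logs.
LOG_LINES = [
    "2022-03-08 03:13:10.758 ERROR /var/log/containers/cinder/cinder-api.log: AMQP server on standalone.ctlplane.localdomain:5672 is unreachable: <RecoverableConnectionError: unknown error>",
    "2022-03-08 03:13:09.387 ERROR /var/log/containers/neutron/server.log: AMQP server on standalone.ctlplane.localdomain:5672 is unreachable: [Errno 104] Connection reset by peer",
    "2022-03-08 03:13:09.175 ERROR /var/log/containers/nova/nova-conductor.log: AMQP server on standalone.ctlplane.localdomain:5672 is unreachable: [Errno 104] Connection reset by peer",
    "2022-03-08 03:13:08.000 INFO some unrelated line",
]

AMQP_ERR = re.compile(r"AMQP server on \S+ is unreachable")

def amqp_errors(lines):
    """Return AMQP-unreachable lines sorted by their leading timestamp
    (the ISO-style timestamps sort correctly as plain strings)."""
    return sorted(line for line in lines if AMQP_ERR.search(line))
```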
Logs:
[1]. https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-scenario-master/af5c09c/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz
[2]. https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-scenario-master/af5c09c/logs/undercloud/var/log/extra/errors.txt.gz
[3]. https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-scenario-master/4c6e8f5/logs/undercloud/var/log/extra/errors.txt.gz
By taking a look at the F020 error logs on the compute nodes:
```
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 189, in call
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db     raise result
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
2022-03-08 11:05:19.332 2 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
```
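The remote `DBConnectionError` (pymysql error 2013, "Lost connection to MySQL server during query") is the kind of transient connection failure that retry wrappers such as oslo.db's are built for. A minimal stdlib sketch of that retry pattern — with a stand-in exception class, so the snippet runs without a database or pymysql installed:

```python
import time

class OperationalError(Exception):
    """Stand-in for pymysql.err.OperationalError; args = (errno, message)."""

# MySQL client error codes indicating a lost/broken connection that is
# generally safe to retry: 2006 "server has gone away" and 2013 "lost
# connection during query" (the code seen in the nova-compute traceback).
RETRYABLE = {2006, 2013}

def call_with_retry(fn, attempts=3, delay=0.01):
    """Call fn(), retrying up to `attempts` times on retryable errors."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except OperationalError as exc:
            errno = exc.args[0] if exc.args else None
            if errno not in RETRYABLE or attempt == attempts:
                raise
            time.sleep(delay)
```

Here the retries evidently did not help, which again suggests the database (like RabbitMQ) was unreachable for longer than a transient blip.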
Logs: https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-ovb-1ctlr_2comp-featureset020-master/3b6a109/logs/overcloud-novacompute-1/var/log/containers/nova/nova-compute.log.txt.gz
The same tempest tests are also failing there, for the same reason: https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-ovb-1ctlr_2comp-featureset020-master/3b6a109/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz