Keystone leader fails to connect to itself while running identity-service-relation-changed hook
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Charm Helpers | Fix Committed | Undecided | Unassigned | |
| MySQL InnoDB Cluster Charm | Triaged | Medium | Unassigned | |
| OpenStack Keystone Charm | Fix Committed | Undecided | Unassigned | |
| 2023.1 | Fix Committed | Undecided | Unassigned | |
| Train | Fix Committed | Undecided | Unassigned | |
| Ussuri | Fix Committed | Undecided | Unassigned | |
| Victoria | Fix Committed | Undecided | Unassigned | |
| Wallaby | Fix Committed | Undecided | Unassigned | |
| Xena | Fix Committed | Undecided | Unassigned | |
| Yoga | Fix Committed | Undecided | Unassigned | |
| Zed | Fix Committed | Undecided | Unassigned | |
Bug Description
Testing Yoga Focal bits but using a converged networking configuration.
Vault is unsealed and has issued certificates but no further steps have been taken against the cloud other than unsealing vault.
The Keystone leader fails while running the identity-service-relation-changed hook: it cannot connect to its own API endpoint.
Looking through the logs, the keystone service appears to be up and running, and both pacemaker and haproxy look healthy. Everything seems fine, yet I can't find a reason why it can't connect to that address.
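When the symptom is "cannot connect to that address" even though the service looks healthy, a quick TCP probe from the affected unit can rule out basic reachability (listener down, VIP not plumbed, firewall). This is a generic diagnostic sketch, not part of the charm; the host and port you would probe are placeholders for your deployment's keystone endpoint:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical keystone address/port -- substitute the endpoint from the hook error.
# can_connect("192.168.33.100", 5000)
```

A `True` here but a failing hook would point at a higher layer (TLS verification, haproxy backend state) rather than raw connectivity.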
(Unit log excerpt truncated in the original report: DEBUG, ERROR, and WARNING entries from unit.keystone/ between 2023-04-01 12:35:28 and 12:36:16.)
Initial testrun can be found at:
https:/
with crashdump at:
https:/
and all logs at:
https:/
Changed in charm-helpers: status: New → Fix Committed
From keystone/1:

(keystone.server.flask.request_processing.middleware.auth_context): 2023-04-01 08:36:25,698 ERROR (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
(Background on this error at: https://sqlalche.me/e/14/e3q8)
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1802, in _execute_context
    self.dialect.do_execute(
  File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 732, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 148, in execute
    result = self._query(query)
  File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 310, in _query
    conn.query(q)
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 548, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 775, in _read_query_result
    result.read()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1163, in read
    self._read_result_packet(first_packet)
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1236, in _read_result_packet
    self._read_rowdata_packet()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1270, in _read_rowdata_packet
    packet = self.connection._read_packet()
  File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 725, in _read_packet
    packet.raise_for_error()
  File "/usr/lib/python3/dist-packages/pymysql/protocol.py", line 221, in raise_for_error
    err.raise_mysql_exception(self._data)
  File "/usr/lib/python3/dist-packages/pymysql/err.py", line 143, in raise_mysql_exception
    raise errorclass(errno, errval)
pymysql.err.OperationalError: (1053, 'Server shutdown in progress')
So "Server shutdown in progress" is a bit ominous.
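Errors 2013 ('Lost connection to MySQL server during query') and 1053 ('Server shutdown in progress') are transient by nature here: the InnoDB cluster primary went away mid-query and, per the log below, a new primary was elected moments later. A common mitigation is to retry such operations. The following is only a generic sketch, not the actual charm-helpers fix; the error-code set and backoff values are illustrative assumptions:

```python
import time

# MySQL error codes that typically indicate a transient, retryable failure
# (illustrative set): server shutdown in progress, server gone away, lost connection.
TRANSIENT_CODES = {1053, 2006, 2013}

def retry_transient(func, retries=5, delay=0.1, transient_exc=Exception):
    """Call func(), retrying with exponential backoff when it raises
    transient_exc whose first argument is a known-transient MySQL code."""
    for attempt in range(retries):
        try:
            return func()
        except transient_exc as exc:
            code = exc.args[0] if exc.args else None
            if code not in TRANSIENT_CODES or attempt == retries - 1:
                raise  # non-transient, or out of retries: propagate
            time.sleep(delay * (2 ** attempt))
```

With pymysql you would pass `transient_exc=pymysql.err.OperationalError`; SQLAlchemy users can additionally set `pool_pre_ping=True` on `create_engine()` so stale pooled connections are detected before use.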
From mysql-innodb-cluster/0 at the same time:

2023-04-01T08:36:23.178312Z 0 [Warning] [MY-011499] [Repl] Plugin group_replication reported: 'Members removed from the group: 192.168.33.61:3306'
2023-04-01T08:36:23.178353Z 0 [System] [MY-011500] [Repl] Plugin group_replication reported: 'Primary server with address 192.168.33.61:3306 left the group. Electing new Primary.'
2023-04-01T08:36:23.178515Z 0 [System] [MY-011507] [Repl] Plugin group_replication reported: 'A new primary with address 192.168.33.83:3306 was elected. The new primary will execute all previous group transactions before allowing writes.'
2023-04-01T08:36:23.178770Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 192.168.33.83:3306, 192.168.33.84:3306 on view 16803355385089891:10.'
2023-04-01T08:36:23.179472Z 68 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2023-04-01T08:36:23.179579Z 68 [System] [MY-011511] [Repl] Plugin group_replication reported: 'This server is working as secondary member with primary member address 192.168.33.83:3306.'
2023-04-01T08:37:31.674902Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 192.168.33.61:3306, ...
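The MY-011503 lines record each group-membership change. When correlating failover windows with hook failures, it can help to pull the member lists out of such lines programmatically. A small parser sketch; the regex is tailored only to the log format shown above:

```python
import re

# Matches the member list inside a MY-011503 group_replication line, e.g.
# "... 'Group membership changed to 192.168.33.83:3306, 192.168.33.84:3306 on view ...'"
MEMBERSHIP_RE = re.compile(r"Group membership changed to ([\d\.:, ]+?) on view")

def members_from_line(line: str) -> list:
    """Return the member addresses recorded in a membership-change log line."""
    m = MEMBERSHIP_RE.search(line)
    if not m:
        return []
    return [addr.strip() for addr in m.group(1).split(",")]
```

The same membership information can also be read live from `performance_schema.replication_group_members` on any cluster member.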