Galera bootstrap is broken when starting a multinode deployment
Bug #1479970 reported by Sam Yaple
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
kolla | Fix Released | Critical | Sam Yaple | liberty-2
Bug Description
All-In-One works.
All-In-One scaled to Multinode works.
Initial creation as Multinode is broken.

There is a timing issue; it can be solved with a few options added to the bootstrap process (see the sketch below).
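As an illustration only, not the actual fix that landed, a bootstrap task could retry until the node reports itself ready for queries. The task name, credentials variable, and retry values below are assumptions for this sketch:

```yaml
# Hypothetical sketch, not Kolla's real bootstrap code: block until Galera
# reports the node ready, so later mysql_user tasks don't hit WSREP error 1047.
- name: Wait for wsrep_ready before running any database tasks
  command: >
    mysql -u root -p{{ database_password }}
    -e "SHOW STATUS LIKE 'wsrep_ready'"
  register: wsrep_status
  until: "'ON' in wsrep_status.stdout"
  retries: 30
  delay: 5
```

Ansible's until/retries/delay keywords are the kind of "few options" the description refers to: the task is simply re-run until the cluster is actually usable.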
Changed in kolla:
  milestone: liberty-2 → liberty-3
  milestone: liberty-3 → liberty-2

Changed in kolla:
  status: Triaged → In Progress

Changed in kolla:
  status: Fix Committed → Fix Released
Not sure if it's related, but I saw this tonight on support02:
Result from run 10 is: {'msg': 'Traceback (most recent call last):\r\n  File "/root/.ansible/tmp/ansible-tmp-1438368697.17-26433943977046/mysql_user", line 2253, in <module>\r\n    main()\r\n  File "/root/.ansible/tmp/ansible-tmp-1438368697.17-26433943977046/mysql_user", line 498, in main\r\n    if user_exists(cursor, user, host):\r\n  File "/root/.ansible/tmp/ansible-tmp-1438368697.17-26433943977046/mysql_user", line 175, in user_exists\r\n    cursor.execute("SELECT count(*) FROM user WHERE user = %s AND host = %s", (user,host))\r\n  File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute\r\n    self.errorhandler(self, exc, value)\r\n  File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler\r\n    raise errorclass, errorvalue\r\n_mysql_exceptions.OperationalError: (1047, \'WSREP has not yet prepared node for application use\')\r\nOpenSSH_6.4, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /home/vagrant/.ssh/config\r\ndebug1: /home/vagrant/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 51: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: mux_client_request_session: master session id: 2\r\nShared connection to support02 closed.\r\n', 'failed': True, 'attempts': 10, 'parsed': False}
support01 and support03, meanwhile, would show this:
Result from run 10 is: {'msg': 'unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials', 'failed': True, 'attempts': 10}
Apparently MariaDB is only active on support02 (which may be on purpose).
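For anyone hitting the same symptom, one way to confirm whether the other nodes ever joined the cluster is to query the Galera status variables on every host. The following diagnostic task is a sketch, with the same assumed credentials variable as above:

```yaml
# Hypothetical diagnostic: on a healthy 3-node cluster, wsrep_cluster_size
# should be 3 on every host; a lone bootstrapped node reports 1.
- name: Report Galera cluster size and local node state
  command: >
    mysql -u root -p{{ database_password }}
    -e "SHOW STATUS WHERE Variable_name IN
        ('wsrep_cluster_size', 'wsrep_local_state_comment')"
  register: galera_state
  changed_when: false
```

That pattern would distinguish the two failure modes seen here: support02 running but not yet prepared for application use (error 1047), versus support01/support03 refusing connections entirely.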