Enabling wsrep_log_conflicts dynamically causes node to hang
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MySQL patches by Codership | Confirmed | Undecided | Unassigned |
Percona XtraDB Cluster moved to https://jira.percona.com/projects/PXC | Status tracked in 5.6 | | |
5.5 | Invalid | Undecided | Unassigned |
5.6 | Fix Released | Undecided | Unassigned |
Bug Description
Scenario:
3-node cluster, PXC 5.6
[root@node1 ~]# rpm -qa | grep -i percona
Percona-
Percona-
percona-
percona-
Percona-
Percona-
Percona-
Percona-
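For reference, each node is assumed to run with a fairly standard wsrep configuration along these lines (a minimal sketch; the provider path, node names, and addresses are illustrative, not taken from this report):
[mysqld]
binlog_format = ROW
innodb_autoinc_lock_mode = 2
wsrep_provider = /usr/lib64/libgalera_smm.so   # path varies by package/platform
wsrep_cluster_address = gcomm://node1,node2,node3
wsrep_node_name = node1
wsrep_sst_method = xtrabackup-v2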
I am doing this experiment:
# Create a test table
node1 mysql> create table test.deadlocks( i int unsigned not null primary key, j varchar(32) );
node1 mysql> insert into test.deadlocks values ( 1, NULL );
node1 mysql> begin; update test.deadlocks set j="node1" where i=1;
# Before commit, go to node3 in a separate window:
node3 mysql> begin; update test.deadlocks set j="node3" where i=1;
node3 mysql> commit;
node1 mysql> commit;
node1 mysql> select * from test.deadlocks;
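With wsrep_log_conflicts left at its default (OFF), the expected outcome is that node1's commit loses certification and is rolled back immediately with a deadlock error, roughly like this (reconstructed from normal PXC first-committer-wins behaviour, not copied from the report):
node1 mysql> commit;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
The subsequent select then shows j="node3", i.e. node3's transaction (the first committer) wins.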
This works fine, but if I do this on node1 and re-do the experiment:
node1 mysql> set global wsrep_log_conflicts=ON;
the commit on node1 hangs indefinitely.
node1 mysql> set global wsrep_log_conflicts=ON;
Query OK, 0 rows affected (0.00 sec)
node1 mysql> begin;
Query OK, 0 rows affected (0.00 sec)
node1 mysql> update test.deadlocks set j="node1" where i=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
node1 mysql> commit;
^CCtrl-C -- sending "KILL QUERY 19" to server ...
^C^C^C^C^C
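While the commit is hung, it may help anyone reproducing this to look at the node from a second session; these are just standard checks, not output from the report:
node1 mysql> show processlist;
node1 mysql> show status like 'wsrep_local_state_comment';
node1 mysql> show engine innodb status\G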
I get this in the log:
2014-03-17 15:07:15 32710 [Note] WSREP: cluster conflict due to certification failure for threads:
2014-03-17 15:07:15 32710 [Note] WSREP: Victim thread:
THD: 19, mode: local, state: executing, conflict: cert failure, seqno: 213333
SQL: commit
I have to kill the node after this to get it back to a healthy state.
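Since the problem appears specific to changing the variable at runtime, a possible workaround (untested here, just an assumption) is to enable the option statically in my.cnf and restart the node instead of using SET GLOBAL:
[mysqld]
wsrep_log_conflicts = ON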
With UNIV_DEBUG:
2014-03-17 21:24:14 54487 [Note] WSREP: TO BEGIN: -1, 0 : create table test.deadlocks( i int unsigned not null primary key, j varchar(32) )
2014-03-17 21:24:14 54487 [Note] WSREP: TO BEGIN: 561466, 2
2014-03-17 21:24:14 54487 [Note] WSREP: TO END: 561466, 2 : create table test.deadlocks( i int unsigned not null primary key, j varchar(32) )
2014-03-17 21:24:14 54487 [Note] WSREP: TO END: 561466
#######
DEADLOCK of threads detected!
Mutex 0x3fc2748 owned by thread 140133248501504 file /media/Oort/ncode/percona-xtradb-cluster/pxc56/Percona-Server/storage/innobase/lock/lock0lock.cc line 2456
--Thread 140133248501504 has waited at lock0lock.cc line 1642 for 0.0000 seconds the semaphore:
Mutex at 0x3fc2748 '&trx_sys->mutex', lock var 1
Last time reserved in file /media/Oort/ncode/percona-xtradb-cluster/pxc56/Percona-Server/storage/innobase/lock/lock0lock.cc line 2456, waiters flag 1
#######
2014-03-17 21:26:05 7f73507f8700 InnoDB: Assertion failure in thread 140133248501504 in file sync0arr.cc line 426
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
15:56:05 UTC - mysqld got signal 6 ;
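After killing the crashed/hung mysqld, the node has to be restarted and allowed to rejoin the cluster; something like the following can confirm it is healthy again (standard commands, not taken from the report):
[root@node1 ~]# service mysql restart
node1 mysql> show global status like 'wsrep_local_state_comment';   -- expect "Synced"
node1 mysql> show global status like 'wsrep_cluster_size';          -- expect 3
node1 mysql> show global variables like 'wsrep_log_conflicts';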