2017-11-09 13:24:17 |
Rick Pizzi |
bug |
|
|
added bug |
2017-11-09 13:24:17 |
Rick Pizzi |
attachment added |
|
SQL script to trigger the bug https://bugs.launchpad.net/bugs/1731260/+attachment/5006488/+files/rickgls.sql
|
2017-11-09 13:42:08 |
Rick Pizzi |
description |
|
We have evidence of an incompatibility between XtraBackup 2.4.x and Percona Server for MySQL 5.7.x (any version, up to 5.7.19) where xtrabackup incremental backups crash if a mix of INSERT and TRUNCATE statements is issued while changed pages bitmap files are in use.
I have tried to reproduce this for weeks, and it finally looks like I succeeded!
The trick is to make sure the bitmap is flushed to disk; otherwise the bug will not bite (one way to force that is sketched just below).
Please see below for information about how to reproduce the bug.
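A note on the flushing trick mentioned above: besides the aggressive flushing settings in step 1, Percona Server also accepts a FLUSH statement that writes the changed page bitmap data to its on-disk files on demand. A minimal sketch (assuming a user with the RELOAD privilege; I have not verified whether issuing it is strictly required for the reproduction):

-- Percona Server / XtraDB extension: force the in-memory changed page
-- bitmap data to be written out to the on-disk bitmap files
FLUSH CHANGED_PAGE_BITMAPS;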
0. set up a master and a slave (our master is still 5.6, but the slave has to be 5.7) and install xtrabackup 2.4.8 on the slave
1. make sure changes are flushed to disk ASAP; I found the following settings on the slave to work for that purpose:
innodb_max_bitmap_file_size=100000
innodb_old_blocks_time=250
innodb_old_blocks_pct=5
innodb_max_dirty_pages_pct=0
2. ensure you have innodb_file_per_table=1 and innodb_track_changed_pages=ON on the slave (a verification sketch follows the crash output below)
3. make sure the incremental backup runs fine before the test and that it is using the changed pages bitmap; I used the following command line to perform the backup (run this on the slave):
/usr/bin/innobackupex --defaults-file=/etc/my.cnf --no-version-check --incremental --no-backup-locks --slave-info --no-timestamp --parallel=1 --socket=/db/data/mysql.sock --user=backup --password=amended --tmpdir=/storage/backup/tmp --extra-lsndir=/storage/backup/lsn_incr --incremental-basedir=/storage/backup/lsn_incr /storage/backup/incr
(make sure the output of the backup command reports "xtrabackup: using the changed page bitmap")
4. run the supplied SQL script on the master (you have to create the schema beforehand; an illustrative sketch of that kind of workload follows below)
5. repeat step 3 on the slave (ensure it has caught up with the master); it will crash with the following output:
171109 12:28:04 [01] Streaming ./rick2/C_GLS_IDS_AUX.ibd
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: 180224 bytes should have been read. Only 163840 bytes read. Retrying for the remaining bytes.
InnoDB: Retry attempts for reading partial data failed.
InnoDB: Tried to read 180224 bytes at offset 0 was only able to read163840
InnoDB: File (unknown): 'read' returned OS error 0. Cannot continue operation
InnoDB: Cannot continue operation.
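For what it's worth, 180224 bytes is exactly 11 pages of 16 KB while 163840 bytes is 10 pages, so xtrabackup appears to be trying to read one 16 KB page past the current end of the .ibd file, consistent with the tablespace having been truncated and rebuilt after the bitmap data was written. As promised in step 2, a sketch of the queries that can double-check the prerequisites on the slave (INNODB_CHANGED_PAGES is a Percona Server extension; the COUNT query is only illustrative):

-- Confirm the settings required for the reproduction
SHOW VARIABLES LIKE 'innodb_file_per_table';
SHOW VARIABLES LIKE 'innodb_track_changed_pages';
SHOW VARIABLES LIKE 'innodb_max_bitmap_file_size';
-- Confirm that changed page data is actually being recorded;
-- rows are read back from the on-disk bitmap files
SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_CHANGED_PAGES;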
Please note that I was able to reproduce the same issue using test #12 from xtrabackup-test/suite/innodb_zip/t/wl6501.test!
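The attached rickgls.sql is the authoritative workload for step 4; purely to illustrate its general shape, here is a minimal sketch of the INSERT/TRUNCATE mix (the table name is made up; only the rick2 schema name comes from the output above):

-- Hypothetical illustration of the kind of statement mix that triggers
-- the crash; the real reproduction script is the attached rickgls.sql
CREATE TABLE rick2.t_sketch (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    pad VARCHAR(255)
) ENGINE=InnoDB;
INSERT INTO rick2.t_sketch (pad) VALUES (REPEAT('x', 255));
-- double the row count a few times so several pages get dirtied
INSERT INTO rick2.t_sketch (pad) SELECT pad FROM rick2.t_sketch;
INSERT INTO rick2.t_sketch (pad) SELECT pad FROM rick2.t_sketch;
-- with innodb_file_per_table, TRUNCATE rebuilds the tablespace, so
-- previously tracked pages may no longer exist in the .ibd file
TRUNCATE TABLE rick2.t_sketch;
INSERT INTO rick2.t_sketch (pad) VALUES (REPEAT('y', 255));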
This is a big showstopper for us; I hope someone at Percona will have a look. Thank you! |
|
2017-11-10 06:33:56 |
Sergei Glushchenko |
nominated for series |
|
percona-xtrabackup/2.3 |
|
2017-11-10 06:33:56 |
Sergei Glushchenko |
bug task added |
|
percona-xtrabackup/2.3 |
|
2017-11-10 06:33:56 |
Sergei Glushchenko |
nominated for series |
|
percona-xtrabackup/2.4 |
|
2017-11-10 06:33:56 |
Sergei Glushchenko |
bug task added |
|
percona-xtrabackup/2.4 |
|
2017-11-10 06:34:22 |
Sergei Glushchenko |
percona-xtrabackup/2.4: status |
New |
Triaged |
|
2017-11-10 06:34:26 |
Sergei Glushchenko |
percona-xtrabackup/2.4: importance |
Undecided |
High |
|
2017-11-10 06:39:44 |
Sergei Glushchenko |
summary |
xtrabackup 2.4 crashes with MySQL 5.7 when a mix of insert and truncate is executed |
xtrabackup 2.4 incremental crashes with MySQL 5.7 when changed page tracking is used and mix of insert and truncate is executed |
|