Hi,

We are hitting the same bug with Percona XtraBackup version 2.2.7 as well, while preparing an incremental backup. We take the backup from a slave server with binary logs enabled, and the backups are stored on a CIFS (Windows) share.

# xtrabackup --version
xtrabackup version 2.2.7 based on MySQL server 5.6.21 Linux (x86_64) (revision id: )

# innobackupex --version
InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy

mysql> show global variables like '%version%';
+-------------------------+--------------------------------------------------+
| Variable_name           | Value                                            |
+-------------------------+--------------------------------------------------+
| innodb_version          | 5.6.21-70.1                                      |
| protocol_version        | 10                                               |
| slave_type_conversions  |                                                  |
| version                 | 5.6.21-70.1-log                                  |
| version_comment         | Percona Server (GPL), Release 70.1, Revision 698 |
| version_compile_machine | x86_64                                           |
| version_compile_os      | Linux                                            |
+-------------------------+--------------------------------------------------+
7 rows in set (0.01 sec)

mysql> show global variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+

mysql> show global variables like '%log_slave%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| log_slave_updates | ON    |
+-------------------+-------+
1 row in set (0.00 sec)

# mount | grep -i cifs
//IP/Path on /mnt/Path type cifs (rw,mand)

The following commands were used to take the backup.
Taking the full backup:

# innobackupex --no-version-check --user=$USER --password=$PASSWORD --tmpdir=/home/mysqltmp --slave-info /mnt/Path/db_back/Server_Name/full
-- Completed OK

Preparing the full backup:

# innobackupex --no-version-check --apply-log --redo-only --tmpdir=/home/mysqltmp /mnt/Path/db_back/Server_Name/full/2014-12-29_22-10-40
-- Completed OK

Taking the incremental backup:

# innobackupex --no-version-check --user=$USER --password=$PASSWORD --tmpdir=/home/mysqltmp --slave-info --incremental --incremental-basedir=/mnt/Path/db_back/Server_Name/full/2014-12-29_22-10-40 /mnt/Path/db_back/Server_Name/incremental
-- Completed OK

Preparing the incremental backup (this is where it fails):

# innobackupex --no-version-check --apply-log --tmpdir=/home/mysqltmp --incremental-dir=/mnt/Path/db_back/Server_Name/incremental/2014-12-31_05-30-02 /mnt/Path/db_back/Server_Name/full/2014-12-29_22-10-40

InnoDB: Doing recovery: scanned up to log sequence number 19059978766316 (100%)
InnoDB: 1 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 1 row operations to undo
InnoDB: Trx id counter is 367948887040
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: In a MySQL replication slave the last master binlog file
InnoDB: position 0 167909734, file name mysql-bin.000595
InnoDB: Last MySQL binlog file position 0 677224771, file name mysql-bin.000488
InnoDB: Starting in background the rollback of uncommitted transactions
128 rollback segment(s) are active.
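For anyone trying to reproduce this, here is a minimal dry-run sketch of the four steps above as one script. The BASE/FULL/INC variables and the DRY_RUN switch are our own illustration (not part of the original setup); the innobackupex flags are exactly the ones from the commands above.

```shell
# Sketch of the full + incremental backup/prepare cycle from this report.
# BASE, FULL, INC and DRY_RUN are hypothetical names used for illustration.
BASE=/mnt/Path/db_back/Server_Name
FULL=$BASE/full/2014-12-29_22-10-40
INC=$BASE/incremental/2014-12-31_05-30-02
DRY_RUN=${DRY_RUN:-1}
PLANNED=""

run() {
    # In dry-run mode, record the command instead of executing it.
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
        PLANNED="$PLANNED $*"
    else
        "$@"
    fi
}

# 1. Full backup from the slave (--slave-info records the master position).
run innobackupex --no-version-check --user="$USER" --password="$PASSWORD" \
    --tmpdir=/home/mysqltmp --slave-info "$BASE/full"

# 2. Prepare the full backup with --redo-only so increments can be applied.
run innobackupex --no-version-check --apply-log --redo-only \
    --tmpdir=/home/mysqltmp "$FULL"

# 3. Incremental backup based on the prepared full backup.
run innobackupex --no-version-check --user="$USER" --password="$PASSWORD" \
    --tmpdir=/home/mysqltmp --slave-info --incremental \
    --incremental-basedir="$FULL" "$BASE/incremental"

# 4. Apply the incremental to the full backup (the step that crashes here).
run innobackupex --no-version-check --apply-log --tmpdir=/home/mysqltmp \
    --incremental-dir="$INC" "$FULL"
```

With DRY_RUN=1 (the default in this sketch) it only prints the commands, so it is safe to inspect before pointing it at a real datadir or CIFS mount.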
2014-12-31 12:44:12 42894940 InnoDB: Rolling back trx with id 367948886754, 1 rows to undo
InnoDB: Waiting for purge to start
InnoDB: Rollback of trx with id 367948886754 completed
2014-12-31 12:44:12 42894940 InnoDB: Rollback of non-prepared transactions completed
InnoDB: 5.6.21 started; log sequence number 19059978766316

[notice (again)]
If you use binary log and don't use any hack of group commit, the binary log position seems to be:
InnoDB: Last MySQL binlog file position 0 677224771, file name mysql-bin.000488

xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Waiting for master thread to be suspended
InnoDB: Waiting for master thread to be suspended
InnoDB: Waiting for master thread to be suspended
InnoDB: Waiting for master thread to be suspended
2014-12-31 12:49:42 42894940 InnoDB: Assertion failure in thread 1116293440 in file buf0flu.cc line 2507
InnoDB: Failing assertion: UT_LIST_GET_LEN(buf_pool->flush_list) == 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.

18:49:42 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x10000
xtrabackup(my_print_stacktrace+0x32) [0xa3c6b5]
xtrabackup(handle_fatal_signal+0x335) [0x91161d]
/lib64/libpthread.so.0 [0x36aac0eb10]
/lib64/libc.so.6(gsignal+0x35) [0x36aa030265]
/lib64/libc.so.6(abort+0x110) [0x36aa031d10]
xtrabackup(buf_flush_page_cleaner_thread+0x54e) [0x768b24]
/lib64/libpthread.so.0 [0x36aac0673d]
/lib64/libc.so.6(clone+0x6d) [0x36aa0d44bd]

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

innobackupex: got a fatal error with the following stacktrace: at innobackupex line 2633
        main::apply_log() called at innobackupex line 1561
innobackupex: Error:
innobackupex: ibbackup failed at innobackupex line 2633.