Exporting tables is inefficient when the backup contains a large (and unrelated) change buffer
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Percona XtraBackup moved to https://jira.percona.com/projects/PXB | Fix Released | Medium | Alexey Kopytov | |
| 2.2 | Fix Released | Medium | Alexey Kopytov | |
| 2.3 | Fix Released | Medium | Alexey Kopytov | |
Bug Description
XtraBackup is quite inefficient at exporting tables from a backup when it contains a large change buffer (aka insert buffer) that is mostly composed of changes to tables other than the tables being exported.
The problem arises from the need to merge change buffer entries when exporting tables. In order to accomplish this, XtraBackup relies on a normal InnoDB shutdown that merges pending change buffer entries. The problem is that this background merging does random dives into the change buffer index (see ibuf_merge_pages), making the process rather inefficient if most of the entries in the change buffer are for tables that are not even part of the backup.
An easy solution is to simply discard all change buffer entries for a nonexistent (deleted) tablespace as soon as the first entry for that tablespace is encountered. Ideally, and if possible, make export akin to FLUSH TABLES FOR EXPORT and do a fast shutdown.
Related branches
- Alexey Kopytov (community): Approve
- Diff: 16 lines (+2/-4), 1 file modified: storage/innobase/buf/buf0rea.cc (+2/-4)
```diff
--- a/storage/innobase/buf/buf0rea.cc
+++ b/storage/innobase/buf/buf0rea.cc
@@ -810,11 +810,9 @@ buf_read_ibuf_merge_pages(
 		if (UNIV_UNLIKELY(err == DB_TABLESPACE_DELETED)) {
 			/* We have deleted or are deleting the single-table
-			tablespace: remove the entries for that page */
+			tablespace: remove all entries for the tablespace */
-			ibuf_merge_or_delete_for_page(NULL, space_ids[i],
-						      page_nos[i],
-						      zip_size, FALSE);
+			ibuf_delete_for_discarded_space(space_ids[i]);
 		}
 	}
```