gcache.page files not removed fast enough for some workloads
Bug #1488530 reported by Przemek
This bug affects 3 people
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Fuel for OpenStack | Fix Released | High | Ivan Suzdal | |
| 6.1.x | Invalid | High | Rodion Tikunov | |
| 7.0.x | Invalid | High | Rodion Tikunov | |
| 8.0.x | Fix Released | High | Ivan Suzdal | |
| Galera | New | Undecided | Unassigned | |
| Percona XtraDB Cluster moved to https://jira.percona.com/projects/PXC | Confirmed | High | Unassigned | |
Bug Description
This is a copy of the initial report: https:/
For workloads with huge transactions, gcache.page.xxxx files may fill all available disk space, because the current removal algorithm can leave the oldest page files in place for a long time.
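For context, Galera only creates gcache.page.xxxx files once the ring-buffer gcache overflows, and the provider options gcache.size, gcache.page_size and gcache.keep_pages_size govern how page files are created and retained. A minimal sketch (assuming a local node reachable with the default client credentials) to inspect the current values on an affected node:

# show only the gcache-related entries of the provider options string
mysql -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G" | tr ';' '\n' | grep gcache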
Here is a quickly reproducible test case:
perl -e '$s="$s\x31"; for my $i (0..26214400) { print $s; } ' > /tmp/blob
use test
CREATE TABLE blob1 (
id int(11) NOT NULL,
a tinyint(4) DEFAULT NULL,
big longblob,
PRIMARY KEY (id)
) ENGINE=InnoDB;
insert into blob1 values (1,1,LOAD_FILE('/tmp/blob'));
for i in {1..50}; do mysql test -e "UPDATE blob1 SET a=$i,big=LOAD_FILE('/tmp/blob') WHERE id=1"; done
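While the UPDATE loop above runs, the page files can be watched accumulating in the node's data directory. A minimal sketch, assuming the default datadir /var/lib/mysql (adjust the path for your installation):

# count the gcache page files and report their total size once per second
while true; do
  ls /var/lib/mysql/gcache.page.* 2>/dev/null | wc -l
  du -ch /var/lib/mysql/gcache.page.* 2>/dev/null | tail -1
  sleep 1
done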
Changed in percona-xtradb-cluster:
milestone: none → 5.6.25-25.12

Changed in percona-xtradb-cluster:
status: New → Confirmed

Changed in percona-xtradb-cluster:
milestone: 5.6.25-25.12 → future-5.6

Changed in percona-xtradb-cluster:
importance: Undecided → High

Changed in fuel:
status: New → Confirmed
importance: Undecided → High
assignee: nobody → MOS Linux (mos-linux)
milestone: none → 9.0

Changed in fuel:
assignee: MOS Linux (mos-linux) → Ivan Suzdal (isuzdal)

Changed in fuel:
status: Confirmed → Fix Committed
Related fix proposed to branch: master
Change author: Ivan Suzdal <email address hidden>
Review: https://review.fuel-infra.org/16432