Excessive Memory usage
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Galera | In Progress | Low | Alex Yurchenko | |
Percona XtraDB Cluster (moved to https://jira.percona.com/projects/PXC) | Status tracked in 5.6 | | | |
5.5 | Confirmed | Medium | Unassigned | |
5.6 | New | Medium | Unassigned | |
Bug Description
I'm evaluating Percona XtraDB Cluster in a small 2-node environment. Replication between the nodes is working fine, but it seems that even under light utilization the mysqld process size grows continuously without releasing memory, unlike on a standalone server.
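A quick way to see whether the growth is inside the InnoDB buffer pool or elsewhere is to compare the configured pool size with what top reports while the test below runs; a minimal sketch using standard status variables:

show global variables like 'innodb_buffer_pool_size';
-- bytes of data currently held in the buffer pool
show global status like 'Innodb_buffer_pool_bytes_data';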
OS: CentOS 6.3 x64
With the following packages installed:
Percona-
percona-
percona-
Percona-
Percona-
Updated Galera to the latest available:
galera-
Node 1
my.cnf:
[mysqld_safe]
wsrep_urls=
[mysqld]
datadir=
user=mysql
log_slave_updates = 1
binlog_format=ROW
max_allowed_packet = 200M
default_
#wsrep_
wsrep_provider=
wsrep_slave_
wsrep_cluster_
wsrep_sst_
wsrep_sst_
wsrep_node_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_
innodb_flush_method = O_DIRECT
innodb_
Running the following test:
create database ptest;
use ptest;
create table ti2(c1 int auto_increment primary key, c2 char(255)) engine=InnoDB;
insert into ti2(c2) values('abc');
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
insert into ti2(c2) select c2 from ti2;
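For reference, after these 21 doubling inserts ti2 holds 2^21 = 2,097,152 rows of roughly 260 bytes each, i.e. around half a gigabyte of data, so some buffer pool growth is expected on both servers. The size can be confirmed with:

select count(*) from ti2;
-- estimated size from the data dictionary (table_rows is approximate for InnoDB)
select table_rows, round(data_length/1024/1024) as data_mb
from information_schema.tables
where table_schema='ptest' and table_name='ti2';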
Results of Percona-
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8951 mysql 20 0 1101m 83m 7160 S 0.0 5.1 0:00.29 mysqld
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8951 mysql 20 0 1741m 846m 2508 S 0.0 51.1 1:19.67 mysqld
Results of Percona-
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7077 mysql 20 0 756m 55m 5540 S 0.0 5.5 0:00.12 mysqld
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7077 mysql 20 0 756m 308m 5788 S 0.3 31.0 0:25.45 mysqld
When the inserts are continued, the Galera cluster node will eventually run out of memory, while on the standalone server memory usage does not keep growing. The comparison above is between different MySQL versions, but I see a similar result when I disable replication by removing the wsrep parameters from the XtraDB Cluster my.cnf.
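If the growth only appears with the wsrep parameters enabled, it is worth checking how much replication traffic each node has buffered; a minimal sketch using standard Galera status variables and the provider options string:

show global status like 'wsrep_replicated_bytes';
show global status like 'wsrep_received_bytes';
-- the provider options string includes gcache.size (the gcache is a
-- memory-mapped file, so it tends to show up in VIRT rather than heap usage)
show global variables like 'wsrep_provider_options';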
no longer affects: codership-mysql
I am seeing the same problem with 5.5.28-23.7-369.squeeze on Debian Squeeze on a 3-node Galera cluster.
Each node has 1 GB of RAM, which should be plenty, given that a plain-text dump of the data in question is only about 150 MB. There are a few tables with a few hundred thousand entries each that are mostly appended to and seldom read (mail logs). Performance is fine. However, when I start a cleanup job that simply removes 100,000 entries from a table, memory usage on all nodes goes through the roof. It is usually enough to push them deep into swap, which makes recovery a mess. A batched-delete sketch follows the my.cnf below.
[mysqld]
datadir=/var/lib/mysql
binlog_format=ROW
thread_cache_size=4
query_cache_size=8M
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_slave_threads=4
wsrep_cluster_name=something
wsrep_sst_method=xtrabackup
innodb_buffer_pool_size=128M
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
innodb_flush_method=O_DIRECT
innodb_file_per_table
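One workaround for the cleanup-job case: Galera replicates each transaction as a single write-set that every node buffers, so deleting 100,000 rows in one statement produces one very large write-set. A minimal sketch of a batched delete, assuming a hypothetical mail_log table with a logged_at column (repeat until no rows are affected):

-- hypothetical table and column names, shown only to illustrate batching
delete from mail_log
where logged_at < now() - interval 30 day
limit 10000;

Each batch commits as its own transaction, so no single write-set has to carry all 100,000 rows at once.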