php-fcgi: max_execution_time causes memory leaks
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
php | Unknown | Unknown | |
imagemagick (Ubuntu) | Incomplete | Undecided | Unassigned |
php7.0 (Ubuntu) | Invalid | High | Unassigned |
Bug Description
I noticed that PHP processes do not free memory after scripts are killed for running longer than the max_execution_time directive allows.
My setup is lighttpd with PHP configured as FCGI. Here is a sample script (requires php-imagick) that can easily hit the default 30-second time limit and uses a lot of memory, so leaks are easy to notice:
<?php
define('__BASE__', 'images/');
$photos = [];
if ($album_root = opendir(__BASE__)) {
    while (false !== ($entry = readdir($album_root))) {
        // collect all JPEG files from the album directory
        if (preg_match('/\.jpe?g$/i', $entry)) {
            $photos[] = $entry;
        }
    }
    closedir($album_root);
}
foreach ($photos as $photo) {
    if (!file_exists(__BASE__ . $photo)) {
        continue;
    }
    $thumb = 'thumbs/' . $photo;
    if (!file_exists($thumb)) {
        // resizing large JPEGs with Imagick is what consumes time and memory
        $img = new Imagick(__BASE__ . $photo);
        $img->thumbnailImage(400, 0);
        $img->writeImage($thumb);
        $img->clear();
    }
}
?>
Just put many big JPEGs into the images directory.
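If you need test data, the images can be generated with Imagick itself; this is just a convenience sketch of mine (the 4000x3000 size, the file names, and the count of 50 are arbitrary), not part of the original report:
<?php
// Hypothetical helper: fill images/ with large pseudo-random JPEGs so the
// thumbnail script above has enough work to exceed the 30-second limit.
if (!is_dir('images')) {
    mkdir('images');
}
for ($i = 0; $i < 50; $i++) {
    $img = new Imagick();
    // plasma:fractal is an ImageMagick pseudo-format that generates noise
    $img->newPseudoImage(4000, 3000, 'plasma:fractal');
    $img->setImageFormat('jpeg');
    $img->writeImage(sprintf('images/test%03d.jpg', $i));
    $img->clear();
}
?>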
ProblemType: Bug
DistroRelease: Ubuntu 16.04
Package: php7.0-cgi 7.0.15-
ProcVersionSign
Uname: Linux 4.8.0-45-generic x86_64
ApportVersion: 2.20.1-0ubuntu2.5
Architecture: amd64
Date: Thu Mar 30 14:37:16 2017
InstallationDate: Installed on 2011-04-14 (2177 days ago)
InstallationMedia: Ubuntu-Server 10.04.2 LTS "Lucid Lynx" - Release amd64 (20110211.1)
SourcePackage: php7.0
UpgradeStatus: Upgraded to xenial on 2016-07-30 (242 days ago)
description: updated
Changed in php7.0 (Ubuntu):
  status: Incomplete → Confirmed
tags: removed: server-next
Changed in php7.0 (Ubuntu):
  status: Opinion → Confirmed
Changed in php7.0 (Ubuntu):
  importance: Undecided → High
Changed in php-imagick (Ubuntu):
  status: New → Confirmed
tags: removed: server-next
apt-get install lighttpd php7.0-cgi
sudo lighttpd-enable-mod fastcgi fastcgi-php
Repro php leaking ~50MB each time at /var/www/html/index.php:
<?php
header("Content-Type: text/plain");
ini_set('max_execution_time', 3);
for ($i = 0; $i < 100; $i++){
    $a[$i] = array_fill(0, 16384, '1234567890-foobar');
}
echo "Leaking " . memory_get_usage() . "\n";
# busy wait until killed, and consume execution time (so no sleep)
$st_tm = time();
$diff = 0;
while (1){
    if ((time() - $st_tm) > $diff) {
        echo "Waiting to Die " . date('h:i:s') . "\n";
        $diff = (time() - $st_tm);
        flush();
    }
}
?>
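A small diagnostic variant (my own sketch, not part of the original repro; the log wording is made up) can confirm that the worker really is torn down by the time-limit fatal error rather than exiting cleanly, and how much memory it still holds at that point:
<?php
// Diagnostic sketch: when the "Maximum execution time exceeded" fatal error
// fires, shutdown functions still run, so we can log the worker's real
// memory usage at that moment.
register_shutdown_function(function () {
    $err = error_get_last();
    error_log(sprintf(
        'shutdown: real usage %d bytes, last error: %s',
        memory_get_usage(true),
        $err ? $err['message'] : 'none'
    ));
});
ini_set('max_execution_time', 3);
$a = [];
for ($i = 0; $i < 100; $i++) {
    $a[$i] = array_fill(0, 16384, '1234567890-foobar');
}
// busy wait: the limit counts CPU time on Linux, so sleep() would not trip it
while (1) {
}
?>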
Track consumption and trigger it 5 times:
$ apt install smem
$ smem | grep www
$ for i in $(seq 1 5); do wget http://127.0.0.1/index.php; done
$ smem | grep www
Pre (smem columns: PID, user, command, swap, USS, PSS, RSS in kB):
19338 www-data /usr/bin/php-cgi 0 44 969 5164
19339 www-data /usr/bin/php-cgi 0 44 969 5164
19340 www-data /usr/bin/php-cgi 0 44 969 5164
19341 www-data /usr/bin/php-cgi 0 44 969 5164
19336 www-data /usr/sbin/lighttpd -f /etc/ 0 1244 1309 2400
19337 www-data /usr/bin/php-cgi 0 16548 17772 23392
Post:
19336 www-data /usr/sbin/lighttpd -f /etc/ 0 1544 1601 2764
19337 www-data /usr/bin/php-cgi 0 15564 17113 23216
19339 www-data /usr/bin/php-cgi 0 40432 41522 47184
19340 www-data /usr/bin/php-cgi 0 40432 41522 47184
19341 www-data /usr/bin/php-cgi 0 40432 41522 47184
19338 www-data /usr/bin/php-cgi 0 40456 41886 47872
Ok, that is a rise, but still the same processes.
Let's speed that up a bit - modify the wget loop to be async:
for i in $(seq 1 100); do (wget http://127.0.0.1/index.php &); done
And run a few of them.
Post:
19336 www-data /usr/sbin/lighttpd -f /etc/ 0 1908 1965 3128
19337 www-data /usr/bin/php-cgi 0 13204 14371 19800
19339 www-data /usr/bin/php-cgi 0 53884 54746 59540
19340 www-data /usr/bin/php-cgi 0 53900 54813 59752
19338 www-data /usr/bin/php-cgi 0 53900 54985 60268
19341 www-data /usr/bin/php-cgi 0 53888 55006 60324
No matter what I do it doesn't rise over ~60MB per worker process.
I'd assume there is some smart caching/heap reuse in place.
The same is true for Ubuntu 16.04 and 17.04, so this is not a recent fix or regression; it has always behaved that way.
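That plateau would match the Zend memory manager keeping freed blocks on its own heap for reuse instead of returning them to the OS. A small illustration of mine (not from the report): logical usage drops after unset(), while the memory actually requested from the system typically stays mapped:
<?php
$a = [];
for ($i = 0; $i < 100; $i++) {
    $a[$i] = array_fill(0, 16384, '1234567890-foobar');
}
printf("allocated:   logical %d, real %d\n", memory_get_usage(), memory_get_usage(true));
unset($a);
gc_collect_cycles();
// memory_get_usage(true) reports memory obtained from the OS; it usually
// does not shrink back even though the logical usage does.
printf("after unset: logical %d, real %d\n", memory_get_usage(), memory_get_usage(true));
?>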
In your case, does the memory usage keep growing until the system crashes at some point?
How many cgi-bin processes do you have, and how much memory do they consume?