The way https://review.openstack.org/#/c/209627/ applied the limits to qemu-img was by setting rlimits in between fork() and execve(). The problem with this is that before we reach exec() the child still has a full copy-on-write mapping of the Nova python process. So if Nova's memory usage is already too high, this could trip the limit we're setting for qemu-img.
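To illustrate the failure mode (a minimal sketch, not the actual patch; the RLIMIT_AS choice, the 1 GiB value and the qemu-img invocation are all made up for illustration): setrlimit() in a preexec_fn runs in the child after fork() but before exec(), i.e. while the child is still a CoW copy of Nova:

  import resource
  import subprocess

  def _set_limits():
      # hypothetical 1 GiB address space cap, for illustration only
      limit = 1 * 1024 * 1024 * 1024
      # This runs in the child, which still carries Nova's full CoW
      # mapping. If Nova is already above this value, allocations made
      # between here and exec() can start failing.
      resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

  subprocess.check_call(["qemu-img", "info", "disk.qcow2"],
                        preexec_fn=_set_limits)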
The only way to avoid this is to *not* set limits until after we have exec'd. This would require using an external command like my original prlimit patch.
Since not all distros have prlimit, we could create a simple nova-prlimit python script to do this. We would use the native prlimit if it is present in the distro (as it's fast) and fall back to nova-prlimit if missing; see the sketch below.
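A rough sketch of how that fallback could look (nova-prlimit and the limit value are hypothetical; prlimit here is the util-linux prlimit(1) tool, whose --as option caps address space; Python 3's shutil.which is used for brevity). Because the wrapper applies the limits after its own exec(), Nova's memory usage no longer matters:

  import shutil
  import subprocess

  AS_LIMIT = 1 * 1024 * 1024 * 1024  # illustrative value

  def run_with_limits(cmd):
      if shutil.which("prlimit"):
          # native util-linux prlimit(1), preferred as it is fast
          wrapper = ["prlimit", "--as=%d" % AS_LIMIT]
      else:
          # pure-python fallback shipped with nova (hypothetical name)
          wrapper = ["nova-prlimit", "--as=%d" % AS_LIMIT]
      subprocess.check_call(wrapper + cmd)

  run_with_limits(["qemu-img", "info", "disk.qcow2"])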
It is a shame oslo.concurrency went for the preexec_fn approach, rather than my suggestion of explicitly representing resource limits in the API, as that would have allowed us to hide this compat code in oslo.concurrency, instead of dealing with it in nova, glance and cinder :-(