Please set memory limits by default
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
linux (Ubuntu) | Triaged | Wishlist | Unassigned |
Bug Description
An accidental action by a single user or program that tries to consume all available memory can cause the system to start swapping and become completely unusable. I just did this by accident and had to hard reset.
Please set a default limit on the amount of memory available to a single process. I think a default of some proportion of total system memory would be sensible - say 75%. Except in special circumstances, exceeding this sort of amount would cause the system to be unusable, so it shouldn't impact the average user. Advanced users or those with special requirements would be able to increase or remove the limit.
I'd make it a hard limit for security reasons so that multiple users are protected from each other, but I would be happy if it was decided to use a soft limit instead.
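To make the soft/hard distinction concrete, here is a minimal sketch using setrlimit(2), which is the interface that ulimit and pam_limits sit on top of. The 8 GiB total and the 75% figure are assumptions taken from the proposal above, not anything the kernel or PAM provides by default:

```c
/* Sketch only: apply a 75% virtual-memory cap as both soft and hard limit.
 * Assumes an 8 GiB machine for illustration. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    const rlim_t total_ram = 8ULL * 1024 * 1024 * 1024;  /* assumed 8 GiB */
    struct rlimit rl;

    rl.rlim_cur = total_ram * 3 / 4;  /* soft limit: allocations beyond this fail with ENOMEM */
    rl.rlim_max = total_ram * 3 / 4;  /* hard limit: an unprivileged process can lower it but not raise it */

    if (setrlimit(RLIMIT_AS, &rl) != 0)
        perror("setrlimit");

    if (getrlimit(RLIMIT_AS, &rl) == 0)
        printf("virtual memory cap: soft=%llu hard=%llu bytes\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}
```

With only the soft limit set (rlim_cur below rlim_max), the user can raise it back up to the hard ceiling; setting both is what gives the protection between users described above.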
See bug 14505 for a discussion of this issue. I think it could be resolved in the same way?
It's never been clear to me which combination of "data seg size", "max memory size", "stack size" and "virtual memory" should be used. I have always used just "virtual memory" and this has caught runaway processes for me every time. Any opinions? I've never found any more detailed documentation on the available limits apart from this.
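For reference, the four ulimit names above map onto distinct rlimits, and the sketch below reads them back so the difference is visible. The comments note why "virtual memory" (RLIMIT_AS) is the one that catches runaway processes in practice: "max memory size" (RLIMIT_RSS) is accepted but not enforced by current Linux kernels, and "data seg size" traditionally covers only brk()/heap, so mmap-based allocators can bypass it.

```c
/* Sketch: print the current soft/hard values of the four limits discussed above. */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *ulimit_name, int resource)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) == 0)
        printf("%-16s soft=%-22llu hard=%llu\n", ulimit_name,
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
}

int main(void)
{
    show("data seg size",   RLIMIT_DATA);   /* ulimit -d: brk()/heap; mmap allocations can bypass it */
    show("max memory size", RLIMIT_RSS);    /* ulimit -m: accepted but not enforced on modern kernels */
    show("stack size",      RLIMIT_STACK);  /* ulimit -s: stack growth only */
    show("virtual memory",  RLIMIT_AS);     /* ulimit -v: whole address space, catches malloc and mmap */
    return 0;
}
```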
I will happily write or modify an existing PAM module if this is how you'd like it implemented.
Changed in pam (Ubuntu):
status: New → Triaged
importance: Undecided → Wishlist
A plausible solution would be to enable pam_limits by default, and to add support for setting virtual memory limits as a percentage.
In that vein, it would also be handy to be able to set the maximum number of processes based on the number of cores.
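A hypothetical sketch of the two ideas above: a virtual-memory cap expressed as a percentage of physical RAM, and an nproc limit scaled by CPU count. This is not pam_limits code; it only shows the calculations a session module could perform before the user's shell starts. The 75% and 512-per-core factors are illustrative assumptions.

```c
/* Sketch only: derive limits from machine size, then apply them with setrlimit(2). */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    long pages     = sysconf(_SC_PHYS_PAGES);
    long cores     = sysconf(_SC_NPROCESSORS_ONLN);

    /* Address space cap at an assumed 75% of physical RAM. */
    unsigned long long total_ram = (unsigned long long)pages * (unsigned long long)page_size;
    struct rlimit mem = { .rlim_cur = total_ram * 75 / 100,
                          .rlim_max = total_ram * 75 / 100 };

    /* Process cap at an assumed 512 processes per online core. */
    struct rlimit nproc = { .rlim_cur = (rlim_t)cores * 512,
                            .rlim_max = (rlim_t)cores * 512 };

    if (setrlimit(RLIMIT_AS, &mem) != 0)
        perror("setrlimit(RLIMIT_AS)");
    if (setrlimit(RLIMIT_NPROC, &nproc) != 0)
        perror("setrlimit(RLIMIT_NPROC)");

    printf("RLIMIT_AS    = %llu bytes (75%% of %llu)\n",
           (unsigned long long)mem.rlim_cur, total_ram);
    printf("RLIMIT_NPROC = %llu (%ld cores x 512)\n",
           (unsigned long long)nproc.rlim_cur, cores);
    return 0;
}
```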