user space process hung in 'D' state waiting for disk io to complete
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| linux (Ubuntu) | Fix Released | Undecided | Daniel Axtens | |
| Xenial | Fix Released | Undecided | Unassigned | |
Bug Description
== SRU Justification ==
[Impact]
Occasionally an application gets stuck in "D" state on NFS read/sync and close system calls. All subsequent operations on the affected NFS mounts are stuck, and a reboot is required to rectify the situation.
[Fix]
Use GFP_NOIO for some allocations in writeback to avoid a deadlock. This is upstream in:
ae97aa524ef4 ("NFS: Use GFP_NOIO for two allocations in writeback")
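For illustration, here is a minimal sketch of the pattern the fix relies on (not the actual hunks from ae97aa524ef4; the structure and function names below are hypothetical). An allocation made with GFP_KERNEL while cleaning dirty NFS pages can enter direct reclaim, which may itself wait on NFS writeback and deadlock; GFP_NOIO forbids the allocator from starting I/O to satisfy the request, breaking the cycle.

```c
/* Hypothetical sketch of the GFP_NOIO pattern; not the actual patch. */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>

struct wb_req {
	struct page *page;
	unsigned int offset;
	unsigned int len;
};

static struct wb_req *wb_req_alloc(struct page *page,
				   unsigned int offset, unsigned int len)
{
	struct wb_req *req;

	/*
	 * GFP_KERNEL here could enter direct reclaim and end up waiting on
	 * the very writeback we are part of; GFP_NOIO tells the allocator
	 * not to start any I/O to reclaim memory, avoiding that deadlock.
	 */
	req = kmalloc(sizeof(*req), GFP_NOIO);
	if (!req)
		return NULL;

	req->page = page;
	req->offset = offset;
	req->len = len;
	return req;
}
```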
[Testcase]
See Test scenario in previous description.
A test kernel with this patch was tested heavily (>100hrs of test suite) without issue.
[Regression Potential]
This changes the memory allocation policy used in the NFS writeback path, which could in principle affect NFS behaviour under memory pressure.
However, the patch is already in Artful and Bionic without issue.
The patch does not apply to Trusty.
== Previous Description ==
On Ubuntu Xenial, the user reports processes hanging in "D" state waiting for disk I/O.
Occasionally one of the applications gets into "D" state on NFS read/sync and close system calls. Based on the kernel backtraces, it appears to be stuck in a kmalloc allocation during cleanup of dirty NFS pages.
All subsequent operations on the NFS mounts are stuck, and a reboot is required to rectify the situation.
[Test scenario]
1) Applications running in a Docker environment
2) Applications have cgroup limits: --cpu-shares, --memory, and an shm limit
3) Python and C++ based applications (Torch and Caffe)
4) Applications read big lmdb files and write results to NFS shares (see the sketch after this list)
5) NFS v3 is used with the hard mount option and fscache enabled
6) No swap space is configured
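For concreteness, the sketch below is a minimal, hypothetical reproducer for step 4, simplified to plain files rather than lmdb: it streams a large input file and rewrites it to a path on the NFS mount, continually dirtying NFS pages. The paths are placeholders; the intent is to run it inside a container with a tight --memory limit against a hard-mounted NFS v3 share with fscache enabled.

```c
/* Hypothetical reproducer sketch: copy a large file onto an NFS mount,
 * dirtying NFS pages that writeback must clean under cgroup memory
 * pressure. Paths below are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *src = argc > 1 ? argv[1] : "/data/big-input.bin";
	const char *dst = argc > 2 ? argv[2] : "/mnt/nfs/out.bin";
	char buf[1 << 16];	/* 64 KiB copy buffer */
	ssize_t n;

	int in = open(src, O_RDONLY);
	int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (in < 0 || out < 0) {
		perror("open");
		return 1;
	}

	/* Each write dirties page cache pages backed by the NFS mount. */
	while ((n = read(in, buf, sizeof(buf))) > 0) {
		if (write(out, buf, n) != n) {
			perror("write");
			return 1;
		}
	}

	/* fsync forces the dirty NFS pages out; in the failure case the
	 * process was observed stuck in "D" state on calls like this. */
	if (fsync(out) < 0)
		perror("fsync");

	close(in);
	close(out);
	return 0;
}
```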
When this deadlock occurs, all other I/O activity on that mount hangs as well.
We are running into this issue more frequently and have identified a few applications that trigger it.
As noted in the description, the problem seems to happen when exercising the following path in the stack:
try_to_
We see this with Docker containers using the cgroup option --memory <USER_SPECIFIED
Whenever there is a deadlock, we see that the hung process has hit its cgroup memory limit multiple times; each time it typically cleans up dirty data and caches to bring usage back under the limit.
This reclaim path is taken many times, and eventually we probably hit a race and end up in the deadlock.
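As an aside on the general mechanism (this is not what the fix does, but the related facility the kernel offers for exactly this situation): a code path can mark itself as a NOIO scope with memalloc_noio_save()/memalloc_noio_restore(), so that every allocation inside the scope behaves as if it were GFP_NOIO, even in callees that still pass GFP_KERNEL. A minimal sketch, assuming a recent kernel where these helpers live in linux/sched/mm.h:

```c
/* General kernel technique, shown for context only: scope-based NOIO. */
#include <linux/sched/mm.h>
#include <linux/slab.h>

static void *alloc_in_reclaim_path(size_t size)
{
	unsigned int noio_flags;
	void *buf;

	noio_flags = memalloc_noio_save();	/* enter NOIO scope */
	buf = kmalloc(size, GFP_KERNEL);	/* treated as GFP_NOIO here */
	memalloc_noio_restore(noio_flags);	/* leave NOIO scope */

	return buf;
}
```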
Changed in linux (Ubuntu):
milestone: none → xenial-updates
assignee: nobody → Dragan S. (dragan-s)
description: updated
Changed in linux (Ubuntu):
assignee: Dragan S. (dragan-s) → Daniel Axtens (daxtens)
Changed in linux (Ubuntu Xenial):
status: New → In Progress
Changed in linux (Ubuntu):
status: Incomplete → Fix Released
Changed in linux (Ubuntu Xenial):
status: In Progress → Fix Committed
This bug is missing log files that will aid in diagnosing the problem. While running an Ubuntu kernel (not a mainline or third-party kernel) please enter the following command in a terminal window:
apport-collect 1750038
and then change the status of the bug to 'Confirmed'.
If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.
This change has been made by an automated script, maintained by the Ubuntu Kernel Team.