include of dir fails when the number of files in the dir approaches or exceeds the process's fd limit
Bug #1255424 reported by
John Johansen
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
AppArmor | Triaged | Low | Unassigned |
apparmor (Ubuntu) | Triaged | Low | Unassigned |
Bug Description
The apparmor parser handles directory includes in an odd way: each file in the directory is opened and pushed as a flex buffer state before any file in the directory is actually processed. The flex buffers and their associated fds are then processed one by one, and the <eof> handling pops the buffer (and closes the fd) to move on to the next file.
This means that if a directory contains many files, the include can fail because the parser runs out of available fds.
Changed in apparmor:
importance: Undecided → Medium
assignee: nobody → Steve Beattie (sbeattie)

Changed in apparmor (Ubuntu):
importance: Undecided → Medium

Changed in apparmor (Ubuntu):
assignee: Steve Beattie (sbeattie) → nobody

Changed in apparmor:
assignee: Steve Beattie (sbeattie) → nobody
importance: Medium → Low

Changed in apparmor (Ubuntu):
importance: Medium → Low

tags: added: aa-parser

Changed in apparmor:
status: Confirmed → Triaged

Changed in apparmor (Ubuntu):
status: Confirmed → Triaged
This can be worked around by splitting large directory includes into multiple directories, or by increasing the process's open fd limit.
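The fd-limit half of the workaround can be sketched as follows; the profile path is illustrative, and the chosen limit must not exceed the hard limit reported by `ulimit -Hn`:

```shell
# raise the soft open-file limit for this shell, then reload the profile
# so the parser inherits the higher limit
ulimit -S -n 4096
apparmor_parser -r /etc/apparmor.d/usr.bin.example-profile
```

Note that `ulimit` only affects the current shell and its children; a limit set this way does not persist across sessions.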