So, I don't really care where the issue is :). I do think that an
unlimited concurrency factor is a defect, because we're seeing the
machine driven deep into swap, which exacerbates whatever is going
wrong by adding VM thrashing on top. Fixing that is no guarantee that
loggerhead won't hang, but it would mean that when it's in crisis,
we're not *also* fighting a machine trying to do 200 render actions at
once: core dumps will be smaller, and thread dumps will be more readable.
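
To make that concrete, here's a minimal sketch of the sort of cap I
have in mind, assuming a threaded server where each request does one
render; the names and the limit of 10 are hypothetical, not
loggerhead's actual API:

    import threading

    MAX_RENDERS = 10  # hypothetical cap; tune to what fits in RAM

    _render_slots = threading.BoundedSemaphore(MAX_RENDERS)

    def handle_request(render):
        # Blocks once MAX_RENDERS renders are in flight, so a burst of
        # 200 requests queues up instead of driving the box into swap.
        with _render_slots:
            return render()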
Separately from things going wrong, what about when things go right
and we get spidered by something not honoring robots.txt? We shouldn't
become as arbitrarily concurrent as the robot; rather, we *should* be
queueing requests and not accepting them [as in, leaving them in the
accept backlog] until we are roughly ready to deal with them.
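
A rough sketch of that idea, assuming a plain socket server (the
names, port, and limits here are illustrative, not loggerhead's code):
the listener only calls accept() once a worker slot is free, so excess
connections sit in the kernel's listen backlog:

    import socket
    import threading

    MAX_WORKERS = 10  # hypothetical concurrency cap
    BACKLOG = 128     # kernel queue for connections we haven't accepted yet

    slots = threading.BoundedSemaphore(MAX_WORKERS)

    def serve(handler, port=8080):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(BACKLOG)
        while True:
            slots.acquire()          # wait until we can afford a worker
            conn, _ = srv.accept()   # only now pull one off the backlog

            def work(conn=conn):
                try:
                    handler(conn)
                finally:
                    conn.close()
                    slots.release()  # free the slot for the next accept

            threading.Thread(target=work).start()

That way the robot's concurrency is absorbed by the backlog rather
than by our thread count, and anything beyond the backlog gets pushed
back on the client instead of onto the machine.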
So, an upper thread limit won't *stop* hangs, but it will *mitigate*
them and reduce the fallout in overload situations.