https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1665EA1853
I think 1 second is tonnes of time: if the DB can't answer a simple lookup like this in 1 second, given our scaled-out infrastructure, we are going to find delivering high responsiveness terribly hard.
https://bugs.edge.launchpad.net/launchpad-foundations/+bug/140817 may be stale, and even if it isn't, we don't want server threads hanging about for 20 minutes when using the librarian.
There are two parts to this I think:
- extend the client code to accept a per-connection timeout value (rather than changing a global)
- pass down to the client the *remaining appserver timeout allowance* automatically. That way, a fresh request has the maximum possible timeout, and one at the end of a slow transaction has a shorter timeout (a rough sketch of both ideas follows this list).
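
A minimal sketch of what I mean, not the actual librarian client API: the names here (LibrarianClient, get_remaining_allowance, the 30-second budget, the host/port) are made up for illustration only.

```python
import socket
import time


class LibrarianClient:
    """Hypothetical client that takes a per-connection timeout instead of
    relying on a process-wide socket default."""

    def __init__(self, host, port, timeout=None):
        self.host = host
        self.port = port
        self.timeout = timeout  # seconds, or None for the library default

    def _connect(self):
        # socket.create_connection accepts a per-call timeout, so there is
        # no need to change a global via socket.setdefaulttimeout().
        return socket.create_connection((self.host, self.port),
                                        timeout=self.timeout)


def get_remaining_allowance(request_start, appserver_timeout=30.0):
    """Hypothetical helper: how much of the appserver's request budget is left.

    A fresh request gets (nearly) the full budget; a request that has already
    spent most of its time in a slow transaction gets only what remains.
    """
    elapsed = time.monotonic() - request_start
    return max(appserver_timeout - elapsed, 0.0)


# Usage: wire the remaining allowance straight into the client, so a
# librarian call can never outlive the request that issued it.
request_start = time.monotonic()
# ... earlier work in the request consumes part of the budget ...
client = LibrarianClient("librarian.example.internal", 8000,
                         timeout=get_remaining_allowance(request_start))
```

The point of the second part is that the appserver already knows its own deadline, so the client shouldn't need a separately configured number that can drift out of sync with it.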