@John the most obvious difference between your machine and Vincent's is the different round trip time, which connects to the TCP windowing restriction discussed above. What ping time do you see to chinstrap from those two machines?
Generally speaking, when people talk about pull performance we should ask them for the RTT too. (It may be semi-obvious from looking at the delay on the hello rpc.)
If the throughput is linear with rtt that would be useful data.
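To make the windowing restriction concrete: a fixed TCP window caps throughput at roughly window/RTT, so halving the RTT doubles the ceiling. A quick back-of-the-envelope sketch, assuming an illustrative 64 KiB window (the actual window on these machines is unknown):

```shell
# Throughput ceiling = window / RTT, for a hypothetical 64 KiB window.
window_bytes=65536
for rtt_ms in 10 50 100 200; do
  # ceiling in KiB/s = window_bytes / (rtt_ms / 1000) / 1024
  echo "$rtt_ms ms -> $(( window_bytes * 1000 / rtt_ms / 1024 )) KiB/s"
done
```

If measured pull throughput tracks these numbers as RTT changes, that would point strongly at windowing rather than bzr overhead.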
To sum up so far, I think we can split the problem into two parts:
1- bzr is spending too much time thinking (therefore leaving the network idle), and is sending too much data. We can reproduce and fix this entirely locally, either with a slow network simulation or even just a fast local network, looking at the CPU usage and the bytes transmitted. Any improvement in them will help.
2- Bulk TCP transfers from Launchpad do not use the full bandwidth of the client's local link. This seems independent of the first, because it's also slow fetching over SSH from a DC server. We can look at other downloads from the DC (from PPAs, maybe from cdimage) to try to work out if it is an SSH limit, if it's TCP windowing or congestion, or if it's traffic management somewhere in between. It may be possible to tweak TCP settings on either the client or the server to get better results.
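For the slow-network simulation in #1, one option on Linux is netem; a minimal sketch, assuming root and an illustrative 100 ms delay (any figure matching real-world RTTs would do; note that shaping lo affects both directions at once):

```shell
# Add an artificial 100 ms delay on the loopback device (illustrative value).
tc qdisc add dev lo root netem delay 100ms
# ...run the bzr pull against a local server here, and compare CPU time
# and bytes on the wire before and after each fix...
# Remove the shaping when done.
tc qdisc del dev lo root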
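For the tuning side of #2, the usual Linux knobs are the TCP buffer sysctls; the values below are examples only, not recommendations, and would need testing on both client and server:

```shell
# Raise the TCP receive/send buffer autotuning limits (min, default, max
# in bytes). Example values; requires root, Linux.
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
```

If larger buffers help the client but not transfers from the DC, that would suggest the limit is on the server or in between rather than local windowing.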
I'm moderately confident that there is not too much middle ground or interaction between them: bzr sees about the same performance as plain SSH so we're probably not making things worse, and during the bulk transfer phase we do seem to keep the send buffers full and the receive buffers empty.
Let's keep this bug open for the overall problem, link to bugs in bzr for specific aspects of #1, and file a bug against lp itself for #2.