Slow transfer over sftp connection
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
sbackup | Confirmed | Low | Unassigned |
Bug Description
I noticed that backups were taking much longer than I expected. Using scp to the same server, I get about 20 MB/s (which is roughly my maximum HD throughput). However, sbackup writes at only about 2 MB/s, a factor of 10 (!) slower (as reported by iotop, by the network traffic, and by manually measuring file growth). It isn't the compression: that runs at only 20% CPU, so at full CPU it could sustain roughly 10 MB/s. There therefore seems to be room for about a 5-fold improvement here.
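For reference, a minimal sketch of the "manual measurement of file growth" approach mentioned above; the path and the sample interval are placeholders, not values from this report:

```python
# Sample the size of the growing backup file and report the write rate.
import os
import time

path = "/path/to/growing/backup/file"  # hypothetical target file
interval = 5.0                         # seconds between samples (assumed)

prev = os.path.getsize(path)
while True:
    time.sleep(interval)
    size = os.path.getsize(path)
    print(f"{(size - prev) / interval / 1e6:.2f} MB/s")
    prev = size
```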
This is a serious issue, because a full backup can now take up to 5 hours or so (instead of about one hour), which significantly increases the odds that the backup does not finish before I shut down my laptop or take it offline.
I have tracked the problem to what I think is its origin, a bug in glib/gvfs:
https:/
In short, every write on a gvfs volume is individually acknowledged, causing a network-latency-dependent bottleneck that can be quite severe even on networks with a high intrinsic bandwidth. In my case it's a 1 Gbit/s path from my laptop all the way to the server, but with several routers in between (university setup).
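To see why latency rather than bandwidth becomes the limit, a rough back-of-the-envelope model: if every write blocks until it is acknowledged, throughput is capped at block_size / round_trip_time. The block size and RTT below are illustrative assumptions, not measured values, but they show how easily a fast link degrades to the observed ~2 MB/s:

```python
# Latency-bound write throughput, independent of link bandwidth.
block_size = 4 * 1024   # bytes per acknowledged write (assumed)
rtt = 0.002             # round-trip time in seconds (assumed, 2 ms)
print(f"max throughput: {block_size / rtt / 1e6:.1f} MB/s")  # -> 2.0 MB/s
```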
The proper solution would be to fix gvfs, but according to that bug report this is not going to happen any time soon.
An alternative may be to see whether a reasonable workaround can be implemented in sbackup. Any ideas?
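One possible direction, sketched below under the assumption that sbackup's output stream can be wrapped: coalesce many small writes into large chunks before they reach the gvfs mount, so only one round trip is paid per chunk. open_buffered, the chunk size, and produce_backup_blocks are hypothetical, not existing sbackup code:

```python
import io

def open_buffered(raw_stream, chunk_size=1024 * 1024):
    """Wrap an unbuffered stream (e.g. a file on a gvfs mount) so data is
    flushed in large chunks, paying one round trip per chunk instead of
    one per small write. The 1 MiB chunk size is an assumed tuning value."""
    return io.BufferedWriter(raw_stream, buffer_size=chunk_size)

# Usage sketch; the path is a placeholder:
# with open("/path/on/gvfs/mount", "wb", buffering=0) as raw:
#     out = open_buffered(raw)
#     for block in produce_backup_blocks():  # hypothetical data source
#         out.write(block)
#     out.flush()
```

A coarser variant of the same idea would be to write the archive to local disk first and copy it to the remote mount in large blocks afterwards.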
(I tried to use the 'fuse' plugin instead of 'gvfs' for sbackup to see whether the issue exists with fuse as well, but fuse will not connect without a password, and does not accept my password because it contains a '?'. I will get back on this when I find time to try with a different password.)
Changed in sbackup:
status: Incomplete → Confirmed
Is this still valid? What is the current status?