Notification bubble always says "Uploading X and 199 other files"
Bug #939483 reported by Martin Albisetti
This bug affects 5 people
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ubuntu One Client | Status tracked in Trunk | | | |
| Stable-3-0 | Triaged | Medium | Ubuntu One Client Engineering team | |
| Trunk | Triaged | Medium | Ubuntu One Client Engineering team | |
| Ubuntu One Indicator | New | Undecided | Unassigned | |
| ubuntuone-client (Ubuntu) | Triaged | Medium | Unassigned | |
| Precise | Won't Fix | Medium | Unassigned | |
Bug Description
I have several thousand files in the queue to upload, but the notification still says I have X and 199 more.
ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: ubuntuone-client 2.99.4-0ubuntu2
ProcVersionSign
Uname: Linux 3.2.0-17-
ApportVersion: 1.92-0ubuntu1
Architecture: i386
Date: Thu Feb 23 10:07:16 2012
EcryptfsInUse: Yes
InstallationMedia: Ubuntu 11.10 "Oneiric Ocelot" - Release i386 (20111012)
PackageArchitec
ProcEnviron:
TERM=xterm
PATH=(custom, no user)
LANG=en_US.UTF-8
SHELL=/bin/bash
SourcePackage: ubuntuone-client
UbuntuOneSyncda
UpgradeStatus: Upgraded to precise on 2011-12-02 (83 days ago)
Changed in ubuntuone-client (Ubuntu):
status: New → Triaged
importance: Undecided → Medium
tags: added: desktop+
Recent work on the "Offload Queue" has apparently limited to 200 the number of events processed by syncdaemon.
Pending operations above this number are stored on disk, and they are only retrieved from disk when older operations are completed.
This change has cut down on memory usage, but to the status aggregator (the code that shows the notification bubbles) it looks as if the remaining operations have not been queued yet.
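To make the mismatch concrete, here is a minimal sketch of the behaviour as I understand it (a toy model, not the real syncdaemon code; OffloadQueue, visible_to_aggregator and the list standing in for the on-disk store are all made up for illustration):

```python
from collections import deque

IN_MEMORY_LIMIT = 200   # the cap described above; the real value lives in syncdaemon

class OffloadQueue:
    """Toy model: at most IN_MEMORY_LIMIT operations stay in memory;
    the rest are offloaded (a plain list stands in for the on-disk store)."""

    def __init__(self):
        self._in_memory = deque()
        self._offloaded = []            # stand-in for the disk-backed storage

    def push(self, operation):
        if len(self._in_memory) < IN_MEMORY_LIMIT:
            self._in_memory.append(operation)
        else:
            self._offloaded.append(operation)   # never seen by the aggregator

    def pop(self):
        operation = self._in_memory.popleft()
        if self._offloaded:
            # Offloaded operations come back only as older ones complete.
            self._in_memory.append(self._offloaded.pop(0))
        return operation

    def visible_to_aggregator(self):
        # The aggregator only counts the in-memory part, hence
        # "X and 199 other files" no matter how many are really pending.
        return len(self._in_memory)


queue = OffloadQueue()
for i in range(5000):
    queue.push(("upload", "file-%d" % i))
print(queue.visible_to_aggregator())    # prints 200, even though 5000 are pending
```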
After a quick look I came up with two possible solutions:
1) modify the messages to say "more than 100" if more than 100 operations are scheduled (a sketch of this follows at the end of this comment).
2) let the status aggregator take a peek at the operations before storing them on disk.
Option 2 seems like a much bigger change, since we'll need to make sure that all in-memory metadata used by the aggregator is up to date at the point where it looks at the operations. Also, we would need to make sure that the aggregator holds *no* references to pending operations, so they can be safely serialized.
Anyway, I'm sure we can come up with better ways to fix this.
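For option 1, the change to the bubble text could look roughly like this (upload_bubble_text is a made-up name, not the actual status aggregator API, and the threshold is just the cap mentioned above):

```python
IN_MEMORY_LIMIT = 200   # same cap as above; replace with whatever threshold we pick

def upload_bubble_text(current_file, other_count):
    # Hypothetical message builder: once the count of other pending files
    # reaches the cap, stop pretending we know the exact number and
    # say "more than N" instead.
    if other_count >= IN_MEMORY_LIMIT - 1:
        return "Uploading %s and more than %d other files" % (
            current_file, IN_MEMORY_LIMIT - 1)
    return "Uploading %s and %d other files" % (current_file, other_count)


print(upload_bubble_text("photo.jpg", 199))   # Uploading photo.jpg and more than 199 other files
print(upload_bubble_text("photo.jpg", 42))    # Uploading photo.jpg and 42 other files
```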