autopacking should just combine all packs it touches
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Bazaar | Fix Released | Wishlist | John A Meinel | 1.8
Bug Description
I've just been prompted to record this in the bug tracker.
Independent of how we decide which packs to consolidate, there is no point
consolidating into anything other than one new pack.
E.g. if we have 30 packs for some reason, and 444 revisions (so we want
4x100, 4x10, and 4x1), it's better, once we determine which of the 30
packs are too small (maybe we have 3x100 and need a fourth 100, plus 2
new 10s), to just combine all the ones we would read data from into a
single new pack of 110 revisions or whatever.
It's better because we'll spend less latency on future reads, as we end
up with only one new set of indices, and it is as cheap as (or cheaper
than) the current autopack because we still won't read packs that are
already large enough.
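As a rough illustration of the idea (a sketch only; `pack_distribution` and `plan_autopack` are made-up names for this example, not bzrlib's actual API): the desired distribution follows the decimal digits of the revision count, and every pack that is too small is folded into a single new pack.

```python
def pack_distribution(total_revisions):
    """Desired pack sizes: each decimal digit of the revision count
    contributes that many packs of the matching power of ten,
    e.g. 444 -> [100]*4 + [10]*4 + [1]*4.
    """
    sizes = []
    unit = 1
    while total_revisions:
        total_revisions, digit = divmod(total_revisions, 10)
        sizes[:0] = [unit] * digit  # prepend so bigger sizes come first
        unit *= 10
    return sizes

def plan_autopack(pack_sizes, total_revisions):
    """Split existing packs into (keep, combine).

    Packs that already fill a wanted slot are left untouched; every
    pack that is too small goes into `combine`, and per this bug the
    whole `combine` list becomes one single new pack.
    """
    remaining = pack_distribution(total_revisions)
    keep, combine = [], []
    for size in sorted(pack_sizes, reverse=True):
        if remaining and size >= remaining[0]:
            keep.append(size)
            remaining.pop(0)  # this wanted slot is now filled
        else:
            combine.append(size)
    return keep, combine

# The scenario from the description: 444 revisions, three packs of 100
# that are already big enough, and 27 small packs holding the other 144.
small = [6] * 18 + [4] * 9  # 27 small packs totalling 144 revisions
keep, combine = plan_autopack([100, 100, 100] + small, 444)
print(keep)                        # [100, 100, 100]
print(len(combine), sum(combine))  # 27 144
```

Run as above, the sketch leaves the three sufficiently large packs alone and folds the other 27 into one new 144-revision pack, so future reads see a single extra set of indices rather than several.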
affects bzr
tag packs
Changed in bzr:
assignee: nobody → jameinel
milestone: none → 1.8
status: Triaged → Fix Released
Seems reasonable, and better than doing all the work to combine them into a smaller set of packs.
My only concern about all of the auto-packing is that it doesn't respect project boundaries. Topological sorting would probably help with that, though.