This is another chinstrap => local run, using the mirror of production.
In this case, I have a small patch to the source codebase. Specifically:
=== modified file 'bzrlib/repofmt/groupcompress_repo.py'
--- bzrlib/repofmt/groupcompress_repo.py	2009-10-23 17:27:45 +0000
+++ bzrlib/repofmt/groupcompress_repo.py	2009-11-17 21:41:18 +0000
@@ -1021,7 +1021,7 @@
         super(GroupCHKStreamSource, self).__init__(from_repository, to_format)
         self._revision_keys = None
         self._text_keys = None
-        self._text_fetch_order = 'groupcompress'
+        self._text_fetch_order = 'unordered'
         self._chk_id_roots = None
         self._chk_p_id_roots = None
In other words, it doesn't change the signatures/repository/inventory or chk streaming, but it changes the text streaming to be done 'unordered' rather than trying to recompute the 'groupcompress' "proper" order.
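For context, a fetch-order hint like the one patched above essentially just gets passed through when the text stream is built. Here is a minimal sketch of that shape, assuming the standard VersionedFiles.get_record_stream(keys, ordering, include_delta_closure) API; the helper name and argument plumbing are mine, not the actual bzrlib method:

def get_text_stream(from_repository, text_keys, text_fetch_order):
    # 'groupcompress' asks the source to re-sort the texts into an optimal
    # compression order before sending, recompressing blocks on the fly;
    # 'unordered' streams them in whatever order the source already holds
    # them, which is what avoids the recompression work.
    stream = from_repository.texts.get_record_stream(
        text_keys, text_fetch_order, False)
    return ('texts', stream)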
This is better for networking, at least. The runtime dropped from 350s down to 260-290s, and most of the "creating new compressed block on-the-fly" lines are gone.
I don't have a good feel for what this will mean client-side for final on-disk size, since the client will still be trying to collapse groups, but it won't have the data in an optimal order.
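One way to pin that down (not something I've measured here, and the path and layout are just illustrative for a 2a-format repository) would be to compare the pack storage size right after the fetch and again after an explicit 'bzr pack', which should regroup the texts into compression order:

import os
import subprocess

def pack_bytes(repo_root):
    # Sum the size of the pack files under .bzr/repository/packs
    # (the layout used by pack-based/2a repositories).
    packs_dir = os.path.join(repo_root, '.bzr', 'repository', 'packs')
    total = 0
    for dirpath, dirnames, filenames in os.walk(packs_dir):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

# 'local-mirror' is a placeholder for wherever the fetched branch lives.
before = pack_bytes('local-mirror')
subprocess.check_call(['bzr', 'pack', 'local-mirror'])
after = pack_bytes('local-mirror')
print('packs: %d bytes after fetch, %d after repack' % (before, after))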