Reviewed: https://review.openstack.org/425441
Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=eadb01b8af3cfdea801441744c360c200b08b8cc
Submitter: Jenkins
Branch: master
commit eadb01b8af3cfdea801441744c360c200b08b8cc
Author: Clay Gerrard <email address hidden>
Date: Wed Jan 25 11:40:54 2017 -0800
Do not revert fragments to handoffs
We're already a handoff; just wait until we can ship the fragment to the
right primary location.
If we time out talking to a couple of nodes (or, more likely, get rejected
for connection limits because of contention during a rebalance), we can
actually end up making *more* work if we move the part to another node.
I've seen clusters get stuck on rebalance just passing parts around
handoffs for *days*.
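
As a minimal sketch of the intended behavior (illustrative only, not the
actual reconstructor diff; it assumes a ring object that behaves like
swift.common.ring.Ring, whose get_part_nodes() returns primary node dicts
carrying an 'index' key):

    # Illustrative sketch, not the actual Swift reconstructor code.
    # Assumes `ring` behaves like swift.common.ring.Ring.
    def revert_targets(ring, partition, frag_index):
        """Pick sync targets for a REVERT job: only the primary that
        owns this fragment index. Other handoffs are never candidates,
        so a timeout or 507 just means "try again on the next pass"
        instead of shuffling the part to yet another handoff."""
        primaries = ring.get_part_nodes(partition)
        return [node for node in primaries if node['index'] == frag_index]
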
Known-Issues:
If we see a 507 from a primary and we're not in the handoff list (i.e.
we're an old primary post-rebalance), it probably wouldn't be so terrible
to try to revert the fragment to the first handoff that isn't already
holding the part. But that's more work, and it sounds more like lp bug
#1510342.
Closes-Bug: #1653169
Change-Id: Ie351d8342fc8e589b143f981e95ce74e70e52784
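
For reference, the hypothetical fallback described under Known-Issues might
look roughly like this (purely a sketch of the idea, NOT implemented by this
commit; holds_part() is a made-up helper standing in for an on-disk check):

    # Hypothetical sketch of the Known-Issues idea; not part of this commit.
    def old_primary_fallback(ring, partition, local_dev):
        """After a 507 from the primary, an old primary (a node not in
        the handoff list post-rebalance) could revert to the first
        handoff that doesn't already hold the part."""
        handoffs = list(ring.get_more_nodes(partition))
        if any(h['id'] == local_dev['id'] for h in handoffs):
            return None  # we ARE a handoff: just wait (this commit's behavior)
        for node in handoffs:
            if not holds_part(node, partition):  # hypothetical helper
                return node
        return None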