I hit a similar issue to this yesterday: when removing a node with a single disk from a zone via juju remove-unit, the rings were not updated to reflect the removed node/disk.
While the rings were in this inconsistent state it was still possible to add-unit, and the additional node/disk was added to the ring files and joined the cluster successfully.
However, once I manually removed the previously removed node/disk from the rings, subsequent add-unit operations caused a hook error on the swift-proxy instance and a traceback in that unit's juju logs.
So it appears that mixing manual ring maintenance with the juju-managed service causes issues; it also means we currently support scale-out but not scale-in.
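For reference, the sequence was roughly the following (a sketch only: the unit name, IP address, device name, and builder paths are illustrative placeholders, not the exact values from my deployment):

```shell
# 1. Remove the storage unit via juju -- in my case this did NOT
#    update the rings to drop the node/disk:
juju remove-unit swift-storage/1

# 2. Manual ring maintenance on the swift-proxy node to remove the
#    stale device and rebalance (repeated for container.builder and
#    account.builder as well):
swift-ring-builder /etc/swift/object.builder remove 10.0.0.11/sdb1
swift-ring-builder /etc/swift/object.builder rebalance

# 3. A subsequent add-unit then triggered the hook error and
#    traceback described above:
juju add-unit swift-storage
```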