stop hook does not fire when units removed from service
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
juju-core | Fix Released | High | William Reade |
pyjuju | Triaged | High | Unassigned |
Bug Description
Scenario:
The cassandra charm is deployed with four units supporting the service, and the ring is balanced automatically across them.
Action:
One or more service units are removed from the cassandra service using juju remove-unit.
Result:
With providers where the underlying machine is not terminated immediately, the removed units remain in the cassandra ring because Cassandra is still running on them.
Ideally the charm should be able to cope with the following actions:
1) Controlled (juju remove-unit) -> the unit removes itself from the ring, purges its local data store and shuts down (a sketch of such a stop hook follows this list). In this case the data on the node is streamed from the node itself.
2) Uncontrolled (unit goes AWOL) -> the remaining units in the cluster need to detect this and remove the node from the ring, ideally through some sort of 'run once per deployed service, not unit' hook. In this case the data is streamed from the remaining replicas of the data held by the unit that has gone AWOL.
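For the controlled case, a stop hook along the following lines would be sufficient once the hook actually fires on remove-unit. This is a minimal sketch only: the data directory path and the init service name are assumptions, not details taken from the cassandra charm.

    #!/usr/bin/env python
    # Minimal sketch of a charm 'stop' hook for the controlled case.
    # Assumptions: Cassandra's data lives in /var/lib/cassandra and is
    # managed by an init service named "cassandra"; neither detail is
    # taken from the actual cassandra charm.
    import shutil
    import subprocess

    DATA_DIR = "/var/lib/cassandra"

    def main():
        # Decommission streams this node's data to the remaining
        # replicas and removes the node from the ring.
        subprocess.check_call(["nodetool", "decommission"])
        # Stop the service once the node has left the ring.
        subprocess.check_call(["service", "cassandra", "stop"])
        # Purge the local data store so a recycled machine does not
        # rejoin the ring with stale state.
        shutil.rmtree(DATA_DIR, ignore_errors=True)

    if __name__ == "__main__":
        main()

The uncontrolled case has no natural home in the current hooks: the surviving units would need to run nodetool removetoken (or nodetool removenode on newer Cassandra releases) against the dead node exactly once, which is what the 'run once per deployed service' hook suggested above would provide.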
Changed in juju:
milestone: none → florence
Changed in juju:
status: New → Confirmed
importance: Undecided → High
Changed in juju:
importance: High → Critical
Changed in juju:
assignee: nobody → Jim Baker (jimbaker)
status: Confirmed → In Progress
Changed in juju:
status: In Progress → Confirmed
assignee: Jim Baker (jimbaker) → nobody
Changed in juju:
milestone: galapagos → honolulu
Changed in juju:
milestone: 0.6 → 0.7
Changed in juju-core:
status: New → Confirmed
importance: Undecided → High
Changed in juju-core:
assignee: nobody → William Reade (fwereade)
status: Confirmed → Fix Released
tags: added: goju-resolved
Changed in juju:
milestone: 0.7 → 0.8
Changed in juju:
status: Confirmed → Triaged
Attached: IRC conversation log around the issue.