Inconsistent DB existence for failed-to-schedule instances
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | Medium | Eoghan Glynn |
Bug Description
When a single instance is booted, the DB entry for the new instance is created by the API layer and a non-error status code is returned to the caller.
If that instance launch then fails in the scheduler, say because no valid host is found, the instance continues to exist in the DB and is reported via the native servers API and the EC2 DescribeInstances API as being in the error state.
Contrast this with the case where multiple instances are booted at once, e.g. via EC2 RunInstances with MinCount > 1. Here the creation of the DB entries for the new instances is delegated from the API layer to the scheduler, and that creation does not occur if the instances cannot be scheduled to a host.
Instead, a 400 Bad Request is returned to the caller, the instances never come into existence in the DB, and they never appear in subsequent server listings.
These two cases should be made consistent; my preference is for the API-driven DB entry creation to be applied in both cases.
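The inconsistency above can be illustrated with a minimal sketch of the two code paths. The names and structures here are purely illustrative, not actual nova internals: `db` stands in for the instances table, and a boolean `host_found` stands in for the scheduler's host lookup.

```python
# Hypothetical sketch of the two boot paths described above.
# None of these names correspond to real nova functions.

db = []  # stand-in for the instances table


def boot_single(host_found: bool) -> str:
    """API-driven path: the DB record is created *before* scheduling,
    so a scheduling failure leaves an ERROR instance in the DB."""
    instance = {"id": len(db), "status": "BUILD"}
    db.append(instance)  # record exists regardless of scheduling outcome
    instance["status"] = "ACTIVE" if host_found else "ERROR"
    return instance["status"]


def boot_many(count: int, host_found: bool) -> None:
    """Scheduler-driven path: DB records are created only after a host
    is found, so a scheduling failure leaves nothing in the DB."""
    if not host_found:
        # maps to the 400 Bad Request returned to the caller
        raise ValueError("400 Bad Request: no valid host")
    for _ in range(count):
        db.append({"id": len(db), "status": "ACTIVE"})
```

With `boot_single(host_found=False)` the instance lingers in `db` with status `ERROR` and shows up in listings; with `boot_many(2, host_found=False)` the caller gets an error and `db` is untouched, which is exactly the divergence the bug describes.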
Changed in nova:
  assignee: nobody → Eoghan Glynn (eglynn)

Changed in nova:
  status: Fix Released → Fix Committed

Changed in nova:
  milestone: folsom-3 → none

Changed in nova:
  milestone: none → folsom-rc1
  status: Fix Committed → Fix Released

Changed in nova:
  milestone: folsom-rc1 → 2012.2
Fixed as a side-effect of:
https://review.openstack.org/11379