When a KVM host does not have enough RAM, "nova show" does not show a fault message
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | Medium | Vish Ishaya |
Bug Description
Steps to reproduce:
1. use a KVM host that is running as a VM on an ESX host with 2048 MB of RAM
2. on the controller, do "nova boot --flavor 1 --image <valid-image-id> vm1"
3. repeat the same nova boot command to create vm2, vm3, vm4, ...
4. at some point, the nova boot command will fail because there is not enough memory. You will see a message about the RAM filter in the nova-scheduler log:
2012-04-03 12:37:20 DEBUG nova.scheduler.
2012-04-03 12:37:20 WARNING nova.scheduler.
5. do a "nova show <vm name that failed>" and notice that there is no "fault" line in the output.
The fault line should show why a VM was not created. The nova show command shows a fault line only if the instance_faults table has data for that instance. The last "nova boot" command failed to create the VM; when that happened, the OpenStack code should have added an entry to the instance_faults table with information about why the VM was not created.
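As a rough illustration of the missing step, the sketch below records a fault row for a failed build. The helper name and schema are hypothetical (not nova's actual API); the real recording happens in compute/manager.py.

```python
# Hypothetical sketch of recording an instance fault so that
# "nova show" has something to display; not nova's actual API.
import datetime

def record_instance_fault(faults_table, instance_uuid, exc):
    """Append a fault entry for instance_uuid, mirroring the kind of
    record compute/manager.py writes when a build fails."""
    faults_table.append({
        "instance_uuid": instance_uuid,
        "code": getattr(exc, "code", 500),  # default HTTP-style code
        "message": str(exc),
        "created_at": datetime.datetime.utcnow(),
    })

# Example: a scheduler failure that should leave a visible fault.
faults = []
record_instance_fault(faults, "vm4-uuid",
                      RuntimeError("No valid host found"))
print(faults[0]["message"])  # No valid host found
```

In the bug above, no such record is written for scheduler failures, so the list stays empty and nova show prints no fault line.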
To see a fault created correctly, do this:
1. set auto_assign_
2. make sure you do NOT have any floating ip pool configured
3. do "nova boot --flavor 1 --image <valid-image-id> <vm-name>"
4. do "nova show <vm-name>"
Notice that there is a "fault" line in the output with a message about why the VM was not created.
The same thing should happen in all cases where a vm is not created.
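The lookup nova show effectively performs can be sketched with an in-memory table: the fault line appears only when a row exists for the instance. The table and column names below follow the instance_faults schema described above, but this is illustrative, not nova's actual query.

```python
# Sketch of the "latest fault for this instance" lookup behind the
# fault line in "nova show" output; schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE instance_faults (
                    instance_uuid TEXT, code INTEGER,
                    message TEXT, created_at TEXT)""")

def latest_fault(conn, uuid):
    # None means no "fault" line would be shown for this instance.
    return conn.execute(
        "SELECT code, message FROM instance_faults "
        "WHERE instance_uuid = ? ORDER BY created_at DESC LIMIT 1",
        (uuid,)).fetchone()

# No row yet: mirrors the bug, nothing to show.
print(latest_fault(conn, "vm4-uuid"))  # None

# Once a fault is recorded, the message becomes visible.
conn.execute("INSERT INTO instance_faults VALUES (?, ?, ?, ?)",
             ("vm4-uuid", 500, "No valid host was found.", "2012-04-03"))
print(latest_fault(conn, "vm4-uuid"))  # (500, 'No valid host was found.')
```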
Changed in nova:
status: Fix Committed → Fix Released
Changed in nova:
milestone: folsom-rc1 → 2012.2
Looks right to me. It looks like compute/manager.py is the only place where instance faults are recorded. If the instance fails to be scheduled, it won't make it to the compute manager at all.