OpenStack Compute (nova) Essex with the latest updates.
We have 3 compute nodes in our OpenStack infrastructure, named opstack87, opstack88, and opstack90
(opstack87 is also our controller node).
The default scheduler options are used, that is:
scheduler_topic="scheduler"
scheduler_manager=nova.scheduler.manager.SchedulerManager
default_schedule_zone=<None>
node_availability_zone="nova"
scheduler_host_manager="nova.scheduler.host_manager.HostManager"
reserved_host_memory_mb=512
scheduler_driver="nova.scheduler.multi.MultiScheduler"
compute_scheduler_driver="nova.scheduler.filter_scheduler.FilterScheduler"
scheduler_available_filters="nova.scheduler.filters.standard_filters"
scheduler_default_filters="AvailabilityZoneFilter,RamFilter,ComputeFilter"
least_cost_functions="nova.scheduler.least_cost.compute_fill_first_cost_fn"
compute_fill_first_cost_fn_weight=1.0
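For context, the scheduler_default_filters chain above means a host must pass every filter before it is weighted. A rough sketch (not nova's actual implementation) of what the RamFilter step checks for each host:

```python
# Rough sketch (NOT nova's actual code) of the RamFilter step in the
# scheduler_default_filters chain: a host passes when the instance's
# requested memory fits into the host's free RAM.

def ram_filter_passes(free_ram_mb, requested_ram_mb):
    """Return True if the host has enough free RAM for the instance."""
    return free_ram_mb >= requested_ram_mb

# An m1.tiny instance requests 512 MB; all three hosts below have far
# more free RAM than that, so all of them pass the filter stage, which
# is exactly what the scheduler log further down confirms.
hosts = {"opstack87": 11496, "opstack88": 7473, "opstack90": 17448}
for name, free_ram in hosts.items():
    print(name, ram_filter_passes(free_ram, 512))
```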
We execute the following nova boot command:
root@opstack87:~# nova boot --flavor 1 --image 1f00e541-7a7d-43ab-a4cf-342802f3bca4 gauthier
with the following result:
+-------------------------------------+----------------------------------------------------------+
| Property | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-SRV-ATTR:host | opstack90 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-00000171 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 8rk7BmD2e3HZ |
| config_drive | |
| created | 2012-07-12T09:28:00Z |
| flavor | m1.tiny |
| hostId | 43b17ba609cacafbdb8961f6d5e8028d0564d640f689b4aad613a349 |
| id | 3cfc6768-d618-4891-b9c2-a08b7ca0910e |
| image | ubuntu_with_cosacs |
| key_name | |
| metadata | {} |
| name | gauthier |
| progress | 0 |
| status | BUILD |
| tenant_id | a41b71ec474b4f9da53d0f6c0e044f48 |
| updated | 2012-07-12T09:28:02Z |
| user_id | 2d6944752cba4942a17cf25365c14bf3 |
+-------------------------------------+----------------------------------------------------------+
root@opstack87:~#
root@opstack87:~# nova list
+--------------------------------------+----------+--------+-------------------+
| ID | Name | Status | Networks |
+--------------------------------------+----------+--------+-------------------+
| 3cfc6768-d618-4891-b9c2-a08b7ca0910e | gauthier | ACTIVE | private=10.0.21.3 |
+--------------------------------------+----------+--------+-------------------+
root@opstack87:~#
Flavor 1 is the m1.tiny flavor with 512 MB of memory:
root@opstack87:~# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 |
| 2 | m1.small | 2048 | 10 | 20 | | 1 | 1.0 |
| 3 | m1.medium | 4096 | 10 | 40 | | 2 | 1.0 |
| 4 | m1.large | 8192 | 10 | 80 | | 4 | 1.0 |
| 5 | m1.xlarge | 16384 | 10 | 160 | | 8 | 1.0 |
+----+-----------+-----------+------+-----------+------+-------+-------------+
The scheduler log file shows the following trace:
2012-07-12 11:28:02 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-5f214e16-8af1-4620-a78b-1bfe29498a67', u'_context_read_deleted': u'no', u'args': {u'request_spec': {u'num_instances': 1, u'block_device_mapping': [], u'image': {u'status': u'active', u'name': u'ubuntu_with_cosacs', u'deleted': False, u'container_format': u'ovf', u'created_at': u'2012-05-21 13:37:45', u'disk_format': u'qcow2', u'updated_at': u'2012-05-21 13:46:36', u'properties': {}, u'min_ram': u'0', u'checksum': u'546e1633a950427d3d7fa35106b632fe', u'min_disk': u'0', u'is_public': True, u'deleted_at': None, u'id': u'1f00e541-7a7d-43ab-a4cf-342802f3bca4', u'size': 1385365504}, u'instance_type': {u'root_gb': 0, u'name': u'm1.tiny', u'flavorid': u'1', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0, u'rxtx_factor': 1.0, u'deleted_at': None, u'vcpu_weight': None, u'id': 2}, u'instance_properties': {u'vm_state': u'building', u'availability_zone': None, u'launch_time': u'2012-07-12T09:28:00Z', u'ephemeral_gb': 0, u'instance_type_id': 2, u'user_data': u'', u'vm_mode': None, u'reservation_id': u'r-4gz82x4e', u'root_device_name': None, u'display_name': u'gauthier', u'uuid': u'3cfc6768-d618-4891-b9c2-a08b7ca0910e', u'display_description': u'gauthier', u'key_data': None, u'power_state': 0, u'progress': 0, u'project_id': u'a41b71ec474b4f9da53d0f6c0e044f48', u'metadata': {}, u'ramdisk_id': u'', u'access_ip_v6': None, u'access_ip_v4': None, u'kernel_id': u'', u'key_name': None, u'user_id': u'2d6944752cba4942a17cf25365c14bf3', u'config_drive_id': u'', u'root_gb': 0, u'locked': False, u'launch_index': 0, u'memory_mb': 512, u'vcpus': 1, u'image_ref': u'1f00e541-7a7d-43ab-a4cf-342802f3bca4', u'architecture': None, u'auto_disk_config': None, u'os_type': None, u'config_drive': u''}, u'security_group': [u'default']}, u'is_first_time': True, u'filter_properties': 
{u'scheduler_hints': {}}, u'topic': u'compute', u'admin_password': '<SANITIZED>', u'injected_files': [], u'requested_networks': None}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id': u'a41b71ec474b4f9da53d0f6c0e044f48', u'_context_timestamp': u'2012-07-12T09:28:00.644495', u'_context_user_id': u'2d6944752cba4942a17cf25365c14bf3', u'method': u'run_instance', u'_context_remote_address': u'172.31.70.87'} from (pid=23457) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-07-12 11:28:02 DEBUG nova.rpc.amqp [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] unpacked context: {'user_id': u'2d6944752cba4942a17cf25365c14bf3', 'roles': [u'admin'], 'timestamp': '2012-07-12T09:28:00.644495', 'auth_token': '<SANITIZED>', 'remote_address': u'172.31.70.87', 'is_admin': True, 'request_id': u'req-5f214e16-8af1-4620-a78b-1bfe29498a67', 'project_id': u'a41b71ec474b4f9da53d0f6c0e044f48', 'read_deleted': u'no'} from (pid=23457) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-07-12 11:28:02 DEBUG nova.scheduler.filter_scheduler [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Attempting to build 1 instance(s) from (pid=23457) schedule_run_instance /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:65
2012-07-12 11:28:02 DEBUG nova.scheduler.host_manager [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Host filter passes for opstack90 from (pid=23457) passes_filters /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:163
2012-07-12 11:28:02 DEBUG nova.scheduler.host_manager [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Host filter passes for opstack88 from (pid=23457) passes_filters /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:163
2012-07-12 11:28:02 DEBUG nova.scheduler.host_manager [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Host filter passes for opstack87 from (pid=23457) passes_filters /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:163
2012-07-12 11:28:02 DEBUG nova.scheduler.filter_scheduler [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Filtered [host 'opstack90': free_ram_mb:17448 free_disk_mb:22528, host 'opstack88': free_ram_mb:7473 free_disk_mb:67584, host 'opstack87': free_ram_mb:11496 free_disk_mb:151552] from (pid=23457) _schedule /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:199
2012-07-12 11:28:02 DEBUG nova.scheduler.filter_scheduler [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Weighted <nova.scheduler.least_cost.WeightedHost object at 0x39b5fd0> from (pid=23457) _schedule /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:209
2012-07-12 11:28:02 DEBUG nova.rpc.amqp [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Making asynchronous cast on compute.opstack90... from (pid=23457) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:351
2012-07-12 11:28:02 DEBUG nova.scheduler.driver [req-5f214e16-8af1-4620-a78b-1bfe29498a67 2d6944752cba4942a17cf25365c14bf3 a41b71ec474b4f9da53d0f6c0e044f48] Casted 'run_instance' to compute 'opstack90' from (pid=23457) cast_to_compute_host /usr/lib/python2.7/dist-packages/nova/scheduler/driver.py:80
All 3 compute nodes pass the filters. The free RAM (MB) for each host is:
opstack87 : 11496
opstack88 : 7473
opstack90 : 17448
The default cost function, nova.scheduler.least_cost.compute_fill_first_cost_fn, returns the free RAM of each host, and that free RAM is the corresponding host's cost.
Thus, the lowest cost is 7473 (opstack88) and the highest cost is 17448 (opstack90).
The host opstack88 should therefore be the winner (according to the code comments and documentation), because it has the lowest cost.
But in fact the winner is opstack90 (see the log above), which has the highest cost.
Regards.
The default for compute_fill_first_cost_fn_weight is actually -1.0. Did you explicitly set it to 1.0 (which would make this a bug), or leave it at the default, which would make this defined behavior?
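To illustrate the point about the weight's sign, here is a minimal sketch (not nova's actual code) of least-cost host selection: each host's cost is multiplied by the configured weight, and the host with the lowest weighted cost wins. With the Essex default weight of -1.0, "lowest weighted cost" becomes "most free RAM" (spread-first), which matches the observed choice of opstack90; a weight of +1.0 would give the fill-first behavior the question expects.

```python
# Minimal sketch of nova's least-cost weighting, assuming the stock cost
# function simply returns each host's free RAM in MB.

def compute_fill_first_cost_fn(free_ram_mb):
    # Cost of a host is just its free RAM.
    return free_ram_mb

def pick_host(hosts, weight):
    """hosts: dict of host name -> free_ram_mb.
    Returns the host with the LOWEST weighted cost."""
    weighted = {name: weight * compute_fill_first_cost_fn(ram)
                for name, ram in hosts.items()}
    return min(weighted, key=weighted.get)

hosts = {"opstack87": 11496, "opstack88": 7473, "opstack90": 17448}

# With weight = +1.0 (fill-first), the host with the least free RAM wins:
print(pick_host(hosts, 1.0))   # opstack88

# With the default weight = -1.0 (spread-first), the host with the MOST
# free RAM wins, matching the scheduler's choice of opstack90 in the log:
print(pick_host(hosts, -1.0))  # opstack90
```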