The code in the ironic virt driver to report VCPU/MEMORY_MB/DISK_GB inventory was removed in Stein:
https://github.com/openstack/nova/commit/a985e34cdeef777fe7ff943e363a5f1be6d991b7
So this bug applies only to Rocky, Queens, and Pike.
Once the ironic instance flavor data migration is complete, it is safe to schedule based solely on ironic node custom resource classes. There is a nova-status check, which goes back to Queens, for verifying that the data migration is complete:
https://review.openstack.org/#/q/Ifd22325e849db2353b1b1eedfe998e3d6a79591c
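Conceptually, the migration is done when every instance on an ironic node has an embedded flavor requesting the node's custom resource class. A rough, hypothetical sketch of that check (the data shapes and names here are illustrative, not nova's actual internals):

```python
# Hypothetical sketch: after the flavor data migration, every ironic
# instance's embedded flavor should carry a resources:CUSTOM_* extra
# spec requesting exactly one unit of the node's resource class.

def unmigrated_instances(instances):
    """Return UUIDs of instances whose flavors lack a custom
    resource class request, i.e. instances not yet migrated."""
    bad = []
    for inst in instances:
        specs = inst.get("flavor_extra_specs", {})
        has_custom = any(
            key.startswith("resources:CUSTOM_") and value == "1"
            for key, value in specs.items()
        )
        if not has_custom:
            bad.append(inst["uuid"])
    return bad

# Example: one migrated instance, one not.
instances = [
    {"uuid": "a1", "flavor_extra_specs": {"resources:CUSTOM_BAREMETAL": "1"}},
    {"uuid": "b2", "flavor_extra_specs": {}},
]
print(unmigrated_instances(instances))  # ['b2']
```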
Workarounds for this would be to use host aggregates to segregate VM and BM hosts and pin flavors to those aggregates, or unset the memory_mb/vcpu properties from ironic nodes, but those workarounds might not be feasible at large scale (like CERN).
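To illustrate why the flavor-side workaround helps: when a baremetal flavor zeroes out the standard resource classes and requests only the custom class, the resulting placement request can no longer match VM hosts' VCPU/MEMORY_MB/DISK_GB inventory. A simplified sketch of how `resources:` extra spec overrides shape the request (this mirrors, but is not, nova's actual scheduler code):

```python
def placement_request(flavor):
    """Build a resource-class -> amount request from a flavor.
    resources:<CLASS>=<amount> extra specs override the standard
    VCPU/MEMORY_MB/DISK_GB amounts; an override of 0 drops that
    class from the request entirely."""
    request = {
        "VCPU": flavor["vcpus"],
        "MEMORY_MB": flavor["ram"],
        "DISK_GB": flavor["disk"],
    }
    for key, value in flavor.get("extra_specs", {}).items():
        if key.startswith("resources:"):
            request[key[len("resources:"):]] = int(value)
    # Zeroed classes are removed, so they cannot match VM host inventory.
    return {rc: amt for rc, amt in request.items() if amt > 0}

# A baremetal flavor that zeroes the standard classes and requests
# only the ironic node's custom resource class (names illustrative):
bm_flavor = {
    "vcpus": 16, "ram": 65536, "disk": 400,
    "extra_specs": {
        "resources:VCPU": "0",
        "resources:MEMORY_MB": "0",
        "resources:DISK_GB": "0",
        "resources:CUSTOM_BAREMETAL_GOLD": "1",
    },
}
print(placement_request(bm_flavor))  # {'CUSTOM_BAREMETAL_GOLD': 1}
```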
We can add a workaround config option to nova to disable reporting standard resource class inventory, for operators who can't use the workarounds mentioned above and who know they have completed their data migrations.
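Such an option would presumably live in nova's existing [workarounds] config group; the option name below is hypothetical, purely to illustrate the shape of the change:

```ini
[workarounds]
# Hypothetical option name: when true, the ironic virt driver stops
# reporting VCPU/MEMORY_MB/DISK_GB inventory to placement, so VM
# flavors can no longer be scheduled onto baremetal nodes. Only safe
# once the instance flavor data migration is complete.
disable_standard_resource_class_inventory = true
```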