Looking briefly at the code of the other drivers that try to report this (xenapi and ironic), it is likely broken for at least xenapi as well.
The crux of the issue is that the resource tracker works by looking at the instances Nova knows about, as well as any ongoing migrations. Anything a virt driver reports in the dictionary returned from get_available_resource should therefore be based only on the available resources and should never try to factor in resource usage. Only the resource tracker, while holding the global resource lock (COMPUTE_RESOURCE_SEMAPHORE), knows the current usage of resources, since it can take into account migrations that are in flight, etc.
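To make that split concrete, here is a minimal sketch (not Nova's actual code) of a driver that sticks to reporting raw capacity. The key names follow the get_available_resource() contract; the _get_* helpers and their canned return values are hypothetical stand-ins for hypervisor queries.

    class ExampleDriver(object):

        def get_available_resource(self, nodename):
            total_gb, free_gb = self._get_local_disk_stats()
            return {
                'vcpus': self._get_vcpu_total(),
                'memory_mb': self._get_memory_mb_total(),
                'local_gb': total_gb,
                # Raw free space as seen by the hypervisor. Deducting
                # per-instance and per-migration usage is the resource
                # tracker's job, done under COMPUTE_RESOURCE_SEMAPHORE.
                'disk_available_least': free_gb,
                # Remaining required keys (vcpus_used, memory_mb_used,
                # local_gb_used, hypervisor_type, ...) omitted for brevity.
            }

        # Hypothetical hypervisor queries, with fixed values so the
        # sketch runs standalone.
        def _get_local_disk_stats(self):
            return 500, 320  # (total GB, free GB)

        def _get_vcpu_total(self):
            return 16

        def _get_memory_mb_total(self):
            return 65536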
Unfortunately, both libvirt and (I think) xenapi look at the instances currently known to the hypervisor, which are not all of the instances we should be taking into account, and deduce the final disk_available_least number from those.
To fix this we would have to rework how disk_available_least is calculated: we'd have to make sure the drivers report only the total available space, and then update the usage _for each instance and migration_ to come up with the final number (see the sketch below).
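A minimal sketch of that proposed calculation, not the resource tracker's real code: start from the driver-reported free space and deduct every instance and in-flight migration Nova tracks, all while holding the global lock so nothing can claim space in between. A plain threading.Lock stands in for the COMPUTE_RESOURCE_SEMAPHORE name that Nova takes via utils.synchronized, and the disk_gb attribute is hypothetical (Nova would derive claimed sizes from flavors and migration records).

    import threading

    # Stand-in for Nova's global resource lock.
    COMPUTE_RESOURCE_SEMAPHORE = threading.Lock()

    def compute_disk_available_least(driver_free_gb, instances, migrations):
        """Deduct every tracked claim from the driver-reported free space.

        `instances` and `migrations` are iterables of objects exposing a
        hypothetical `disk_gb` attribute.
        """
        with COMPUTE_RESOURCE_SEMAPHORE:
            claimed_gb = sum(i.disk_gb for i in instances)
            claimed_gb += sum(m.disk_gb for m in migrations)
            return driver_free_gb - claimed_gb

The point of doing the subtraction here rather than in the driver is that only this code path can see claims that exist in Nova's database but not yet on the hypervisor, such as an incoming migration that hasn't created its disks.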
I believe this should already work for XenAPI? We get the physical utilisation from the SR, rather than counting up from instances. VDIs for disks that are being live migrated are created at the start of the live migration.
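A sketch of the approach described above: read total and used space straight from the Storage Repository instead of summing per instance. This assumes `session` is an authenticated XenAPI session and `sr_ref` is the opaque reference of the default SR; the XenAPI Python bindings return int64 fields as strings, hence the int() conversions.

    def sr_disk_stats(session, sr_ref):
        physical_size = int(session.xenapi.SR.get_physical_size(sr_ref))
        physical_utilisation = int(
            session.xenapi.SR.get_physical_utilisation(sr_ref))
        # VDIs created at the start of a live migration already count
        # towards physical_utilisation, so in-flight incoming migrations
        # are reflected here without extra bookkeeping.
        return physical_size, physical_size - physical_utilisation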
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/xenapi/driver.py#n448
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/xenapi/host.py#n243
Was the issue identified by inspection, or is there a failure case that has been seen?