Exception handling for cpu_policy and cpu_thread_policy image and flavor metadata is inconsistent

Bug #1628504 reported by Stephen Finucane
Affects: OpenStack Compute (nova)
Status: In Progress

Bug Description


Our criteria for how we deal with conflicting image metadata and flavor extra specs for both 'cpu_policy' and 'cpu_thread_policy' are rather inconsistent. We should probably configure this such that image metadata is the de facto spec, and a competing flavor extra spec causes an error. Original comment below from https://review.openstack.org/#/c/361140.

Steps to reproduce

1. Create a new image with 'hw_cpu_policy=dedicated', and a new flavor with 'hw:cpu_thread_policy=shared'
2. Boot an instance using this image and flavor

Expected result

The VM should fail to boot, citing a conflict in policies

Actual result

The VM boots.

Additional information

There are two NUMA-related properties:

* CPU allocation policy. Values: dedicated (stricter), shared (softer, default)
* CPU thread allocation policy. Values: prefer (softer, default), isolate (medium), require (stricter)

The behaviour in case of conflicts [1]:

CPU allocation policy:

  flavor     image      result
  dedicated  shared     dedicated (stricter, non-default)
  shared     dedicated  exception

CPU thread allocation policy:

  flavor     image            result
  prefer     isolate/require  prefer (softer, default)
  isolate    prefer/require   exception
  require    prefer/isolate   exception
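The two resolution behaviors in the tables above can be sketched as follows. This is a minimal illustrative model, not nova's actual code (the real logic is in nova/virt/hardware.py [1]); the function and exception names are hypothetical:

```python
# Illustrative sketch of the conflict behavior described above.
# Names are hypothetical; nova's real logic lives in nova/virt/hardware.py.

class PolicyConflict(Exception):
    pass

def resolve_cpu_policy(flavor, image):
    """The stricter 'dedicated' wins, unless the flavor explicitly asks for
    'shared' while the image asks for 'dedicated', which raises."""
    if flavor == "shared" and image == "dedicated":
        raise PolicyConflict("cpu_policy conflict")
    if "dedicated" in (flavor, image):
        return "dedicated"
    return "shared"

def resolve_cpu_thread_policy(flavor, image):
    """The softer default 'prefer' in the flavor silently wins; any other
    flavor value that disagrees with the image raises."""
    if flavor == "prefer":
        return "prefer"
    if image is not None and image != flavor:
        raise PolicyConflict("cpu_thread_policy conflict")
    return flavor
```

Note how the two functions disagree on strategy: the first prefers the stricter non-default value, the second the softer default, which is exactly the inconsistency this bug is about.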

As you can see, we have two rather different behaviors for NUMA-related properties. In the first case we prefer the stricter, non-default value; in the second case we prefer the softer, default value.

Also, the **only** phrase in the spec about all of this [2] is: "Image property will only be honoured if the flavor does not already have a threads policy set. This ensures the cloud administrator can have absolute control over threads policy if desired."
I'll repeat: this is the *only* phrase about conflicts. The flavor docs [3] say nothing about the corresponding image properties or about conflict cases. So, as a user, at this point I have no idea what will happen in the case of a conflict.

Moreover, in the image metadata docs [4] we have this brilliant phrase: "Image metadata takes precedence over flavor extra specs. Thus, configuring competing policies causes an exception. By setting a shared policy through image metadata, administrators can prevent users configuring CPU policies in flavors and impacting resource utilization."

So now, as a user, I will say "WTF?". I know that the last example is a docs bug. But even if we replace "image" with "flavor" in the image docs, it still wouldn't be completely true. There is no information about the first line of each policy conflict table (at the beginning of my message). In those cases we don't get an exception; everything works fine. But the user doesn't know that, and the user doesn't know what 'fine' means: will the 'shared' policy be used, or 'dedicated'?
We have had such horrible docs for about two years. The only way to figure out what is going on is to look at the code.

[1] https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L1141-L1172
[2] https://specs.openstack.org/openstack/nova-specs/specs/juno/approved/virt-driver-cpu-pinning.html
[3] http://docs.openstack.org/admin-guide/compute-flavors.html
[4] http://docs.openstack.org/admin-guide/compute-cpu-topologies.html

tags: added: numa
description: updated
Revision history for this message
Sylvain Bauza (sylvain-bauza) wrote :

Yeah, that's a common issue with some scheduler filters, where the precedence between image metadata and a flavor extra spec is not really clarified. We have a couple of filters that mention that very poorly.

From a pure process perspective, though, I think the proposed fix would be a behavioural change (in what we now accept and what we now refuse) that would require an upgrade note mentioning the new UX. For that specific reason, I actually wonder whether it wouldn't be better to write a blueprint for this and track it spec-less (if agreed by consensus).

For the moment, I'm tracking the bug as Low, but I'm very tempted to turn it into Wishlist.

Changed in nova:
importance: Undecided → Low
tags: added: scheduler
Sean Dague (sdague)
Changed in nova:
status: New → Confirmed
Revision history for this message
Sean Dague (sdague) wrote :

Automatically discovered version juno in description. If this is incorrect, please update the description to include 'nova version: ...'

tags: added: openstack-version.juno
Sean Dague (sdague)
Changed in nova:
status: Confirmed → In Progress
Revision history for this message
sean mooney (sean-k-mooney) wrote :

For what it's worth, the thread policies spec clearly stated that the image metadata value would only be honoured if the flavor value was not set.

If both were set, I would have expected the VM to boot with the flavor value.
If you look at the Juno CPU pinning spec, the image property for CPU pinning was never actually approved in Juno, where we clarified that the image metadata value would be honoured only if the flavor value was unset.
When it was re-proposed for Kilo, image metadata support was again not part of the spec.

Stephen retroactively modified the spec in 2016 and added the image metadata item because it had been added to the code, but it was never intended to support enabling pinning via the image.

Enabling/disabling CPU pinning was intended to be admin-only via the flavor, which is why handling of conflicting values was not called out in the spec for hw:cpu_policy but was for hw:cpu_thread_policy and hw_cpu_thread_policy.

We initially had a convention, until Mitaka or Newton, to give precedence to the flavor: whenever there was a conflict, use the flavor value and ignore the image. We have since changed our stance on that and moved newer code to raise an explicit error.
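The convention change described above could be sketched like this; these are hypothetical helper names for illustration, not nova's actual API:

```python
# Hypothetical sketch of the old vs. new conflict conventions; the function
# names are illustrative, not nova's actual API.

def old_resolve(flavor_value, image_value):
    # Older convention (pre Mitaka/Newton): the flavor silently wins on
    # any conflict; the image value is ignored.
    return flavor_value if flavor_value is not None else image_value

def new_resolve(flavor_value, image_value):
    # Newer convention: a conflict is an explicit, user-visible error.
    if (flavor_value is not None and image_value is not None
            and flavor_value != image_value):
        raise ValueError(
            f"conflicting flavor/image values: "
            f"{flavor_value!r} vs {image_value!r}")
    return flavor_value if flavor_value is not None else image_value
```

The new convention trades silent, surprising precedence rules for an early, debuggable failure at boot time.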

The real bug is that hw_cpu_policy was supported in the image at all in the code, but it's too late to remove that now.
