Instance resource quota not observed for non-ephemeral storage

Bug #1445637 reported by Tony Walker
This bug affects 2 people
Affects: OpenStack Compute (nova)
Status: Confirmed
Importance: Low
Assigned to: Unassigned

Bug Description

I'm using a nova built from stable/kilo and trying to implement instance IO resource quotas for disk as per https://wiki.openstack.org/wiki/InstanceResourceQuota#IO.

While this works when building an instance from ephemeral storage, it does not when booting from a bootable cinder volume. I realize I can implement this using cinder quota but I want to apply the same settings in nova regardless of the underlying disk.

Steps to reproduce:

nova flavor-create iolimited 1 8192 64 4
nova flavor-key 1 set quota:disk_read_iops_sec=10000
Boot an instance using the above flavor
Guest XML is missing <iotune> entries

Expected result:
<snip>
      <target dev='vda' bus='virtio'/>
      <iotune>
        <read_iops_sec>10000</read_iops_sec>
      </iotune>
</snip>

This relates somewhat to https://bugs.launchpad.net/nova/+bug/1405367 but that case is purely hit when booting from RBD-backed ephemeral storage.

Essentially, for non-ephemeral disks, a call is made to _get_volume_config() which creates a generic LibvirtConfigGuestDisk object but no further processing is done to add extra-specs (if any).

I've essentially copied the disk_qos() method from the associated code review (https://review.openstack.org/#/c/143939/) to implement my own patch (attached).
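The approach above boils down to mapping `quota:disk_*` flavor extra specs onto the libvirt disk config before the guest XML is generated. A minimal sketch of that mapping follows; `DiskConfig` and `disk_qos` are illustrative stand-ins, not nova's actual classes (nova's real object is `LibvirtConfigGuestDisk`, which carries the same attribute names):

```python
# Hypothetical sketch of a disk_qos()-style helper. DiskConfig and
# disk_qos are stand-ins for illustration, not nova's actual API.

# Extra-spec suffixes that libvirt's <iotune> element can express.
TUNABLE_KEYS = (
    "disk_read_bytes_sec", "disk_read_iops_sec",
    "disk_write_bytes_sec", "disk_write_iops_sec",
    "disk_total_bytes_sec", "disk_total_iops_sec",
)

class DiskConfig:
    """Minimal stand-in for nova's LibvirtConfigGuestDisk."""
    def __init__(self):
        for key in TUNABLE_KEYS:
            setattr(self, key, None)

def disk_qos(conf, extra_specs):
    """Copy quota:disk_* flavor extra specs onto the disk config,
    mirroring what the driver already does for ephemeral disks."""
    for spec, value in extra_specs.items():
        scope = spec.split(":")
        if len(scope) == 2 and scope[0] == "quota" and scope[1] in TUNABLE_KEYS:
            setattr(conf, scope[1], int(value))

conf = DiskConfig()
disk_qos(conf, {"quota:disk_read_iops_sec": "10000"})
print(conf.disk_read_iops_sec)  # -> 10000
```

Calling a helper like this from the volume path (after `_get_volume_config()` returns) is what the attached patch does for non-ephemeral disks.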

Tags: libvirt quotas
Revision history for this message
Tony Walker (tony-walker-h) wrote :
Changed in nova:
importance: Undecided → Low
status: New → Confirmed
tags: added: quotas
tags: added: libvirt
Changed in nova:
assignee: nobody → lyanchih (lyanchih)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/201019

Changed in nova:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by lyanchih (<email address hidden>) on branch: master
Review: https://review.openstack.org/201019

Revision history for this message
Chung Chih, Hung (lyanchih) wrote :

The cinder client already offers a qos command. Instance quota settings for non-ephemeral disks should be set via the cinder CLI instead of inherited from the instance's flavor.

Changed in nova:
status: In Progress → Invalid
Revision history for this message
Tony Walker (tony-walker-h) wrote :

I'm aware of the ability to do this via cinder quota, but why force that as the only option? By applying this via host aggregates I can generically control users rather than having to individually manage their instances on a per-volume basis. Given this is done for ephemeral volumes, can you explain why a solution in nova will be abandoned?

Revision history for this message
Chung Chih, Hung (lyanchih) wrote :

I'm sorry, I was too hasty in changing this to Invalid.
Originally I thought that since non-ephemeral disks are managed by cinder, those settings should depend on it.
Even if you assign a higher value in the flavor, the rate would still be limited by cinder, so you couldn't observe the real rate.
But I also think the flavor is a hardware template, and its settings should apply as well.
Maybe we could select the minimum quota value between the cinder and flavor settings.
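The minimum-of-both idea could look like the sketch below: for each QoS key, take the stricter (smaller) of the cinder and flavor values when both are set. This is purely illustrative; `merge_qos` is a hypothetical helper, not anything in nova or cinder:

```python
# Illustrative sketch only: combine per-disk QoS limits from cinder and
# the flavor by taking the stricter (minimum) non-None value per key.
def merge_qos(cinder_limits, flavor_limits):
    merged = {}
    for key in set(cinder_limits) | set(flavor_limits):
        values = [v for v in (cinder_limits.get(key), flavor_limits.get(key))
                  if v is not None]
        if values:
            merged[key] = min(values)
    return merged

result = merge_qos({"read_iops_sec": 5000},
                   {"read_iops_sec": 10000, "write_iops_sec": 8000})
# The stricter read limit (5000) wins; the flavor's write limit is kept.
print(result)
```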

Changed in nova:
status: Invalid → In Progress
Revision history for this message
Tony Walker (tony-walker-h) wrote :

No problem - just wanted to understand. Thanks for reconsidering. It would certainly be helpful to accept limits from Nova only (in the case that cinder quotas aren't being set).

Revision history for this message
Pushkar Umaranikar (pushkar-umaranikar) wrote :

@lyanchih, are you still working on this? Also, changing status to "new" as the existing patch has a -2 from a core reviewer.

Changed in nova:
status: In Progress → New
Changed in nova:
status: New → Confirmed
assignee: Chung Chih, Hung (lyanchih) → nobody
Revision history for this message
Markus Zoeller (markus_z) (mzoeller) wrote :

In review [1] it was determined that this is a feature request (=> "wishlist"). The effort is driven by the blueprint [2] (=> "Opinion"). No need for this bug report anymore.

References:
[1] https://review.openstack.org/#/c/201019/
[2] https://blueprints.launchpad.net/nova/+spec/non-ephemeral-storage-quota-assign

Changed in nova:
status: Confirmed → Opinion
importance: Low → Wishlist
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Michael Still (<email address hidden>) on branch: master
Review: https://review.openstack.org/201019
Reason: This code hasn't been updated in a long time, and is in merge conflict. I am going to abandon this review, but feel free to restore it if you're still working on this.

Prateek Arora (parora)
Changed in nova:
assignee: nobody → Prateek Arora (parora)
Revision history for this message
Matt Riedemann (mriedem) wrote :

I don't think this is orchestration/proxy work that we want Nova to do when creating a volume during the boot from volume scenario. There is agreement, however, to allow passing a volume type when booting an instance for the BFV scenario and the volume type can have the QoS specs in it, see:

http://lists.openstack.org/pipermail/openstack-dev/2016-August/102401.html

-- mriedem 20160920

Revision history for this message
Prateek Arora (parora) wrote :

Removing myself as assignee as this is something we won't incorporate in nova.

Changed in nova:
assignee: Prateek Arora (parora) → nobody
Revision history for this message
sean mooney (sean-k-mooney) wrote :

i would set this to high, but since it has been unaddressed since 2016 and no one has been screaming about it,
i'm setting it to low.

Changed in nova:
status: Opinion → Confirmed
importance: Wishlist → Low
Revision history for this message
Matthew Booth (mbooth-9) wrote :

This feature request seems to have been previously comprehensively killed in https://review.openstack.org/#/c/201019/ . Is this still something we plan to do? Assuming not, it might be more honest to close this.

Revision history for this message
Tony Walker (tony-walker-h) wrote :

Works for me. Please close.
