InstanceLocalityFilter passed as config-flags does not work with cinder-volume running alongside Nova Compute

Bug #1825159 reported by Pedro Guimarães
Affects: OpenStack Cinder Charm

Bug Description

I am adding a new filter to my deployment with:

config-flags: "scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter"

This should allow me to force LVM volumes to be allocated on the same host as their instances by using:
cinder create --hint local_to_instance=<instance-id> <size>
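For reference, this is roughly how I apply the filter list through the charm and then request a co-located volume (a sketch; the application name `cinder` and the exact flag string come from my bundle, so verify against your own deployment):

```shell
# Apply the scheduler filter list via the charm's experimental
# config-flags option (application name assumed to be "cinder").
juju config cinder \
  config-flags="scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter"

# Hint the scheduler to place the LVM volume on the same host
# as the given instance.
cinder create --hint local_to_instance=<instance-id> <size>
```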

My bundle:

Once all charms reach active state, I can run cinder service list and see that LVM is registered under the appropriate host.
If I run the cinder-volume service anywhere else, it registers under a different host name, and InstanceLocalityFilter will then fail to match.
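To check the host names I mention above, either client works (a sketch; InstanceLocalityFilter can only match when the cinder-volume host name lines up with the Nova Compute host):

```shell
# List cinder services and the host each backend registered under.
cinder service-list

# Equivalent with the unified OpenStack client.
openstack volume service list
```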

I can create disks.
Here are the logs:
(The disk in the error state is from my attempt to force Cinder to create it on another host; that error is expected.)

But when I try to attach it to VMs, I get the following error in cinder-volume.log:

The first thing that draws my attention: there is no mention of Ceph whatsoever in the service list.
If I add the relation [ cinder-volume cinder-ceph ],
I see a new service list as:

As a result, the LVM service is now marked as down, which means InstanceLocalityFilter won't work any longer.
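The relation I am describing was added like this (a sketch; application names are the ones from my bundle):

```shell
# Relate the cinder-ceph subordinate to the volume application so the
# Ceph backend shows up in the service list.
juju add-relation cinder-volume cinder-ceph

# Confirm the relation landed.
juju status --relations
```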

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Use of config-flags is experimental and typically indicates that new charm feature work is necessary. This path is untested and not implemented in OpenStack Charms. If we need to get this feature in place, we should lab it, write a specification based on the findings in the lab, and put a plan in place to properly introduce the charm feature.

Changed in charm-cinder:
importance: Undecided → Wishlist
Revision history for this message
Pedro Guimarães (pguimaraes) wrote :

Ryan, I do think there is an issue somewhere (hopefully in my configs).
Just to clarify, I am following the step-by-step guide described in:

Section "Separate volume units for scale out, using local storage and iSCSI"
Please double-check juju status --relations to confirm that all relations from this section have been applied:

Notice that we have cinder-volume (with only volume service) and cinder (with both api and scheduler).

You can see in my bundle that config-flags has been commented out:

So I am now running what the documentation suggests as the standard cinder-volume scale-out setup.
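The split between a cinder application (api + scheduler) and a cinder-volume application (volume only) can be sketched as below; the enabled-services option name is taken from the cinder charm's documented configuration, but verify it against your charm revision:

```shell
# Deploy the cinder charm twice: one application serving API and
# scheduler, one serving only the volume service on the compute hosts.
juju deploy cinder --config enabled-services="api,scheduler"
juju deploy cinder cinder-volume --config enabled-services="volume"
```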

Even in this case, when running LVM, I get the very same error in cinder-volume:

I did some searching in the docs and found a manual on multi-backend:

Even after applying the steps above as recommended, I still see the same error when trying to attach my volume to the server.

I think the co-location of nova-compute and cinder-volume is causing some issue, but I cannot figure out what, exactly.

Note that in the original bug text I did enable InstanceLocalityFilter, but the issue only pops up later, at cinder-volume. If InstanceLocalityFilter were the cause, I would see the scheduler filtering (or failing to filter) hosts incorrectly. The InstanceLocalityFilter itself seems to be fine.

Revision history for this message
Ryan Beisner (1chb1n) wrote :

OK, that seems to be a different bug. It is certainly different from the title and description of this one.

Revision history for this message
Pedro Guimarães (pguimaraes) wrote :

Ryan, agreed. My original understanding of this bug was that the issues were related to the filter I added, but I can see that is no longer the case.

I've filed a separate bug on this:
I put all info I've gathered there.

I'll leave this bug open in case we find any InstanceLocalityFilter-related issues.

Changed in charm-cinder:
status: New → Triaged