commit ce7ad878093e5730b8dcec5ca717b831db8965eb
Author: Matt Riedemann <email address hidden>
Date: Wed May 30 12:07:53 2018 -0400
Use instance project/user when creating RequestSpec during resize reschedule
When rescheduling from a failed cold migrate / resize, the compute
service does not pass the request spec back to conductor so we
create one based on the in-scope variables.
This introduces a problem for some scheduler filters like the
AggregateMultiTenancyIsolation filter since it will create the
RequestSpec using the project and user information from the current
context, which for a cold migrate is the admin and might not be
the owner of the instance (which could be in some other project).
So the AggregateMultiTenancyIsolation filter might reject the
request or select a host that fits an aggregate for the admin but
not the end user.
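The tenant check that goes wrong here can be illustrated with a minimal, self-contained sketch. This is a simplified stand-in, not nova's actual filter code: the function name and the shape of the aggregate metadata (a set under a 'filter_tenant_id' key) are hypothetical.

```python
# Simplified sketch of an aggregate multi-tenancy isolation check.
# If the RequestSpec carries the admin's project (from the request
# context) instead of the instance owner's project, this comparison
# is made against the wrong tenant.

def host_passes(host_aggregate_metadata, spec_project_id):
    """Reject the host if its aggregate isolates tenants and the
    request's project is not in the allowed set."""
    allowed = host_aggregate_metadata.get('filter_tenant_id')
    if allowed is None:
        # Aggregate does not isolate tenants; any project may land here.
        return True
    return spec_project_id in allowed
```

With the bug, spec_project_id would be the admin's project during a cold migrate, so a host reserved for the instance owner's tenant would be wrongly rejected (or an admin-only host wrongly selected).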
This fixes the problem by using the instance project/user information
when constructing the RequestSpec which will take priority over
the context in RequestSpec.from_components().
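The precedence the fix relies on can be sketched roughly as follows; the helper name is hypothetical, and the real logic lives inside RequestSpec.from_components(), which falls back to the request context only when no explicit project is passed in.

```python
# Hypothetical sketch of the project-id precedence used by the fix:
# an explicitly supplied project (taken from the instance being
# rescheduled) wins over the possibly-admin request context.

def pick_project_id(context_project_id, instance_project_id=None):
    """Return the project to record in the request spec."""
    if instance_project_id is not None:
        return instance_project_id
    return context_project_id
```

Before the fix, nothing explicit was passed during a resize reschedule, so the spec silently inherited the admin's project from the context.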
Long-term we need the compute service to pass the request spec back
to the conductor during a reschedule, but we do this first since we
can backport it.
NOTE(mriedem): RequestSpec.user_id was added in Rocky in commit
6e49019fae80586c4bbb8a7281600cf6140c176a so we have to remove its
usage in this backport.
Reviewed: https://review.openstack.org/577926
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ce7ad878093e5730b8dcec5ca717b831db8965eb
Submitter: Zuul
Branch: stable/pike
Conflicts:
	nova/tests/unit/conductor/test_conductor.py
NOTE(mriedem): The conflict is due to not having change Ibc44e3b2261b314bb92062a88ca9ee6b81298dc3 in Pike.
Change-Id: Iaaf7f68d6874fd5d6e737e7d2bc589ea4a048fee
Closes-Bug: #1774205
(cherry picked from commit 8c216608194c89d281e8d2b66abd1e50e2405b01)
(cherry picked from commit 1162902280d06eb6201738ef54ff8300f974b374)