Scheduling is not even among multiple thin provisioning pools which have different sizes
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Incomplete | Low | Unassigned |
Bug Description
Description
===========
Scheduling is not even among multiple thin provisioning pools
which have different sizes. For example, suppose there are two thin
provisioning pools: Pool0 has 10T capacity, Pool1 has 30T capacity, and the
max_over_subscription_ratio of both is 20. We assume that the
provisioned_capacity of Pool0 is 0 and the provisioned_capacity of Pool1 is 250T.
According to the formula in the cinder source code, the free capacity of Pool0
is 10*20-0=200T and the free capacity of Pool1 is 30*20-250=350T.
So it is clear that a newly created volume is
scheduled to Pool1 instead of Pool0. However, Pool0 ought to be scheduled,
since it has more real capacity available: nothing has been provisioned from
it yet, while Pool1 is already heavily oversubscribed.
In short, the scheduler tends to favor the pool with the larger size.
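To make the unevenness concrete, here is a minimal sketch of the comparison
(the helper names are mine, and the formula is the simplified one from the
description above, which ignores reserved space):

def virtual_free(total, max_ratio, provisioned):
    # What the scheduler compares in the thin-provisioning case.
    return total * max_ratio - provisioned

def current_over_subscription(total, provisioned):
    # How heavily the pool's real capacity is already committed.
    return provisioned / total

# Pool0: 10T real, 0T provisioned.  Pool1: 30T real, 250T provisioned.
print(virtual_free(10, 20, 0), virtual_free(30, 20, 250))    # 200 vs 350
print(current_over_subscription(10, 0),
      current_over_subscription(30, 250))                    # 0.0 vs ~8.3

# The scheduler compares 200 vs 350 and picks Pool1, even though Pool1's
# real capacity is already committed more than eight times over.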
Steps to reproduce
==================
1. Provision two thin provisioning pools whose sizes differ
significantly. For example, Pool0 has 10T capacity and Pool1
has 30T capacity.
2. Guarantee that the max_over_subscription_ratio of both pools
is 20, the provisioned_capacity of Pool0 is 0, and the
provisioned_capacity of Pool1 is 250T.
3. Create a new volume.
4. Observe which pool the volume is scheduled to (a scripted
version of steps 3 and 4 is sketched below).
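For steps 3 and 4, a rough scripted version using python-cinderclient is
sketched below; the auth values are placeholders for this illustration, and
the host attribute is only returned for admin users:

import time
from keystoneauth1 import loading, session
from cinderclient import client

# Placeholder credentials -- substitute the values for your own deployment.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                username='admin',
                                password='secret',
                                project_name='admin',
                                user_domain_id='default',
                                project_domain_id='default')
cinder = client.Client('3', session=session.Session(auth=auth))

# Step 3: create a new volume.
vol = cinder.volumes.create(size=1, name='thin-scheduling-repro')

# Wait for the scheduler to place the volume.
while vol.status == 'creating':
    time.sleep(1)
    vol = cinder.volumes.get(vol.id)

# Step 4: the backend#pool the volume landed on (admin-only attribute).
print(getattr(vol, 'os-vol-host-attr:host'))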
Expected result
===============
The new volume is scheduled to Pool0.
Actual result
=============
The new volume is scheduled to Pool1 due to its bigger size.
Environment
===========
master branch of cinder
Code
====
# cinder/utils.py
import math

def calculate_virtual_free_capacity(total_capacity,
                                    free_capacity,
                                    provisioned_capacity,
                                    thin_provisioning_support,
                                    max_over_subscription_ratio,
                                    reserved_percentage,
                                    thin):
    total = float(total_capacity)
    reserved = float(reserved_percentage) / 100
    if thin and thin_provisioning_support:
        free = (total * max_over_subscription_ratio
                - provisioned_capacity
                - math.floor(total * reserved))
    else:
        # Calculate how much free space is left after taking into
        # account the reserved space.
        free = free_capacity - math.floor(total * reserved)
    return free
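Plugging the report's numbers into this function reproduces the comparison
above; the reserved_percentage of 0 and the free_capacity values are
assumptions made only for this illustration (free_capacity is not used in the
thin branch):

pool0_free = calculate_virtual_free_capacity(
    total_capacity=10, free_capacity=10, provisioned_capacity=0,
    thin_provisioning_support=True, max_over_subscription_ratio=20,
    reserved_percentage=0, thin=True)   # 10*20 - 0 = 200
pool1_free = calculate_virtual_free_capacity(
    total_capacity=30, free_capacity=5, provisioned_capacity=250,
    thin_provisioning_support=True, max_over_subscription_ratio=20,
    reserved_percentage=0, thin=True)   # 30*20 - 250 = 350

# The capacity weigher ranks pools by this value, so Pool1 is chosen.
print(pool0_free, pool1_free)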
description: updated
tags: added: provisioning thin
tags: added: thin-provisioning; removed: provisioning thin
Changed in cinder:
status: New → Incomplete
Hi zhaoleilc,
Regarding the last Cinder meeting[1] we need more information from you:
- Are you testing with Devstack or with a deployment that has 3 schedulers?
(a) Anything that happens with multiple schedulers but doesn't happen with a single one can be attributed to differences in in-memory data. It may be that the requests are going to different schedulers and each scheduler has different in-memory data at the time of receiving the request.
(b) This will also depend on how you configure the capacity weigher.
Regards,
Sofia

[1] http://eavesdrop.openstack.org/meetings/cinder/2021/cinder.2021-03-03-14.00.log.html