Comment 2 for bug 1746238

Lucian Petrut (petrutlucian94) wrote:

I'll be honest, this has always confused me. Cinder considers the provisioned backend capacity to be the total allocated volume size on the backend (including non-Cinder volumes), as noted in https://github.com/openstack/cinder/blob/12.0.0.0b3/cinder/scheduler/host_manager.py#L121-L124
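
For context, when thin provisioning is enabled the scheduler turns this provisioned capacity into a "virtual" free capacity. A minimal Python sketch of that calculation (simplified from Cinder's capacity handling; the function name and arguments here are illustrative, not the exact Cinder API):

    def virtual_free_capacity(total_gb, free_gb, provisioned_gb,
                              max_over_subscription_ratio,
                              reserved_percentage, thin=True):
        # Space the backend keeps off-limits for itself.
        reserved_gb = total_gb * reserved_percentage / 100.0
        if thin:
            # Thin provisioning: headroom is bounded by the
            # over-subscription ratio, not by physical free space.
            return (total_gb * max_over_subscription_ratio
                    - provisioned_gb - reserved_gb)
        # Thick provisioning: only physical free space counts.
        return free_gb - reserved_gb

    # Backend from the example below: 10 GB share, 9 GB free,
    # 4 GB provisioned; with a ratio of 20.0 and nothing reserved
    # the scheduler would see 196 GB of virtual headroom.
    print(virtual_free_capacity(10, 9, 4, 20.0, 0))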

So, let's take an example:

We have a 10 GB share and two thinly provisioned volumes: a 2 GB empty volume created by Cinder, and a 2 GB non-Cinder volume that has 1 GB of physically used space.

What the SMBFS driver will report:
* total_capacity: 10 GB
* free_capacity: 9 GB
* allocated_capacity: 1 GB (we currently report the physically used capacity, whereas the volume manager would report 2 GB, i.e. the total size of the Cinder-created volumes; so we may be wrong here and may need to drop this field)
* provisioned_capacity: 2 GB (we can only account for the volumes created by Cinder; we cannot query in-use images that were not created by Cinder). We usually recommend that customers avoid using the shares for other purposes, but we may slightly improve the formula to take other files residing on the share into account (see the sketch below).
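
To make that last point concrete, here is a hypothetical sketch of such an improved formula: the virtual size of the Cinder volumes plus the apparent size of any foreign files found on the share (the helper name and the name-matching rule are made up for illustration):

    import os

    def provisioned_capacity_gb(share_path, cinder_volumes):
        # Virtual size (in GB) of the volumes Cinder knows about.
        cinder_gb = sum(vol.size for vol in cinder_volumes)
        # Apparent size of everything else on the share: the
        # non-Cinder files the current formula cannot see.
        known = {vol.name for vol in cinder_volumes}
        other_bytes = sum(
            os.path.getsize(os.path.join(share_path, f))
            for f in os.listdir(share_path)
            if os.path.splitext(f)[0] not in known)
        return cinder_gb + other_bytes / float(1 << 30)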

What the NFS driver will report (using sparse files):
* total_capacity: 10 GB
* free_capacity: 9 GB
* allocated_capacity: 1 GB
* provisioned_capacity: 4 GB
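
Those sparse-file figures fall straight out of ordinary file metadata; a small standard-library sketch of the two sizes involved (the path is illustrative):

    import os

    st = os.stat('/mnt/share/volume-xyz')  # illustrative path
    # What "du --apparent-size" reports, and what feeds
    # provisioned_capacity: the file length (2 GB per volume here).
    apparent_gb = st.st_size / float(1 << 30)
    # What physically backs the file (the used/allocated side):
    # on Linux, st_blocks is counted in 512-byte units.
    allocated_gb = st.st_blocks * 512 / float(1 << 30)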

What the NFS driver will report (using qcow2 files):
* total_capacity: 10 GB
* free_capacity: 9 GB
* allocated_capacity: 1 GB
* provisioned_capacity: 1 GB (it cannot tell how much virtual capacity has actually been provisioned through the qcow2 files, since it relies on "du -sb --apparent-size", which reports the image file size rather than the qcow2 virtual disk size)
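
To recover the real provisioned size the driver would have to read the qcow2 header instead of relying on du; a sketch of that alternative, assuming qemu-img is available on the host:

    import json
    import subprocess

    def qcow2_virtual_size_gb(image_path):
        # 'qemu-img info --output=json' exposes the virtual disk
        # size stored in the qcow2 header as 'virtual-size' (in
        # bytes), which is what provisioned capacity should track.
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', image_path])
        return json.loads(out)['virtual-size'] / float(1 << 30)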