Another complication is that some Cinder drivers appear to over-allocate when the storage backend can only allocate space in fixed intervals of > 1 GiB.
For example, after requesting a 100 GiB volume via `nova volume-create 100` and attaching it to my instance, `lsblk` now reports:
```
[centos@clusterhq-flocker-buildslave ~]$ sudo lsblk --bytes --output SIZE /dev/vdb
        SIZE
111669149696
```
That is exactly 104 GiB:
```
In [12]: Byte(111669149696).to_GiB()
Out[12]: GiB(104.0)
```
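Here is a minimal sketch of the kind of rounding such a driver presumably performs. The 8 GiB extent size is my assumption, chosen only because it reproduces the 104 GiB figure above; it is not a value taken from any real Cinder driver:

```python
def allocated_gib(requested_gib, extent_gib):
    """Round a requested size up to a whole number of extents.

    Integer round-up, so it behaves the same on Python 2 and 3.
    """
    return ((requested_gib + extent_gib - 1) // extent_gib) * extent_gib

# A backend allocating in (hypothetical) 8 GiB extents would turn a
# 100 GiB request into exactly the 104 GiB that lsblk reported:
print(allocated_gib(100, 8))  # 104
```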
All of this makes me think that GB vs GiB is less important than I thought.
Telling people GB and actually creating GiB (or more) is just another form of over-allocation, right?
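For scale, and assuming the `Byte`/`GiB` helpers in the session above come from the bitmath package, the GB/GiB gap is about 7% in either direction:

```python
import bitmath

# A user promised 100 GB but handed 100 GiB gets roughly 7% extra:
print(bitmath.GB(100).to_GiB())   # GiB(93.1322574615...)
print(bitmath.GiB(100).to_GB())   # GB(107.374182400...)
```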
One complication is that if I now fill this block device, I might not be able to move the data to another 100 GiB Cinder block device on another Cinder platform... but that platform may have a different allocation interval, and its 100 GiB volume may not have room.
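To put a number on that scenario, using the same hypothetical extent sizes as above (8 GiB on the source backend, exact 1 GiB allocation on the destination):

```python
GIB = 2 ** 30

# Source backend with assumed 8 GiB extents actually provisioned
# 104 GiB for the 100 GiB request, as lsblk reported:
source_bytes = 104 * GIB   # 111669149696

# A destination backend with 1 GiB extents provisions exactly 100 GiB:
dest_bytes = 100 * GIB

# If the source device has been filled, the move comes up short:
print((source_bytes - dest_bytes) // GIB)  # 4 GiB that won't fit
```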