LVM-backed iSCSI device not reporting same size to nova
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | New | Medium | Unassigned |
Bug Description
Environment:
- Compute nodes: CentOS 8 Stream (latest) (Supermicro)
- Storage nodes: CentOS 8 Stream (latest) (Supermicro hardware with 18 TB hardware RAID 5 storage)
- Controller nodes: CentOS 8 Stream (latest) (Supermicro)
OpenStack version: Wallaby, deployed with kolla-ansible, not using Ironic.
When LVM-backed devices are exposed to Nova through the iSCSI protocol, the device size reported inside the guest differs from the size of the backing logical volume.
Example:
Using Horizon, create a volume of 20 GB or larger and attach it to a virtual machine.
Exact device size in bytes, obtained by running the following on the storage node:
fdisk -l /dev/cinder-
4398046511104 (example 4 TB volume on the storage node, in bytes)
However, when running fdisk on /dev/vda inside the virtual machine (the volume attached to the Nova instance on the compute node):
4398066466816 (the same 4 TB volume as seen by the virtual machine, in bytes)
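A quicker way to compare the exact byte counts than reading them out of fdisk is blockdev from util-linux. This is only a sketch: the logical-volume path on the storage node is a placeholder, and /dev/vda is assumed to be the attached disk inside the guest.
# On the storage node; use the same LV device path as in the fdisk command above:
blockdev --getsize64 <lv-device-path>
# Inside the guest:
blockdev --getsize64 /dev/vda
# Both print the size in bytes; on an affected volume the value reported
# inside the guest is larger than the value on the storage node.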
If the sizes were the other way around this would not be a problem, but the disk size seen by the VM is larger than the real size of the iSCSI-backed LVM volume.
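For the 4 TB example above, the discrepancy works out to:
4398066466816 - 4398046511104 = 19955712 bytes (about 19 MiB, or 38976 sectors of 512 bytes)
In other words, the guest is offered roughly 19 MiB of address space that has no backing on the logical volume; the storage-node figure of 4398046511104 bytes is exactly 4 TiB.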
The log below is from a VM backed by a 20 GB volume, because smaller VM disks run into the problem sooner rather than later. The mismatch results in the following messages in the kernel log inside the VM:
[111761.391344] blk_update_request: I/O error, dev vda, sector 17777976
[111761.394839] blk_update_request: I/O error, dev vda, sector 17778984
[111761.396241] blk_update_request: I/O error, dev vda, sector 17779992
[111761.397782] blk_update_request: I/O error, dev vda, sector 17781000
[111761.399343] blk_update_request: I/O error, dev vda, sector 17782008
[111761.400929] blk_update_request: I/O error, dev vda, sector 17783016
[111761.402189] blk_update_request: I/O error, dev vda, sector 17784024
[111761.403377] blk_update_request: I/O error, dev vda, sector 17785032
[111761.404569] blk_update_request: I/O error, dev vda, sector 17786040
[111761.406165] blk_update_request: I/O error, dev vda, sector 17787048
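The errors can be triggered on demand by reading the tail of the disk as the guest sees it. This is only a sketch and assumes the affected volume is attached as /dev/vda inside the guest:
# Size of the disk as reported to the guest, in 512-byte sectors:
SECTORS=$(blockdev --getsz /dev/vda)
# Read the last 1 MiB the guest believes exists; on an affected volume these
# sectors lie beyond the real end of the backing LV and the reads fail with
# blk_update_request I/O errors like the ones above.
dd if=/dev/vda of=/dev/null bs=512 skip=$(( SECTORS - 2048 )) count=2048 iflag=direct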
Double-checked by deploying an all-in-one node on a storage node, so that network issues can be ruled out; the issue did not go away.
Changed in cinder:
importance: Undecided → Medium
Greetings Mark Olie,
I'd like to ask you the following questions:
- Is the VM allowed to read/write into the "extra" space? (One possible way to test this is sketched below.)
- Does this scenario happen only for volumes smaller than 20 GB, or have you only tried with 20 GB?
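One way to probe the first question from inside the guest, strictly on a scratch volume since it overwrites data; the device name /dev/vda and the 20 MiB window are assumptions, not taken from the report:
# Destructive: run only against a throwaway volume.
SECTORS=$(blockdev --getsz /dev/vda)
# Try direct writes into the last 20 MiB the guest sees; if they complete, the
# guest can write into the "extra" space, if they return I/O errors it cannot.
dd if=/dev/zero of=/dev/vda bs=512 seek=$(( SECTORS - 40960 )) count=40960 oflag=direct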
Thanks in advance