2023-02-02 11:28:50 |
Jan Wasilewski |
bug |
|
|
added bug |
2023-02-02 11:58:47 |
Jan Wasilewski |
bug |
|
|
added subscriber Lukasz |
2023-02-02 14:20:26 |
Jeremy Stanley |
description |
Hello OpenStack Security Team,
I’m writing to you because we encountered a serious security breach in OpenStack functionality (involving libvirt, iSCSI and the Huawei driver). I went through the OSSA documents and related libvirt notes, but I couldn't find anything similar. It is not related to https://security.openstack.org/ossa/OSSA-2020-006.html
In short: we observed that a newly created 1GB Cinder volume was attached to an instance on a compute node, but the instance saw it as a 115GB volume that was in fact already connected to another instance on the same compute node.
[1. Test environment]
Compute node: OpenStack Ussuri configured with Huawei Dorado as the storage backend (driver configuration is documented here: https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html)
Packages:
# dpkg -l | grep libvirt
ii libvirt-clients 6.0.0-0ubuntu8.16 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.16 amd64 Virtualization daemon
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.16 amd64 Virtualization daemon RBD storage driver
ii libvirt-daemon-system 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files
ii libvirt-daemon-system-systemd 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files (systemd)
ii libvirt0:amd64 6.0.0-0ubuntu8.16 amd64 library for interfacing with different virtualization systems
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-libvirt 6.1.0-1 amd64 libvirt Python 3 bindings
# dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-20190109.133f4c4-0ubuntu3.2 all PXE boot firmware - ROM images for qemu
ii ipxe-qemu-256k-compat-efi-roms 1.0.0+git-20150424.a25a16d-0ubuntu4 all PXE boot firmware - Compat EFI ROM images for qemu
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii qemu 1:4.2-3ubuntu6.23 amd64 fast processor emulator, dummy package
ii qemu-block-extra:amd64 1:4.2-3ubuntu6.23 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-kvm 1:4.2-3ubuntu6.23 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:4.2-3ubuntu6.23 all QEMU full system emulation (data files)
ii qemu-system-gui:amd64 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (user interface and audio support)
ii qemu-system-x86 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:4.2-3ubuntu6.23 amd64 QEMU utilities
# dpkg -l | grep nova
ii nova-common 2:21.2.4-0ubuntu1 all OpenStack Compute - common files
ii nova-compute 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-nova 2:21.2.4-0ubuntu1 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1 all client library for OpenStack Compute API - 3.x
# dpkg -l | grep multipath
ii multipath-tools 0.8.3-1ubuntu2 amd64 maintain multipath block device access
# dpkg -l | grep iscsi
ii libiscsi7:amd64 1.18.0-2 amd64 iSCSI client shared library
ii open-iscsi 2.0.874-7.1ubuntu6.2 amd64 iSCSI initiator tools
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
Instance OS: Debian-11-amd64
[2. Test scenario]
An instance already exists with two volumes attached: the first, 10GB, for the root system; the second, 115GB, used as vdb. The compute node sees them as vda - dm-11 and vdb - dm-9:
# virsh domblklist 90fas439-fc0e-4e22-8d0b-6f2a18eee5c1
Target Source
----------------------
vda /dev/dm-11
vdb /dev/dm-9
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
(...)
36e00084100ee7e7ed6acaa2900002f6a dm-11 HUAWEI,XSG1
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:3 sdq 65:0 active ready running
|- 15:0:0:3 sdr 65:16 active ready running
|- 16:0:0:3 sdp 8:240 active ready running
`- 17:0:0:3 sds 65:32 active ready running
We create a new instance with the same guest OS and a 10GB root volume. After successful deployment, we create a new 1GB volume and attach it to the newly created instance. Afterwards we see:
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
Inside the instance the new drive showed up not as 1GB but as 115GB, so it appears it was attached incorrectly, and through it we were able to destroy data on that volume.
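For completeness, a guest-side check of the kind below would make the size mismatch explicit (the device name is illustrative; a 1GB volume should report roughly 1073741824 bytes):
# lsblk -b -o NAME,SIZE /dev/vdb
# blockdev --getsize64 /dev/vdb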
Additionally, we saw many errors like the following in the compute node logs:
# dmesg -T | grep dm-9
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 62918760 op 0x1:(WRITE) flags 0x8800 phys_seg 2 prio class 0
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 33625152 op 0x1:(WRITE) flags 0x8800 phys_seg 6 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66663000 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66598120 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:51 2023] blk_update_request: critical target error, dev dm-9, sector 66638680 op 0x1:(WRITE) flags 0x8800 phys_seg 12 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66614344 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66469296 op 0x1:(WRITE) flags 0x8800 phys_seg 24 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66586472 op 0x1:(WRITE) flags 0x8800 phys_seg 3 prio class 0
(...)
Unfortunately we do not know the exact test scenario that triggers it, as we hit the issue in fewer than 2% of our tries, but it looks like a serious security breach.
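One way to exercise the attach/detach path repeatedly is a loop of this shape; the server and volume names are illustrative, not taken from our runs:
for i in $(seq 1 50); do
  openstack volume create --size 1 "probe-$i"        # 1GB volume, as in the report
  openstack server add volume test-vm "probe-$i"     # attach to the test instance
  openstack server remove volume test-vm "probe-$i"  # detach again
  openstack volume delete "probe-$i"
done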
Additionally, we observed that the Linux kernel does not fully clear the device allocation after a volume detach, so some drive names remain visible in output such as that of the lsblk command. After a new volume is attached, those names (e.g. sdao, sdap, sdan and so on) are reused by the new drive and wrongly mapped by multipath/iSCSI to another drive, and this is how we hit the issue.
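As a diagnostic workaround (not a fix), the stale paths can be inspected and removed by hand on the compute node; the sdX names below are the ones from the multipath output above, and the exact steps may differ per setup:
# multipathd show paths | grep faulty
# echo 1 > /sys/block/sdao/device/delete    (repeat for sdap, sdan and sdaq)
# multipathd reconfigure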
Our question is: why does the compute node's Linux kernel not remove the device allocation, leading to a scenario like this? Perhaps that is where a fix belongs.
Thanks in advance for your help and understanding. If more details are needed, do not hesitate to contact me. |
This issue is being treated as a potential security risk under
embargo. Please do not make any public mention of embargoed
(private) security vulnerabilities before their coordinated
publication by the OpenStack Vulnerability Management Team in the
form of an official OpenStack Security Advisory. This includes
discussion of the bug or associated fixes in public forums such as
mailing lists, code review systems and bug trackers. Please also
avoid private disclosure to other individuals not already approved
for access to this information, and provide this same reminder to
those who are made aware of the issue prior to publication. All
discussion should remain confined to this private bug report, and
any proposed fixes should be added to the bug as attachments. This
embargo shall not extend past 2023-05-03 and will be made
public by or on that date even if no fix is identified.
Hello OpenStack Security Team,
I’m writing to you because we encountered a serious security breach in OpenStack functionality (involving libvirt, iSCSI and the Huawei driver). I went through the OSSA documents and related libvirt notes, but I couldn't find anything similar. It is not related to https://security.openstack.org/ossa/OSSA-2020-006.html
In short: we observed that a newly created 1GB Cinder volume was attached to an instance on a compute node, but the instance saw it as a 115GB volume that was in fact already connected to another instance on the same compute node.
[1. Test environment]
Compute node: OpenStack Ussuri configured with Huawei Dorado as the storage backend (driver configuration is documented here: https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html)
Packages:
# dpkg -l | grep libvirt
ii libvirt-clients 6.0.0-0ubuntu8.16 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.16 amd64 Virtualization daemon
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.16 amd64 Virtualization daemon RBD storage driver
ii libvirt-daemon-system 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files
ii libvirt-daemon-system-systemd 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files (systemd)
ii libvirt0:amd64 6.0.0-0ubuntu8.16 amd64 library for interfacing with different virtualization systems
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-libvirt 6.1.0-1 amd64 libvirt Python 3 bindings
# dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-20190109.133f4c4-0ubuntu3.2 all PXE boot firmware - ROM images for qemu
ii ipxe-qemu-256k-compat-efi-roms 1.0.0+git-20150424.a25a16d-0ubuntu4 all PXE boot firmware - Compat EFI ROM images for qemu
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii qemu 1:4.2-3ubuntu6.23 amd64 fast processor emulator, dummy package
ii qemu-block-extra:amd64 1:4.2-3ubuntu6.23 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-kvm 1:4.2-3ubuntu6.23 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:4.2-3ubuntu6.23 all QEMU full system emulation (data files)
ii qemu-system-gui:amd64 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (user interface and audio support)
ii qemu-system-x86 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:4.2-3ubuntu6.23 amd64 QEMU utilities
# dpkg -l | grep nova
ii nova-common 2:21.2.4-0ubuntu1 all OpenStack Compute - common files
ii nova-compute 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-nova 2:21.2.4-0ubuntu1 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1 all client library for OpenStack Compute API - 3.x
# dpkg -l | grep multipath
ii multipath-tools 0.8.3-1ubuntu2 amd64 maintain multipath block device access
# dpkg -l | grep iscsi
ii libiscsi7:amd64 1.18.0-2 amd64 iSCSI client shared library
ii open-iscsi 2.0.874-7.1ubuntu6.2 amd64 iSCSI initiator tools
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
Instance OS: Debian-11-amd64
[2. Test scenario]
An instance already exists with two volumes attached: the first, 10GB, for the root system; the second, 115GB, used as vdb. The compute node sees them as vda - dm-11 and vdb - dm-9:
# virsh domblklist 90fas439-fc0e-4e22-8d0b-6f2a18eee5c1
Target Source
----------------------
vda /dev/dm-11
vdb /dev/dm-9
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
(...)
36e00084100ee7e7ed6acaa2900002f6a dm-11 HUAWEI,XSG1
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:3 sdq 65:0 active ready running
|- 15:0:0:3 sdr 65:16 active ready running
|- 16:0:0:3 sdp 8:240 active ready running
`- 17:0:0:3 sds 65:32 active ready running
We create a new instance with the same guest OS and a 10GB root volume. After successful deployment, we create a new 1GB volume and attach it to the newly created instance. Afterwards we see:
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
Inside the instance the new drive showed up not as 1GB but as 115GB, so it appears it was attached incorrectly, and through it we were able to destroy data on that volume.
Additionally, we saw many errors like the following in the compute node logs:
# dmesg -T | grep dm-9
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 62918760 op 0x1:(WRITE) flags 0x8800 phys_seg 2 prio class 0
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 33625152 op 0x1:(WRITE) flags 0x8800 phys_seg 6 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66663000 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66598120 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:51 2023] blk_update_request: critical target error, dev dm-9, sector 66638680 op 0x1:(WRITE) flags 0x8800 phys_seg 12 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66614344 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66469296 op 0x1:(WRITE) flags 0x8800 phys_seg 24 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66586472 op 0x1:(WRITE) flags 0x8800 phys_seg 3 prio class 0
(...)
Unfortunately we do not know the exact test scenario that triggers it, as we hit the issue in fewer than 2% of our tries, but it looks like a serious security breach.
Additionally, we observed that the Linux kernel does not fully clear the device allocation after a volume detach, so some drive names remain visible in output such as that of the lsblk command. After a new volume is attached, those names (e.g. sdao, sdap, sdan and so on) are reused by the new drive and wrongly mapped by multipath/iSCSI to another drive, and this is how we hit the issue.
Our question is: why does the compute node's Linux kernel not remove the device allocation, leading to a scenario like this? Perhaps that is where a fix belongs.
Thanks in advance for your help and understanding. If more details are needed, do not hesitate to contact me. |
|
2023-02-02 14:20:36 |
Jeremy Stanley |
bug task added |
|
ossa |
|
2023-02-02 14:20:43 |
Jeremy Stanley |
ossa: status |
New |
Incomplete |
|
2023-02-02 14:20:58 |
Jeremy Stanley |
bug |
|
|
added subscriber Nova Core security contacts |
2023-02-02 15:21:07 |
Sylvain Bauza |
bug |
|
|
added subscriber melanie witt |
2023-02-02 15:38:31 |
Sylvain Bauza |
nova: status |
New |
Incomplete |
|
2023-02-02 16:15:31 |
Jan Wasilewski |
nova: status |
Incomplete |
New |
|
2023-02-02 16:16:02 |
Jeremy Stanley |
bug task added |
|
cinder |
|
2023-02-02 16:16:16 |
Jeremy Stanley |
bug |
|
|
added subscriber Cinder Core security contacts |
2023-02-13 09:00:00 |
Jan Wasilewski |
bug |
|
|
added subscriber booliczek |
2023-02-16 05:13:41 |
melanie witt |
attachment added |
|
nova-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5647561/+files/nova-2004555.patch |
|
2023-02-16 11:09:55 |
Gorka Eguileor |
bug task added |
|
os-brick |
|
2023-02-17 09:50:03 |
Gorka Eguileor |
attachment added |
|
Cinder fix https://bugs.launchpad.net/os-brick/+bug/2004555/+attachment/5648028/+files/cinder-2004555.patch |
|
2023-03-01 16:42:29 |
Gorka Eguileor |
attachment added |
|
os-brick FC force disconnect support https://bugs.launchpad.net/os-brick/+bug/2004555/+attachment/5650757/+files/osbrick-fc-2004555.patch |
|
2023-03-01 16:57:17 |
Gorka Eguileor |
attachment added |
|
os-brick data leak prevention https://bugs.launchpad.net/os-brick/+bug/2004555/+attachment/5650759/+files/osbrick-leak-2004555.patch |
|
2023-03-08 15:40:27 |
Gorka Eguileor |
bug |
|
|
added subscriber Anten Skrabec |
2023-03-08 15:40:40 |
Gorka Eguileor |
bug |
|
|
added subscriber Avinash Hanwate |
2023-03-08 15:40:50 |
Gorka Eguileor |
bug |
|
|
added subscriber Nick Tait |
2023-03-17 21:08:24 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5655187/+files/cinder-2004555.patch |
|
2023-03-17 21:09:15 |
Gorka Eguileor |
attachment removed |
Cinder fix https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5648028/+files/cinder-2004555.patch |
|
|
2023-03-21 11:44:27 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5655187/+files/cinder-2004555.patch |
|
|
2023-03-21 12:00:06 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5656303/+files/cinder-2004555.patch |
|
2023-04-10 18:57:24 |
Brian Rosmaita |
bug |
|
|
added subscriber Alan Bishop |
2023-04-13 20:17:26 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5656303/+files/cinder-2004555.patch |
|
|
2023-04-13 20:18:06 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5663732/+files/cinder-2004555.patch |
|
2023-04-14 08:28:50 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5663732/+files/cinder-2004555.patch |
|
|
2023-04-14 08:30:09 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5663933/+files/cinder-2004555.patch |
|
2023-04-14 14:32:59 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5663933/+files/cinder-2004555.patch |
|
|
2023-04-14 14:33:38 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5664044/+files/cinder-2004555.patch |
|
2023-04-14 22:47:49 |
Nick Tait |
cve linked |
|
2023-2088 |
|
2023-04-18 12:20:33 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5664044/+files/cinder-2004555.patch |
|
|
2023-04-18 12:21:06 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5664986/+files/cinder-2004555.patch |
|
2023-04-18 13:01:30 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5664986/+files/cinder-2004555.patch |
|
|
2023-04-18 13:01:58 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5664988/+files/cinder-2004555.patch |
|
2023-04-18 17:57:19 |
Brian Rosmaita |
bug task added |
|
glance-store |
|
2023-04-18 18:44:15 |
Brian Rosmaita |
attachment added |
|
glance_store-2004555.patch https://bugs.launchpad.net/glance-store/+bug/2004555/+attachment/5665044/+files/glance_store-2004555.patch |
|
2023-04-19 15:00:58 |
Gorka Eguileor |
attachment removed |
glance_store-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665044/+files/glance_store-2004555.patch |
|
|
2023-04-19 15:01:48 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665262/+files/cinder-2004555.patch |
|
2023-04-19 15:09:49 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5664988/+files/cinder-2004555.patch |
|
|
2023-04-19 15:09:58 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665262/+files/cinder-2004555.patch |
|
|
2023-04-19 15:10:16 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665263/+files/cinder-2004555.patch |
|
2023-04-19 15:13:45 |
Brian Rosmaita |
attachment added |
|
glance_store-2004555.patch https://bugs.launchpad.net/glance-store/+bug/2004555/+attachment/5665264/+files/glance_store-2004555.patch |
|
2023-04-19 17:25:41 |
Dan Smith |
bug |
|
|
added subscriber Ghanshyam Mann |
2023-04-20 02:49:28 |
melanie witt |
attachment added |
|
nova-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665336/+files/nova-2004555.patch |
|
2023-04-20 02:49:51 |
melanie witt |
attachment removed |
nova-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5647561/+files/nova-2004555.patch |
|
|
2023-04-20 19:55:57 |
Gorka Eguileor |
attachment added |
|
tempest-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665897/+files/tempest-2004555.patch |
|
2023-04-20 20:00:43 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665263/+files/cinder-2004555.patch |
|
|
2023-04-20 20:02:34 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665898/+files/cinder-2004555.patch |
|
2023-04-21 01:05:23 |
melanie witt |
attachment added |
|
nova-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665961/+files/nova-2004555.patch |
|
2023-04-21 01:06:05 |
melanie witt |
attachment removed |
nova-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665336/+files/nova-2004555.patch |
|
|
2023-04-21 13:45:28 |
Dan Smith |
bug |
|
|
added subscriber Luigi Toscano |
2023-04-21 19:16:12 |
melanie witt |
attachment added |
|
nova-2004555-xena.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5666527/+files/nova-2004555-xena.patch |
|
2023-04-24 09:47:12 |
Gorka Eguileor |
attachment removed |
os-brick FC force disconnect support https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5650757/+files/osbrick-fc-2004555.patch |
|
|
2023-04-24 09:47:21 |
Gorka Eguileor |
attachment removed |
os-brick data leak prevention https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5650759/+files/osbrick-leak-2004555.patch |
|
|
2023-04-24 09:48:42 |
Gorka Eguileor |
attachment added |
|
osbrick-fc-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5667881/+files/osbrick-fc-2004555.patch |
|
2023-04-24 09:49:03 |
Gorka Eguileor |
attachment added |
|
osbrick-leak-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5667882/+files/osbrick-leak-2004555.patch |
|
2023-04-24 10:28:42 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5665898/+files/cinder-2004555.patch |
|
|
2023-04-24 10:29:27 |
Gorka Eguileor |
attachment added |
|
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5667903/+files/cinder-2004555.patch |
|
2023-04-24 17:44:32 |
Gorka Eguileor |
attachment removed |
cinder-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5667903/+files/cinder-2004555.patch |
|
|
2023-04-24 17:45:42 |
Gorka Eguileor |
attachment added |
|
cinder-2004555-master.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668187/+files/cinder-2004555-master.patch |
|
2023-04-24 17:46:17 |
Gorka Eguileor |
attachment added |
|
cinder-2004555-2023.1_and_zed.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668188/+files/cinder-2004555-2023.1_and_zed.patch |
|
2023-04-24 17:46:35 |
Gorka Eguileor |
attachment added |
|
cinder-2004555-yoga.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668189/+files/cinder-2004555-yoga.patch |
|
2023-04-24 17:46:56 |
Gorka Eguileor |
attachment added |
|
cinder-2004555-xena.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668190/+files/cinder-2004555-xena.patch |
|
2023-04-24 17:47:23 |
Gorka Eguileor |
attachment removed |
osbrick-fc-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5667881/+files/osbrick-fc-2004555.patch |
|
|
2023-04-24 17:48:36 |
Gorka Eguileor |
attachment added |
|
osbrick-fc-2004555-master_to_zed.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668191/+files/osbrick-fc-2004555-master_to_zed.patch |
|
2023-04-24 17:51:10 |
Gorka Eguileor |
attachment added |
|
osbrick-fc-2004555-yoga_and_xena.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668192/+files/osbrick-fc-2004555-yoga_and_xena.patch |
|
2023-04-24 17:51:51 |
Gorka Eguileor |
attachment removed |
osbrick-fc-2004555-master_to_zed.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668191/+files/osbrick-fc-2004555-master_to_zed.patch |
|
|
2023-04-24 17:52:53 |
Gorka Eguileor |
attachment removed |
osbrick-leak-2004555.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5667882/+files/osbrick-leak-2004555.patch |
|
|
2023-04-24 17:53:13 |
Gorka Eguileor |
attachment added |
|
osbrick-fc-2004555-master_to_zed.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668211/+files/osbrick-fc-2004555-master_to_zed.patch |
|
2023-04-24 17:53:33 |
Gorka Eguileor |
attachment added |
|
osbrick-leak-2004555-master.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668212/+files/osbrick-leak-2004555-master.patch |
|
2023-04-25 11:58:07 |
Gorka Eguileor |
attachment added |
|
glance_store-2004555-zed.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668492/+files/glance_store-2004555-zed.patch |
|
2023-04-25 11:58:45 |
Gorka Eguileor |
attachment added |
|
glance_store-2004555-yoga_and_xena.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668493/+files/glance_store-2004555-yoga_and_xena.patch |
|
2023-04-25 18:57:24 |
Gorka Eguileor |
attachment removed |
osbrick-fc-2004555-master_to_zed.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668211/+files/osbrick-fc-2004555-master_to_zed.patch |
|
|
2023-04-25 18:57:43 |
Gorka Eguileor |
attachment added |
|
osbrick-fc-2004555-master_to_zed.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5668644/+files/osbrick-fc-2004555-master_to_zed.patch |
|
2023-04-27 18:42:01 |
melanie witt |
attachment added |
|
nova-2004555-xena_and_wallaby.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5669261/+files/nova-2004555-xena_and_wallaby.patch |
|
2023-04-27 18:42:13 |
melanie witt |
attachment removed |
nova-2004555-xena.patch https://bugs.launchpad.net/nova/+bug/2004555/+attachment/5666527/+files/nova-2004555-xena.patch |
|
|
2023-04-27 18:47:00 |
melanie witt |
nominated for series |
|
nova/xena |
|
2023-04-27 18:47:00 |
melanie witt |
bug task added |
|
nova/xena |
|
2023-04-27 18:47:00 |
melanie witt |
nominated for series |
|
nova/zed |
|
2023-04-27 18:47:00 |
melanie witt |
bug task added |
|
nova/zed |
|
2023-04-27 18:47:00 |
melanie witt |
nominated for series |
|
nova/yoga |
|
2023-04-27 18:47:00 |
melanie witt |
bug task added |
|
nova/yoga |
|
2023-04-27 18:47:00 |
melanie witt |
nominated for series |
|
nova/antelope |
|
2023-04-27 18:47:00 |
melanie witt |
bug task added |
|
nova/antelope |
|
2023-05-02 02:16:19 |
Jeremy Stanley |
description |
This issue is being treated as a potential security risk under
embargo. Please do not make any public mention of embargoed
(private) security vulnerabilities before their coordinated
publication by the OpenStack Vulnerability Management Team in the
form of an official OpenStack Security Advisory. This includes
discussion of the bug or associated fixes in public forums such as
mailing lists, code review systems and bug trackers. Please also
avoid private disclosure to other individuals not already approved
for access to this information, and provide this same reminder to
those who are made aware of the issue prior to publication. All
discussion should remain confined to this private bug report, and
any proposed fixes should be added to the bug as attachments. This
embargo shall not extend past 2023-05-03 and will be made
public by or on that date even if no fix is identified.
Hello OpenStack Security Team,
I’m writing to you because we encountered a serious security breach in OpenStack functionality (involving libvirt, iSCSI and the Huawei driver). I went through the OSSA documents and related libvirt notes, but I couldn't find anything similar. It is not related to https://security.openstack.org/ossa/OSSA-2020-006.html
In short: we observed that a newly created 1GB Cinder volume was attached to an instance on a compute node, but the instance saw it as a 115GB volume that was in fact already connected to another instance on the same compute node.
[1. Test environment]
Compute node: OpenStack Ussuri configured with Huawei Dorado as the storage backend (driver configuration is documented here: https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html)
Packages:
# dpkg -l | grep libvirt
ii libvirt-clients 6.0.0-0ubuntu8.16 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.16 amd64 Virtualization daemon
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.16 amd64 Virtualization daemon RBD storage driver
ii libvirt-daemon-system 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files
ii libvirt-daemon-system-systemd 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files (systemd)
ii libvirt0:amd64 6.0.0-0ubuntu8.16 amd64 library for interfacing with different virtualization systems
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-libvirt 6.1.0-1 amd64 libvirt Python 3 bindings
# dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-20190109.133f4c4-0ubuntu3.2 all PXE boot firmware - ROM images for qemu
ii ipxe-qemu-256k-compat-efi-roms 1.0.0+git-20150424.a25a16d-0ubuntu4 all PXE boot firmware - Compat EFI ROM images for qemu
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii qemu 1:4.2-3ubuntu6.23 amd64 fast processor emulator, dummy package
ii qemu-block-extra:amd64 1:4.2-3ubuntu6.23 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-kvm 1:4.2-3ubuntu6.23 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:4.2-3ubuntu6.23 all QEMU full system emulation (data files)
ii qemu-system-gui:amd64 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (user interface and audio support)
ii qemu-system-x86 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:4.2-3ubuntu6.23 amd64 QEMU utilities
# dpkg -l | grep nova
ii nova-common 2:21.2.4-0ubuntu1 all OpenStack Compute - common files
ii nova-compute 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-nova 2:21.2.4-0ubuntu1 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1 all client library for OpenStack Compute API - 3.x
# dpkg -l | grep multipath
ii multipath-tools 0.8.3-1ubuntu2 amd64 maintain multipath block device access
# dpkg -l | grep iscsi
ii libiscsi7:amd64 1.18.0-2 amd64 iSCSI client shared library
ii open-iscsi 2.0.874-7.1ubuntu6.2 amd64 iSCSI initiator tools
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
Instance OS: Debian-11-amd64
[2. Test scenario]
An instance already exists with two volumes attached: the first, 10GB, for the root system; the second, 115GB, used as vdb. The compute node sees them as vda - dm-11 and vdb - dm-9:
# virsh domblklist 90fas439-fc0e-4e22-8d0b-6f2a18eee5c1
Target Source
----------------------
vda /dev/dm-11
vdb /dev/dm-9
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
(...)
36e00084100ee7e7ed6acaa2900002f6a dm-11 HUAWEI,XSG1
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:3 sdq 65:0 active ready running
|- 15:0:0:3 sdr 65:16 active ready running
|- 16:0:0:3 sdp 8:240 active ready running
`- 17:0:0:3 sds 65:32 active ready running
We create a new instance with the same guest OS and a 10GB root volume. After successful deployment, we create a new 1GB volume and attach it to the newly created instance. Afterwards we see:
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
Inside the instance the new drive showed up not as 1GB but as 115GB, so it appears it was attached incorrectly, and through it we were able to destroy data on that volume.
Additionally, we saw many errors like the following in the compute node logs:
# dmesg -T | grep dm-9
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 62918760 op 0x1:(WRITE) flags 0x8800 phys_seg 2 prio class 0
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 33625152 op 0x1:(WRITE) flags 0x8800 phys_seg 6 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66663000 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66598120 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:51 2023] blk_update_request: critical target error, dev dm-9, sector 66638680 op 0x1:(WRITE) flags 0x8800 phys_seg 12 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66614344 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66469296 op 0x1:(WRITE) flags 0x8800 phys_seg 24 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66586472 op 0x1:(WRITE) flags 0x8800 phys_seg 3 prio class 0
(...)
Unfortunately we do not know the exact test scenario that triggers it, as we hit the issue in fewer than 2% of our tries, but it looks like a serious security breach.
Additionally, we observed that the Linux kernel does not fully clear the device allocation after a volume detach, so some drive names remain visible in output such as that of the lsblk command. After a new volume is attached, those names (e.g. sdao, sdap, sdan and so on) are reused by the new drive and wrongly mapped by multipath/iSCSI to another drive, and this is how we hit the issue.
Our question is: why does the compute node's Linux kernel not remove the device allocation, leading to a scenario like this? Perhaps that is where a fix belongs.
Thanks in advance for your help and understanding. If more details are needed, do not hesitate to contact me. |
This issue is being treated as a potential security risk under
embargo. Please do not make any public mention of embargoed
(private) security vulnerabilities before their coordinated
publication by the OpenStack Vulnerability Management Team in the
form of an official OpenStack Security Advisory. This includes
discussion of the bug or associated fixes in public forums such as
mailing lists, code review systems and bug trackers. Please also
avoid private disclosure to other individuals not already approved
for access to this information, and provide this same reminder to
those who are made aware of the issue prior to publication. All
discussion should remain confined to this private bug report, and
any proposed fixes should be added to the bug as attachments. This
embargo shall not extend past 2023-05-10 and will be made
public by or on that date even if no fix is identified.
Hello OpenStack Security Team,
I’m writing to you because we encountered a serious security breach in OpenStack functionality (involving libvirt, iSCSI and the Huawei driver). I went through the OSSA documents and related libvirt notes, but I couldn't find anything similar. It is not related to https://security.openstack.org/ossa/OSSA-2020-006.html
In short: we observed that a newly created 1GB Cinder volume was attached to an instance on a compute node, but the instance saw it as a 115GB volume that was in fact already connected to another instance on the same compute node.
[1. Test environment]
Compute node: OpenStack Ussuri configured with Huawei Dorado as the storage backend (driver configuration is documented here: https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html)
Packages:
# dpkg -l | grep libvirt
ii libvirt-clients 6.0.0-0ubuntu8.16 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.16 amd64 Virtualization daemon
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.16 amd64 Virtualization daemon RBD storage driver
ii libvirt-daemon-system 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files
ii libvirt-daemon-system-systemd 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files (systemd)
ii libvirt0:amd64 6.0.0-0ubuntu8.16 amd64 library for interfacing with different virtualization systems
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-libvirt 6.1.0-1 amd64 libvirt Python 3 bindings
# dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-20190109.133f4c4-0ubuntu3.2 all PXE boot firmware - ROM images for qemu
ii ipxe-qemu-256k-compat-efi-roms 1.0.0+git-20150424.a25a16d-0ubuntu4 all PXE boot firmware - Compat EFI ROM images for qemu
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii qemu 1:4.2-3ubuntu6.23 amd64 fast processor emulator, dummy package
ii qemu-block-extra:amd64 1:4.2-3ubuntu6.23 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-kvm 1:4.2-3ubuntu6.23 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:4.2-3ubuntu6.23 all QEMU full system emulation (data files)
ii qemu-system-gui:amd64 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (user interface and audio support)
ii qemu-system-x86 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:4.2-3ubuntu6.23 amd64 QEMU utilities
# dpkg -l | grep nova
ii nova-common 2:21.2.4-0ubuntu1 all OpenStack Compute - common files
ii nova-compute 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-nova 2:21.2.4-0ubuntu1 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1 all client library for OpenStack Compute API - 3.x
# dpkg -l | grep multipath
ii multipath-tools 0.8.3-1ubuntu2 amd64 maintain multipath block device access
# dpkg -l | grep iscsi
ii libiscsi7:amd64 1.18.0-2 amd64 iSCSI client shared library
ii open-iscsi 2.0.874-7.1ubuntu6.2 amd64 iSCSI initiator tools
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
Instance OS: Debian-11-amd64
[2. Test scenario]
An instance already exists with two volumes attached: the first, 10GB, for the root system; the second, 115GB, used as vdb. The compute node sees them as vda - dm-11 and vdb - dm-9:
# virsh domblklist 90fas439-fc0e-4e22-8d0b-6f2a18eee5c1
Target Source
----------------------
vda /dev/dm-11
vdb /dev/dm-9
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
(...)
36e00084100ee7e7ed6acaa2900002f6a dm-11 HUAWEI,XSG1
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:3 sdq 65:0 active ready running
|- 15:0:0:3 sdr 65:16 active ready running
|- 16:0:0:3 sdp 8:240 active ready running
`- 17:0:0:3 sds 65:32 active ready running
We create a new instance with the same guest OS and a 10GB root volume. After successful deployment, we create a new 1GB volume and attach it to the newly created instance. Afterwards we see:
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
Inside the instance the new drive showed up not as 1GB but as 115GB, so it appears it was attached incorrectly, and through it we were able to destroy data on that volume.
Additionally, we saw many errors like the following in the compute node logs:
# dmesg -T | grep dm-9
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 62918760 op 0x1:(WRITE) flags 0x8800 phys_seg 2 prio class 0
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 33625152 op 0x1:(WRITE) flags 0x8800 phys_seg 6 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66663000 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66598120 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:51 2023] blk_update_request: critical target error, dev dm-9, sector 66638680 op 0x1:(WRITE) flags 0x8800 phys_seg 12 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66614344 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66469296 op 0x1:(WRITE) flags 0x8800 phys_seg 24 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66586472 op 0x1:(WRITE) flags 0x8800 phys_seg 3 prio class 0
(...)
Unfortunately we do not know the exact test scenario that triggers it, as we hit the issue in fewer than 2% of our tries, but it looks like a serious security breach.
Additionally, we observed that the Linux kernel does not fully clear the device allocation after a volume detach, so some drive names remain visible in output such as that of the lsblk command. After a new volume is attached, those names (e.g. sdao, sdap, sdan and so on) are reused by the new drive and wrongly mapped by multipath/iSCSI to another drive, and this is how we hit the issue.
Our question is: why does the compute node's Linux kernel not remove the device allocation, leading to a scenario like this? Perhaps that is where a fix belongs.
Thanks in advance for your help and understanding. If more details are needed, do not hesitate to contact me. |
|
2023-05-02 02:18:15 |
Jeremy Stanley |
summary |
[ussuri] Wrong volume attachment - volumes overlapping when connected through iscsi on host |
Unauthorized volume access through deleted volume attachments (CVE-2023-2088) |
|
2023-05-02 02:19:19 |
Jeremy Stanley |
ossa: status |
Incomplete |
In Progress |
|
2023-05-02 02:19:25 |
Jeremy Stanley |
ossa: importance |
Undecided |
High |
|
2023-05-02 02:19:33 |
Jeremy Stanley |
ossa: assignee |
|
Jeremy Stanley (fungi) |
|
2023-05-02 19:14:19 |
Jeremy Stanley |
bug |
|
|
added subscriber Glance Core security contacts |
2023-05-04 12:11:59 |
Jeremy Stanley |
bug |
|
|
added subscriber Kurt Garloff |
2023-05-04 12:12:14 |
Jeremy Stanley |
bug |
|
|
added subscriber Felix Huettner |
2023-05-09 08:25:38 |
Luigi Toscano |
bug |
|
|
added subscriber Evelina Shames |
2023-05-10 00:17:00 |
melanie witt |
nominated for series |
|
nova/wallaby |
|
2023-05-10 00:17:00 |
melanie witt |
bug task added |
|
nova/wallaby |
|
2023-05-10 14:29:34 |
Jeremy Stanley |
description |
This issue is being treated as a potential security risk under
embargo. Please do not make any public mention of embargoed
(private) security vulnerabilities before their coordinated
publication by the OpenStack Vulnerability Management Team in the
form of an official OpenStack Security Advisory. This includes
discussion of the bug or associated fixes in public forums such as
mailing lists, code review systems and bug trackers. Please also
avoid private disclosure to other individuals not already approved
for access to this information, and provide this same reminder to
those who are made aware of the issue prior to publication. All
discussion should remain confined to this private bug report, and
any proposed fixes should be added to the bug as attachments. This
embargo shall not extend past 2023-05-10 and will be made
public by or on that date even if no fix is identified.
Hello OpenStack Security Team,
I’m writing to you because we encountered a serious security breach in OpenStack functionality (involving libvirt, iSCSI and the Huawei driver). I went through the OSSA documents and related libvirt notes, but I couldn't find anything similar. It is not related to https://security.openstack.org/ossa/OSSA-2020-006.html
In short: we observed that a newly created 1GB Cinder volume was attached to an instance on a compute node, but the instance saw it as a 115GB volume that was in fact already connected to another instance on the same compute node.
[1. Test environment]
Compute node: OpenStack Ussuri configured with Huawei Dorado as the storage backend (driver configuration is documented here: https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html)
Packages:
# dpkg -l | grep libvirt
ii libvirt-clients 6.0.0-0ubuntu8.16 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.16 amd64 Virtualization daemon
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.16 amd64 Virtualization daemon RBD storage driver
ii libvirt-daemon-system 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files
ii libvirt-daemon-system-systemd 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files (systemd)
ii libvirt0:amd64 6.0.0-0ubuntu8.16 amd64 library for interfacing with different virtualization systems
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-libvirt 6.1.0-1 amd64 libvirt Python 3 bindings
# dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-20190109.133f4c4-0ubuntu3.2 all PXE boot firmware - ROM images for qemu
ii ipxe-qemu-256k-compat-efi-roms 1.0.0+git-20150424.a25a16d-0ubuntu4 all PXE boot firmware - Compat EFI ROM images for qemu
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii qemu 1:4.2-3ubuntu6.23 amd64 fast processor emulator, dummy package
ii qemu-block-extra:amd64 1:4.2-3ubuntu6.23 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-kvm 1:4.2-3ubuntu6.23 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:4.2-3ubuntu6.23 all QEMU full system emulation (data files)
ii qemu-system-gui:amd64 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (user interface and audio support)
ii qemu-system-x86 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:4.2-3ubuntu6.23 amd64 QEMU utilities
# dpkg -l | grep nova
ii nova-common 2:21.2.4-0ubuntu1 all OpenStack Compute - common files
ii nova-compute 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-nova 2:21.2.4-0ubuntu1 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1 all client library for OpenStack Compute API - 3.x
# dpkg -l | grep multipath
ii multipath-tools 0.8.3-1ubuntu2 amd64 maintain multipath block device access
# dpkg -l | grep iscsi
ii libiscsi7:amd64 1.18.0-2 amd64 iSCSI client shared library
ii open-iscsi 2.0.874-7.1ubuntu6.2 amd64 iSCSI initiator tools
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
Instance OS: Debian-11-amd64
[2. Test scenario]
An instance already exists with two volumes attached: the first, 10GB, for the root system; the second, 115GB, used as vdb. The compute node sees them as vda - dm-11 and vdb - dm-9:
# virsh domblklist 90fas439-fc0e-4e22-8d0b-6f2a18eee5c1
Target Source
----------------------
vda /dev/dm-11
vdb /dev/dm-9
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
(...)
36e00084100ee7e7ed6acaa2900002f6a dm-11 HUAWEI,XSG1
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:3 sdq 65:0 active ready running
|- 15:0:0:3 sdr 65:16 active ready running
|- 16:0:0:3 sdp 8:240 active ready running
`- 17:0:0:3 sds 65:32 active ready running
We create a new instance with the same guest OS and a 10GB root volume. After successful deployment, we create a new 1GB volume and attach it to the newly created instance. Afterwards we see:
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
Inside the instance the new drive showed up not as 1GB but as 115GB, so it appears it was attached incorrectly, and through it we were able to destroy data on that volume.
Additionally, we saw many errors like the following in the compute node logs:
# dmesg -T | grep dm-9
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 62918760 op 0x1:(WRITE) flags 0x8800 phys_seg 2 prio class 0
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 33625152 op 0x1:(WRITE) flags 0x8800 phys_seg 6 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66663000 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66598120 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:51 2023] blk_update_request: critical target error, dev dm-9, sector 66638680 op 0x1:(WRITE) flags 0x8800 phys_seg 12 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66614344 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66469296 op 0x1:(WRITE) flags 0x8800 phys_seg 24 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66586472 op 0x1:(WRITE) flags 0x8800 phys_seg 3 prio class 0
(...)
Unfortunately we do not know the exact test scenario that triggers it, as we hit the issue in fewer than 2% of our tries, but it looks like a serious security breach.
Additionally, we observed that the Linux kernel does not fully clear the device allocation after a volume detach, so some drive names remain visible in output such as that of the lsblk command. After a new volume is attached, those names (e.g. sdao, sdap, sdan and so on) are reused by the new drive and wrongly mapped by multipath/iSCSI to another drive, and this is how we hit the issue.
Our question is: why does the compute node's Linux kernel not remove the device allocation, leading to a scenario like this? Perhaps that is where a fix belongs.
Thanks in advance for your help and understanding. If more details are needed, do not hesitate to contact me. |
Hello OpenStack Security Team,
I’m writing to you because we encountered a serious security breach in OpenStack functionality (involving libvirt, iSCSI and the Huawei driver). I went through the OSSA documents and related libvirt notes, but I couldn't find anything similar. It is not related to https://security.openstack.org/ossa/OSSA-2020-006.html
In short: we observed that a newly created 1GB Cinder volume was attached to an instance on a compute node, but the instance saw it as a 115GB volume that was in fact already connected to another instance on the same compute node.
[1. Test environment]
Compute node: OpenStack Ussuri configured with Huawei Dorado as the storage backend (driver configuration is documented here: https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/huawei-storage-driver.html)
Packages:
# dpkg -l | grep libvirt
ii libvirt-clients 6.0.0-0ubuntu8.16 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.16 amd64 Virtualization daemon
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii libvirt-daemon-driver-storage-rbd 6.0.0-0ubuntu8.16 amd64 Virtualization daemon RBD storage driver
ii libvirt-daemon-system 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files
ii libvirt-daemon-system-systemd 6.0.0-0ubuntu8.16 amd64 Libvirt daemon configuration files (systemd)
ii libvirt0:amd64 6.0.0-0ubuntu8.16 amd64 library for interfacing with different virtualization systems
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-libvirt 6.1.0-1 amd64 libvirt Python 3 bindings
# dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-20190109.133f4c4-0ubuntu3.2 all PXE boot firmware - ROM images for qemu
ii ipxe-qemu-256k-compat-efi-roms 1.0.0+git-20150424.a25a16d-0ubuntu4 all PXE boot firmware - Compat EFI ROM images for qemu
ii libvirt-daemon-driver-qemu 6.0.0-0ubuntu8.16 amd64 Virtualization daemon QEMU connection driver
ii qemu 1:4.2-3ubuntu6.23 amd64 fast processor emulator, dummy package
ii qemu-block-extra:amd64 1:4.2-3ubuntu6.23 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-kvm 1:4.2-3ubuntu6.23 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:4.2-3ubuntu6.23 all QEMU full system emulation (data files)
ii qemu-system-gui:amd64 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (user interface and audio support)
ii qemu-system-x86 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:4.2-3ubuntu6.23 amd64 QEMU utilities
# dpkg -l | grep nova
ii nova-common 2:21.2.4-0ubuntu1 all OpenStack Compute - common files
ii nova-compute 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii nova-compute-libvirt 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node libvirt support
ii python3-nova 2:21.2.4-0ubuntu1 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1 all client library for OpenStack Compute API - 3.x
# dpkg -l | grep multipath
ii multipath-tools 0.8.3-1ubuntu2 amd64 maintain multipath block device access
# dpkg -l | grep iscsi
ii libiscsi7:amd64 1.18.0-2 amd64 iSCSI client shared library
ii open-iscsi 2.0.874-7.1ubuntu6.2 amd64 iSCSI initiator tools
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
Instance OS: Debian-11-amd64
[2. Test scenario]
An instance had already been created with two volumes attached: the first, 10GB, for the root filesystem; the second, 115GB, used as vdb. They are recognized by the compute node as vda -> dm-11 and vdb -> dm-9:
# virsh domblklist 90fas439-fc0e-4e22-8d0b-6f2a18eee5c1
Target Source
----------------------
vda /dev/dm-11
vdb /dev/dm-9
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
(...)
36e00084100ee7e7ed6acaa2900002f6a dm-11 HUAWEI,XSG1
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:3 sdq 65:0 active ready running
|- 15:0:0:3 sdr 65:16 active ready running
|- 16:0:0:3 sdp 8:240 active ready running
`- 17:0:0:3 sds 65:32 active ready running
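For clarity, each path line in the multipath output is host:channel:target:LUN, so 14:0:0:4 is LUN 4 of the backend reached through iSCSI host 14, and all four paths of one LUN should sit under one map. If lsscsi is installed, the same LUN-to-device mapping can be cross-checked (an illustrative invocation; the revision column is abbreviated here):
# lsscsi | grep ':4]'
[14:0:0:4]  disk  HUAWEI  XSG1  ...  /dev/sdm
[15:0:0:4]  disk  HUAWEI  XSG1  ...  /dev/sdo
[16:0:0:4]  disk  HUAWEI  XSG1  ...  /dev/sdl
[17:0:0:4]  disk  HUAWEI  XSG1  ...  /dev/sdn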
Then we created a new instance with the same guest OS and a 10GB root volume. After successful deployment, we created a new volume (1GB in size) and attached it to the newly created instance. After that we can see:
# multipath -ll
(...)
36e00084100ee7e7ed6ad25d900002f6b dm-9 HUAWEI,XSG1
size=115G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
This way, inside the instance we were able to see a new drive - not 1GB, but 115GB -> so it seems it was incorrectly attached, and through it we were able to destroy some data on that volume.
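One way to confirm the wrong mapping (a diagnostic sketch using the device names from the output above) is to compare the map a new path was grouped under with the WWID reported by the LUN actually behind it:
# multipath -ll | grep -B 8 sdao              # sdao sits under map 36e00084100ee7e7ed6ad25d900002f6b (115GB)
# /lib/udev/scsi_id -g -u --device=/dev/sdao  # prints the WWID of the LUN really behind sdao
If scsi_id prints a different identifier than the map name the path was grouped under, multipath has attached the paths of the new 1GB LUN to the stale 115GB map.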
Additionally, we were able to see many errors like the following in the compute node logs:
# dmesg -T | grep dm-9
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 62918760 op 0x1:(WRITE) flags 0x8800 phys_seg 2 prio class 0
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 33625152 op 0x1:(WRITE) flags 0x8800 phys_seg 6 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66663000 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66598120 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:51 2023] blk_update_request: critical target error, dev dm-9, sector 66638680 op 0x1:(WRITE) flags 0x8800 phys_seg 12 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66614344 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66469296 op 0x1:(WRITE) flags 0x8800 phys_seg 24 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66586472 op 0x1:(WRITE) flags 0x8800 phys_seg 3 prio class 0
(...)
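As a sanity check on those sector numbers (our interpretation): with 512-byte sectors, a 1GB volume spans only about 2 million sectors, so the failing writes land roughly 31GB into the device - offsets that are only valid for the 115GB geometry the guest still sees, and that the 1GB backend LUN rejects:
# echo $((1024 * 1024 * 1024 / 512))             # sectors in a 1GB volume
2097152
# echo $((66663000 * 512 / 1024 / 1024 / 1024))  # GB offset of one failing write
31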
Unfortunately, we do not know the exact test scenario for reproducing it, as we hit the issue in fewer than 2% of our tries, but it looks like a serious security breach.
Additionally, we observed that the Linux kernel does not fully clear the device allocation after a volume detach, so some drive names remain visible in output such as that of the lsblk command. Then, after a new volume attachment, we can see that such names (e.g. sdao, sdap, sdan and so on) are reused by the new drive and wrongly mapped by multipath/iSCSI to another drive, and this is how we hit the issue.
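To illustrate the leftover names (a sketch; the exact names vary from run to run), after a detach the stale sd* nodes can still be listed even though no multipath map references them any more:
# lsblk | grep -E 'sd(an|ao|ap|aq)'                                 # stale nodes still visible after detach
# multipath -ll | grep -E 'sd(an|ao|ap|aq)' || echo 'unreferenced'  # ...but no map claims them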
Our question is: why does the compute node's Linux kernel not remove these stale device allocations, leading to a scenario like this? Maybe that is where a solution lies.
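For completeness, this is the manual cleanup we would expect the detach path to perform (a sketch only: <stale_wwid> and sdX are placeholders for a map and path confirmed to be stale, and deleting the wrong device is destructive):
# multipath -f <stale_wwid>              # flush the unused multipath map
# blockdev --flushbufs /dev/sdX          # drop buffered data for the stale path
# echo 1 > /sys/block/sdX/device/delete  # remove the stale SCSI device node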
Thanks in advance for your help and understanding. In case more details are needed, do not hesitate to contact me. |
|
2023-05-10 14:29:45 |
Jeremy Stanley |
information type |
Private Security |
Public Security |
|
2023-05-10 14:30:44 |
Jeremy Stanley |
bug task added |
|
ossn |
|
2023-05-10 14:31:05 |
Jeremy Stanley |
ossn: importance |
Undecided |
High |
|
2023-05-10 14:31:05 |
Jeremy Stanley |
ossn: status |
New |
In Progress |
|
2023-05-10 14:31:05 |
Jeremy Stanley |
ossn: assignee |
|
Jeremy Stanley (fungi) |
|
2023-05-10 14:35:08 |
OpenStack Infra |
glance-store: status |
New |
In Progress |
|
2023-05-10 14:37:05 |
OpenStack Infra |
cinder: status |
New |
In Progress |
|
2023-05-10 14:38:16 |
OpenStack Infra |
nova/zed: status |
New |
In Progress |
|
2023-05-10 14:38:23 |
Jeremy Stanley |
summary |
Unauthorized volume access through deleted volume attachments (CVE-2023-2088) |
[OSSA-2023-003] Unauthorized volume access through deleted volume attachments (CVE-2023-2088) |
|
2023-05-10 14:38:39 |
OpenStack Infra |
nova/yoga: status |
New |
In Progress |
|
2023-05-10 14:39:19 |
OpenStack Infra |
nova/xena: status |
New |
In Progress |
|
2023-05-10 14:39:49 |
OpenStack Infra |
os-brick: status |
New |
In Progress |
|
2023-05-10 14:40:35 |
OpenStack Infra |
nova: status |
New |
In Progress |
|
2023-05-10 15:06:01 |
OpenStack Infra |
nova/wallaby: status |
New |
In Progress |
|
2023-05-10 15:08:52 |
melanie witt |
nova/antelope: status |
New |
In Progress |
|
2023-05-10 17:24:01 |
OpenStack Infra |
ossa: status |
In Progress |
Fix Released |
|
2023-05-10 17:27:03 |
Jeremy Stanley |
ossn: status |
In Progress |
Fix Released |
|
2023-05-10 18:09:44 |
OpenStack Infra |
glance-store: status |
In Progress |
Fix Released |
|
2023-05-10 20:46:23 |
OpenStack Infra |
nova/zed: status |
In Progress |
Fix Committed |
|
2023-05-10 21:25:54 |
OpenStack Infra |
nova/yoga: status |
In Progress |
Fix Committed |
|
2023-05-10 21:26:00 |
OpenStack Infra |
nova/xena: status |
In Progress |
Fix Committed |
|
2023-05-10 22:29:10 |
OpenStack Infra |
tags |
|
in-stable-yoga |
|
2023-05-10 23:05:40 |
OpenStack Infra |
nova: status |
In Progress |
Fix Released |
|
2023-05-10 23:53:23 |
OpenStack Infra |
tags |
in-stable-yoga |
in-stable-yoga in-stable-zed |
|
2023-05-11 10:12:53 |
OpenStack Infra |
cinder: status |
In Progress |
Fix Released |
|
2023-05-11 10:26:45 |
Maksim Malchuk |
bug task added |
|
kolla-ansible |
|
2023-05-11 10:27:14 |
Maksim Malchuk |
nominated for series |
|
kolla-ansible/zed |
|
2023-05-11 10:27:14 |
Maksim Malchuk |
bug task added |
|
kolla-ansible/zed |
|
2023-05-11 10:27:45 |
Maksim Malchuk |
kolla-ansible: status |
New |
In Progress |
|
2023-05-11 18:00:37 |
OpenStack Infra |
kolla-ansible/zed: status |
New |
Fix Committed |
|
2023-05-15 11:34:46 |
OpenStack Infra |
tags |
in-stable-yoga in-stable-zed |
in-stable-wallaby in-stable-yoga in-stable-zed |
|
2023-05-15 15:12:55 |
OpenStack Infra |
tags |
in-stable-wallaby in-stable-yoga in-stable-zed |
in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed |
|
2023-05-17 08:54:21 |
Christian Rohmann |
bug |
|
|
added subscriber Christian Rohmann |
2023-05-17 11:06:02 |
OpenStack Infra |
nova/antelope: status |
In Progress |
Fix Released |
|
2023-05-17 11:06:14 |
OpenStack Infra |
nova/yoga: status |
Fix Committed |
Fix Released |
|
2023-05-17 11:06:31 |
OpenStack Infra |
kolla-ansible/zed: status |
Fix Committed |
Fix Released |
|
2023-05-17 12:20:14 |
Waldemar Reger |
bug |
|
|
added subscriber Waldemar Reger |
2023-05-18 15:22:55 |
OpenStack Infra |
nova/wallaby: status |
In Progress |
Fix Committed |
|
2023-05-20 00:40:01 |
melanie witt |
nova/zed: status |
Fix Committed |
Fix Released |
|
2023-05-26 14:59:04 |
Felipe Reyes |
bug |
|
|
added subscriber Felipe Reyes |
2023-06-08 18:40:53 |
OpenStack Infra |
tags |
in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed |
in-stable-ussuri in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed |
|
2023-06-08 18:40:58 |
OpenStack Infra |
tags |
in-stable-ussuri in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed |
in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed |
|
2023-06-08 18:41:04 |
OpenStack Infra |
tags |
in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed |
in-stable-train in-stable-ussuri in-stable-victoria in-stable-wallaby in-stable-xena in-stable-yoga in-stable-zed |
|
2023-06-09 04:30:54 |
Avinash Hanwate |
bug |
|
|
removed subscriber Avinash Hanwate