[OSSA-2023-003] Unauthorized volume access through deleted volume attachments (CVE-2023-2088)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Undecided | Unassigned |
OpenStack Compute (nova) | Fix Released | Undecided | Unassigned |
Antelope | Fix Released | Undecided | Unassigned |
Wallaby | Fix Committed | Undecided | Unassigned |
Xena | Fix Committed | Undecided | Unassigned |
Yoga | Fix Released | Undecided | Unassigned |
Zed | Fix Released | Undecided | Unassigned |
OpenStack Security Advisory | Fix Released | High | Jeremy Stanley |
OpenStack Security Notes | Fix Released | High | Jeremy Stanley |
glance_store | Fix Released | Undecided | Unassigned |
kolla-ansible | In Progress | Undecided | Unassigned |
Zed | Fix Released | Undecided | Unassigned |
os-brick | In Progress | Undecided | Unassigned |
Bug Description
Hello OpenStack Security Team,
I'm writing to you because we faced a serious security breach in OpenStack functionality.
In short: we observed that a newly created 1GB Cinder volume was attached to an instance on a compute node, but the instance saw it as a 115GB volume, which in fact was connected to another instance on the same compute node.
[1. Test environment]
Compute node: OpenStack Ussuri configured with a Huawei Dorado storage backend.
Packages:
# dpkg -l | grep libvirt
ii libvirt-clients 6.0.0-0ubuntu8.16 amd64 Programs for the libvirt library
ii libvirt-daemon 6.0.0-0ubuntu8.16 amd64 Virtualization daemon
ii libvirt-
ii libvirt-
ii libvirt-
ii libvirt-
ii libvirt0:amd64 6.0.0-0ubuntu8.16 amd64 library for interfacing with different virtualization systems
ii nova-compute-
ii python3-libvirt 6.1.0-1 amd64 libvirt Python 3 bindings
# dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-
ii ipxe-qemu-
ii libvirt-
ii qemu 1:4.2-3ubuntu6.23 amd64 fast processor emulator, dummy package
ii qemu-block-
ii qemu-kvm 1:4.2-3ubuntu6.23 amd64 QEMU Full virtualization on x86 hardware
ii qemu-system-common 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:4.2-3ubuntu6.23 all QEMU full system emulation (data files)
ii qemu-system-
ii qemu-system-x86 1:4.2-3ubuntu6.23 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:4.2-3ubuntu6.23 amd64 QEMU utilities
# dpkg -l | grep nova
ii nova-common 2:21.2.4-0ubuntu1 all OpenStack Compute - common files
ii nova-compute 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node base
ii nova-compute-kvm 2:21.2.4-0ubuntu1 all OpenStack Compute - compute node (KVM)
ii nova-compute-
ii python3-nova 2:21.2.4-0ubuntu1 all OpenStack Compute Python 3 libraries
ii python3-novaclient 2:17.0.0-0ubuntu1 all client library for OpenStack Compute API - 3.x
# dpkg -l | grep multipath
ii multipath-tools 0.8.3-1ubuntu2 amd64 maintain multipath block device access
# dpkg -l | grep iscsi
ii libiscsi7:amd64 1.18.0-2 amd64 iSCSI client shared library
ii open-iscsi 2.0.874-
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_
DISTRIB_
DISTRIB_
Instance OS: Debian-11-amd64
[2. Test scenario]
We already had an instance with two volumes attached: the first, 10GB, for the root system; the second, 115GB, used as vdb. The compute node recognized them as vda - dm-11 and vdb - dm-9:
# virsh domblklist 90fas439-
Target Source
-------
vda /dev/dm-11
vdb /dev/dm-9
# multipath -ll
(...)
36e00084100ee7e
size=115G features='0' hwhandler='0' wp=rw
`-+- policy=
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:4 sdl 8:176 active ready running
`- 17:0:0:4 sdn 8:208 active ready running
(...)
36e00084100ee7e
size=10G features='0' hwhandler='0' wp=rw
`-+- policy=
|- 14:0:0:3 sdq 65:0 active ready running
|- 15:0:0:3 sdr 65:16 active ready running
|- 16:0:0:3 sdp 8:240 active ready running
`- 17:0:0:3 sds 65:32 active ready running
We then created a new instance with the same guest OS and a 10GB root volume. After successful deployment, we created a new 1GB volume and attached it to the newly created instance.
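For reference, these steps correspond to standard OpenStack client commands along these lines (a minimal sketch; the image, flavor, network, and resource names are illustrative placeholders rather than the exact values from this environment):

# Create a second instance with a 10GB boot-from-volume root disk.
openstack server create --image debian-11-amd64 --flavor m1.small \
  --network private --boot-from-volume 10 test-instance-2

# Create a new 1GB volume and attach it to the new instance.
openstack volume create --size 1 test-volume-1g
openstack server add volume test-instance-2 test-volume-1g

After the attachment completed, we could see the following on the compute node: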
# multipath -ll
(...)
36e00084100ee7e
size=115G features='0' hwhandler='0' wp=rw
`-+- policy=
|- 14:0:0:10 sdao 66:128 failed faulty running
|- 14:0:0:4 sdm 8:192 active ready running
|- 15:0:0:10 sdap 66:144 failed faulty running
|- 15:0:0:4 sdo 8:224 active ready running
|- 16:0:0:10 sdan 66:112 failed faulty running
|- 16:0:0:4 sdl 8:176 active ready running
|- 17:0:0:10 sdaq 66:160 failed faulty running
`- 17:0:0:4 sdn 8:208 active ready running
Inside the instance we could then see a new drive - not 1GB, but 115GB - so the volume was attached incorrectly, and in this way we were able to destroy data on the other instance's volume; one way to confirm the mis-mapping from the host side is sketched below.
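A hedged sketch (lsscsi may need to be installed separately; the device names and LUN numbers follow the multipath output above):

# Map sdX names to SCSI host:channel:target:LUN addresses; the faulty
# paths (sdao, sdap, sdan, sdaq) sit at LUN 10, the new 1GB volume,
# even though multipath merged them into the 115GB map.
lsscsi

# The by-path symlinks expose the iSCSI target and LUN behind each sdX.
ls -l /dev/disk/by-path/ | grep -E 'lun-(4|10)'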
Additionally, we saw many errors like the following in the compute node logs:
# dmesg -T | grep dm-9
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 62918760 op 0x1:(WRITE) flags 0x8800 phys_seg 2 prio class 0
[Fri Jan 27 13:37:42 2023] blk_update_request: critical target error, dev dm-9, sector 33625152 op 0x1:(WRITE) flags 0x8800 phys_seg 6 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66663000 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:46 2023] blk_update_request: critical target error, dev dm-9, sector 66598120 op 0x1:(WRITE) flags 0x8800 phys_seg 5 prio class 0
[Fri Jan 27 13:37:51 2023] blk_update_request: critical target error, dev dm-9, sector 66638680 op 0x1:(WRITE) flags 0x8800 phys_seg 12 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66614344 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66469296 op 0x1:(WRITE) flags 0x8800 phys_seg 24 prio class 0
[Fri Jan 27 13:37:56 2023] blk_update_request: critical target error, dev dm-9, sector 66586472 op 0x1:(WRITE) flags 0x8800 phys_seg 3 prio class 0
(...)
Unfortunately we do not have a reliable test scenario, as we hit the issue in fewer than 2% of our tries, but it looks like a serious security breach.
Additionally, we observed that the Linux kernel does not fully clear device allocations after a volume detach, so some stale device names remain visible in output such as lsblk. After a new volume attachment, those names (e.g. sdao, sdap, sdan and so on) are reused for the new drive and wrongly mapped by multipath/iSCSI to another drive, and this is how we hit the issue. A quick way to spot such leftovers is sketched below.
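A minimal sketch (exact symptoms can differ between kernel and multipath versions):

# Stale devices from a previous detach typically still appear here,
# often with the size of the old LUN and no mountpoint or holder.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Cross-check against the active iSCSI sessions and their attached LUNs.
iscsiadm -m session -P 3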
Our question is: why does the Linux kernel on the compute node not remove these device allocations, thereby enabling a scenario like this? Perhaps cleaning them up reliably would be a solution here.
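For illustration, a common manual cleanup of such stale SCSI devices looks roughly like this (a hedged sketch using the device names from the output above; only run this against paths confirmed to be stale):

# Ask the kernel to drop each stale SCSI device so its sdX name is
# released instead of being silently reused by the next attachment;
# multipathd then drops the failed path from the 115GB map.
echo 1 > /sys/block/sdao/device/delete   # repeat for sdap, sdan, sdaq

# Verify the map no longer contains the faulty paths.
multipath -ll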
Thanks in advance for your help and understanding. If more details are needed, do not hesitate to contact me.
CVE References
CVE-2023-2088
description: updated
summary: changed from "[ussuri] Wrong volume attachment - volumes overlapping when connected through iscsi on host" to "Unauthorized volume access through deleted volume attachments (CVE-2023-2088)"
Changed in ossa:
  status: Incomplete → In Progress
  importance: Undecided → High
  assignee: nobody → Jeremy Stanley (fungi)
description: updated
information type: Private Security → Public Security
Changed in ossn:
  assignee: nobody → Jeremy Stanley (fungi)
  importance: Undecided → High
  status: New → In Progress
Changed in glance-store:
  status: New → In Progress
Changed in cinder:
  status: New → In Progress
summary: changed from "Unauthorized volume access through deleted volume attachments (CVE-2023-2088)" to "[OSSA-2023-003] Unauthorized volume access through deleted volume attachments (CVE-2023-2088)"
Changed in os-brick:
  status: New → In Progress
Changed in nova:
  status: New → In Progress
Changed in ossa:
  status: In Progress → Fix Released
Changed in ossn:
  status: In Progress → Fix Released
Changed in glance-store:
  status: In Progress → Fix Released
tags: added: in-stable-yoga
Changed in nova:
  status: In Progress → Fix Released
tags: added: in-stable-zed
Changed in cinder:
  status: In Progress → Fix Released
Changed in kolla-ansible:
  status: New → In Progress
tags: added: in-stable-wallaby
tags: added: in-stable-xena
Since this report concerns a possible security risk, an incomplete
security advisory task has been added while the core security
reviewers for the affected project or projects confirm the bug and
discuss the scope of any vulnerability along with potential
solutions.
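For operators, the hardening published alongside OSSA-2023-003 centers on service tokens: Nova identifies itself to Cinder with a service user token so that Cinder can refuse attachment deletions that do not originate from Nova. A minimal configuration sketch, assuming a standard Keystone setup (all credential values are illustrative placeholders, and option defaults vary between releases):

# nova.conf (API and compute nodes)
[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://keystone.example.com:5000/v3
username = nova
password = SERVICE_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default

# cinder.conf (cinder-api nodes)
[keystone_authtoken]
service_token_roles = service
service_token_roles_required = true

With this in place, a delete-attachment request carrying only an end user's token can be distinguished from one made by Nova itself, which is what closes the window this report describes.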