[OSSA 2016-012] qemu-img calls need to be restricted by ulimit (CVE-2015-5162)
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Cinder | Fix Released | Medium | Sean McGinnis | |
| Mitaka | Fix Released | Undecided | Hemanth Makkapati | |
| Newton | Fix Released | Medium | Sean McGinnis | |
| Glance | Fix Released | High | Hemanth Makkapati | |
| Liberty | Fix Released | Undecided | Unassigned | |
| Mitaka | Fix Released | High | Hemanth Makkapati | |
| Newton | Fix Released | Critical | Hemanth Makkapati | |
| OpenStack Compute (nova) | Fix Released | Medium | Daniel Berrange | |
| OpenStack Security Advisory | Fix Released | Medium | Jeremy Stanley | |
| Ubuntu Cloud Archive | Fix Released | Medium | Unassigned | |
| Liberty | Fix Committed | Medium | Unassigned | |
| Mitaka | Fix Committed | Medium | Unassigned | |
| Newton | Fix Released | Medium | Unassigned | |
| python-oslo.concurrency (Ubuntu) | Fix Released | Medium | Unassigned | |
| Wily | Fix Committed | Medium | Unassigned | |
| Xenial | Fix Released | Medium | Corey Bryant | |
| Yakkety | Fix Released | Medium | Unassigned | |
Bug Description
Reported via private E-mail from Richard W.M. Jones.
It turns out the qemu image parser is not hardened against malicious input and can be abused to allocate an arbitrary amount of memory and/or dump a lot of information when used with "--output=json".
The solution seems to be to limit qemu-img resource usage with ulimit (a minimal sketch follows the examples below).
Example of abuse:
-- afl1.img --
$ /usr/bin/time qemu-img info afl1.img
image: afl1.img
[...]
0.13user 0.19system 0:00.36elapsed 92%CPU (0avgtext+0avgdata 642416maxresident)k
0inputs+0outputs (0major+
The original image is 516 bytes, but it causes qemu-img to allocate 640 MB.
-- afl2.img --
$ qemu-img info --output=json afl2.img | wc -l
589843
This is a 200K image which causes qemu-img info to output half a
million lines of JSON (14 MB of JSON).
Glance runs the --output=json variant of the command.
-- afl3.img --
$ /usr/bin/time qemu-img info afl3.img
image: afl3.img
[...]
0.09user 0.35system 0:00.47elapsed 94%CPU (0avgtext+0avgdata 1262388maxresid
0inputs+0outputs (0major+
qemu-img allocates 1.3 GB (actually, a bit more if you play with
ulimit -v). It appears that you could change it to allocate
arbitrarily large amounts of RAM.
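For illustration of the proposed mitigation, here is a minimal sketch assuming a 30-second CPU and 1 GiB address-space budget is acceptable for image inspection (values chosen for the example, not taken from any fix):
$ ( ulimit -t 30 -v 1048576; qemu-img info --output=json afl1.img )
The subshell keeps the limits from leaking into the calling shell; ulimit -t caps CPU seconds and ulimit -v caps virtual memory in KiB (1048576 KiB = 1 GiB), so a hostile image like the ones above is killed or fails its allocation instead of exhausting the host.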
Changed in ossa: | |
importance: | Undecided → Medium |
status: | Incomplete → Confirmed |
Changed in nova: | |
status: | New → Confirmed |
Changed in glance: | |
milestone: | none → liberty-1 |
assignee: | nobody → nikhil komawar (nikhil-komawar) |
Changed in nova: | |
assignee: | nobody → Daniel Berrange (berrange) |
Changed in glance: | |
milestone: | liberty-1 → liberty-2 |
description: | updated |
information type: | Private Security → Public Security |
summary: |
- qemu-img calls need to be restricted by ulimit + qemu-img calls need to be restricted by ulimit (CVE-2015-5162) |
Changed in glance: | |
milestone: | liberty-2 → liberty-3 |
Changed in glance: | |
status: | Triaged → In Progress |
Changed in glance: | |
milestone: | liberty-3 → liberty-rc1 |
tags: | added: liberty-rc-potential |
Changed in glance: | |
milestone: | liberty-rc1 → ongoing |
Changed in nova: | |
assignee: | Tristan Cacqueray (tristan-cacqueray) → Dan Smith (danms) |
Changed in nova: | |
status: | Fix Committed → Fix Released |
Changed in glance: | |
assignee: | nikhil komawar (nikhil-komawar) → nobody |
Changed in nova: | |
assignee: | Dan Smith (danms) → nobody |
importance: | Undecided → Medium |
Changed in python-oslo.concurrency (Ubuntu Yakkety): | |
status: | New → Fix Released |
Changed in python-oslo.concurrency (Ubuntu Xenial): | |
status: | New → Triaged |
importance: | Undecided → Medium |
Changed in python-oslo.concurrency (Ubuntu Yakkety): | |
importance: | Undecided → Medium |
Changed in python-oslo.concurrency (Ubuntu Xenial): | |
assignee: | nobody → Corey Bryant (corey.bryant) |
Changed in python-oslo.concurrency (Ubuntu Wily): | |
importance: | Undecided → Medium |
affects: | cinder → ubuntu-translations |
no longer affects: | ubuntu-translations |
affects: | glance → ubuntu-translations |
Changed in ubuntu-translations: | |
milestone: | ongoing → none |
no longer affects: | ubuntu-translations |
Changed in cloud-archive: | |
status: | New → Fix Released |
importance: | Undecided → Medium |
Changed in ossa: | |
assignee: | nobody → Jeremy Stanley (fungi) |
status: | Incomplete → In Progress |
Changed in cinder: | |
importance: | Undecided → Medium |
assignee: | nobody → Sean McGinnis (sean-mcginnis) |
Changed in glance: | |
assignee: | nobody → Hemanth Makkapati (hemanth-makkapati) |
importance: | Undecided → High |
status: | New → In Progress |
tags: | added: newton-rc-potential |
Changed in glance: | |
status: | Fix Released → Fix Committed |
summary: |
- qemu-img calls need to be restricted by ulimit (CVE-2015-5162) + [OSSA 2016-012] qemu-img calls need to be restricted by ulimit (CVE-2015-5162) |
Changed in ossa: | |
status: | In Progress → Fix Released |
FYI, I tested a recent-ish git master to see if I could trigger it and was successful.
What I did was:
$ cd /usr/bin
$ mv qemu-img qemu-img.real
$ cat > qemu-img <<'EOF'
#!/bin/sh
echo "$@" >> /tmp/usage.log
/usr/bin/time /usr/bin/qemu-img.real "$@" 2>> /tmp/usage.log
echo >> /tmp/usage.log
EOF
$ chmod +x qemu-img
Then
$ glance image-create --name afl3 --disk-format vmdk --container-format bare --file /home/berrange/afl3.img --is-public true
Glance did not appear to run qemu-img info at this stage, but when I then boot the image with Nova:
$ nova boot --image afl3 --flavor m1.tiny afl3
Nova correctly refuses to boot the guest, because the disk image is larger
than the maximum it permits:
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     return f(*args, **kwargs)
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]   File "/home/berrange/src/cloud/nova/nova/virt/libvirt/imagebackend.py", line 221, in fetch_func_sync
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     fetch_func(target=target, *args, **kwargs)
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]   File "/home/berrange/src/cloud/nova/nova/virt/libvirt/utils.py", line 501, in fetch_image
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     max_size=max_size)
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]   File "/home/berrange/src/cloud/nova/nova/virt/images.py", line 119, in fetch_to_raw
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     raise exception.FlavorDiskTooSmall()
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b] FlavorDiskTooSmall: Flavor's disk is too small for requested image.
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]
However, in checking that maximum limit, it has indeed run qemu-img info and suffered the memory usage DoS.
$ cat /tmp/usage.log
info /home/berrange/src/cloud/data/nova/instances/_base/30854c86815b92c21c8af45cf8ff5757c04046aa.part
0.08user 0.94system 0:01.24elapsed 82%CPU (0avgtext+0avgdata 1260680maxresident)k
0inputs+0outputs (0major+311876minor)pagefaults 0swaps
So someone needs to see just what code paths in glance might trigger the qemu-img info call. I also wonder if there is any risk to cinder, because it too has various invocations of qemu-img.
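As a rough illustration of the restriction the bug title asks for, a wrapper of the same interposition style used above could apply ulimit before exec'ing the real binary. The wrapper name, install path and the 30 s / 1 GiB limits below are assumptions for the sketch, not the mechanism or values adopted by the eventual fixes:
$ cat > /usr/local/bin/qemu-img-limited <<'EOF'
#!/bin/sh
# Hypothetical wrapper (illustration only): cap CPU time in seconds and
# address space in KiB, then hand over to the real qemu-img so a
# malicious image is killed rather than exhausting the host.
ulimit -t 30
ulimit -v 1048576
exec /usr/bin/qemu-img "$@"
EOF
$ chmod +x /usr/local/bin/qemu-img-limited
Services would then invoke qemu-img-limited (or apply equivalent limits directly around their own qemu-img calls) for info, convert and similar operations.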