FYI, I tested a recent-ish git master to see if I could trigger it, and was successful.
What I did was:
$ cd /usr/bin
$ mv qemu-img qemu-img.real
$ cat > qemu-img <<'EOF'
#!/bin/sh
echo "$@" >> /tmp/usage.log
/usr/bin/time /usr/bin/qemu-img.real "$@" 2>> /tmp/usage.log
echo >> /tmp/usage.log
EOF
$ chmod +x qemu-img
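As an optional sanity check before involving Glance or Nova at all, any qemu-img invocation should now show up in the log together with /usr/bin/time's resource stats, e.g.:

$ qemu-img --version >/dev/null
$ cat /tmp/usage.log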
Then
$ glance image-create --name afl3 --disk-format vmdk --container-format bare --file /home/berrange/afl3.img --is-public true
Glance did not appear to run qemu-img info at this stage, but when I then boot the image with Nova:
$ nova boot --image afl3 --flavor m1.tiny afl3
Nova correctly refuses to boot the guest, because the disk image is larger than the maximum it permits:
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     return f(*args, **kwargs)
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]   File "/home/berrange/src/cloud/nova/nova/virt/libvirt/imagebackend.py", line 221, in fetch_func_sync
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     fetch_func(target=target, *args, **kwargs)
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]   File "/home/berrange/src/cloud/nova/nova/virt/libvirt/utils.py", line 501, in fetch_image
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     max_size=max_size)
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]   File "/home/berrange/src/cloud/nova/nova/virt/images.py", line 119, in fetch_to_raw
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]     raise exception.FlavorDiskTooSmall()
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b] FlavorDiskTooSmall: Flavor's disk is too small for requested image.
2015-04-27 14:39:00.381 TRACE nova.compute.manager [instance: aab2db9f-cbcc-4858-98e8-95e82b1d3b3b]
However, in checking that maximum limit, it has indeed run qemu-img info and suffered the memory usage DoS:
$ cat /tmp/usage.log
info /home/berrange/src/cloud/data/nova/instances/_base/30854c86815b92c21c8af45cf8ff5757c04046aa.part
0.08user 0.94system 0:01.24elapsed 82%CPU (0avgtext+0avgdata 1260680maxresident)k
0inputs+0outputs (0major+311876minor)pagefaults 0swaps
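That is ~1.2 GB resident for a single qemu-img info call on a small malicious VMDK. As an untested stop-gap sketch, the process's address space could be capped before invoking it, so a malicious image fails fast with an allocation error instead of ballooning; the 1 GiB value here (ulimit -v takes KiB) is purely illustrative:

$ (ulimit -v 1048576; /usr/bin/qemu-img.real info /home/berrange/afl3.img)

A real fix would presumably apply such a limit from the services themselves whenever they spawn qemu-img against untrusted images.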
So someone needs to see just what code paths in Glance might trigger the qemu-img info call; a grep like the one below should enumerate the call sites. I also wonder if there is any risk to Cinder, because it too has various invocations of qemu-img.
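As a starting point for that audit (the paths assume checkouts alongside the nova tree seen in the trace above):

$ git -C /home/berrange/src/cloud/glance grep -n "qemu-img"
$ git -C /home/berrange/src/cloud/cinder grep -n "qemu-img"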