image_utils: code hardening around decompression
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Triaged | Low | Unassigned |
Bug Description
Cinder introduced a feature in Train that can upload a volume as an image with a 'compressed' container_format [0]. When a volume is created from such an image, Cinder decompresses the image using gzip (or qzip, if a hardware accelerator is present) without any process limits ([1] calls [2]).
This is a hardening opportunity and not a security issue because:
- this feature is governed by a config option and is off by default [3]
- the tmp directory where the decompression is happening is configurable [4] and can be isolated from other tmp space
- gzip is good about cleaning up after itself, so even if it runs out of space during a decompression, it won't leave the tmp dir full and block concurrent image conversions
We should at least put some process limits on this to protect against a gzip bomb. A further hardening would be to add a get_size() method to the Accelerator class so we could compare the decompressed size to the available space before doing the decompression. And there may be some other improvements you can think of.
[0] https:/
[1] https:/
[2] https:/
[3] https:/
[4] https:/