ScaleIO volumes contain previous data
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Cinder | Fix Released | High | tssgery | |
| OpenStack Security Advisory | Won't Fix | Undecided | Unassigned | |
| OpenStack Security Notes | Fix Released | Undecided | Unassigned | |
Bug Description
The ScaleIO driver does not clear volumes after deletion when the following configuration is set in cinder.conf:
[DEFAULT]
(...)
# Method used to wipe old volumes (string value)
# Allowed values: none, zero, shred
volume_clear=zero
# Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 => all
# (integer value)
# Maximum value: 1024
volume_clear_size=8
From asking on IRC, it appears this feature is not implemented in the ScaleIO driver.
Would it be possible to implement it?
We have ScaleIO's zero-padding feature disabled because of performance concerns, and as a result newly created volumes arrive with pre-existing filesystems on them.
With this feature, we could quickly wipe the beginning of each volume so the stale filesystem would be gone.
Changed in ossa: status: New → Incomplete
Changed in cinder: status: New → Triaged
Changed in cinder: status: Triaged → In Progress
Changed in cinder: assignee: nobody → tssgery (eric-aceshome)
Changed in ossn: status: New → Fix Released
The volume_clear option is not appropriate for ScaleIO; it only applies to drivers where the data path is managed by the cinder volume service (LVM, the block device driver, etc.).
If ScaleIO is provisioning volumes that contain pre-existing data, that is a serious bug in the ScaleIO backend or driver.
It should not expose data from previously deleted volumes, and whatever fix is needed for that behavior should not be optional.