Error in /etc/lvm/lvm.conf resulted in reinitialization of already-prepared LVM PVs
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Charm Cinder LVM | Triaged | High | Unassigned |
Charm Helpers | New | Undecided | Unassigned |
Bug Description
During a recent maintenance window, an attempted workaround to https:/ involved hand-editing a line in /etc/lvm/lvm.conf. Unfortunately, at one point the file was rewritten incorrectly, setting that line to instead read "wipe_signature
This caused certain charmhelpers functions to return incorrect values about whether the disks were already initialized for LVM, resulting in the charm re-initializing them.
I'm attaching an excerpt from the Juju unit logs for cinder-lvm which shows the problem; in summary, what happened was:
* a reactive/ handler in the cinder-lvm charm fired and called configure_lvm_storage() for the configured block devices
* configure_lvm_storage() asked charmhelpers whether each device was already an LVM PV; with the broken lvm.conf, is_lvm_physical_volume() falsely answered no
* In configure_lvm_storage(), the devices therefore looked unused, and the has_partition_table() guard also returned False for them
* prepare_volume() was then called on each device, re-initializing the already-prepared PVs (sketched below)
See the attachment for a detailed traceback.
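For context, here is a sketch of that decision path, paraphrased from the charm and charmhelpers code. The helper bodies are simplified and may differ between charm versions; only is_lvm_physical_volume() and create_lvm_physical_volume() are the real charmhelpers names:

    import subprocess

    from charmhelpers.contrib.storage.linux.lvm import (
        create_lvm_physical_volume,
        is_lvm_physical_volume,
    )

    def has_partition_table(block_device):
        # LVM PVs are normally created on the bare device, so fdisk
        # reports no valid partition table and this returns False.
        out = subprocess.check_output(
            ['fdisk', '-l', block_device],
            stderr=subprocess.STDOUT).decode()
        return "doesn't contain a valid partition table" not in out

    def prepare_volume(block_device):
        # Destructive: creates a fresh PV over whatever is on the device.
        create_lvm_physical_volume(block_device)

    def configure_lvm_storage(block_devices, overwrite=False):
        for dev in block_devices:
            if not is_lvm_physical_volume(dev):
                # With a broken /etc/lvm/lvm.conf, pvdisplay fails, so an
                # already-prepared PV lands in this "unused device" branch.
                if overwrite or not has_partition_table(dev):
                    # PVs carry no partition table, so this guard does not
                    # save them: the device gets re-initialized.
                    prepare_volume(dev)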
Re: the has_partition_table() check: an LVM PV is typically created on the bare device and carries no partition table, so that check returned False and provided no protection here.
This leaves the is_lvm_physical_volume() check in charmhelpers, which boils down to:
    # From charmhelpers.contrib.storage.linux.lvm (uses subprocess's
    # check_output and CalledProcessError):
    def is_lvm_physical_volume(block_device):
        try:
            check_output(['pvdisplay', block_device])
            return True
        except CalledProcessError:
            return False
Basically, anything that causes the pvdisplay command to fail - like, perhaps, a misconfigured /etc/lvm/lvm.conf - makes this check falsely report that a device is *not* an LVM physical volume, resulting in it getting re-initialized.
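A minimal sketch of a more defensive check, assuming only that every LVM command (not just pvdisplay) fails when lvm.conf is unparseable: first run a cheap LVM command with no device argument, and raise rather than return False when the tooling itself is broken. The function name is_lvm_physical_volume_strict is illustrative, not an actual charmhelpers API:

    from subprocess import CalledProcessError, check_output

    def is_lvm_physical_volume_strict(block_device):
        # Illustrative sketch: verify the LVM tooling/config is usable
        # before trusting a pvdisplay failure to mean "not a PV".
        try:
            check_output(['pvs'])
        except CalledProcessError as e:
            # lvm.conf is broken (or LVM is otherwise unusable): refuse
            # to answer rather than falsely reporting "not a PV".
            raise RuntimeError('LVM tooling unusable; refusing to guess '
                               'whether %s is a PV' % block_device) from e
        try:
            check_output(['pvdisplay', block_device])
            return True
        except CalledProcessError:
            return False

With a guard like this, a broken configuration would surface as a hook error instead of silently wiping data.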
In summary: I believe this is a critical bug in charmhelpers which also critically impacts the cinder-lvm charm, with the risk of blowing away data on configured LVM devices when /etc/lvm/lvm.conf is misconfigured.
Adding ~field-high. This feels pretty critical: while I know it was triggered by an unmanaged action, the fact that an unmanaged edit like that - or really, any bugged edit to /etc/lvm/lvm.conf - can wipe an in-use disk as a side effect of the code path described above seems to warrant the attention.