Stop deployment erases partition sizes
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Confirmed | Medium | Fuel Python (Deprecated) |
8.0.x | Confirmed | Medium | Fuel Python (Deprecated) |
Mitaka | Confirmed | Medium | Fuel Python (Deprecated) |
Bug Description
Scenario:
1. Create a new environment
2. Add nodes (in my case 3 controller + Ceph, 2 computes)
3. Change the default partitioning scheme for the Ceph partitions (see pic 1)
4. Start deployment
5. Stop the deployment while the controllers are being deployed (see pic 2)
6. Start the cluster deployment again
7. Check the disk partitions (see pic 3)
Expected result:
All partitions have the same sizes as before the stop action.
Actual result:
Partition sizes are reset to the defaults.
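To confirm the reset without the UI, one can snapshot the node's disk layout through the Nailgun REST API before deployment and diff it after the stop/redeploy cycle. A minimal sketch, assuming the master node's Nailgun is reachable at 10.20.0.2:8000; the node id and the auth token are placeholders (the token is obtained from Keystone on 8.0):

```python
import json
import requests

MASTER = "http://10.20.0.2:8000"  # assumption: default Nailgun endpoint
TOKEN = "<keystone-token>"        # placeholder: obtain via Keystone first
NODE_ID = 1                       # placeholder node id
HEADERS = {"X-Auth-Token": TOKEN}

def get_disks(node_id):
    """Fetch the current disk/volume layout for a node."""
    resp = requests.get("%s/api/nodes/%s/disks" % (MASTER, node_id),
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

# Step 3: snapshot the customized layout right after editing it.
before = get_disks(NODE_ID)
with open("disks-before-stop.json", "w") as f:
    json.dump(before, f, indent=2, sort_keys=True)

# ... run steps 4-6 (deploy, stop, redeploy) ...

# Step 7: compare; on an affected build the Ceph volume sizes differ.
after = get_disks(NODE_ID)
if before != after:
    print("disk layout changed across stop/redeploy")
```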
cat /etc/fuel/
VERSION:
  feature_groups:
    - mirantis
  production: "docker"
  release: "8.0"
  api: "1.0"
  build_number: "529"
  build_id: "529"
  fuel-nailgun_sha: "baec8643ca624e
  python-fuelclient_sha:
  fuel-agent_sha: "658be72c4b42d3
  fuel-nailgun-agent_sha:
  astute_sha: "b81577a5b7857c
  fuel-library_sha: "e2d79330d5d708
  fuel-ostf_sha: "3bc76a63a9e7d1
  fuel-mirror_sha: "fb45b80d7bee58
  fuelmenu_sha: "e071216cb214e3
  shotgun_sha: "63645dea384a37
  network-checker_sha:
  fuel-upgrade_sha: "616a7490ec7199
  fuelmain_sha: "a365f05b903368
tags: added: area-python
Good old volume manager ignoring changes to the volumes metadata, it seems. Linking the bug to the BP: https://blueprints.launchpad.net/fuel/+spec/volume-manager-refactoring
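For context, the failure mode suggested here would look roughly like the pattern below: on the second deployment the volume manager regenerates volume metadata from release defaults instead of reusing the layout stored for the node. This is an illustrative sketch of the suspected logic, not the actual nailgun code; `gen_default_volumes` and `node.saved_volumes` are hypothetical names:

```python
class VolumeManager(object):
    """Illustrative sketch only; not the real nailgun VolumeManager."""

    def __init__(self, node):
        self.node = node

    def volumes(self):
        # Suspected buggy behavior: after a stopped deployment the node is
        # reset and the manager falls back to defaults unconditionally,
        # discarding the operator's customized partition sizes.
        return self.gen_default_volumes()

    def volumes_fixed(self):
        # Expected behavior: reuse the layout the operator saved, and only
        # generate defaults when nothing was customized.
        if self.node.saved_volumes:  # hypothetical attribute
            return self.node.saved_volumes
        return self.gen_default_volumes()

    def gen_default_volumes(self):
        # hypothetical: rebuild the layout from release metadata
        raise NotImplementedError
```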