fs_setup always creates new filesystem with partition 'auto'
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
cloud-init | Fix Released | Medium | Jonathan Ballet |
cloud-init (Ubuntu) | Fix Released | Medium | Unassigned |
Trusty | Confirmed | Low | Unassigned |
Xenial | Fix Released | Medium | Unassigned |
Yakkety | Fix Released | Medium | Unassigned |
Bug Description
=== Begin SRU Template ===
[Impact]
On first boot of an instance, cloud-init may create a new filesystem
when it should have re-used an existing filesystem.
[Test Case]
The test case launches an instance, assuming it has an old version
of cloud-init inside. The user-data will not be valid for the default
configuration of disks. (Most OpenStack instances would have a
single 'ephemeral' disk in addition to root, with an ext4 filesystem
labelled 'ephemeral0'.) We will then upgrade the instance to -proposed
and create a filesystem on /dev/vdb1 that *should* match.
1. launch an instance in OpenStack with the following user-data:
$ cat > my-userdata.txt <<EOF
#cloud-config
fs_setup:
 - label: mydata
   device: /dev/vdb
   filesystem: ext4
   partition: auto
mounts:
 - ["/dev/vdb1", "/mnt"]
EOF
$ openstack server create --user-data my-userdata.txt ...
2. ssh in, prepare /dev/vdb to have a partition, and upgrade
# run attached 'disk-setup'. This will partition the disk
# and wipe any filesystem data off, basically making it a partitioned
# but otherwise empty disk.
$ sudo ./disk-setup
umount: /mnt: not mounted
wiping /dev/vdb
partitioning /dev/vdb
/dev/vdb: PTUUID=
/dev/vdb1: PARTUUID=
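The attached 'disk-setup' script is not reproduced here; the following is a
minimal sketch of what it does, based on the output above (the exact wipe and
partition commands are assumptions):

  #!/bin/sh
  # hypothetical reconstruction of the attached 'disk-setup' helper
  set -e
  DEV=/dev/vdb
  umount /mnt || true            # "umount: /mnt: not mounted" is expected on a clean run
  echo "wiping $DEV"
  wipefs --all "$DEV"            # drop any old partition table and filesystem signatures
  echo "partitioning $DEV"
  echo ',,L' | sfdisk "$DEV"     # one Linux partition spanning the whole disk
  wipefs --all "${DEV}1" || true # ensure the new partition carries no filesystem signature
  blkid "$DEV" "${DEV}1"         # shows only PTUUID / PARTUUID, no filesystem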
3. enable proposed, upgrade
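A sketch of the usual commands for this step, assuming a xenial instance
(substitute the release under test):

  $ echo 'deb http://archive.ubuntu.com/ubuntu xenial-proposed main' | \
      sudo tee /etc/apt/sources.list.d/proposed.list
  $ sudo apt-get update
  $ sudo apt-get install cloud-init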
4. clean out state and reboot
sudo rm -Rf /var/lib/cloud /var/log/cloud-init*
sudo sed -i '/comment=cloudconfig/d' /etc/fstab
sudo reboot
5. ssh back in and look around.
# cloud-init should have created a filesystem on /dev/vdb1
# and mounted it at /mnt.
$ grep /mnt /proc/mounts
/dev/vdb1 /mnt ext4 rw,relatime,
# and have a filesystem 'mydata'
$ sudo blkid /dev/vdb1
/dev/vdb1: LABEL="mydata" UUID="79090091-
# put a file on there, then clean up and reboot (repeat step 4).
# we expect that this time cloud-init will just re-use
# the existing filesystem rather than making another.
$ echo hi mom | sudo tee -a /mnt/my-
6. ssh back in and expect the file written above to still be there.
$ cat /mnt/my-
hi mom
[Regression Potential]
Potentially this could re-use a partition that the user wanted reformatted.
[Other Info]
Upstream commit:
https:/
=== End SRU Template ===
# cloud-init -v
cloud-init 0.7.5
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
AMI: ami-1721ff77 - Ubuntu 14.04 20160314
fs_setup fails to detect an existing filesystem and creates a new one when using the following configuration:
fs_setup:
  - label: None
    filesystem: ext4
    device: /dev/xvdf
    partition: auto
The error seems to be here - https:/
This line sets definition[
I believe " definition[
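A simple way to observe the misbehaviour with a configuration like the one
above (a sketch; it assumes /dev/xvdf already carries an ext4 filesystem):

  $ sudo blkid -s UUID -o value /dev/xvdf   # note the UUID of the existing filesystem
  $ sudo rm -rf /var/lib/cloud              # make cloud-init re-run disk setup on next boot
  $ sudo reboot
  # after reboot, with the buggy version the UUID has changed,
  # showing that a new filesystem was created:
  $ sudo blkid -s UUID -o value /dev/xvdf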
Changed in cloud-init:
  importance: Undecided → Medium
  status: New → Confirmed
  assignee: nobody → Jonathan Ballet (multani)
Changed in cloud-init:
  status: Confirmed → Fix Committed
Changed in cloud-init (Ubuntu):
  status: New → Confirmed
  importance: Undecided → Medium
  status: Confirmed → Fix Released
Changed in cloud-init (Ubuntu Trusty):
  status: New → Confirmed
Changed in cloud-init (Ubuntu Xenial):
  status: New → Confirmed
Changed in cloud-init (Ubuntu Yakkety):
  status: New → Confirmed
Changed in cloud-init (Ubuntu Trusty):
  importance: Undecided → Low
Changed in cloud-init (Ubuntu Xenial):
  importance: Undecided → Medium
Changed in cloud-init (Ubuntu Yakkety):
  importance: Undecided → Medium
description: updated
I'm facing the same problem here.
I'm trying to get this configuration into cloud-init:
disk_setup:
  /dev/xvdb:
    overwrite: false
    layout: true
fs_setup:
  - device: /dev/xvdb1
    filesystem: ext4
    label: test
    overwrite: false
    partition: auto
For context, this is applied to an EBS disk on Amazon, and my hope was to be able to keep the content of the partition across different machines.
I've tested @leelynnef's patch and it fixes the problem for me.
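To confirm the content actually survives a move between machines, a quick
check (a sketch; it assumes the volume ends up mounted at /mnt on both
instances):

  # on the first machine, leave a marker on the volume
  $ echo marker | sudo tee /mnt/marker
  # after attaching the volume to a second machine and letting the
  # patched cloud-init run, the marker should still be there:
  $ cat /mnt/marker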