block storage is not supported for container charms

Bug #1931758 reported by Tom Haddon
This bug affects 1 person
Affects: Canonical Juju
Status: Triaged
Importance: High
Assigned to: Unassigned

Bug Description

I'm working on a concourse-worker charm (https://github.com/mthaddon/concourse-worker-operator) and it's currently failing because it tries to create a btrfs filesystem for its "working directory". I tried to switch to a "block" (from "filesystem") storage device, which would then allow Concourse to create what it needed, but when deploying the charm I got:

ERROR block storage "workdir" is not supported for container charms
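
For context, the storage declaration in question looks roughly like this in the charm's metadata.yaml (a sketch, not the exact file; see the repository above for the real declaration):

storage:
  workdir:
    type: block   # was "filesystem"; switching to "block" triggers the error above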

For reference, the error I get when the storage device is declared as "filesystem" is as follows (not directly relevant to this bug, but included for completeness):

2021-06-11T13:46:19.617Z [concourse-worker] {"timestamp":"2021-06-11T13:46:19.616881977Z","level":"error","source":"baggageclaim","message":"baggageclaim.fs.run-command.failed","data":{"args":["bash","-e","-x","-c","\n\t\tif [ ! -e $IMAGE_PATH ] || [ \"$(stat --printf=\"%s\" $IMAGE_PATH)\" != \"$SIZE_IN_BYTES\" ]; then\n\t\t\ttouch $IMAGE_PATH\n\t\t\ttruncate -s ${SIZE_IN_BYTES} $IMAGE_PATH\n\t\tfi\n\n\t\tlo=\"$(losetup -j $IMAGE_PATH | cut -d':' -f1)\"\n\t\tif [ -z \"$lo\" ]; then\n\t\t\tlo=\"$(losetup -f --show $IMAGE_PATH)\"\n\t\tfi\n\n\t\tif ! file $IMAGE_PATH | grep BTRFS; then\n\t\t\tmkfs.btrfs --nodiscard $IMAGE_PATH\n\t\tfi\n\n\t\tmkdir -p $MOUNT_PATH\n\n\t\tif ! mountpoint -q $MOUNT_PATH; then\n\t\t\tmount -t btrfs -o discard $lo $MOUNT_PATH\n\t\tfi\n\t"],"command":"/bin/bash","env":["PATH=/usr/local/concourse/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","MOUNT_PATH=/opt/concourse/worker/volumes","IMAGE_PATH=/opt/concourse/worker/volumes.img","SIZE_IN_BYTES=230504038400"],"error":"exit status 1","session":"3.1","stderr":"+ '[' '!' -e /opt/concourse/worker/volumes.img ']'\n+ touch /opt/concourse/worker/volumes.img\n+ truncate -s 230504038400 /opt/concourse/worker/volumes.img\n++ losetup -j /opt/concourse/worker/volumes.img\n++ cut -d: -f1\n+ lo=\n+ '[' -z '' ']'\n++ losetup -f --show /opt/concourse/worker/volumes.img\nlosetup: cannot find an unused loop device\n+ lo=\n","stdout":""}}
2021-06-11T13:46:19.617Z [concourse-worker] {"timestamp":"2021-06-11T13:46:19.617026127Z","level":"error","source":"baggageclaim","message":"baggageclaim.failed-to-set-up-driver","data":{"error":"failed to create btrfs filesystem: exit status 1"}}
2021-06-11T13:46:19.617Z [concourse-worker] error: failed to create btrfs filesystem: exit status 1

Tom Haddon (mthaddon)
tags: added: sidecar-charm
Ian Booth (wallyworld) wrote :

What's the juju deploy command you are trying to use? Storage in juju is premised on the fact that the charm declares the type of storage needed, and at deploy time the --storage argument is used to specify how that storage is to be provisioned. The provisioning implementation is typically provided by the underlying cloud - for k8s that would be a storage class or emptydir; for AWS, an EBS volume etc. Juju also supports the "loop" storage provisioner, which uses a combination of fallocate and losetup to create a loop device on the node hosting the charm.

So idiomatic juju would be

juju deploy concourse-worker --storage workdir=loop

and then the charm could create a btrfs on that block device.
(except that loop is not yet supported on k8s charms but we could look to add it so long as the workload image has losetup etc)
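
For reference, the loop provisioner boils down to something like the following on the host (a simplified sketch, not Juju's exact implementation; paths and sizes are illustrative):

# allocate a backing file and attach it to a free loop device
fallocate -l 2G /var/lib/juju/storage/loop/volume-0.img
lo=$(losetup -f --show /var/lib/juju/storage/loop/volume-0.img)
# the charm then sees $lo as its block device and could do e.g.
mkfs.btrfs --nodiscard "$lo"
mount -t btrfs "$lo" /opt/concourse/worker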

Juju does have a nice behaviour in that if the charm asks for a filesystem and at deploy time the underlying storage is satisfied by a block device, juju will create a filesystem on top and give that to the charm at the specified mount point. However, that filesystem is currently hardcoded to ext4; it wouldn't be too hard to make it configurable so that juju could set up a btrfs filesystem instead. Something like:

juju create-storage-pool mydata type=loop filesystem-type=btrfs
juju deploy concourse-worker --storage workdir=mydata,2G

Note though that it seems the scripts built into the concourse docker image create a btrfs filesystem manually, so adding support for loop devices might be sufficient on its own. But it would be better to have juju simply hand the workload a btrfs filesystem at a given mount point.

Tom Haddon (mthaddon) wrote :

The command I was using was simply `juju deploy ./concourse-worker.charm --resource concourse-image=concourse/concourse`. Since storage is declared in metadata.yaml I didn't think I needed to specify anything at deploy time? When the storage type in metadata.yaml is "filesystem" it doesn't seem like I need to, but it sounds like either way it would be better to have juju manage the storage for us. This may require some changes to the image (or possibly just inputs to the image), as currently it's doing this, from what I can tell:

https://pastebin.ubuntu.com/p/Q9yYXZj3tK/

The docker image does have losetup, but running `losetup -f --show /opt/concourse/worker/volumes.img` on the workload container (which is what the docker image seems to be trying to do) does indeed give "losetup: cannot find an unused loop device".

It sounds like what we really want here is:

1) A way in Concourse to be able to say "if directory x is a btrfs mount point, do nothing"
2) A way in Juju to be able to set up btrfs filesystems

Alternatively, we'd want a way to say to Concourse "just use whatever filesystem is already there" :)
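
For what it's worth, the Concourse worker appears to expose a baggageclaim driver setting that might serve as exactly that escape hatch (the value below is from memory and worth verifying against the Concourse docs):

CONCOURSE_BAGGAGECLAIM_DRIVER=naive   # plain-directory driver, no btrfs or loop setup required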

What do you think?

Ian Booth (wallyworld) wrote :

Without the --storage arg, juju will use sensible (provider-specific) defaults to provision the charm's storage requirements, e.g. block storage on a lxd cloud will be a loop device, on aws it will be an EBS volume; filesystem storage on lxd will be rootfs type, on aws it will be a filesystem on top of an EBS volume.

All of the above is premised on there being the concept of a storage provisioner, some of which are built in to juju, like loop, rootfs, tmpfs; others are cloud specific, like ebs, cinder, k8s storage class etc.

k8s does support tmpfs and rootfs storage types - these are mapped to emptydir (memory or disk backed, respectively). But loop is not supported yet. And any filesystem juju creates on top of block storage is currently hard wired to ext4.

So yeah, juju could provision and provide a btrfs filesystem mounted at /foo and concourse should use that.
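
To make that concrete, explicitly choosing the backing storage on a k8s model today would look something like this (a sketch; the pool and storage-class names are illustrative, e.g. for MicroK8s, and the btrfs part would still be up to juju or the workload):

juju create-storage-pool concourse-data kubernetes storage-class=microk8s-hostpath
juju deploy ./concourse-worker.charm --resource concourse-image=concourse/concourse --storage workdir=concourse-data,2G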

Changed in juju:
milestone: none → 2.9.8
importance: Undecided → High
status: New → Triaged
Changed in juju:
milestone: 2.9.8 → 2.9.9
Changed in juju:
milestone: 2.9.9 → 2.9.10
Changed in juju:
milestone: 2.9.10 → 2.9.11
Changed in juju:
milestone: 2.9.11 → 2.9.12
Changed in juju:
milestone: 2.9.12 → 2.9.13
Changed in juju:
milestone: 2.9.13 → 2.9.14
Changed in juju:
milestone: 2.9.14 → 2.9.15
Changed in juju:
milestone: 2.9.15 → 2.9.16
Changed in juju:
milestone: 2.9.16 → 2.9.17
Changed in juju:
milestone: 2.9.17 → 2.9.18
Changed in juju:
milestone: 2.9.18 → 2.9.19
Changed in juju:
milestone: 2.9.19 → 2.9.20
Changed in juju:
milestone: 2.9.20 → 2.9.21
Changed in juju:
milestone: 2.9.21 → 2.9.22
Changed in juju:
milestone: 2.9.22 → 2.9.23
Changed in juju:
milestone: 2.9.23 → 2.9.24
Changed in juju:
milestone: 2.9.24 → 2.9.25
Changed in juju:
milestone: 2.9.25 → 2.9.26
Changed in juju:
milestone: 2.9.26 → 2.9.27
Changed in juju:
milestone: 2.9.27 → 2.9.28
Harry Pidcock (hpidcock)
Changed in juju:
milestone: 2.9.28 → 2.9-next
Harry Pidcock (hpidcock)
Changed in juju:
milestone: 2.9-next → 3.1-beta1
Changed in juju:
milestone: 3.1-beta1 → 3.2-beta1
Heather Lanigan (hmlanigan) wrote :

@wallyworld, looking at deployment arg validation recently, there is an explicit check that rejects charms with block storage specified in their metadata on caas models:

https://github.com/juju/juju/blob/develop/apiserver/facades/client/application/application.go#L438

Changed in juju:
milestone: 3.2-beta1 → 3.2-rc1
Changed in juju:
milestone: 3.2-rc1 → 3.2.0
Changed in juju:
milestone: 3.2.0 → 3.2.1
Changed in juju:
milestone: 3.2.1 → 3.2.2
Changed in juju:
milestone: 3.2.2 → 3.2.3
Changed in juju:
milestone: 3.2.3 → 3.2.4