I'm working on a concourse-worker charm (https://github.com/mthaddon/concourse-worker-operator) and it's currently failing because it tries to create a btrfs filesystem for its "working directory". I tried switching the storage device from "filesystem" to "block", which would then allow Concourse to create what it needs, but when deploying the charm I got:
ERROR block storage "workdir" is not supported for container charms
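For context, the relevant storage declaration in the charm's metadata.yaml looks roughly like this (a sketch; the exact stanza in the repo may differ):

```yaml
storage:
  workdir:
    type: filesystem   # changing this to "block" is what triggers the
                       # "not supported for container charms" error above
```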
For reference, the error I'm getting when I have the storage device as "filesystem" is as follows (not directly relevant to this bug, but included for completeness):
2021-06-11T13:46:19.617Z [concourse-worker] {"timestamp":"2021-06-11T13:46:19.616881977Z","level":"error","source":"baggageclaim","message":"baggageclaim.fs.run-command.failed","data":{"args":["bash","-e","-x","-c","\n\t\tif [ ! -e $IMAGE_PATH ] || [ \"$(stat --printf=\"%s\" $IMAGE_PATH)\" != \"$SIZE_IN_BYTES\" ]; then\n\t\t\ttouch $IMAGE_PATH\n\t\t\ttruncate -s ${SIZE_IN_BYTES} $IMAGE_PATH\n\t\tfi\n\n\t\tlo=\"$(losetup -j $IMAGE_PATH | cut -d':' -f1)\"\n\t\tif [ -z \"$lo\" ]; then\n\t\t\tlo=\"$(losetup -f --show $IMAGE_PATH)\"\n\t\tfi\n\n\t\tif ! file $IMAGE_PATH | grep BTRFS; then\n\t\t\tmkfs.btrfs --nodiscard $IMAGE_PATH\n\t\tfi\n\n\t\tmkdir -p $MOUNT_PATH\n\n\t\tif ! mountpoint -q $MOUNT_PATH; then\n\t\t\tmount -t btrfs -o discard $lo $MOUNT_PATH\n\t\tfi\n\t"],"command":"/bin/bash","env":["PATH=/usr/local/concourse/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","MOUNT_PATH=/opt/concourse/worker/volumes","IMAGE_PATH=/opt/concourse/worker/volumes.img","SIZE_IN_BYTES=230504038400"],"error":"exit status 1","session":"3.1","stderr":"+ '[' '!' -e /opt/concourse/worker/volumes.img ']'\n+ touch /opt/concourse/worker/volumes.img\n+ truncate -s 230504038400 /opt/concourse/worker/volumes.img\n++ losetup -j /opt/concourse/worker/volumes.img\n++ cut -d: -f1\n+ lo=\n+ '[' -z '' ']'\n++ losetup -f --show /opt/concourse/worker/volumes.img\nlosetup: cannot find an unused loop device\n+ lo=\n","stdout":""}}
2021-06-11T13:46:19.617Z [concourse-worker] {"timestamp":"2021-06-11T13:46:19.617026127Z","level":"error","source":"baggageclaim","message":"baggageclaim.failed-to-set-up-driver","data":{"error":"failed to create btrfs filesystem: exit status 1"}}
2021-06-11T13:46:19.617Z [concourse-worker] error: failed to create btrfs filesystem: exit status 1
What's the juju deploy command you are trying to use? Storage in juju is premised on the charm declaring the type of storage it needs; at deploy time, the --storage argument specifies how that storage is to be provisioned. The provisioning implementation is typically provided by the underlying cloud: for k8s that would be a storage class or emptyDir; for AWS, an EBS volume; etc. Juju also supports the "loop" storage provisioner, which uses a combination of fallocate and losetup to create a loop device on the node hosting the charm.
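On k8s, for example, the mapping to a storage class can be made explicit with a storage pool (the pool name and storage class here are illustrative, not values from this bug):

```shell
# "kubernetes" is the provider; storage-class names an existing k8s StorageClass
juju create-storage-pool concourse-storage kubernetes storage-class=standard
juju deploy concourse-worker --storage workdir=concourse-storage,2G
```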
So idiomatic juju would be
juju deploy concourse-worker --storage workdir=loop
and then the charm could create a btrfs filesystem on that block device.
(except that loop storage is not yet supported for k8s charms, but we could look to add it so long as the workload image has losetup etc.)
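If loop support were added, the charm side could mirror what concourse's own baggageclaim script does. A minimal sketch, assuming juju attaches a raw block device to the unit; DEVICE and MOUNT_PATH are illustrative defaults here, and a real charm would discover the device via the storage-get hook tool:

```shell
#!/bin/bash
set -eu

DEVICE="${DEVICE:-/dev/loop0}"                          # device attached by juju (assumed path)
MOUNT_PATH="${MOUNT_PATH:-/opt/concourse/worker/volumes}"

format_and_mount() {
    # Idempotent: only mkfs if the device doesn't already carry btrfs.
    if ! blkid -o value -s TYPE "$DEVICE" | grep -q btrfs; then
        mkfs.btrfs --nodiscard "$DEVICE"
    fi
    mkdir -p "$MOUNT_PATH"
    mountpoint -q "$MOUNT_PATH" || mount -t btrfs -o discard "$DEVICE" "$MOUNT_PATH"
}

# Formatting and mounting need root and a real block device, so require an
# explicit opt-in; otherwise just report what would happen.
if [ "${1:-}" = "--apply" ]; then
    format_and_mount
else
    echo "dry run: would format $DEVICE as btrfs and mount it at $MOUNT_PATH"
fi
```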
Juju does have a nice behaviour in that if the charm specifies it wants a filesystem and at deploy time the underlying storage is satisfied by a block device, juju will create a filesystem on top and give that to the charm at the specified mount point. However, the filesystem type is currently hardcoded to ext4; it wouldn't be too hard to make that configurable so that juju could set up a btrfs filesystem instead. Something like:
juju create-storage-pool mydata type=loop filesystem-type=btrfs
juju deploy concourse-worker --storage workdir=mydata,2G
Note though that it seems the scripts built into the concourse docker image create the btrfs filesystem manually, so adding support for loop devices might be sufficient on its own. But it would be better to have juju just hand off to the workload a btrfs filesystem at a given mount point.