juju creates an unused, dead filesystem per deployed unit

Bug #1942091 reported by Leon
This bug affects 2 people
Affects         Status   Importance  Assigned to  Milestone
Canonical Juju  Triaged  High        Unassigned

Bug Description

Every time I deploy an application, juju creates a storage instance (as specified by metadata.yaml), but then detaches it and creates another one.
As a result, another dead volume accumulates with every deployment, which I then have to remove manually with juju remove-storage.
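
For context, the storage is declared in the charm's metadata.yaml. A minimal sketch of the relevant stanza (the storage name "data" matches the data/NN ids in the output below; the exact stanza in alertmanager-k8s may differ):

storage:
  data:
    type: filesystem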

Starting point:

> juju status --storage
Model      Controller  Cloud/Region        Version  SLA          Timestamp
charm-dev  ctrlr       microk8s/localhost  2.9.11   unsupported  10:24:55-04:00

Model "admin/charm-dev" is empty.

> juju list-storage
No storage to display.

Deploying:

> juju deploy ./alertmanager-k8s_ubuntu-20.04-amd64.charm am --resource alertmanager-image=ubuntu/prometheus-alertmanager --num-units 1
> juju debug-log --tail | grep -i storage
controller-0: 10:26:56.311 WARNING juju.worker.storageprovisioner unexpected dead filesystem attachments: [{unit-am-11 filesystem-21}]
controller-0: 10:26:56.312 WARNING juju.worker.storageprovisioner unexpected dead filesystem attachments: [{unit-am-11 filesystem-21}]
controller-0: 10:26:56.357 WARNING juju.worker.storageprovisioner unexpected dead volume attachments: [{unit-am-11 volume-21}]
unit-am-0: 10:27:21.959 DEBUG unit.am/0.juju-log Legacy hooks/data-storage-attached does not exist.
unit-am-0: 10:27:21.989 DEBUG unit.am/0.juju-log Emitting Juju event data_storage_attached.

> juju status --storage
Model      Controller  Cloud/Region        Version  SLA          Timestamp
charm-dev  ctrlr       microk8s/localhost  2.9.11   unsupported  10:28:32-04:00

App  Version  Status  Scale  Charm             Store  Channel  Rev  OS          Address        Message
am            active  1      alertmanager-k8s  local           5    kubernetes  10.152.183.55

Unit   Workload  Agent  Address      Ports  Message
am/0*  active    idle   10.1.179.85

Storage Unit  Storage id  Type        Pool        Mountpoint                    Size    Status    Message
              data/21     filesystem                                            detached
am/0          data/22     filesystem  kubernetes  /var/lib/juju/storage/data/0  1.0GiB  attached  Successfully provisioned volume pvc-7911af37-6456-4480-8108-c2eb4eccb502

> juju list-storage
Unit  Storage id  Type        Pool        Size    Status    Message
      data/21     filesystem                      detached
am/0  data/22     filesystem  kubernetes  1.0GiB  attached  Successfully provisioned volume pvc-7911af37-6456-4480-8108-c2eb4eccb502
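
For completeness, the manual cleanup mentioned in the description, with data/21 being the detached id from the listing above:

> juju remove-storage data/21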

Leon (sed-i) wrote:

Deploying two units (juju deploy --num-units 2) results in 2 additional dead filesystems, i.e. juju creates an unused, dead filesystem per deployed unit.

summary: - juju deploy always adds a dead filesystem
+ juju creates an unused, dead filesystem per deployed unit
John A Meinel (jameinel) wrote:

Is this just https://charmhub.io/alertmanager-k8s? It would be useful if we can reproduce it.
It certainly isn't expected that deploying 1 unit would create 2 storage volumes and leave one as dead.

Is there anything in the charm that is using the K8s API to change your pod spec?

Changed in juju:
importance: Undecided → High
status: New → Triaged
Leon (sed-i) wrote:

This happens for every app that specifies storage in metadata.yaml; prometheus-k8s, for example, shows it as well.
It happens for local as well as Charmhub charms.

All you need to reproduce is

juju deploy prometheus-k8s prom --channel=edge

If this does not create an unused detached storage for you, then just

juju remove-application --destroy-storage prom
juju deploy prometheus-k8s prom --channel=edge

Reproduced with Juju 2.9.14.
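
Listing storage right after that deploy should show the same pattern as the alertmanager output above; a sketch of the expected leftover (ids and sizes are illustrative):

> juju list-storage
Unit    Storage id  Type        Pool        Size    Status    Message
        data/0      filesystem                      detached
prom/0  data/1      filesystem  kubernetes  1.0GiB  attached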

Leon (sed-i) wrote:

The only code in the charm that talks to the K8s API is for patching service ports:

https://github.com/canonical/alertmanager-operator/blob/main/src/kubernetes_service.py
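
Since on k8s each Juju filesystem is normally backed by a PVC, one quick check is whether the dead filesystem ever got a backing volume; a sketch, assuming the model from the transcript above (with Juju on k8s the namespace name equals the model name):

> microk8s kubectl -n charm-dev get pvc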

Sérgio Manso (sergiomanso) wrote:

I faced a similar issue. Juju could not destroy a model and reported "unexpected dead volume attachments" on two applications: zookeeper-k8s and kafka-k8s.
In my case, juju list-storage reported no storage, and the k8s namespace had zero resources assigned.

The workaround I found was to delete all references to the model, units, volume attachments, etc. from the controller DB, and to delete the namespace from k8s.
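
For the k8s half of that workaround, the namespace removal is a standard kubectl call (a sketch; <model-name> is a placeholder, and the controller-DB surgery itself is deployment-specific, so it is not shown):

> kubectl delete namespace <model-name>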

Environment:
Juju 2.8.11
CDK 1.20
Ubuntu 18.04

(customer airgapped environment, can't get any logs)
