ceph-osd r564: juju run remove-disk fails with RADOS object not found
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | New | Undecided | Unassigned |
Bug Description
juju run remove-disk fails with "[errno 2] RADOS object not found (error connecting to the cluster)" on ceph-osd quincy/stable rev 564.
The add-disk action succeeded, but remove-disk fails.
Any update would be much appreciated, as I am stuck here.
Thank you for the support!
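For context, "[errno 2] RADOS object not found (error connecting to the cluster)" is what librados typically reports when the client cannot find a usable ceph.conf or keyring, so a first sanity check is whether the ceph CLI can reach the cluster at all from the affected unit. The paths below are the stock Ceph defaults, a sketch rather than the charm's exact code path:

root@c1:~# juju ssh ceph-osd/1 -- ls -l /etc/ceph/
root@c1:~# juju ssh ceph-osd/1 -- sudo ceph -s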
Remove disk: Failure
--------------------
root@c1:~# juju run ceph-osd/1 remove-disk osd-ids=osd.3 purge=true
Running operation 7 with 1 task
- task 8 on unit-ceph-osd-1
Waiting for task 8...
[six timestamped log lines, truncated to "2023-09-" in the original paste]
[errno 2] RADOS object not found (error connecting to the cluster)
Traceback (most recent call last):
File "/var/lib/
main()
File "/var/lib/
action_
File "/var/lib/
reweight_
File "/var/lib/
subprocess.
File "/usr/lib/
raise CalledProcessEr
subprocess.
ERROR the following task failed:
- id "8" with return code 1
----
root@c1:~# juju status
Model Controller Cloud/Region Version SLA Timestamp
ceph manual-default manual/default 3.2.3 unsupported 19:38:50Z
App Version Status Scale Charm Channel Rev Exposed Message
ceph-mon 17.2.6 active 3 ceph-mon quincy/stable 183 no Unit is ready and clustered
ceph-osd 17.2.6 active 3 ceph-osd quincy/stable 564 no Unit is ready (1 OSD)
Add disk: Success
----------
root@c1:~# juju run ceph-osd/1 add-disk osd-devices=/dev/sdc
Running operation 5 with 1 task
- task 6 on unit-ceph-osd-1
Waiting for task 6...
Physical volume "/dev/sdc" successfully created.
Volume group "ceph-b12e1e3b-
Logical volume "osd-block-
partx: /dev/sdc: failed to read partition table
Failed to find physical volume "/dev/sdc".
Failed to find physical volume "/dev/sdc".
Running command: /usr/bin/
Running command: /usr/bin/ceph --cluster ceph --name client.
Running command: /usr/bin/
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/
--> Executable selinuxenabled not in PATH: /var/lib/
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Running command: /usr/bin/ln -s /dev/ceph-
Running command: /usr/bin/ceph --cluster ceph --name client.
stderr: 2023-09-
2023-09-
stderr: got monmap epoch 1
--> Creating keyring file for osd.3
Running command: /usr/bin/chown -R ceph:ceph /var/lib/
Running command: /usr/bin/chown -R ceph:ceph /var/lib/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/
stderr: 2023-09-
--> ceph-volume lvm prepare successful for: ceph-b12e1e3b-
Running command: /usr/bin/chown -R ceph:ceph /var/lib/
Running command: /usr/bin/
Running command: /usr/bin/ln -snf /dev/ceph-
Running command: /usr/bin/chown -h ceph:ceph /var/lib/
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/
Running command: /usr/bin/systemctl enable ceph-volume@
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-… → /lib/systemd/system/ceph-volume@.service
Running command: /usr/bin/systemctl enable --runtime ceph-osd@3
stderr: Created symlink /run/systemd/
Running command: /usr/bin/systemctl start ceph-osd@3
--> ceph-volume lvm activate successful for osd ID: 3
--> ceph-volume lvm create successful for: ceph-b12e1e3b-
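As a cross-check, the freshly created OSD can also be listed on the unit with ceph-volume (a standard upstream command, offered here only as a suggestion):

root@c1:~# juju ssh ceph-osd/1 -- sudo ceph-volume lvm list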
root@c1:~# juju ssh ceph-mon/leader sudo ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.07794 root default
-5 0.01949 host vm1
2 hdd 0.01949 osd.2 up 1.00000 1.00000
-7 0.03897 host vm2
1 hdd 0.01949 osd.1 up 1.00000 1.00000
3 hdd 0.01949 osd.3 up 1.00000 1.00000
-3 0.01949 host vm3
0 hdd 0.01949 osd.0 up 1.00000 1.00000
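Since add-disk and the cluster itself look healthy, a possible interim workaround is to drain and purge osd.3 by hand from a ceph-mon unit, which holds admin credentials. These are stock Ceph commands rather than the charm's remove-disk code path, so the charm-managed state on the unit (systemd units, LVM volumes) would still need separate cleanup:

root@c1:~# juju ssh ceph-mon/leader -- sudo ceph osd out osd.3
root@c1:~# juju ssh ceph-osd/1 -- sudo systemctl stop ceph-osd@3
root@c1:~# juju ssh ceph-mon/leader -- sudo ceph osd purge osd.3 --yes-i-really-mean-it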
Tags: ceph-osd charm juju