Failed to start Ceph metadata server daemon
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Ceph-FS Charm | Invalid | Undecided | Unassigned |
Bug Description
I have deployed a Ceph cluster via the following charms bundle file:
series: focal
variables:
oam-space: &oam-space oam-space
customize-
machines:
"0":
constraints: tags=ceph-node-1
series: focal
"1":
constraints: tags=ceph-node-2
series: focal
"2":
constraints: tags=ceph-node-3
series: focal
"3":
constraints: tags=master
series: focal
"4":
constraints: tags=worker1
series: focal
"5":
constraints: tags=worker2
series: focal
"6":
constraints: tags=ceph-fs-1
series: focal
applications:
ceph-fs:
charm: ceph-fs
channel: stable
revision: 36
num_units: 1
to:
- "6"
bindings:
"": *oam-space
ceph-mds: *oam-space
certificates: *oam-space
public: *oam-space
ceph-mon:
charm: cs:ceph-mon
num_units: 3
bindings:
"": *oam-space
public: *oam-space
osd: *oam-space
options:
monitor-
expected-
customize
source: cloud:focal-wallaby
to:
- lxd:3
- lxd:4
- lxd:5
ceph-osd:
charm: cs:ceph-osd
num_units: 3
bindings:
"": *oam-space
public: *oam-space
cluster: *oam-space
options:
osd-devices: /dev/vdb
source: cloud:focal-wallaby
aa-
customize
autotune: false
bluestore: true
osd-encrypt: True
to:
- '0'
- '1'
- '2'
ntp:
charm: "cs:focal/ntp"
annotations:
gui-x: '678.6017761230469'
gui-y: '415.2712475975
relations:
- [ "ceph-osd:mon", "ceph-mon:osd" ]
- [ "ceph-osd:
- [ "ceph-fs:ceph-mds", "ceph-mon:mds" ]
Inside a ceph-mon LXD container, when I issue the ceph -s command it returns the following:
root@juju-0026d2-3-lxd-0:~# ceph -s
cluster:
id: 2efa1500-
health: HEALTH_ERR
mons are allowing insecure global_id reclaim
1 filesystem is offline
1 filesystem is online with fewer MDS than max_mds
Reduced data availability: 104 pgs inactive
services:
mon: 3 daemons, quorum juju-0026d2-
mgr: juju-0026d2-
mds: 0/0 daemons up, 1 standby
osd: 3 osds: 3 up (since 8h), 3 in (since 8h)
data:
volumes: 1/1 healthy
pools: 3 pools, 104 pgs
objects: 0 objects, 0 B
usage: 16 MiB used, 900 GiB / 900 GiB avail
pgs: 100.000% pgs not active
104 undersized+peered
progress:
Global Recovery Event (8h)
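To confirm that the filesystem is offline simply because no MDS has ever claimed rank 0, the health and MDS state can be inspected from the same monitor container; a minimal sketch using standard Ceph CLI commands, nothing specific to this deployment:

ceph health detail      # expands HEALTH_ERR into the individual health checks
ceph mds stat           # shows the rank/standby summary for the filesystem
ceph fs get ceph-fs     # per-filesystem view of max_mds, ranks and standbys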
and if I try to create and start an MDS daemon manually with:
id=0
mkdir /var/lib/
sudo ceph auth get-or-create mds.${id} mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/
sudo systemctl start ceph-mds@${id}
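For completeness, the conventional manual MDS bring-up (as documented upstream) keeps the keyring under /var/lib/ceph/mds/ceph-${id}; the truncated paths above presumably point there, but that path is an assumption on my part:

id=0
sudo mkdir -p /var/lib/ceph/mds/ceph-${id}      # standard MDS data dir (assumed; the path is cut off above)
sudo ceph auth get-or-create mds.${id} mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' \
    | sudo tee /var/lib/ceph/mds/ceph-${id}/keyring
sudo chown -R ceph:ceph /var/lib/ceph/mds/ceph-${id}   # ceph-mds runs as the ceph user
sudo systemctl start ceph-mds@${id}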
root@juju-0026d2-3-lxd-0:~# systemctl status ceph-mds@0
● ceph-mds@0.service - Ceph metadata server daemon
Loaded: loaded (/lib/systemd/
Active: failed (Result: exit-code) since Wed 2022-02-23 07:35:05 UTC; 10min ago
Process: 47432 ExecStart=
Main PID: 47432 (code=exited, status=1/FAILURE)
Feb 23 07:35:05 juju-0026d2-3-lxd-0 systemd[1]: ceph-mds@0.service: Scheduled restart job, restart counter is at 3.
Feb 23 07:35:05 juju-0026d2-3-lxd-0 systemd[1]: Stopped Ceph metadata server daemon.
Feb 23 07:35:05 juju-0026d2-3-lxd-0 systemd[1]: ceph-mds@0.service: Start request repeated too quickly.
Feb 23 07:35:05 juju-0026d2-3-lxd-0 systemd[1]: ceph-mds@0.service: Failed with result 'exit-code'.
Feb 23 07:35:05 juju-0026d2-3-lxd-0 systemd[1]: Failed to start Ceph metadata server daemon.
Feb 23 07:39:22 juju-0026d2-3-lxd-0 systemd[1]: ceph-mds@0.service: Start request repeated too quickly.
Feb 23 07:39:22 juju-0026d2-3-lxd-0 systemd[1]: ceph-mds@0.service: Failed with result 'exit-code'.
Feb 23 07:39:22 juju-0026d2-3-lxd-0 systemd[1]: Failed to start Ceph metadata server daemon.
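Because systemd gives up after the rapid-restart limit, the status output above only shows the restart counter rather than the underlying error. A sketch of how the actual failure could be pulled out on that unit, using standard systemd and Ceph tooling (the daemon id 0 matches the manual attempt above):

journalctl -u ceph-mds@0 --no-pager -n 100   # the daemon's own log lines behind "Start request repeated too quickly"
systemctl reset-failed ceph-mds@0            # clear the rapid-restart counter before retrying
sudo -u ceph ceph-mds -d --id 0              # run the MDS in the foreground to see why it exits with status 1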
root@juju-0026d2-3-lxd-0:~# ceph fs ls
name: ceph-fs, metadata pool: ceph-fs_metadata, data pools: [ceph-fs_data ]
root@juju-0026d2-3-lxd-0:~# ceph fs dump
e4
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'ceph-fs' (1)
fs_name ceph-fs
epoch 4
flags 12
created 2022-02-22T23:16:19.325823+0000
modified 2022-02-23T08:20:31.622066+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in
up {}
failed
damaged
stopped
data_pools [2]
metadata_pool 3
inline_data disabled
balancer
standby_count_wanted 0

Standby daemons:

[mds.ceph-fs-1{-1:4931} state up:standby seq 1 addr [v2:192.168.24.52:6800/1182135415,v1:192.168.24.52:6801/1182135415] compat {c=[1],r=[1],i=[1]}]
dumped fsmap epoch 4
root@juju-0026d2-3-lxd-0:~# ceph fs status
ceph-fs - 0 clients
=======
POOL              TYPE      USED  AVAIL
ceph-fs_metadata  metadata     0   284G
ceph-fs_data      data         0   284G
STANDBY MDS
ceph-fs-1
MDS version: ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)