While working with multisite replication, where a bucket in Ceph cluster A is replicated to a bucket in Ceph cluster B and vice versa, and a bucket in Ceph cluster B is also replicated to a bucket in Ceph cluster C, the RADOS Gateway service sometimes refuses to start because it looks for the zone named in ceph.conf inside the default zonegroup and realm. However, if a realm and zonegroup are defined in the charm, both should also be written to ceph.conf so the service can resolve the zone correctly and start replication.
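For reference, the realm, zonegroup and zone layout that the cluster knows about can be listed with radosgw-admin as below (a sketch; the names used elsewhere in this report are pilot, pilot and pilot-backup):

# "default_info" in the output shows which realm/zonegroup/zone radosgw
# falls back to when nothing is set in ceph.conf
radosgw-admin realm list
radosgw-admin zonegroup list
radosgw-admin zone list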
After adding the following keys to ceph.conf for the respective charm service (in the case where the charm defines all three options in its config), everything works like a charm:

rgw_zone = pilot-backup
rgw_zonegroup = pilot
rgw_realm = pilot
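For illustration, a minimal sketch of how these keys would sit in ceph.conf for the radosgw unit from the logs below; the section name is an assumption derived from the --id used later in this report:

[client.rgw.juju-558e26-16-lxd-3]
rgw_zone = pilot-backup
rgw_zonegroup = pilot
rgw_realm = pilot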
Before the change to ceph.conf:
2022-05-26T15:33:43.378+0000 7f98632f6b40 0 deferred set uid:gid to 64045:64045 (ceph:ceph)
2022-05-26T15:33:43.378+0000 7f98632f6b40 0 ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable), process radosgw, pid 38664
2022-05-26T15:33:43.378+0000 7f98632f6b40 0 framework: beast
2022-05-26T15:33:43.378+0000 7f98632f6b40 0 framework conf key: port, val: 423
2022-05-26T15:33:43.378+0000 7f98632f6b40 1 radosgw_Main not setting numa affinity
2022-05-26T15:33:43.702+0000 7f98632f6b40 -1 Cannot find zone id= (name=pilot-backup)
2022-05-26T15:33:43.702+0000 7f98632f6b40 0 ERROR: failed to start notify service ((22) Invalid argument
2022-05-26T15:33:43.702+0000 7f98632f6b40 0 ERROR: failed to init services (ret=(22) Invalid argument)
2022-05-26T15:33:43.706+0000 7f98632f6b40 -1 Couldn't init storage provider (RADOS)
2022-05-26T15:33:44.038+0000 7f9068fe8b40 0 deferred set uid:gid to 64045:64045 (ceph:ceph)
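To confirm that the zone really is defined under the non-default realm and zonegroup (and is therefore not found when radosgw falls back to the defaults), the configuration can be inspected explicitly, for example:

# Show the zone as defined within the pilot realm/zonegroup
radosgw-admin zone get --rgw-zone=pilot-backup --rgw-zonegroup=pilot --rgw-realm=pilot

# Show the current period of the pilot realm, listing the zonegroups and
# zones radosgw will recognize once the realm is set in ceph.conf
radosgw-admin period get --rgw-realm=pilot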
After the change to ceph.conf:
2022-05-26T16:37:44.654+0000 7f919ba02b40 0 framework: beast
2022-05-26T16:37:44.654+0000 7f919ba02b40 0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
2022-05-26T16:37:44.654+0000 7f919ba02b40 0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
2022-05-26T16:37:44.654+0000 7f919ba02b40 0 starting handler: beast
2022-05-26T16:37:44.658+0000 7f919ba02b40 0 set uid:gid to 64045:64045 (ceph:ceph)
2022-05-26T16:37:44.702+0000 7f919ba02b40 1 mgrc service_daemon_register rgw.juju-558e26-16-lxd-3 metadata {arch=x86_64,ceph_release=octopus,ceph_version=ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable),ceph_version_short=15.2.14,cpu=Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz,distro=ubuntu,distro_description=Ubuntu 20.04.4 LTS,distro_version=20.04,frontend_config#0=beast port=423,frontend_type#0=beast,hostname=juju-558e26-16-lxd-3,kernel_description=#118-Ubuntu SMP Wed Mar 2 19:02:41 UTC 2022,kernel_version=5.4.0-104-generic,mem_swap_kb=0,mem_total_kb=263767536,num_handles=1,os=Linux,pid=83848,zone_id=cff27fb1-7582-47d3-8b97-7b35941ef2bb,zone_name=pilot-backup,zonegroup_id=3d7239e5-f5b1-4605-b887-f1cd98dc7dab,zonegroup_name=pilot}
2022-05-26T16:37:44.730+0000 7f90c37fe700 1 RGW-SYNC:meta: start
radosgw-admin --id rgw.juju-558e26-16-lxd-3 sync status
2022-05-26T16:55:43.182+0000 7f4032edfb40 1 Cannot find zone id=cff27fb1-7582-47d3-8b97-7b35941ef2bb (name=pilot-backup), switching to local zonegroup configuration
          realm c8432054-d120-4f26-98a6-664bf5cb7776 (pilot)
      zonegroup 3d7239e5-f5b1-4605-b887-f1cd98dc7dab (pilot)
           zone cff27fb1-7582-47d3-8b97-7b35941ef2bb (pilot-backup)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 32db379e-f730-49cf-a1d0-3dfa3148c6f2 (dev)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
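For completeness, the same check can be run on the peer cluster (the one hosting the dev zone) to confirm replication is healthy in both directions; the instance id and bucket name below are placeholders:

# Overall sync status from the dev zone's point of view
radosgw-admin --id rgw.<instance> sync status

# Per-bucket replication state
radosgw-admin bucket sync status --bucket=<bucket-name>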
The ceph.conf file currently lacks the keys mentioned above; including them doesn't seem like it could break anything, and it is useful for multisite replication.