2020-05-20 16:48:05 |
Frode Nordahl |
bug |
|
|
added bug |
2020-05-20 16:50:01 |
Frode Nordahl |
charm-ceph-rbd-mirror: status |
New |
Triaged |
|
2020-05-20 16:50:04 |
Frode Nordahl |
charm-ceph-rbd-mirror: importance |
Undecided |
High |
|
2020-05-20 16:52:40 |
Frode Nordahl |
description |
Even though we request keys from Ceph with an rbd-mirror profile, the rbd-mirror process appears to be unable to interact with the Ceph cluster in the way it expects on Octopus.
Enabling debug logging on the MON may give log output such as:
2020-05-20T15:54:08.425042+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 960 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/2/eb2ee01a-9a02-40da-950b-4ddc1f5e9e26"}]: access denied
2020-05-20T15:54:08.427957+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 961 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/3/b8a91922-bda0-4ecc-9ffb-031e448311fe"}]: access denied
2020-05-20T15:54:08.429413+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 962 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/site_name"}]: access denied
The first side effect of this is that Ceph marks any mirrored image/pool as being in a WARNING state, despite the fact that data appears to be mirrored; the charm subsequently reports this and becomes stuck in a blocked state. |
Even though we request keys from Ceph with an rbd-mirror profile, the rbd-mirror process appears to be unable to interact with the Ceph cluster in the way it expects on Octopus.
Enabling debug logging on the MON may give log output such as:
2020-05-20T15:54:08.425042+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 960 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/2/eb2ee01a-9a02-40da-950b-4ddc1f5e9e26"}]: access denied
2020-05-20T15:54:08.427957+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 961 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/3/b8a91922-bda0-4ecc-9ffb-031e448311fe"}]: access denied
2020-05-20T15:54:08.429413+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 962 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/site_name"}]: access denied
The first side effect of this is that Ceph marks any mirrored image/pool as being in a WARNING state, despite the fact that data appears to be mirrored; the charm subsequently reports this and becomes stuck in a blocked state.
At this stage in the investigation our opinion is that this must be an upstream Ceph Octopus bug, but we keep this open as a charm bug for tracking until we have final confirmation. |
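For reference, a rough way to confirm the capabilities actually granted to the rbd-mirror cephx user and to reproduce the denied monitor command from the audit log is sketched below. The client name and config-key path are taken from the audit log above; the keyring path is an assumption and will differ per deployment.

# Show the capabilities currently granted to the rbd-mirror cephx user
# (client name taken from the audit log above; substitute your own).
ceph auth get client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13

# Re-issue one of the denied monitor commands as that user to confirm the
# "access denied" response seen in the MON audit log (keyring path assumed).
ceph --name client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13 \
     --keyring /etc/ceph/ceph.client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13.keyring \
     config-key get rbd/mirror/site_name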
|
2020-05-21 07:27:50 |
Frode Nordahl |
description |
Even though we request keys from Ceph with an rbd-mirror profile, the rbd-mirror process appears to be unable to interact with the Ceph cluster in the way it expects on Octopus.
Enabling debug logging on the MON may give log output such as:
2020-05-20T15:54:08.425042+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 960 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/2/eb2ee01a-9a02-40da-950b-4ddc1f5e9e26"}]: access denied
2020-05-20T15:54:08.427957+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 961 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/3/b8a91922-bda0-4ecc-9ffb-031e448311fe"}]: access denied
2020-05-20T15:54:08.429413+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 962 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/site_name"}]: access denied
The first side effect of this is that Ceph marks any mirrored image/pool as being in a WARNING state, despite the fact that data appears to be mirrored; the charm subsequently reports this and becomes stuck in a blocked state.
At this stage in the investigation our opinion is that this must be an upstream Ceph Octopus bug, but we keep this open as a charm bug for tracking until we have final confirmation. |
Even though we request keys from Ceph with an rbd-mirror profile, the rbd-mirror process appears to be unable to interact with the Ceph cluster in the way it expects on Octopus. We also attempted to give the rbd-mirror user full access to the MON.
Enabling debug logging on the MON may give log output such as:
2020-05-20T15:54:08.425042+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 960 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/2/eb2ee01a-9a02-40da-950b-4ddc1f5e9e26"}]: access denied
2020-05-20T15:54:08.427957+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 961 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/peer/3/b8a91922-bda0-4ecc-9ffb-031e448311fe"}]: access denied
2020-05-20T15:54:08.429413+0000 mon.juju-e32644-zaza-048032ef4dd5-7 (mon.1) 962 : audit [DBG] from='client.? 172.20.0.52:0/1474256700' entity='client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13' cmd=[{"prefix": "config-key get", "key": "rbd/mirror/site_name"}]: access denied
The first side effect of this is that Ceph marks any mirrored image/pool as being in a WARNING state, despite the fact that data appears to be mirrored; the charm subsequently reports this and becomes stuck in a blocked state.
At this stage in the investigation our opinion is that this must be an upstream Ceph Octopus bug, but we keep this open as a charm bug for tracking until we have final confirmation. |
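The "full access to the MON" attempt mentioned above is not spelled out in this log; a hypothetical form of such a change, together with the mirror health check that surfaces the WARNING state the charm reacts to, might look like the following. The client name is taken from the audit log above, and the pool name is a placeholder.

# Hypothetical: widen the MON capability for the rbd-mirror user beyond
# 'profile rbd-mirror' to rule out the cap profile itself.
ceph auth caps client.rbd-mirror.juju-e32644-zaza-048032ef4dd5-13 \
     mon 'allow *' osd 'profile rbd'

# Inspect mirroring health for a mirrored pool; 'mypool' is a placeholder.
rbd mirror pool status --verbose mypool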
|
2020-05-21 13:15:21 |
Liam Young |
bug watch added |
|
http://tracker.ceph.com/issues/45638 |
|