Hi Bruno,
If you are using MAAS you should use Juju network spaces instead of charm config.
https://docs.jujucharms.com/2.4/en/network-spaces
Network spaces in Juju allow you to bind endpoints defined in metadata.yaml to network spaces defined in MAAS (the "public" and "cluster" endpoints in the case of the Ceph charms). Network spaces aggregate VLANs (with their assigned subnets) into a single logical construct, similar to a routing domain or VRF. In other words, one network space == one L3 network.
https://github.com/openstack/charm-ceph-osd/blob/stable/18.08/metadata.yaml#L24-L26
When you deploy, in a bundle you would specify something like this:
applications:
  ceph-osd:
    # ...
    bindings:
      # <endpoint name in metadata.yaml>: <space name from MAAS>
      public: ceph-public-space
      cluster: ceph-cluster-space
  ceph-mon:
    # ...
    bindings:
      # <endpoint name in metadata.yaml>: <space name from MAAS>
      public: ceph-public-space
      cluster: ceph-cluster-space
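To make the endpoint-to-space mapping concrete, here is a small Python sketch of what the bindings above express; the space names are the same assumed names used in the bundle, and the endpoint set comes from the ceph-osd metadata.yaml linked above:

```python
# Endpoints declared in metadata.yaml that we want bound to spaces.
metadata_endpoints = {"public", "cluster"}

# The bindings section of the bundle, as a mapping:
#   <endpoint name in metadata.yaml> -> <space name from MAAS>
bindings = {
    "public": "ceph-public-space",
    "cluster": "ceph-cluster-space",
}

# Every endpoint we care about should be bound to exactly one space.
unbound = metadata_endpoints - bindings.keys()
assert not unbound, f"endpoints without a space binding: {unbound}"
```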
Ceph charms use the network-get hook tool to retrieve an "ingress-address" for each unit. The ingress-address comes from the node: nodes are allocated by MAAS and receive their addresses based on the subnets and VLANs their network ports are on. So, if you have a multi-subnet setup in MAAS, network-get will always give you the right address, which the charms then put into the public_addr or cluster_addr options in ceph.conf. Ceph itself has a number of options for a multi-network setup and complex logic (*) to select among the different config options. When network spaces are used, the charms rely on public_addr and cluster_addr, so you need to avoid setting the related charm config options; otherwise that Ceph logic may inadvertently pick an incorrect address.
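To illustrate the last point, here is a minimal sketch (not the charms' actual template code) of how the per-unit addresses obtained via network-get end up as public_addr and cluster_addr in ceph.conf; the addresses are made-up example values:

```python
def render_ceph_global(public_addr: str, cluster_addr: str) -> str:
    """Render a minimal [global] section carrying the per-unit addresses.

    Illustration only: the real charms use their own templates and set
    many more options.
    """
    return (
        "[global]\n"
        f"public_addr = {public_addr}\n"
        f"cluster_addr = {cluster_addr}\n"
    )

# Example addresses, as network-get might return for a unit with ports
# on a public and a cluster subnet (values are hypothetical).
print(render_ceph_global("10.10.0.5", "10.20.0.5"))
```

Because these two options are set explicitly, Ceph's own address-selection logic never has to guess which interface to use.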
https://github.com/openstack/charm-ceph-osd/blob/stable/18.08/hooks/utils.py#L106-L129

return network_get_primary_address('public')
return network_get_primary_address('cluster')
https://docs.jujucharms.com/2.4/en/developer-network-primitives#network-get
https://git.launchpad.net/ubuntu/+source/ceph/tree/src/common/options.cc?h=ubuntu/bionic-updates&id=import/12.2.4-0ubuntu1.1#n167
https://git.launchpad.net/ubuntu/+source/ceph/tree/src/common/pick_address.cc?h=ubuntu/bionic-updates&id=import/12.2.4-0ubuntu1.1#n157
I hope that helps.