Based on prior discussions we are going to implement the path-based approach only for now.
Implementation considerations:
* Deployments where the os-{public,admin,internal} hostnames are not set are supported, and this situation needs to be handled as well. The "rgw_dns_name" config option does not support IP addresses or multiple hostnames;
* Both single-unit and HA deployments are supported (where units have different hostnames and can be multi-homed where each IP has its own hostname);
* Not every environment has a proper DNS setup, but the ones we test and deploy in mostly do (MAAS, OpenStack with ML2 DNS);
* There are multiple hostname config options supported by the charm, and all of them should be added to the zone group after it is created;
* "rgw_dns_name" is only added to the in-memory set of hostnames that a given radosgw daemon looks at, not to the zone group config: https://github.com/ceph/ceph/blob/v16.0.0/src/rgw/rgw_rest.cc#L209-L210
This means that if we want to manage multiple hostnames in the zone group state (public, internal, admin) we need to add all of them there and not rely on config alone.
Whether the addition of hostnames to the zone group is idempotent is to be determined (this matters for error handling and for deciding which unit is going to manage the zone group config).
* One config option needs to be added to the charm (resolve-cname=<true|false>);
* The following approach for handling hostnames could be adopted:
1. # obtain unit_ip_in_the_public_network_space via network-get and do a reverse resolution
# unit_public_hostname = socket.getnameinfo((unit_ip_in_the_public_network_space, 0), 0)[0]
rgw dns name = {{ unit_public_hostname }}
2. add the values of os-{public,admin,internal}-hostname config keys into the zone group specified in the "zone-group" charm config (when it is created).
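The approach above could be sketched as follows. This is a rough illustration only: the helper names are assumptions, `get_unit_ip` stands in for the `network-get` call, and a real implementation would also need `radosgw-admin period update --commit` plus proper error handling. Merging through a set makes the hostname addition idempotent by construction, which sidesteps the open question above:

```python
import json
import socket
import subprocess


def reverse_resolve(ip):
    """Reverse-resolve an IP to its hostname (step 1 above).

    Falls back to the IP itself when no PTR record exists, since not
    every environment has a proper DNS setup.
    """
    try:
        return socket.getnameinfo((ip, 0), socket.NI_NAMEREQD)[0]
    except socket.gaierror:
        return ip


def merge_hostnames(existing, extra):
    """Idempotently merge hostnames: adding the same name twice is a no-op."""
    return sorted(set(existing) | {h for h in extra if h})


def update_zonegroup_hostnames(zonegroup, extra_hostnames):
    """Add hostnames to the zone group config (step 2 above).

    Sketch only: uses 'radosgw-admin zonegroup get/set' and assumes the
    command is available on the unit.
    """
    zg = json.loads(subprocess.check_output(
        ['radosgw-admin', 'zonegroup', 'get', '--rgw-zonegroup', zonegroup]))
    zg['hostnames'] = merge_hostnames(zg.get('hostnames', []), extra_hostnames)
    subprocess.run(
        ['radosgw-admin', 'zonegroup', 'set', '--rgw-zonegroup', zonegroup],
        input=json.dumps(zg).encode(), check=True)
```

With this shape, re-running `update_zonegroup_hostnames` with the same os-{public,admin,internal}-hostname values leaves the zone group unchanged, so any unit could safely apply it.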
Testing considerations:
1. A scenario without HA:
* deploy a bundle with a single unit of ceph-radosgw and tls enabled (via vault) and the resolve-cname configuration option set to "true";
* create a bucket via the S3 API;
* read a hostname from a radosgw unit (via `hostname -f` over juju run) and access the bucket using the hostname of the radosgw unit: https://<unit-hostname-in-the-public-space>/bucket-name;
2. The HA scenario:
* deploy a bundle with 3 units of ceph-radosgw and hacluster, with tls enabled (via vault), the resolve-cname configuration option set to "true", and os-public-hostname set to some value (this needs DNS configuration, which is tricky in the CI environment; alternatively, /etc/hosts on the units in the model could be modified via the Zaza test code);
* create a bucket via the S3 API;
* read the hostname from a radosgw unit (via `hostname -f` over juju run) and access the bucket using the hostname of a single unit in the URL: https://<unit-hostname-in-the-public-space>/bucket-name;
* access the bucket via os-public-hostname used in the URL: https://<os-public-hostname>/bucket-name;
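Both scenarios reduce to the same check: create a bucket via the S3 API, then fetch it through a path-based URL under each candidate hostname. A minimal sketch of that check (the hostname and bucket name are placeholders, and the bucket-creation step is not shown; treating any status other than 404 as success is an assumption, on the basis that an anonymous GET of an existing bucket typically yields 200 or 403 while an unresolved bucket yields 404):

```python
import urllib.error
import urllib.request


def path_style_url(hostname, bucket):
    """Build the path-based URL used in the test steps above."""
    return 'https://{}/{}'.format(hostname, bucket)


def check_bucket_reachable(hostname, bucket):
    """GET the bucket via its path-based URL.

    Returns True unless radosgw answers 404 or the host is unreachable,
    i.e. treats 200/403 as evidence the request resolved to the bucket.
    """
    try:
        status = urllib.request.urlopen(path_style_url(hostname, bucket)).status
    except urllib.error.HTTPError as err:
        status = err.code
    except urllib.error.URLError:
        return False
    return status != 404
```

In the HA scenario the same check would be run once per unit hostname (read via `hostname -f` over juju run) and once for os-public-hostname.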