Hello,
Apparently, custom network names aren't working when deploying with TLS-e and the new tripleo-ipa ansible.
It appears the conditions here[1] and here[2] are filtering out the custom networks by using the "service_net_map_replace" value instead of "name_lower". IIRC, "service_net_map_replace" is meant to be set to the "standard" network name so that tripleo knows about the custom name and its relevant mapping.
Dropping those "if" conditions makes the generated content correct in my environment, though I didn't test it on a standard one.
[1] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/nova_metadata/krb-service-principals/role.role.j2.yaml#L64-L66
[2] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/nova_metadata/krb-service-principals/role.role.j2.yaml#L85-L107
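To illustrate the filtering issue, here's a minimal sketch in plain Python (the variable names and data are hypothetical, not the actual template code): the role's network list carries the custom "name_lower" values, so a condition that compares against "service_net_map_replace" never matches a custom network and silently drops it.

```python
# Hypothetical sketch of the filtering behavior described above.
# The role knows networks by their custom lower-case names:
role_networks = ["internal_api_cloud_0", "storage_cloud_0"]

# Each network_data entry carries both the custom name and the
# "standard" name used for service-net mapping:
networks = [
    {"name_lower": "internal_api_cloud_0", "service_net_map_replace": "internal_api"},
    {"name_lower": "storage_cloud_0", "service_net_map_replace": "storage"},
]

def kept(nets, key):
    """Return the networks whose <key> value matches the role's network list."""
    return [n["name_lower"] for n in nets if n[key] in role_networks]

print(kept(networks, "service_net_map_replace"))  # filter on the standard name -> []
print(kept(networks, "name_lower"))               # filter on the custom name -> both kept
```

With custom names in play, only the "name_lower" comparison keeps the networks; the "service_net_map_replace" comparison returns an empty list, which matches the empty generated content I saw.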
Here's my custom network content:
- name: Storage
  vip: true
  vlan: 11
  name_lower: storage_cloud_0
  service_net_map_replace: storage
  ip_subnet: '172.16.11.0/24'
  allocation_pools: [{'start': '172.16.11.4', 'end': '172.16.11.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1001::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1001::10', 'end': 'fd00:fd00:fd00:1001:ffff:ffff:ffff:fffe'}]
- name: StorageMgmt
  name_lower: storage_mgmt_cloud_0
  service_net_map_replace: storage_mgmt
  vip: true
  vlan: 12
  ip_subnet: '172.16.12.0/24'
  allocation_pools: [{'start': '172.16.12.4', 'end': '172.16.12.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1002::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1002::10', 'end': 'fd00:fd00:fd00:1002:ffff:ffff:ffff:fffe'}]
- name: InternalApi
  name_lower: internal_api_cloud_0
  service_net_map_replace: internal_api
  vip: true
  vlan: 13
  ip_subnet: '172.16.13.0/24'
  allocation_pools: [{'start': '172.16.13.4', 'end': '172.16.13.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1003::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1003::10', 'end': 'fd00:fd00:fd00:1003:ffff:ffff:ffff:fffe'}]
- name: Tenant
  vip: false  # Tenant network does not use VIPs
  name_lower: tenant_cloud_0
  service_net_map_replace: tenant
  vlan: 14
  ip_subnet: '172.16.14.0/24'
  allocation_pools: [{'start': '172.16.14.4', 'end': '172.16.14.250'}]
  # Note that tenant tunneling is only compatible with IPv4 addressing at this time.
  ipv6_subnet: 'fd00:fd00:fd00:1004::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1004::10', 'end': 'fd00:fd00:fd00:1004:ffff:ffff:ffff:fffe'}]
- name: External
  vip: true
  name_lower: external_cloud_0
  service_net_map_replace: external
  vlan: 100
  ip_subnet: '192.168.100.0/24'
  allocation_pools: [{'start': '192.168.100.15', 'end': '192.168.100.100'}]
  gateway_ip: '192.168.100.1'
  ipv6_subnet: '2001:db8:fd00:1100::/64'
  ipv6_allocation_pools: [{'start': '2001:db8:fd00:1100::10', 'end': '2001:db8:fd00:1100:ffff:ffff:ffff:fffe'}]
  gateway_ipv6: '2001:db8:fd00:1100::1'
- name: Management
  # Management network is enabled by default for backwards-compatibility, but
  # is not included in any roles by default. Add to role definitions to use.
  enabled: true
  vip: false  # Management network does not use VIPs
  name_lower: management_cloud_0
  service_net_map_replace: management
  vlan: 16
  ip_subnet: '10.0.21.0/24'
  allocation_pools: [{'start': '10.0.21.4', 'end': '10.0.21.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1005::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1005::10', 'end': 'fd00:fd00:fd00:1005:ffff:ffff:ffff:fffe'}]
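As a quick consistency check over the network data above, here is a small helper I use (my own script, not part of tripleo) to confirm that every custom network carries both "name_lower" and "service_net_map_replace" and that the subnets don't overlap; the list below only repeats a few of the entries for brevity.

```python
import ipaddress

# A few of the network_data entries from above, trimmed to the fields checked:
networks = [
    {"name": "Storage", "name_lower": "storage_cloud_0",
     "service_net_map_replace": "storage", "ip_subnet": "172.16.11.0/24"},
    {"name": "StorageMgmt", "name_lower": "storage_mgmt_cloud_0",
     "service_net_map_replace": "storage_mgmt", "ip_subnet": "172.16.12.0/24"},
    {"name": "InternalApi", "name_lower": "internal_api_cloud_0",
     "service_net_map_replace": "internal_api", "ip_subnet": "172.16.13.0/24"},
]

def check(nets):
    """Fail if a custom network misses a key or two subnets overlap."""
    subnets = []
    for net in nets:
        assert "name_lower" in net and "service_net_map_replace" in net, net["name"]
        subnets.append(ipaddress.ip_network(net["ip_subnet"]))
    for i, a in enumerate(subnets):
        for b in subnets[i + 1:]:
            assert not a.overlaps(b), (a, b)
    return True

print(check(networks))  # True when the data is consistent
```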
Update:
I just tried to deploy the overcloud with the modified t-h-t content, applying this patch[1] for correct access to the keytab.
While it did create a lot of entries in IPA, it failed a bit later while trying to get service certificates. For instance:
Could not get certificate: Execution of '/usr/bin/getcert request -I mysql -f /etc/pki/tls/certs/mysql.crt -c IPA -N CN=oc0-controller0.internalapi.mydomain.tld -K mysql/oc0-controller0.internalapi.mydomain.tld -D overcloud.internalapi.mydomain.tld -D oc0-controller0.internalapi.mydomain.tld -w -k /etc/pki/tls/private/mysql.key' returned 3: New signing request "mysql" added.
<13>Jul 21 13:17:36 puppet-user: Error: /Stage[main]/Tripleo::Certmonger::Mysql/Certmonger_certificate[mysql]: Could not evaluate: Could not get certificate: Server at https://lab-nat-vm.mydomain.tld/ipa/xml failed request, will retry: 4001 (RPC failed at server. The host 'oc0-controller0.internalapi.mydomain.tld' does not exist to add a service to.)
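The 4001 error means the service certificate request referenced an IPA host entry that was never created for that network-specific FQDN. A tiny illustrative helper (my own, hypothetical) to pull the missing host out of such a message:

```python
import re

# Trimmed copy of the IPA error message from the log above:
msg = ("4001 (RPC failed at server. The host "
       "'oc0-controller0.internalapi.mydomain.tld' "
       "does not exist to add a service to.)")

def missing_host(text):
    """Extract the host FQDN that IPA reports as missing, or None."""
    m = re.search(r"The host '([^']+)' does not exist", text)
    return m.group(1) if m else None

print(missing_host(msg))  # oc0-controller0.internalapi.mydomain.tld
```

That FQDN is the per-network host entry the service principal should hang off, which is exactly what's absent from the host-find output below.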
After a quick check, here are the hosts I can see in IPA:
sudo ipa host-find --raw | grep krbcanonicalname
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
and, for info, the existing services:
sudo ipa service-find --raw | grep krbcanonicalname
oc0-controller0.stor...
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: <email address hidden>
krbcanonicalname: haproxy/