custom network names not taken into account for extraconfig/nova_metadata/krb-service-principals/role.role.j2.yaml

Bug #1888388 reported by Cédric Jeanneret
This bug affects 2 people
Affects: tripleo
Status: Triaged
Importance: High
Assigned to: Unassigned
Milestone: xena-3

Bug Description

Hello,

Apparently, custom network names aren't taken into account when deploying with TLS-e and the new tripleo-ipa Ansible roles.

It appears the conditions here[1] and here[2] filter out the custom networks by matching on the "service_net_map_replace" value instead of "name_lower" (sketched below the links). IIRC, "service_net_map_replace" is meant to be set to the "standard" name, so that tripleo knows about the custom name and its relevant mapping.

Dropping those "if" conditions makes the generated content correct on my env, though I didn't test it on a standard env.

[1] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/nova_metadata/krb-service-principals/role.role.j2.yaml#L64-L66

[2] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/extraconfig/nova_metadata/krb-service-principals/role.role.j2.yaml#L85-L107
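
For clarity, here's a rough sketch of the failing pattern (assumed shape only, not the verbatim template; "role_networks" is a hypothetical stand-in for the role's network list, which carries the custom name_lower values):

{#- Sketch: the guard resolves to the service_net_map_replace value
    ("internal_api") while role_networks holds the name_lower value
    ("internal_api_cloud_0"), so custom networks never match: #}
{%- for network in networks if network.enabled|default(true) %}
{%- if network.service_net_map_replace|default(network.name_lower) in role_networks %}
  {{network.name_lower}}_fqdn: ...  {# never emitted for custom networks #}
{%- endif %}
{%- endfor %}

Comparing on network.name_lower instead (or dropping the guard, as tried above) keeps the custom networks in the generated content.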

Here's my custom network content:
- name: Storage
  vip: true
  vlan: 11
  name_lower: storage_cloud_0
  service_net_map_replace: storage
  ip_subnet: '172.16.11.0/24'
  allocation_pools: [{'start': '172.16.11.4', 'end': '172.16.11.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1001::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1001::10', 'end': 'fd00:fd00:fd00:1001:ffff:ffff:ffff:fffe'}]
- name: StorageMgmt
  name_lower: storage_mgmt_cloud_0
  service_net_map_replace: storage_mgmt
  vip: true
  vlan: 12
  ip_subnet: '172.16.12.0/24'
  allocation_pools: [{'start': '172.16.12.4', 'end': '172.16.12.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1002::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1002::10', 'end': 'fd00:fd00:fd00:1002:ffff:ffff:ffff:fffe'}]
- name: InternalApi
  name_lower: internal_api_cloud_0
  service_net_map_replace: internal_api
  vip: true
  vlan: 13
  ip_subnet: '172.16.13.0/24'
  allocation_pools: [{'start': '172.16.13.4', 'end': '172.16.13.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1003::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1003::10', 'end': 'fd00:fd00:fd00:1003:ffff:ffff:ffff:fffe'}]
- name: Tenant
  vip: false # Tenant network does not use VIPs
  name_lower: tenant_cloud_0
  service_net_map_replace: tenant
  vlan: 14
  ip_subnet: '172.16.14.0/24'
  allocation_pools: [{'start': '172.16.14.4', 'end': '172.16.14.250'}]
  # Note that tenant tunneling is only compatible with IPv4 addressing at this time.
  ipv6_subnet: 'fd00:fd00:fd00:1004::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1004::10', 'end': 'fd00:fd00:fd00:1004:ffff:ffff:ffff:fffe'}]
- name: External
  vip: true
  name_lower: external_cloud_0
  service_net_map_replace: external
  vlan: 100
  ip_subnet: '192.168.100.0/24'
  allocation_pools: [{'start': '192.168.100.15', 'end': '192.168.100.100'}]
  gateway_ip: '192.168.100.1'
  ipv6_subnet: '2001:db8:fd00:1100::/64'
  ipv6_allocation_pools: [{'start': '2001:db8:fd00:1100::10', 'end': '2001:db8:fd00:1100:ffff:ffff:ffff:fffe'}]
  gateway_ipv6: '2001:db8:fd00:1100::1'
- name: Management
  # Management network is enabled by default for backwards-compatibility, but
  # is not included in any roles by default. Add to role definitions to use.
  enabled: true
  vip: false # Management network does not use VIPs
  name_lower: management_cloud_0
  service_net_map_replace: management
  vlan: 16
  ip_subnet: '10.0.21.0/24'
  allocation_pools: [{'start': '10.0.21.4', 'end': '10.0.21.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:1005::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:1005::10', 'end': 'fd00:fd00:fd00:1005:ffff:ffff:ffff:fffe'}]
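
For context, service_net_map_replace is documented to carry the original name_lower, so that occurrences of the standard name in ServiceNetMapDefaults get substituted with the custom one. With the entries above, the resolved map should end up as something like this (service keys picked as illustrative examples):

parameter_defaults:
  ServiceNetMap:
    # defaults such as "MysqlNetwork: internal_api" are rewritten
    # to the custom name_lower values:
    MysqlNetwork: internal_api_cloud_0
    NovaApiNetwork: internal_api_cloud_0
    SwiftStorageNetwork: storage_mgmt_cloud_0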

Cédric Jeanneret (cjeanner) wrote:

Update:
I just tried to deploy the overcloud with modified t-h-t content, applying this patch[1] for correct access to the keytab.

While it did create a lot of things in IPA, it failed a bit later while trying to get the service certificates. For instance:

Could not get certificate: Execution of '/usr/bin/getcert request -I mysql -f /etc/pki/tls/certs/mysql.crt -c IPA -N CN=oc0-controller0.internalapi.mydomain.tld -K mysql/oc0-controller0.internalapi.mydomain.tld -D overcloud.internalapi.mydomain.tld -D oc0-controller0.internalapi.mydomain.tld -w -k /etc/pki/tls/private/mysql.key' returned 3: New signing request "mysql" added.
<13>Jul 21 13:17:36 puppet-user: Error: /Stage[main]/Tripleo::Certmonger::Mysql/Certmonger_certificate[mysql]: Could not evaluate: Could not get certificate: Server at https://lab-nat-vm.mydomain.tld/ipa/xml failed request, will retry: 4001 (RPC failed at server. The host 'oc0-controller0.internalapi.mydomain.tld' does not exist to add a service to.)

After a quick check, here are the hosts I can see in IPA:

sudo ipa host-find --raw | grep krbcanonicalname
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>

and, for info, the existing services:

sudo ipa service-find --raw | grep krbcanonicalname
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: <email address hidden>
  krbcanonicalname: haproxy/oc0-controller0.stor...
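
A quick way to confirm the mismatch is to look up the exact host the failed request references (hostname taken from the certmonger error above); given the list above, this should fail with a host-not-found error, consistent with the 4001 RPC error:

sudo ipa host-show oc0-controller0.internalapi.mydomain.tld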


Cédric Jeanneret (cjeanner) wrote:

Apparently my env files are correct, according to this doc:
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/multiple_overclouds.html#deploying-additional-overclouds

And, as I thought, service_net_map_replace is supposed to point to the "original" name, so I'm wondering how it's supposed to work in the jinja2 generation :/.
Apparently the domain itself isn't really affected by the multi-overcloud setup, though I've updated my env to use per-overcloud names as well, such as:

parameter_defaults:
  CloudName: overcloud0.mydomain.tld
  CloudNameInternal: overcloud0.internalapicloud0.mydomain.tld
  CloudNameStorage: overcloud0.storagecloud0.mydomain.tld
  CloudNameStorageManagement: overcloud0.storagemgmtcloud0.mydomain.tld
  CloudNameCtlplane: overcloud0.ctlplane.mydomain.tld
  CloudDomain: mydomain.tld
  DnsSearchDomains: ["mydomain.tld"]

But it's still failing :(.

Changed in tripleo:
milestone: victoria-3 → wallaby-1
milestone: wallaby-1 → wallaby-2
milestone: wallaby-2 → wallaby-3
milestone: wallaby-3 → wallaby-rc1
milestone: wallaby-rc1 → xena-1
milestone: xena-1 → xena-2
milestone: xena-2 → xena-3