MAAS rDNS returns two hostnames that lead to Services not running that should be: apache2, SSLCertificateFile: file '/etc/apache2/ssl/*/cert_* does not exist or is empty

Bug #2012801 reported by Nobuto Murata
This bug affects 3 people
Affects                   Status         Importance  Assigned to
Charm Helpers             Triaged        High        Unassigned
MAAS (status tracked in 3.5)
  3.3                     Fix Committed  High        Christian Grabowski
  3.4                     Fix Released   High        Christian Grabowski
  3.5                     Fix Committed  High        Christian Grabowski
OpenStack Keystone Charm  Invalid        Undecided   Unassigned
charm-magpie              Fix Committed  Undecided   Adam Collard

Bug Description

maas: 1:3.3.2-13177-g.a73a6e2bd-0ubuntu1~22.04.1
juju: 2.9.42-ubuntu-amd64
charm-keystone: latest/edge 9bdc837
charm-vault: latest/edge d8f0840

MAAS 3.3.2 (and 3.3.1 at least) returns PTR records in an inconsistent way for LXD containers created by Juju. Those LXD containers' interfaces are managed as "devices" in MAAS.

This behavior doesn't happen with 3.2.7.

$ juju run --all 'hostname -I'
- MachineId: "0"
  Stdout: "10.206.50.1 192.168.151.120 \n"
- MachineId: 0/lxd/0
  Stdout: "192.168.151.121 \n"
- MachineId: 0/lxd/1
  Stdout: "192.168.151.123 \n"
- MachineId: 0/lxd/2
  Stdout: "192.168.151.122 \n"

$ juju run --all 'dig +short -x $(hostname -I)'
- MachineId: "0"
  Stdout: |
    large-wolf.
    large-wolf.local.
- MachineId: 0/lxd/0
  Stdout: |
    juju-f8d90a-0-lxd-0.maas.
- MachineId: 0/lxd/1
  Stdout: |
    juju-f8d90a-0-lxd-1.maas.
    eth0.juju-f8d90a-0-lxd-1.maas.
- MachineId: 0/lxd/2
  Stdout: |
    juju-f8d90a-0-lxd-2.maas.
    eth0.juju-f8d90a-0-lxd-2.maas.

^^^ MAAS DNS returns two PTR records with and without "eth0" for one IP address for some LXD containers (not all).

How to reproduce:

1. prepare MAAS provider for Juju
2. prepare 3 machines for the workload (either enlisting VMs as if they were bare metal or using Pod VMs is fine)
3. deploy a test bundle
   https://launchpadlibrarian.net/657634345/keystone-ha_vault_edge.yaml
4. unlock vault
5. repeat deployment and destroy-model until "Services not running that should be: apache2" shows up in juju status

[original description]

There are multiple ways to end up with "Services not running that should be: apache2". However, this bug report focuses on the following condition:
- MAAS provider
- OpenStack API services are deployed in LXD containers on top of bare metal
- one certificate is written as /etc/apache2/ssl/*/cert_<fqdn>
- symlink creation fails from /etc/apache2/ssl/*/cert_<vip> to /etc/apache2/ssl/*/cert_<hostname>
- apache2 fails to start because of missing /etc/apache2/ssl/*/cert_<vip>

> $ sudo systemctl status apache2
> × apache2.service - The Apache HTTP Server
> Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
> Active: failed (Result: exit-code) since Sat 2023-03-25 12:50:59 UTC; 13h ago
> Docs: https://httpd.apache.org/docs/2.4/
> CPU: 53ms
>
> Mar 25 12:50:59 juju-043209-2-lxd-0 systemd[1]: Starting The Apache HTTP Server...
> Mar 25 12:50:59 juju-043209-2-lxd-0 apachectl[43820]: AH00526: Syntax error on line 14 of /etc/apache2/sites-enabled/openstack_https_frontend.conf:
> Mar 25 12:50:59 juju-043209-2-lxd-0 apachectl[43820]: SSLCertificateFile: file '/etc/apache2/ssl/keystone/cert_192.168.151.99' does not exist or is empty
> Mar 25 12:50:59 juju-043209-2-lxd-0 apachectl[43817]: Action 'start' failed.
> Mar 25 12:50:59 juju-043209-2-lxd-0 apachectl[43817]: The Apache error log may have more information.
> Mar 25 12:50:59 juju-043209-2-lxd-0 systemd[1]: apache2.service: Control process exited, code=exited, status=1/FAILURE
> Mar 25 12:50:59 juju-043209-2-lxd-0 systemd[1]: apache2.service: Failed with result 'exit-code'.
> Mar 25 12:50:59 juju-043209-2-lxd-0 systemd[1]: Failed to start The Apache HTTP Server.

Long story short, this issue happens when reverse DNS lookups for an IP address return inconsistent responses within get_hostname().
https://github.com/juju/charm-helpers/blob/6e302bab63e22e356d77d76a5c6d90d9d24c6390/charmhelpers/contrib/network/ip.py#L497

In this case, the keystone charm uses the initial get_request() call to request a certificate and writes the cert based on the output. Then the charm uses a second get_request() call to derive the path to create a symlink to, and ends up pointing at a file that doesn't exist.
https://github.com/juju/charm-helpers/blob/b5725ac546372e7d4004d15095f79cdd5e7da687/charmhelpers/contrib/openstack/cert_utils.py#L105

[requested and written cert]
/etc/apache2/ssl/keystone/cert_eth0.juju-043209-2-lxd-0.maas (w/ eth0.)
-> exists

[the patch trying to create a symlink to]
/etc/apache2/ssl/keystone/cert_juju-043209-2-lxd-0.maas (w/o eth0.)
-> does not exist

unit-keystone-2: 12:45:41 WARNING unit.keystone/2.juju-log certificates:10: get_request: self.hostname_entry={'cn': 'eth0.juju-043209-2-lxd-0.maas', 'addresses': ['192.168.151.131', '192.168.151.99']},self.entries=[{'cn': 'eth0.juju-043209-2-lxd-0.maas', 'addresses': ['192.168.151.131', '192.168.151.99']}],sans=['192.168.151.131', '192.168.151.99'],request={'eth0.juju-043209-2-lxd-0.maas': {'sans': ['192.168.151.131', '192.168.151.99']}},req={'cert_requests': '{"eth0.juju-043209-2-lxd-0.maas": {"sans": ["192.168.151.131", "192.168.151.99"]}}', 'unit_name': 'keystone_2'}
unit-keystone-2: 12:50:54 WARNING unit.keystone/2.juju-log certificates:10: get_request: self.hostname_entry={'cn': 'juju-043209-2-lxd-0.maas', 'addresses': ['192.168.151.131', '192.168.151.99']},self.entries=[{'cn': 'juju-043209-2-lxd-0.maas', 'addresses': ['192.168.151.131', '192.168.151.99']}],sans=['192.168.151.131', '192.168.151.99'],request={'juju-043209-2-lxd-0.maas': {'sans': ['192.168.151.131', '192.168.151.99']}},req={'cert_requests': {'juju-043209-2-lxd-0.maas': {'sans': ['192.168.151.131', '192.168.151.99']}}, 'unit_name': 'keystone_2'}
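The failure mode above can be sketched as follows (the paths and helper names here are hypothetical, not the actual charm-helpers code): the certificate is written under the CN returned by the first reverse lookup, while the symlink path is derived from a second lookup that may return the other PTR name, leaving a dangling symlink.

```python
import os
import tempfile

def write_cert(ssl_dir, cn):
    # First get_request(): write the cert under the CN from the first rDNS answer.
    path = os.path.join(ssl_dir, "cert_%s" % cn)
    with open(path, "w") as f:
        f.write("dummy certificate")
    return path

def link_cert(ssl_dir, vip, cn):
    # Second get_request(): symlink cert_<vip> -> cert_<cn>, where cn comes
    # from a *second* rDNS lookup and may differ from the first answer.
    target = os.path.join(ssl_dir, "cert_%s" % cn)
    link = os.path.join(ssl_dir, "cert_%s" % vip)
    os.symlink(target, link)
    return os.path.exists(link)  # follows the symlink: False if it dangles

ssl_dir = tempfile.mkdtemp()
write_cert(ssl_dir, "eth0.juju-043209-2-lxd-0.maas")                    # first answer
ok = link_cert(ssl_dir, "192.168.151.99", "juju-043209-2-lxd-0.maas")   # second answer
print(ok)  # False: apache2's SSLCertificateFile points at a dangling symlink
```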

This is due to how MAAS DNS works.

$ dig +short @192.168.151.1 -x 192.168.151.131
eth0.juju-043209-2-lxd-0.maas.
juju-043209-2-lxd-0.maas.

$ grep -I -C1 -r juju-043209-2-lxd-0 /var/lib/bind/maas/
/var/lib/bind/maas/zone.maas-$ORIGIN maas.
/var/lib/bind/maas/zone.maas:juju-043209-2-lxd-0 A 192.168.151.131
/var/lib/bind/maas/zone.maas:$ORIGIN juju-043209-2-lxd-0.maas.
/var/lib/bind/maas/zone.maas-eth0 A 192.168.151.131
--
/var/lib/bind/maas/zone.151.168.192.in-addr.arpa-129 PTR juju-fe03b8-2-lxd-8.maas.
/var/lib/bind/maas/zone.151.168.192.in-addr.arpa:131 PTR eth0.juju-043209-2-lxd-0.maas.
/var/lib/bind/maas/zone.151.168.192.in-addr.arpa: PTR juju-043209-2-lxd-0.maas.
/var/lib/bind/maas/zone.151.168.192.in-addr.arpa-134 PTR eth0.juju-043209-0-lxd-0.maas.

>>> str(dns.resolver.query(dns.reversename.from_address("192.168.151.131"), "PTR")[0])
'juju-043209-2-lxd-0.maas.'
>>> str(dns.resolver.query(dns.reversename.from_address("192.168.151.131"), "PTR")[0])
'eth0.juju-043209-2-lxd-0.maas.'
>>> str(dns.resolver.query(dns.reversename.from_address("192.168.151.131"), "PTR")[0])
'juju-043209-2-lxd-0.maas.'
>>> str(dns.resolver.query(dns.reversename.from_address("192.168.151.131"), "PTR")[0])
'eth0.juju-043209-2-lxd-0.maas.'
>>> str(dns.resolver.query(dns.reversename.from_address("192.168.151.131"), "PTR")[0])
'juju-043209-2-lxd-0.maas.'
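A possible charm-side workaround (a sketch only, not the actual charm-helpers fix) would be to fetch the full PTR answer set and select a name deterministically, instead of taking the first answer of a rotating set, e.g. by preferring the name with the fewest labels:

```python
def pick_canonical(ptr_names):
    """Pick a stable name from a PTR answer set: prefer the fewest DNS
    labels (so 'juju-...maas.' beats 'eth0.juju-...maas.'), breaking
    ties alphabetically."""
    return min(ptr_names, key=lambda name: (name.count("."), name))

# The two answers MAAS returns in rotating order for 192.168.151.131:
answers = ["eth0.juju-043209-2-lxd-0.maas.", "juju-043209-2-lxd-0.maas."]
print(pick_canonical(answers))  # juju-043209-2-lxd-0.maas., regardless of order
```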

Revision history for this message
Nobuto Murata (nobuto) wrote :

Steps 3 and 4 can be run unattended as follows.
====
juju destroy-model keystone-test --no-wait --force -y; \
    juju add-model keystone-test maas && \
    juju deploy ./keystone-ha_vault_edge.yaml && time juju-wait -w --exclude vault

VAULT_ADDR="http://$(juju run --unit vault/leader -- network-get certificates --ingress-address):8200"
export VAULT_ADDR

vault_init_output="$(vault operator init -key-shares=1 -key-threshold=1 -format json)"
vault operator unseal "$(echo "$vault_init_output" | jq -r .unseal_keys_b64[])"

VAULT_TOKEN="$(echo "$vault_init_output" | jq -r .root_token)"
export VAULT_TOKEN

juju run-action --wait vault/leader authorize-charm \
 token="$(vault token create -ttl=10m -format json | jq -r .auth.client_token)"
juju run-action vault/leader --wait generate-root-ca
====

description: updated
Revision history for this message
Nobuto Murata (nobuto) wrote (last edit ):

Subscribing ~field-high.

Please note that the log files above are about FQDN vs FQDN, but there is another case of FQDN vs hostname.
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1952414/comments/17

When thinking about a proper fix for this issue, there might be a possibility of killing two birds with one stone.

Nobuto Murata (nobuto)
summary: - Services not running that should be: apache2, SSLCertificateFile: file
- '/etc/apache2/ssl/*/cert_* does not exist or is empty
+ MAAS rDNS returns two hostnames that lead to Services not running that
+ should be: apache2, SSLCertificateFile: file '/etc/apache2/ssl/*/cert_*
+ does not exist or is empty
Revision history for this message
Nobuto Murata (nobuto) wrote :

As I've confirmed the behavioral difference between MAAS 3.2.7 and 3.3.2 (and 3.3.1), I'm adding a MAAS task and closing the other tasks for the time being.

description: updated
Changed in charm-helpers:
status: New → Invalid
Changed in charm-keystone:
status: New → Invalid
description: updated
Revision history for this message
Alexsander de Souza (alexsander-souza) wrote :
Changed in maas:
status: New → Triaged
milestone: none → 3.4.0
importance: Undecided → Critical
Revision history for this message
Adam Collard (adam-collard) wrote :

If, as described (thanks for the detailed report, Nobuto!), the charm relies on a stable IP <-> hostname mapping for a given IP, then it needs to take care of that itself and not depend on any DNS server giving stable results.

It's perfectly valid for there to be more than one PTR record for any given IP; the charm has to have stable logic for determining which one to pick.

Changed in charm-keystone:
status: Invalid → New
Changed in charm-helpers:
status: Invalid → New
Revision history for this message
Nobuto Murata (nobuto) wrote :

> It's perfectly valid for there to be more than one PTR record for any given IP

It may not be a clear violation of the RFCs indeed, but having multiple PTR records can confuse software (and somehow MAAS 3.3 has multiple PTR records for some entries, not all).
https://serverfault.com/questions/618700/why-multiple-ptr-records-in-dns-is-not-recommended

Even if we leave charms out of the picture, MAAS 3.2's behavior is somewhat "preferred" so that MAAS can be the single source of truth of a data center with its IPAM functionality.

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

@adam-collard Are you effectively saying that "MAAS 3.3+ is working as designed, this is not a bug, and multiple PTR records are here to stay"? I.e., that no 'fix' is forthcoming, and that consuming applications need to deal with this (new) behaviour/feature?

From the keystone charm users' perspective, a fix is needed, and I'd really like to understand whether it needs to be done in charm-helpers/keystone, or if MaaS will be fixed. Thanks!

Revision history for this message
Jerzy Husakowski (jhusakowski) wrote :

Please investigate the change in behaviour of MAAS rDNS queries.

Changed in maas:
assignee: nobody → Christian Grabowski (cgrabowski)
Revision history for this message
Christian Grabowski (cgrabowski) wrote :

There is a 1:1 mapping between A/AAAA records and PTR records. If <hostname>.maas exists and <iface>.<hostname>.maas exists and each points to the same IP, the PTR lookup will have an answer for both. This isn't a change in MAAS; this is just how DNS works. What seems to have changed is the generation of two forward records where MAAS may have generated one in the past.
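His point can be illustrated with a toy reverse-zone generator (an illustration only, not MAAS code): if two forward names map to the same address, a 1:1 forward-to-PTR scheme necessarily yields two PTR answers for that address.

```python
# Two forward (A) records for the same IP, as seen in the zone file earlier.
forward = {
    "juju-043209-2-lxd-0.maas.": "192.168.151.131",
    "eth0.juju-043209-2-lxd-0.maas.": "192.168.151.131",
}

# One PTR per A record: the reverse zone ends up with two answers for the IP.
reverse = {}
for name, ip in forward.items():
    reverse.setdefault(ip, []).append(name)

print(sorted(reverse["192.168.151.131"]))
# ['eth0.juju-043209-2-lxd-0.maas.', 'juju-043209-2-lxd-0.maas.']
```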

It'd be helpful to see the output of `maas $PROFILE machine read $SYSTEM_ID` for a given machine in question.

Revision history for this message
Nobuto Murata (nobuto) wrote :

For the record, this is happening with 1:3.3.3-13184-g.3e9972c19-0ubuntu1~22.04.1 as well, so this is similar to but not exactly the same issue as https://bugs.launchpad.net/maas/+bug/2011841 in the end.

I'm going to collect information on this test deployment, where 7 out of 25 LXD containers had two A/PTR records for one IP address.

$ juju run --all 'dig +short -x $(hostname -I)' | grep lxd-
    juju-38eb05-0-lxd-0.maas.
    juju-38eb05-0-lxd-2.maas.
    juju-38eb05-0-lxd-3.maas.
    juju-38eb05-0-lxd-4.maas.
    juju-38eb05-0-lxd-7.maas.
    juju-38eb05-1-lxd-6.maas.
    eth0.juju-38eb05-1-lxd-6.maas.
    juju-38eb05-2-lxd-2.maas.
    eth0.juju-38eb05-2-lxd-2.maas.
    juju-38eb05-0-lxd-1.maas.
    juju-38eb05-0-lxd-5.maas.
    juju-38eb05-0-lxd-6.maas.
    juju-38eb05-1-lxd-0.maas.
    juju-38eb05-1-lxd-1.maas.
    juju-38eb05-1-lxd-2.maas.
    juju-38eb05-1-lxd-3.maas.
    juju-38eb05-1-lxd-4.maas.
    eth0.juju-38eb05-1-lxd-4.maas.
    eth0.juju-38eb05-1-lxd-5.maas.
    juju-38eb05-1-lxd-5.maas.
    juju-38eb05-1-lxd-7.maas.
    eth0.juju-38eb05-1-lxd-7.maas.
    juju-38eb05-2-lxd-0.maas.
    juju-38eb05-2-lxd-1.maas.
    juju-38eb05-2-lxd-3.maas.
    juju-38eb05-2-lxd-4.maas.
    juju-38eb05-2-lxd-5.maas.
    eth0.juju-38eb05-2-lxd-5.maas.
    juju-38eb05-2-lxd-6.maas.
    juju-38eb05-2-lxd-7.maas.
    eth0.juju-38eb05-2-lxd-7.maas.
    juju-38eb05-2-lxd-8.

Revision history for this message
Nobuto Murata (nobuto) wrote :

I've attached the sosreport and it should cover all maas logs and configs.

> It'd be helpful to see the output of `maas $PROFILE machine read $SYSTEM_ID` for a given machine in question.

Those are devices registered by Juju (not machines) so please find attached a JSON as the output of `maas admin devices read`.

Changed in maas:
status: Triaged → New
milestone: 3.4.0 → none
Revision history for this message
Nobuto Murata (nobuto) wrote :

This bug has been marked as Critical on April 21st in the MAAS project, and I'm waiting for the MAAS team to reproduce and diagnose the issue.

I attached all maas logs and configs before. I can offer a database dump if necessary, but it should be straightforward to create a reproduction environment as per the bug description. Let me know if it cannot be reproduced after following the steps.

Changed in maas:
status: New → In Progress
milestone: none → 3.4.0
Changed in maas:
status: In Progress → Fix Committed
Revision history for this message
Nobuto Murata (nobuto) wrote :

As I commented in the review, I still see the issue.
https://code.launchpad.net/~cgrabowski/maas/+git/maas/+merge/444471

$ juju run --all 'dig +short -x $(hostname -I)' | grep lxd-
 eth0.juju-82834d-2-lxd-2.maas.
 juju-82834d-2-lxd-2.maas.
 juju-82834d-0-lxd-0.maas.
 juju-82834d-0-lxd-1.maas.
 juju-82834d-0-lxd-2.maas.
 juju-82834d-0-lxd-3.maas.
 juju-82834d-0-lxd-4.maas.
 juju-82834d-0-lxd-5.maas.
 juju-82834d-0-lxd-6.maas.
 juju-82834d-0-lxd-7.maas.
 juju-82834d-1-lxd-0.maas.
 eth0.juju-82834d-1-lxd-0.maas.
 juju-82834d-1-lxd-1.maas.
 eth0.juju-82834d-1-lxd-1.maas.
 juju-82834d-1-lxd-2.maas.
 juju-82834d-1-lxd-3.maas.
 juju-82834d-1-lxd-5.maas.
 eth0.juju-82834d-1-lxd-6.maas.
 juju-82834d-1-lxd-6.maas.
 juju-82834d-1-lxd-7.maas.
 juju-82834d-2-lxd-1.maas.
 eth0.juju-82834d-2-lxd-1.maas.
 juju-82834d-2-lxd-4.maas.
 juju-82834d-2-lxd-8.
 juju-82834d-1-lxd-4.maas.
 juju-82834d-2-lxd-0.maas.
 juju-82834d-2-lxd-3.maas.
 eth0.juju-82834d-2-lxd-3.maas.
 juju-82834d-2-lxd-5.maas.
 juju-82834d-2-lxd-6.maas.
 eth0.juju-82834d-2-lxd-7.maas.
 juju-82834d-2-lxd-7.maas.

Revision history for this message
Nobuto Murata (nobuto) wrote :

Also, I was preparing a 3.4 env to test an upcoming build properly, but I was hit by another broken-DNS issue with the 3.4 branch:
https://bugs.launchpad.net/maas/+bug/2023398

Revision history for this message
Nobuto Murata (nobuto) wrote :

> As I commented in the review, I still see the issue.
> https://code.launchpad.net/~cgrabowski/maas/+git/maas/+merge/444471

Okay, I've tested it properly now after realizing that the code is for PostgreSQL triggers.

1. download a patch
$ wget https://code.launchpad.net/~cgrabowski/maas/+git/maas/+merge/444471/+preview-diff/1014174/+files/preview.diff

2. apply it by hand
$ cat preview.diff \
    | filterdiff -i a/src/maasserver/triggers/system.py \
    | sudo patch /usr/lib/python3/dist-packages/maasserver/triggers/system.py

3. execute maas db upgrade
$ sudo maas-region dbupgrade

4. confirm the trigger function has the new and expected content
$ sudo -u postgres psql -d maasdb \
    -c "SELECT prosrc FROM pg_proc WHERE proname = 'sys_dns_updates_interface_ip_insert';"

And the behavior is still confusing:

1. Entries with and without eth0 are still mixed, even though there is no longer any duplication for each host.
2. The name without eth0 should probably be preferred as the "canonical" hostname of the device.

$ grep lxd- /var/lib/bind/maas/zone.maas
eth0.juju-b1c448-0-lxd-0 A 192.168.151.199
eth0.juju-b1c448-0-lxd-1 A 192.168.151.200
eth0.juju-b1c448-0-lxd-2 A 192.168.151.111
juju-b1c448-0-lxd-3 A 192.168.151.186
eth0.juju-b1c448-0-lxd-4 A 192.168.151.188
eth0.juju-b1c448-0-lxd-5 A 192.168.151.189
eth0.juju-b1c448-0-lxd-6 A 192.168.151.187
eth0.juju-b1c448-0-lxd-7 A 192.168.151.198
eth0.juju-b1c448-1-lxd-0 A 192.168.151.117
eth0.juju-b1c448-1-lxd-1 A 192.168.151.107
eth0.juju-b1c448-1-lxd-2 A 192.168.151.122
eth0.juju-b1c448-1-lxd-3 A 192.168.151.120
eth0.juju-b1c448-1-lxd-4 A 192.168.151.194
eth0.juju-b1c448-1-lxd-5 A 192.168.151.195
eth0.juju-b1c448-1-lxd-6 A 192.168.151.196
eth0.juju-b1c448-1-lxd-7 A 192.168.151.197
eth0.juju-b1c448-2-lxd-0 A 192.168.151.128
eth0.juju-b1c448-2-lxd-1 A 192.168.151.190
eth0.juju-b1c448-2-lxd-2 A 192.168.151.129
eth0.juju-b1c448-2-lxd-3 A 192.168.151.104
eth0.juju-b1c448-2-lxd-4 A 192.168.151.106
eth0.juju-b1c448-2-lxd-5 A 192.168.151.191
eth0.juju-b1c448-2-lxd-6 A 192.168.151.118
eth0.juju-b1c448-2-lxd-7 A 192.168.151.193
eth0.juju-b1c448-2-lxd-8 A 192.168.151.192

Revision history for this message
Nobuto Murata (nobuto) wrote :

> 2. probably without eth0 is preferred to be the "canonical" hostname of the device

Just to add to this, a missing HOSTNAME.maas response is a problem in many areas.

$ dig +short @MAAS_DNS eth0.juju-b1c448-0-lxd-0.maas
192.168.151.199

$ dig +short @MAAS_DNS juju-b1c448-0-lxd-0.maas
-> no record

HOSTNAME.maas (as in juju-b1c448-0-lxd-0.maas) is expected as the FQDN in many places, such as charm code. Fixing the regression of having duplicate records is good, but losing resolution of the "canonical" FQDN may be a bigger problem.

Changed in maas:
status: Fix Committed → In Progress
Changed in maas:
status: In Progress → Fix Committed
Revision history for this message
Nobuto Murata (nobuto) wrote :

> ** Changed in: maas
> Status: In Progress => Fix Committed

Could you elaborate on the reason behind the status change? Was there a follow-up commit on top of the following?
https://code.launchpad.net/~cgrabowski/maas/+git/maas/+merge/444471

Revision history for this message
Christian Grabowski (cgrabowski) wrote :

We believe that is the fix; it just needs to be tested in a proper deb build. I've created this PPA with the fix (https://launchpad.net/~cgrabowski/+archive/ubuntu/maas-next), though there is an issue with the Go dependencies not being properly vendored for the LP builder, which I'm currently working on resolving.

Revision history for this message
Christian Grabowski (cgrabowski) wrote :

An updated package has been built for that PPA (https://launchpad.net/~cgrabowski/+archive/ubuntu/maas-next), please test it at your convenience.

Revision history for this message
Nobuto Murata (nobuto) wrote :

Okay, here are the test results with:

$ apt policy maas
maas:
  Installed: 1:3.4.0~beta3-14300-g.fecd8fa8f-0ubuntu1~22.04.1
  Candidate: 1:3.4.0~beta3-14300-g.fecd8fa8f-0ubuntu1~22.04.1
  Version table:
 *** 1:3.4.0~beta3-14300-g.fecd8fa8f-0ubuntu1~22.04.1 500
        500 https://ppa.launchpadcontent.net/cgrabowski/maas-next/ubuntu jammy/main amd64 Packages
        100 /var/lib/dpkg/status

There is no by-hand modification on my end this time.

There are no duplicate entries, which is good. But as I commented above, there is an inconsistency between names with and without eth0, and because of that some devices are missing the HOSTNAME.maas entry.

$ grep lxd- /var/lib/bind/maas/zone.maas
juju-d52c03-0-lxd-0 A 192.168.151.117
juju-d52c03-0-lxd-1 A 192.168.151.119
juju-d52c03-0-lxd-2 A 192.168.151.108
juju-d52c03-0-lxd-3 A 192.168.151.105
juju-d52c03-0-lxd-4 A 192.168.151.107
juju-d52c03-0-lxd-5 A 192.168.151.106
juju-d52c03-0-lxd-6 A 192.168.151.120
juju-d52c03-0-lxd-7 A 192.168.151.118
eth0.juju-d52c03-1-lxd-0 A 192.168.151.128
eth0.juju-d52c03-1-lxd-1 A 192.168.151.127
eth0.juju-d52c03-1-lxd-2 A 192.168.151.129
juju-d52c03-1-lxd-3 A 192.168.151.116
juju-d52c03-1-lxd-4 A 192.168.151.114
juju-d52c03-1-lxd-5 A 192.168.151.113
juju-d52c03-1-lxd-6 A 192.168.151.115
eth0.juju-d52c03-1-lxd-7 A 192.168.151.126
juju-d52c03-2-lxd-0 A 192.168.151.123
juju-d52c03-2-lxd-1 A 192.168.151.124
juju-d52c03-2-lxd-2 A 192.168.151.122
juju-d52c03-2-lxd-3 A 192.168.151.125
juju-d52c03-2-lxd-4 A 192.168.151.112
juju-d52c03-2-lxd-5 A 192.168.151.110
juju-d52c03-2-lxd-6 A 192.168.151.109
juju-d52c03-2-lxd-7 A 192.168.151.111
juju-d52c03-2-lxd-8 A 192.168.151.121

Changed in maas:
status: Fix Committed → Confirmed
Changed in maas:
status: Confirmed → Triaged
Changed in maas:
status: Triaged → In Progress
Revision history for this message
Bartosz Woronicz (mastier1) wrote :

I'm facing a similar issue with physical machines. Some hosts return two entries:

installed: 3.3.4-13189-g.f88272d1e (28521) 138MB -

ubuntu@somecompany-infra-management-1:~/cpe-deployments$ juju ssh 21 'dig -x $(hostname -I)' | grep -A3 "ANSWER SECTION"
;; ANSWER SECTION:
13.129.169.10.in-addr.arpa. 8 IN PTR somecompany-compute-19.some-internal-dns.example
13.129.169.10.in-addr.arpa. 8 IN PTR enp68s0f0.somecompany-compute-19.some-internal-dns.example

Connection to 10.169.130.25 closed.
ubuntu@somecompany-infra-management-1:~/cpe-deployments$ juju ssh 20 'dig -x $(hostname -I)' | grep -A3 "ANSWER SECTION"
;; ANSWER SECTION:
26.129.169.10.in-addr.arpa. 30 IN PTR somecompany-compute-11.some-internal-dns.example

;; Query time: 4 msec
Connection to 10.169.130.24 closed.

Revision history for this message
Felipe Reyes (freyes) wrote :

Setting the charm-helpers and charm-keystone tasks as invalid now that the issue has been identified in the MAAS codebase.

Changed in charm-keystone:
status: New → Invalid
Changed in charm-helpers:
status: New → Invalid
Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

Re-opening, as MAAS 3.3+ will keep the new behaviour where multiple hostnames are returned. We'll have to think about how to handle this, as charm-helpers/the charm needs to work out which host to use.

Changed in charm-helpers:
importance: Undecided → High
status: Invalid → Triaged
Changed in charm-magpie:
status: New → Incomplete
status: Incomplete → In Progress
Changed in charm-magpie:
status: In Progress → Fix Committed
assignee: nobody → Adam Collard (adam-collard)