Upgrade from 245.4-4ubuntu3.2 to 245.4-4ubuntu3.3 appears to break DNS resolution in some cases
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
cloud-images | New | Undecided | Unassigned |
systemd | New | Unknown | |
cloud-init (Ubuntu) | Invalid | Undecided | Unassigned |
Focal | Invalid | Undecided | Unassigned |
Groovy | Won't Fix | Undecided | Unassigned |
systemd (Ubuntu) | Fix Released | Undecided | Unassigned |
Focal | Fix Released | Medium | Dan Streetman |
Groovy | Fix Released | Medium | Dan Streetman |
Bug Description
[impact]
On boot of a specific Azure instance, the ID_NET_DRIVER property of the instance's eth0 interface is not set. When systemd-networkd is later restarted, it fails to take control of the interface, which results in DNS failures at first and, once the DHCP lease expires, complete loss of networking.
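A quick way to spot this state on a running instance is to compare the udev property with networkd's view of the link (a sketch; the eth0 name and the hv_netvsc driver value are assumptions based on this report's Azure environment):
$ udevadm info /sys/class/net/eth0 | grep ID_NET_DRIVER
(empty output means the property is missing)
$ networkctl list eth0
(a SETUP column of 'unmanaged' means networkd has given up the link)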
[test case]
This occurs on first boot of an instance using the specific image; it is not reproducible with the latest Ubuntu image, nor on any reboot of the affected image, and it has not been reproducible (for me) using debug-enabled images based on the affected image.
So, while the problem is reproducible with the specific image in question, it's not possible to verify the fix directly, since any change to the image removes reproducibility.
However, while the problem itself can't be reproduced and then verified, if the assumption is correct (that the 'add' uevent is being missed on boot), that behavior can be tested and verified:
$ udevadm info /sys/class/net/eth0 | grep ID_NET_DRIVER
E: ID_NET_DRIVER=hv_netvsc
$ sudo rm /run/udev/data/n2
(note, change 'n2' to whichever network interface index is correct)
$ udevadm info /sys/class/net/eth0 | grep ID_NET_DRIVER
$ sudo udevadm trigger -c change /sys/class/net/eth0
$ udevadm info /sys/class/net/eth0 | grep ID_NET_DRIVER
(note the 'change' uevent did not populate the ID_NET_DRIVER property)
$ sudo udevadm trigger -c add /sys/class/net/eth0
$ udevadm info /sys/class/net/eth0 | grep ID_NET_DRIVER
E: ID_NET_DRIVER=hv_netvsc
(note the 'add' uevent did populate ID_NET_DRIVER)
With the fix applied, the verification should result in ID_NET_DRIVER being populated by a 'change' uevent.
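The steps above can be combined into a single pass/fail check for verification (a sketch; it derives the nN database index from the interface's ifindex rather than hardcoding n2, and assumes the interface is eth0):
$ iface=eth0
$ sudo rm /run/udev/data/n$(cat /sys/class/net/$iface/ifindex)
$ sudo udevadm trigger -c change /sys/class/net/$iface
$ sudo udevadm settle
$ udevadm info /sys/class/net/$iface | grep -q ID_NET_DRIVER && echo PASS || echo FAIL
(PASS on a fixed systemd, where the 'change' uevent repopulates ID_NET_DRIVER; FAIL on an affected one)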
[regression potential]
Any regression would likely involve problems with systemd-udevd processing 'change' uevents from network devices, and/or incorrect udev device properties.
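A simple smoke test for that risk is to confirm that a 'change' uevent leaves the device's udev properties intact (a sketch, assuming an eth0 interface):
$ udevadm info /sys/class/net/eth0 > /tmp/props.before
$ sudo udevadm trigger -c change /sys/class/net/eth0
$ sudo udevadm settle
$ udevadm info /sys/class/net/eth0 > /tmp/props.after
$ diff /tmp/props.before /tmp/props.after
(no diff output means the properties survived the 'change' event)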
[scope]
This is needed only for Focal and Groovy.
This is fixed by upstream commit e0e789c1e97, which was first included in v247, so it is already fixed in Hirsute.
While this commit is not included in Bionic, given how difficult this is to reproduce (and verify), and the fact that it has only been seen once on a Focal image, I don't think it's appropriate to SRU to Bionic at this point; it may become appropriate if this is ever reproduced with a Bionic image.
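To check whether a given system already carries a fixed package, comparing the installed version against the SRU upload versions is enough (a sketch; the exact fixed version strings come from the SRU uploads, not this report):
$ dpkg-query -W -f='${Version}\n' systemd
$ apt-cache policy systemd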
[other info]
Note that this bug's subject and description, as well as the upstream systemd bug's subject and description, talk about the problem being DNS resolution. However, that is strictly a side effect of the real problem, not the actual issue.
[original description]
The systemd upgrade from 245.4-4ubuntu3.2 to 245.4-4ubuntu3.3 appears to have broken DNS resolution across much of our Azure fleet earlier today. We ended up mitigating this by forcing reboots of the affected instances; no combination of networkctl reload, networkctl reconfigure, systemctl daemon-reexec, systemctl daemon-reload, netplan generate, and netplan apply would get resolvectl to show a DNS server again. The main symptom appears to have been systemd-networkd believing it wasn't managing the eth0 interfaces:
ubuntu@machine-1:~$ sudo networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier unmanaged
2 eth0 ether routable unmanaged
Which eventually made them lose their DNS resolvers:
ubuntu@machine-1:~$ sudo resolvectl dns
Global:
Link 2 (eth0):
After rebooting, we see this behaving properly:
ubuntu@machine-1:~$ sudo networkctl list
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier unmanaged
2 eth0 ether routable configured
2 links listed.
ubuntu@machine-1:~$ sudo resolvectl dns
Global:
Link 2 (eth0): 168.63.129.16
This appears to be specifically linked to the upgrade, i.e. we were able to provoke the issue by upgrading the systemd package, so I suspect it's caused by something in the packaging's upgrade process.
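For anyone trying to confirm the packaging link, the issue was provoked by upgrading the package in place and watching the link state (a sketch; the package set is an assumption, since the report doesn't list exactly which binaries were upgraded):
$ sudo apt-get install --only-upgrade systemd udev libsystemd0
$ networkctl
(if eth0 flips from 'configured' to 'unmanaged' after the upgrade, the instance is affected)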
---
ProblemType: Bug
ApportVersion: 2.20.11-
Architecture: amd64
CasperMD5CheckResult:
DistroRelease: Ubuntu 20.04
Lspci-vt:
-[0000:00]-+-00.0 Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled)
+-07.0 Intel Corporation 82371AB/EB/MB PIIX4 ISA
+-07.1 Intel Corporation 82371AB/EB/MB PIIX4 IDE
+-07.3 Intel Corporation 82371AB/EB/MB PIIX4 ACPI
\-08.0 Microsoft Corporation Hyper-V virtual VGA
Lsusb: Error: command ['lsusb'] failed with exit code 1:
Lsusb-t:
Lsusb-v: Error: command ['lsusb', '-v'] failed with exit code 1:
MachineType: Microsoft Corporation Virtual Machine
Package: systemd 245.4-4ubuntu3.3
PackageArchitecture: amd64
ProcEnviron:
TERM=xterm-
PATH=(custom, no user)
LANG=C.UTF-8
SHELL=/bin/bash
ProcKernelCmdLine: BOOT_IMAGE=
ProcVersionSignature:
Tags: focal uec-images
Uname: Linux 5.4.0-1031-azure x86_64
UpgradeStatus: No upgrade log present (probably fresh install)
UserGroups: N/A
_MarkForUpload: True
dmi.bios.date: 12/07/2018
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: 090008
dmi.board.name: Virtual Machine
dmi.board.vendor: Microsoft Corporation
dmi.board.version: 7.0
dmi.chassis.asset.tag:
dmi.chassis.type: 3
dmi.chassis.vendor: Microsoft Corporation
dmi.chassis.version:
dmi.modalias: dmi:bvnAmerican
dmi.product.name: Virtual Machine
dmi.product.uuid: 4412ad79-
dmi.product.version:
dmi.sys.vendor: Microsoft Corporation
Changed in cloud-init (Ubuntu):
status: New → Incomplete
Changed in systemd (Ubuntu):
status: Invalid → New
Changed in systemd:
status: Unknown → New
description: updated