Libvirtd conffiles should be less misleading and document tcp/tls usage
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
libvirt (Ubuntu) | In Progress | Medium | Unassigned |
Bug Description
I was testing out libvirtd on Ubuntu 20.04, specifically working through this: https:/
I get a "connection refused" error on port 16509, which I can replicate with this command:
# virsh -c qemu+tcp:
error: unable to connect to server at 'host:16509': Connection refused
error: failed to connect to the hypervisor
https:/
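A quick way to confirm that nothing is actually listening on that port is ss from iproute2 (empty output means the listener never started):

# ss -ltn 'sport = :16509'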
The libvirtd systemd service starts the libvirtd process with $libvirtd_opts as a parameter to the executable. If I update the libvirtd config file /etc/default/libvirtd to:
# options passed to libvirtd, add "-l" to listen on tcp
libvirtd_opts="-l -d"
Adding any option to libvirtd_opts causes the service to fail on restart, and the listener on port 16509 never comes up.
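For anyone reproducing this, the failure reason should show up in the unit status and the journal (plain systemd commands, nothing libvirt-specific):

# systemctl restart libvirtd
# systemctl status libvirtd
# journalctl -u libvirtd -b --no-pager | tail -n 20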
It seems the config behaviour has changed since this bug: https:/
Related branches
- Ubuntu Sponsors: Pending requested
- Canonical Server Reporter: Pending requested
- git-ubuntu import: Pending requested
Diff: 42 lines (+23/-1), 2 files modified:
  debian/changelog (+8/-0)
  debian/libvirt-daemon-system.libvirtd.default (+15/-1)
Changed in libvirt (Ubuntu):
  importance: Undecided → Medium
tags: added: bitesize
tags: removed: server-todo
Changed in libvirt (Ubuntu):
  assignee: nobody → Michał Małoszewski (michal-maloszewski99)
Changed in libvirt (Ubuntu):
  status: New → In Progress
tags: added: server-todo
Changed in libvirt (Ubuntu):
  assignee: Michał Małoszewski (michal-maloszewski99) → nobody
Digging into this a bit more, it looks like the libvirtd.service file needs to be changed to enable the tcp listener, rather than using the /etc/default/libvirtd config file. It would be nice if the service shipped with the tcp option present but commented out by default.
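As a side note, on this libvirt version the shipped .socket units already manage the listeners, and libvirtd refuses --listen while systemd socket activation is in use, which would explain the startup failure. So an alternative that avoids editing the service file at all is to enable the packaged tcp socket unit (a sketch; tls works the same way via libvirtd-tls.socket, and tcp access still needs auth configured in /etc/libvirt/libvirtd.conf):

# systemctl stop libvirtd.service
# systemctl enable --now libvirtd-tcp.socket
# systemctl start libvirtd.service
# virsh -c qemu+tcp://localhost/system list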
My changes were:
- Remove $libvirtd_opts, because any setting here causes the service to fail at startup.
- Add Wants=libvirtd-tcp.socket and Also=libvirtd-tcp.socket.

The proposed libvirtd.service:
[Unit]
Description=Virtualization daemon
Requires=virtlogd.socket
Requires=virtlockd.socket
# Use Wants instead of Requires so that users
# can disable these three .socket units to revert
# to a traditional non-activation deployment setup
Wants=libvirtd.socket
Wants=libvirtd-ro.socket
Wants=libvirtd-tcp.socket
Wants=libvirtd-admin.socket
Wants=systemd-machined.service
Before=libvirt-guests.service
After=network.target
After=dbus.service
After=iscsid.service
After=apparmor.service
After=local-fs.target
After=remote-fs.target
After=systemd-logind.service
After=systemd-machined.service
After=xencommons.service
Conflicts=xendomains.service
Documentation=man:libvirtd(8)
Documentation=https://libvirt.org
[Service]
Type=notify
EnvironmentFile=-/etc/default/libvirtd
ExecStart=/usr/sbin/libvirtd $libvirtd_opts
#ExecStart=/usr/sbin/libvirtd -l -d
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
# At least 1 FD per guest, often 2 (eg qemu monitor + qemu agent).
# eg if we want to support 4096 guests, we'll typically need 8192 FDs
# If changing this, also consider virtlogd.service & virtlockd.service
# limits which are also related to number of guests
LimitNOFILE=8192
# The cgroups pids controller can limit the number of tasks started by
# the daemon, which can limit the number of domains for some hypervisors.
# A conservative default of 8 tasks per guest results in a TasksMax of
# 32k to support 4096 guests.
TasksMax=32768
# With cgroups v2 there is no devices controller anymore, we have to use
# eBPF to control access to devices. In order to do that we create a eBPF
# hash MAP which locks memory. The default map size for 64 devices together
# with program takes 12k per guest. After rounding up we will get 64M to
# support 4096 guests.
LimitMEMLOCK=64M
[Install]
WantedBy=multi-user.target
Also=virtlockd.socket
Also=virtlogd.socket
Also=libvirtd.socket
Also=libvirtd-ro.socket
Also=libvirtd-tcp.socket
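For testing a change like this without touching the packaged unit, a drop-in override is the usual route (a sketch; systemctl edit opens an editor and reloads the unit definition on save):

# systemctl edit libvirtd.service
(in the editor)
[Unit]
Wants=libvirtd-tcp.socket

# systemctl restart libvirtd.service
# ss -ltn 'sport = :16509'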