I have tested Bionic with kernel 4.15.0-22-generic, linux-cloud-tools-4.15.0-22.24 and linux-tools-4.15.0-22.24 and both issues still occur.
While testing the above-mentioned kernel, I saw the following behavior:
1. Right after booting the system the KVP daemon reports it is active (running). The command "systemctl status hv-kvp-daemon" returns the following output:
hv-kvp-daemon.service - Hyper-V KVP Protocol Daemon
Loaded: loaded (/lib/systemd/system/hv-kvp-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 11:43:22 UTC; 55s ago
Main PID: 1363 (hv_kvp_daemon)
Tasks: 1 (limit: 4496)
CGroup: /system.slice/hv-kvp-daemon.service
└─1363 /usr/lib/linux-tools/4.15.0-22-generic/hv_kvp_daemon -n
Jun 07 11:43:22 bionic systemd[1]: Started Hyper-V KVP Protocol Daemon.
Jun 07 11:43:22 bionic KVP[1363]: KVP starting; pid is:1363
Jun 07 11:43:22 bionic KVP[1363]: KVP LIC Version: 3.1
2. After approximately 2 minutes the KVP daemon enters the failed state (issue 1); the commands after this list show how more detail about the failure could be captured.
3. Manually starting the daemon with the command "systemctl start hv-kvp-daemon" works perfectly fine.
4. "systemctl status hv-kvp-daemon" will now show the second issue:
hv-kvp-daemon.service - Hyper-V KVP Protocol Daemon
Loaded: loaded (/lib/systemd/system/hv-kvp-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-06-07 11:47:29 UTC; 8s ago
Main PID: 1995 (hv_kvp_daemon)
Tasks: 1 (limit: 4496)
CGroup: /system.slice/hv-kvp-daemon.service
└─1995 /usr/lib/linux-tools/4.15.0-22-generic/hv_kvp_daemon -n
Jun 07 11:47:29 bionic systemd[1]: Started Hyper-V KVP Protocol Daemon.
Jun 07 11:47:29 bionic KVP[1995]: KVP starting; pid is:1995
Jun 07 11:47:29 bionic KVP[1995]: KVP LIC Version: 3.1
Jun 07 11:47:36 bionic hv_kvp_daemon[1995]: sh: 1: /usr/libexec/hypervkvpd/hv_get_dns_info: not found
Jun 07 11:47:36 bionic hv_kvp_daemon[1995]: sh: 1: /usr/libexec/hypervkvpd/hv_get_dhcp_info: not found
Jun 07 11:47:36 bionic hv_kvp_daemon[1995]: sh: 1: /usr/libexec/hypervkvpd/hv_get_dns_info: not found
Jun 07 11:47:36 bionic hv_kvp_daemon[1995]: sh: 1: /usr/libexec/hypervkvpd/hv_get_dhcp_info: not found
Jun 07 11:47:36 bionic hv_kvp_daemon[1995]: sh: 1: /usr/libexec/hypervkvpd/hv_get_dns_info: not found
Jun 07 11:47:36 bionic hv_kvp_daemon[1995]: sh: 1: /usr/libexec/hypervkvpd/hv_get_dhcp_info: not found
5. Rebooting the system after this point will no longer trigger the first issue, only the second one. Even stopping the VM and turning it back ON does NOT trigger the first issue again.
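For completeness, commands along these lines could be used to capture more detail about the failure from step 2; the unit name is the one shown above and the queried properties are standard systemd ones:

journalctl -b -u hv-kvp-daemon --no-pager    # unit log since the current boot, including the exit message
systemctl show hv-kvp-daemon -p ActiveState,Result,ExecMainCode,ExecMainStatus    # reason/exit code systemd recorded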
Could it be that, as long as the hv-kvp-daemon service is reporting that the two files (hv_get_dns_info and hv_get_dhcp_info) are not found, it does not enter the failed state? That would mean the two apparently separate issues are actually related. I am not 100% sure about this.
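To check this from the guest, something like the following could be run (the path is taken from the log messages above; dpkg -S and strings are just the tools I would reach for, and strings may require binutils to be installed):

ls -l /usr/libexec/hypervkvpd/    # the directory the daemon complains about
dpkg -S hv_get_dns_info hv_get_dhcp_info    # does any installed package ship the helper scripts at all?
strings /usr/lib/linux-tools/4.15.0-22-generic/hv_kvp_daemon | grep hv_get    # confirm the expected path is compiled into the daemon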
In order to trigger the first issue again, the Integration Service corresponding to the KVP daemon has to be disabled and re-enabled, and the VM has to be rebooted. This is just the way I managed to reproduce the first issue; I am not sure whether there are other ways to trigger it again.