Activity log for bug #1628216

Date Who What changed Old value New value Message
2016-09-27 17:08:52 Eric Desrochers bug added bug
2016-09-27 17:10:13 Alex Moldovan nova-cloud-controller (Juju Charms Collection): status New Confirmed
2016-09-27 17:13:23 Eric Desrochers description

Old value:

    It has been brought to my attention that when doing live-migration between compute-nodes it fails with "Host key verification failed", event if the following has been setted :

    $juju set nova-compute-kvm enable-live-migration=True
    $juju set nova-compute-kvm migration-auth-type=ssh

    In this case, this is a Autopilot/Landscape deployment, the compute-node-kvm has two nics eth0 (x.x.x.x/x) subnet and juju-br0 (eth4) (y.y.y.y/y) subnet.

    The same problem also occurred when adding a new compute-node unit.

    Live migration doesn't work :
    "... ERROR nova.virt.libvirt.driver .... Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+ssh://<HOST>/system: Cannot recv data: Host key verification failed.: Connection reset by peer"

    $ juju run --unit nova-cloud-controller/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-cloud-controller/0 "unit-get public-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>

    If ssh by hand as user 'root'
    $ ssh root@<IP_OF_ETH0_SUBNET>, is working
    $ ssh root@<IP_OF_JUJU_BR0_SUBNET>, is not working

    Workarounds :
    1- Delete the offending entries in /root/.ssh/known_hosts
    2- Set "StrictHostKeyChecking no"
    3- As mentioned above : $ ssh-keyscan -t rsa node-b | sudo tee -a /root/.ssh/known_hosts
    ....

    Related src code:
    https://github.com/openstack/charm-nova-cloud-controller/blob/fbd0d368c3700b3ef7beaa63d0afd48126e53206/hooks/charmhelpers/contrib/network/ip.py#L433
    https://github.com/openstack/charm-nova-cloud-controller/blob/master/hooks/nova_cc_utils.py#L749

New value:

    It has been brought to my attention that when doing live-migration between compute-nodes it fails with "Host key verification failed", event if the following has been setted :

    $juju set nova-compute-kvm enable-live-migration=True
    $juju set nova-compute-kvm migration-auth-type=ssh

    In this case, this is a Autopilot/Landscape deployment, the compute-node-kvm has two nics eth0 (x.x.x.x/x) subnet and juju-br0 (eth4) (y.y.y.y/y) subnet.

    The same problem also occurred when adding a new compute-node unit.

    Live migration doesn't work :
    "... ERROR nova.virt.libvirt.driver .... Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+ssh://<HOST>/system: Cannot recv data: Host key verification failed.: Connection reset by peer"

    $ juju run --unit nova-cloud-controller/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-cloud-controller/0 "unit-get public-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>

    If ssh by hand as user 'root'
    $ ssh root@<IP_OF_ETH0_SUBNET>, is working
    $ ssh root@<IP_OF_JUJU_BR0_SUBNET>, is not working

    Workarounds :
    1- Delete the offending entries in /root/.ssh/known_hosts
    2- Set "StrictHostKeyChecking no"
    3- ...

    Related src code:
    https://github.com/openstack/charm-nova-cloud-controller/blob/fbd0d368c3700b3ef7beaa63d0afd48126e53206/hooks/charmhelpers/contrib/network/ip.py#L433
    https://github.com/openstack/charm-nova-cloud-controller/blob/master/hooks/nova_cc_utils.py#L749
2016-09-28 14:40:23 Eric Desrochers description

Old value:

    It has been brought to my attention that when doing live-migration between compute-nodes it fails with "Host key verification failed", event if the following has been setted :

    $juju set nova-compute-kvm enable-live-migration=True
    $juju set nova-compute-kvm migration-auth-type=ssh

    In this case, this is a Autopilot/Landscape deployment, the compute-node-kvm has two nics eth0 (x.x.x.x/x) subnet and juju-br0 (eth4) (y.y.y.y/y) subnet.

    The same problem also occurred when adding a new compute-node unit.

    Live migration doesn't work :
    "... ERROR nova.virt.libvirt.driver .... Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+ssh://<HOST>/system: Cannot recv data: Host key verification failed.: Connection reset by peer"

    $ juju run --unit nova-cloud-controller/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-cloud-controller/0 "unit-get public-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>

    If ssh by hand as user 'root'
    $ ssh root@<IP_OF_ETH0_SUBNET>, is working
    $ ssh root@<IP_OF_JUJU_BR0_SUBNET>, is not working

    Workarounds :
    1- Delete the offending entries in /root/.ssh/known_hosts
    2- Set "StrictHostKeyChecking no"
    3- ...

    Related src code:
    https://github.com/openstack/charm-nova-cloud-controller/blob/fbd0d368c3700b3ef7beaa63d0afd48126e53206/hooks/charmhelpers/contrib/network/ip.py#L433
    https://github.com/openstack/charm-nova-cloud-controller/blob/master/hooks/nova_cc_utils.py#L749

New value:

    It has been brought to my attention that when doing live-migration between compute-nodes it fails with "Host key verification failed", even if the following has been setted :

    $juju set nova-compute-kvm enable-live-migration=True
    $juju set nova-compute-kvm migration-auth-type=ssh

    In this case, this is a Autopilot/Landscape deployment (where the charm version are pinned), the compute-node-kvm has two nics eth0 (x.x.x.x/x) subnet and juju-br0 (eth4) (y.y.y.y/y) subnet.

    The same problem also occurred when adding new machines (compute-node) unit.

    Live migration doesn't work :
    "... ERROR nova.virt.libvirt.driver .... Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+ssh://<HOST>/system: Cannot recv data: Host key verification failed.: Connection reset by peer"

    $ juju run --unit nova-cloud-controller/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-cloud-controller/0 "unit-get public-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>
    $ juju run --unit nova-compute-kvm/0 "unit-get private-address"
    <IP_OF_JUJU_BR0_SUBNET>

    If ssh by hand as user 'root'
    $ ssh root@<IP_OF_ETH0_SUBNET>, is working without complaining about offending key for ip.
    $ ssh root@<IP_OF_JUJU_BR0_SUBNET>, is not working complaining about the offending key for ip.

    Some workarounds :
    * Manually remove the offending entries in /root/.ssh/known_hosts
    * Set "StrictHostKeyChecking no" and restart ssh.
    * ...

    Related src code:
    https://github.com/openstack/charm-nova-cloud-controller/blob/fbd0d368c3700b3ef7beaa63d0afd48126e53206/hooks/charmhelpers/contrib/network/ip.py#L433
    https://github.com/openstack/charm-nova-cloud-controller/blob/master/hooks/nova_cc_utils.py#L749
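The elided third workaround ("3- ..." / "* ...") in the later revisions corresponds to the ssh-keyscan command spelled out in the first revision above. A minimal sketch combining the "remove offending entries" and ssh-keyscan workarounds, kept in the same placeholder style as the report ("node-b" and <IP_OF_JUJU_BR0_SUBNET> stand in for the peer compute node's hostname and its juju-br0 address; these are assumptions taken from the description, not commands issued by the charm), run as root on the compute node that initiates the qemu+ssh connection:

    # Drop any stale/offending entry for the peer's juju-br0 address from root's known_hosts
    $ sudo ssh-keygen -f /root/.ssh/known_hosts -R <IP_OF_JUJU_BR0_SUBNET>
    # Scan the peer's RSA host key under both its name and its juju-br0 address and
    # append it to root's known_hosts, so libvirt's qemu+ssh:// connection no longer
    # stops at an interactive host key prompt
    $ ssh-keyscan -t rsa node-b <IP_OF_JUJU_BR0_SUBNET> | sudo tee -a /root/.ssh/known_hosts
    # Sanity check: a non-interactive root ssh to the juju-br0 address should now succeed
    $ sudo ssh -o BatchMode=yes root@<IP_OF_JUJU_BR0_SUBNET> true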
2016-09-28 15:00:27 Eric Desrochers tags sts
2016-09-30 16:32:02 Eric Desrochers bug added subscriber Alex Moldovan
2017-02-02 10:25:18 James Page tags sts networking sts
2017-02-02 10:25:23 James Page nova-cloud-controller (Juju Charms Collection): status Confirmed Triaged
2017-02-02 10:25:25 James Page nova-cloud-controller (Juju Charms Collection): importance Undecided Medium
2017-02-23 19:02:09 James Page charm-nova-cloud-controller: importance Undecided Medium
2017-02-23 19:02:09 James Page charm-nova-cloud-controller: status New Triaged
2017-02-23 19:02:11 James Page nova-cloud-controller (Juju Charms Collection): status Triaged Invalid
2017-07-14 19:02:33 Jill Rouleau tags networking sts canonical-bootstack networking sts
2017-12-15 10:24:25 Peter Sabaini bug added subscriber The Canonical Sysadmins
2018-07-13 14:56:06 James Page charm-nova-cloud-controller: status Triaged In Progress
2018-07-13 14:56:09 James Page charm-nova-cloud-controller: importance Medium High
2018-07-13 14:56:13 James Page charm-nova-cloud-controller: assignee James Page (james-page)
2018-07-16 07:12:32 OpenStack Infra charm-nova-cloud-controller: status In Progress Fix Committed
2018-09-06 14:45:37 David Ames charm-nova-cloud-controller: milestone 18.08
2018-09-12 20:49:36 James Page charm-nova-cloud-controller: status Fix Committed Fix Released