Seen here:
http://logs.openstack.org/44/386844/17/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/91befad/logs/syslog.txt.gz#_Nov_22_00_27_46
Nov 22 00:27:46 ubuntu-xenial-rax-ord-5717228 virtlogd[16875]: End of file while reading data: Input/output error
Nov 22 00:27:46 ubuntu-xenial-rax-ord-5717228 libvirtd[16847]: *** Error in `/usr/sbin/libvirtd': malloc(): memory corruption: 0x0000558ff1c7c800 ***
http://logs.openstack.org/44/386844/17/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/91befad/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-22_00_27_46_571
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [req-d6b33315-636c-4ebc-99e4-8cac236e1f7f tempest-ServerDiskConfigTestJSON-191847812 tempest-ServerDiskConfigTestJSON-191847812] [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] Failed to allocate network(s)
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] Traceback (most recent call last):
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/compute/manager.py", line 2021, in _build_resources
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] requested_networks, security_groups)
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/compute/manager.py", line 1445, in _build_networks_for_instance
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] requested_networks, macs, security_groups, dhcp_options)
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/compute/manager.py", line 1461, in _allocate_network
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] self._update_resource_tracker(context, instance)
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/compute/manager.py", line 564, in _update_resource_tracker
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] self.driver.node_is_available(instance.node)):
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/virt/driver.py", line 1383, in node_is_available
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] if nodename in self.get_available_nodes():
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 6965, in get_available_nodes
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] return [self._host.get_hostname()]
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 708, in get_hostname
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] hostname = self.get_connection().getHostname()
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 420, in get_connection
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] raise exception.HypervisorUnavailable(host=CONF.host)
2016-11-22 00:27:46.571 4886 ERROR nova.compute.manager [instance: d52c0be8-eed2-47a5-bbb5-dd560bb9276e] HypervisorUnavailable: Connection to the hypervisor is broken on host: ubuntu-xenial-rax-ord-5717228
The libvirtd log at the same timestamp shows the QEMU monitor connection dropping when the daemon died:

2016-11-22 00:27:46.279+0000: 16847: error : qemuMonitorIORead:580 : Unable to read from monitor: Connection reset by peer
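For context on why the nova trace ends in HypervisorUnavailable: once libvirtd aborts, nova's cached connection is gone and reconnection fails, so get_connection() raises. A minimal sketch of that pattern (hypothetical, heavily simplified from nova/virt/libvirt/host.py; the real code attempts a reconnect and wraps libvirt errors):

```python
class HypervisorUnavailable(Exception):
    """Raised when the libvirtd connection cannot be (re)established."""
    def __init__(self, host):
        super().__init__(
            "Connection to the hypervisor is broken on host: %s" % host)


class Host:
    """Simplified stand-in for nova.virt.libvirt.host.Host."""

    def __init__(self, hostname):
        self.hostname = hostname
        # In nova this is a live libvirt connection object; it becomes
        # unusable when libvirtd crashes (e.g. on malloc corruption).
        self._connection = None

    def get_connection(self):
        # The real code tries to reconnect first; if that fails (the
        # daemon is dead), it raises HypervisorUnavailable, which is
        # exactly what the traceback above shows.
        if self._connection is None:
            raise HypervisorUnavailable(host=self.hostname)
        return self._connection
```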
Logstash query matching the libvirtd malloc corruption message:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22***%20Error%20in%20%60%2Fusr%2Fsbin%2Flibvirtd'%3A%20malloc()%3A%20memory%20corruption%3A%5C%22%20AND%20tags%3A%5C%22syslog%5C%22&from=7d
Talking with the libvirt upstream folks (Dan Berrange and Michal Privoznik) in #virt on OFTC, the suggested way to debug this is to: (a) come up with a reproducer using just a single test; and (b) run that reproducer with libvirtd under Valgrind. Running the whole gate under Valgrind is out of the question; as DanB put it: "that's not even remotely practical - the gate jobs would take days to complete under Valgrind"
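A sketch of step (b), running libvirtd under Valgrind on the test node (paths and service names assume Ubuntu Xenial; the exact daemon flags and service name may differ on a given image):

```shell
# Stop the system-managed libvirtd so Valgrind can own the daemon
sudo systemctl stop libvirtd

# Re-launch libvirtd in the foreground under Valgrind, logging to a file.
# --track-origins=yes helps pinpoint where the corrupted heap data
# originally came from, at the cost of extra slowdown.
sudo valgrind --tool=memcheck \
    --leak-check=full \
    --track-origins=yes \
    --log-file=/tmp/libvirtd-valgrind.log \
    /usr/sbin/libvirtd

# ...run the single-test reproducer against this instance, then inspect:
less /tmp/libvirtd-valgrind.log
```

With only one test driving the instrumented daemon, the Valgrind slowdown stays tolerable, and a memory-corruption report should land in the log file at (or before) the point where the uninstrumented daemon aborts.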