The first VM of a network on a compute node cannot send RARP packets during KVM live migration in a neutron ML2 hierarchical port binding environment whose second mechanism driver is configured as the existing OVS driver "openvswitch"
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | In Progress | Undecided | Unassigned |
neutron | New | Undecided | Unassigned |
Bug Description
Description
===========
Normally, a VM that migrates to the destination node can send several RARP packets during KVM live migration in a simple ovs + vlan environment, after the following bug was fixed.
The ovs + vlan bug url:
https:/
In a neutron ML2 hierarchical port binding environment,
I find that the physical port associated with the vlan physical provider's OVS bridge on the destination node does not capture any RARP packets when the VM migrates to the destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a VM on the compute1 node: vmA
4. tcpdump the physical port associated with the OVS bridge on the compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live-migrate the VM to the other compute node: compute2
6. open ens33.pcap in wireshark
Expected result
===============
several RARP packets are captured
Actual result
=============
no RARP packets are captured
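Instead of opening the capture in wireshark, the check in step 6 can be done programmatically. Below is a minimal sketch that counts RARP frames by their EtherType (0x8035) in a classic pcap file such as the ens33.pcap from step 4. The demo pcap builder and the VM MAC in it are fabricated purely to keep the example self-contained; they are not taken from this deployment.

```python
import struct

RARP_ETHERTYPE = 0x8035  # EtherType of the self-announce frames QEMU emits

def count_rarp_frames(pcap_bytes):
    """Count Ethernet frames with the RARP EtherType in a classic pcap capture."""
    # Classic pcap global header is 24 bytes; magic 0xa1b2c3d4 means the
    # record headers use the writer's native (here: little-endian) byte order.
    magic = struct.unpack("<I", pcap_bytes[:4])[0]
    endian = "<" if magic == 0xA1B2C3D4 else ">"
    offset, count = 24, 0
    while offset + 16 <= len(pcap_bytes):
        # Per-record header: ts_sec, ts_usec, incl_len, orig_len
        _, _, incl_len, _ = struct.unpack(endian + "IIII",
                                          pcap_bytes[offset:offset + 16])
        frame = pcap_bytes[offset + 16:offset + 16 + incl_len]
        if len(frame) >= 14 and struct.unpack("!H", frame[12:14])[0] == RARP_ETHERTYPE:
            count += 1
        offset += 16 + incl_len
    return count

def make_demo_pcap():
    """Build a tiny in-memory pcap containing one RARP 'reverse request' frame,
    shaped like the self-announce frames a migrated VM should produce."""
    vm_mac = bytes.fromhex("fa163e000001")              # hypothetical VM MAC
    frame = (b"\xff" * 6 + vm_mac                       # broadcast dst, VM src
             + struct.pack("!H", RARP_ETHERTYPE)        # EtherType 0x8035
             + struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)  # htype, ptype, hlen, plen, op=3
             + vm_mac + b"\x00" * 4 + vm_mac + b"\x00" * 4)
    header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)  # linktype 1 = Ethernet
    pkt_hdr = struct.pack("<IIII", 0, 0, len(frame), len(frame))
    return header + pkt_hdr + frame

print(count_rarp_frames(make_demo_pcap()))  # prints 1 for a healthy capture
```

In the failing case described by this bug, the same count over the real ens33.pcap comes back 0.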
Environment
===========
OpenStack: Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt: 1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/
[ml2]
type_drivers = vxlan,vlan
tenant_
mechanism_
#ml2_h3c, a mechanism driver owned by New H3C Group (a provider of new IT
#solutions), allocates a dynamic vlan segment for the existing mechanism
#driver "openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/
[ovs]
bridge_
compute2 node:
#neutron
/etc/neutron/
[ovs]
bridge_
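For orientation, a generic shape of such an ML2 hierarchical-binding configuration is sketched below. Every value is hypothetical and completed only for illustration; the actual file paths and option values of this deployment are the (truncated) ones above.

```ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan
# hypothetical driver list: the H3C driver allocates the dynamic vlan
# segment, the existing "openvswitch" driver binds it
mechanism_drivers = ml2_h3c,openvswitch

[ml2_type_vlan]
# hypothetical physical network name and vlan range
network_vlan_ranges = physnet1:1000:2000

[ml2_type_vxlan]
vni_ranges = 1:500
```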
Analysis
========
After reading the live-migration-relevant code of nova, neutron-server and neutron-
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.
dom.
self.
self.
self.
2.1. destination compute node (neutron-
rpc_loop ------ monitor vm's tapxxxx port plug
self.
self.
self.
2.2 destination compute node (nova-compute) compute2 node
post_
self.
self.
3. controller node (neutron-
ml2_h3c: fill self._new_
openvswitch: bind port with compute2_
driver ml2_h3c
In the current Kilo process, the ml2 driver finishes the port binding only at the last step 3.
That is too late for neutron-server to set the correct vlan tag for the VM port
and to add the relevant flows to the OVS bridges when nova notifies
neutron-server of the event that the port changes its binding_hostid in ml2
hierarchical port binding.
It seems that Liberty and Mitaka have the same problem.
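The ordering problem described above can be sketched as a toy timeline model. This is illustrative only: the event names are invented for the sketch, not nova/neutron identifiers.

```python
# Illustrative model (not nova/neutron code): QEMU emits the RARP
# self-announcements as soon as the guest resumes on the destination, but in
# hierarchical port binding the dynamic vlan segment is only bound in the
# final post-migration step, so the vlan tag / OVS flows that would carry
# those frames do not exist yet.

def lost_announcements(events):
    """Return how many RARP announcements occur while the port is still unbound."""
    bound, lost = False, 0
    for event in events:
        if event == "port_binding_completed":
            bound = True
        elif event == "rarp_announce" and not bound:
            lost += 1   # frame hits a bridge with no vlan tag / flows yet
    return lost

# Ordering observed in this bug report: binding finishes after the announcements.
kilo_ordering = ["vm_resumed_on_dest", "rarp_announce", "rarp_announce",
                 "rarp_announce", "post_live_migration", "port_binding_completed"]

# Desired ordering (what binding the destination segment up front would give):
fixed_ordering = ["port_binding_completed", "vm_resumed_on_dest",
                  "rarp_announce", "rarp_announce", "rarp_announce"]

print(lost_announcements(kilo_ordering), lost_announcements(fixed_ordering))  # 3 0
```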
description: | updated |
tags: | added: live-migration |
information type: | Public → Public Security |
information type: | Public Security → Public |
Changed in nova: | |
status: | New → Confirmed |
status: | Confirmed → New |
Changed in nova: | |
status: | New → In Progress |
This sounds like something that wouldn't be resolved until we have the multiple port bindings spec in place to get the switch mech driver to wire up the additional vlan.